I’m creating a new method in the SPI definition of TinyCLR for the G120. How does that method show up in Visual Studio?
How do new methods show up in Visual Studio when you update TinyCLR?
Creating a new definition in SPI where? I’m not sure what you mean
I’m trying to create a new method in the C++ file: LPC17_SPI.cpp
TinyCLR_Result LPC17_Spi_TransferFullDuplexWithOffsets(const TinyCLR_Spi_Provider* self, const uint8_t* writeBuffer, size_t& writeOffset, size_t& writeLength, uint8_t* readBuffer, size_t& readOffset, size_t& readLength, size_t& startReadOffset) {
}
in the SPI source file. But how do I get this method to show up in Visual Studio IntelliSense after I compile the firmware?
So then, it didn’t compile everything. How do I do a complete compile?
You don’t; the only way would be to get John to add it, as all the white man’s magic is closed and precompiled.
looks like “…Read the code; fork the code; fix the code; problem solved…” does not work
I guess not Kevin.
Hmm, this question is doing my head in!!!
So it was an excuse to pull down the TinyCLR porting source, and I managed to compile the firmware for a G120 for the first time.
That was fun; I enjoyed seeing that get built and watching those G120Firmware.glb files appear!
Um, so the reason this is doing my head in (I’m stupid, so it’s easy!) is a few reasons, and I might be having a brain-fade day, but I’ve just confused myself big time thinking about this. So I thought I’d embarrass myself, as maybe I’ll learn something!
First I thought this sounds doable, but then went ummm err!!!??
I thought of using a mock extension method as a workaround, just to get it to appear in IntelliSense, but…
From what I think you’re saying, you roughly want to add a method to the SPI controller in the G120 firmware, and then see that new method pop up in IntelliSense in Visual Studio?
Um… but isn’t this native code on the hardware, and C# CLR managed code in VS?
The DLLs in VS, which are installed via the NuGet packages, are C# libraries.
To call the ‘new method’ in the G120 firmware, wouldn’t you have to ‘invoke’ out from C# to native code?
Oh man! This is doing my head in.
Can some smart person please explain all this as my head is spinning!!!
Can we get an overview of the firmware / what the source ‘builds’ / what’s in the NuGet GHIElectronics.TinyCLR packages, and how those DLLs are built (do we not have access to the source for these)?
I’m also confused about what ‘source.zip’ is at
Seems both links pull down “TinyCLR_Core.0.6.0.zip” even though the source link points to “v0.6.0.zip”; maybe GitHub works like that and I’ve never noticed?
Just adding a new method in the CPP file isn’t enough. There are two separate concepts in TinyCLR: interops and APIs.
Interops are what allow you to call from managed code into native code. You can’t add more interops to an existing class in our library just like you can’t add new methods to our libraries. You can, however, create your own library that has its own interops. Interops are very similar to our old RLP, but now they’re much more integrated with the core and easier to use (there are many more improvements coming).
APIs are just a published way of interacting with various services. All the services we define are listed in TinyCLR.h. You cannot add your own methods to this file, but you can create your own API. You’ll just have to distribute your definition to other users. If you reuse an existing TinyCLR_Api_Type, you’ll be expected to conform to the corresponding API we defined. 0x80000000 and up are reserved for custom types. Particularly useful is the “API Provider” API itself. It is used to find other APIs in the system, and an instance of it is passed to each interop call and to the firmware port on startup.
The real power is when these two are combined. In an interop, you’re running native code, so you can access any device registers you need. What may be easier is to find and interact with an existing API in the system. Some APIs you may have to use, like time and memory. The interop API is very useful since it allows you to marshal data to and from managed code, raise managed events, and interact with and create managed objects.
For GHIElectronics.TinyCLR.Devices (and our other libraries), the managed and interop APIs are very similar, but you’re free to create your own if you need to. The TinyCLR core no longer has any knowledge of any specific device peripheral. All that it needs is passed to the various TinyCLR_Startup functions in the firmware main(). Particularly: heap location, device name and version, the debugger API, plus the deployment, time, interrupt, and power APIs. The main.cpp we provide is just a reference implementation. You can always create your own as long as you call the functions as required.
Based on your other thread, it seems the SPI API is a bit inefficient since it requires exact-sized arrays. I agree it’s not ideal, but we’re following the Windows IoT API in our official GHIElectronics.TinyCLR.Devices package. While we’re investigating a more low-level package that Devices will build on top of, in the meantime you can always create your own custom SPI API if you need.
Just clone ports and get the latest core library. Clone whichever device you’re starting with, say the G120, and change the name and various IDs as required in the Device.h file. Depending on the changes you want to make, cloning a new port may be required. In either case, change LPC17_Spi_GetApi to use the API struct that you define (make sure to follow the pattern from TinyCLR.h exactly and use function pointers) and update the read and write functions to take an additional index and offset. Update the API name and author, and set the type to custom.
Now that you’ve defined your API, you actually need to use it. So you’ll want to create a new class library that has whatever interops you deem appropriate, likely mirroring what you have in your custom API. Design the managed API as you see fit as well. Then in the native code for your interops, you’ll get an instance of the API provider API from the TinyCLR_Interop_MethodData parameter in the interop. You can use this to get an instance of your custom SPI provider that you interact with by calling the function pointers. You’ll also need to get an instance of the interop API so you can read the parameters passed to the function.
Of course, you could just implement the SPI functions directly in your interop and not use an API at all. Defining an API has the benefit that other systems in the firmware can use your API. A long-term goal of ours is that you can distribute that API and interop source with a NuGet package that gets built and shipped along to the device by the build system. So you’ll add a reference to some other NuGet package and get to use its native functionality.
The reason you need to use the Custom API type for another SPI provider is that when fetching APIs, if you get one back that has the SpiProvider API type, it is expected to conform to the SPI API we define in TinyCLR.h.
You can find and interact with the various APIs and interops registered on the system with the types available under the System.Runtime.InteropServices namespace.
Keep in mind master is the stable 0.6.0 release while dev is changing a lot between releases. The STM32F4 port is currently the cleanest but there is still a lot of work we want to do on all of them.
One unfortunate limitation in the current interop setup is that you need to compile an interop to a specific window in memory and use that in your linker script, then pass the address of a specific object to the interop Add method. So you’ll need to create custom compilations for each device you want to support. To support this, we’ve set aside a few KB of RAM in each device that you can use to put your interops in. We want to improve this story going forward, perhaps with dynamic loading and fixups.
The interop and API docs under porting have some more specific info and steps as well.
Had to get a cup of coffee to sit down and read that overview.
Ok, Onward!
ZF-10194: Can't delete an entry from Google services using Gdata->delete method - client headers not taken into account ?
Description
I’m trying to create/update/delete contacts from Google Contacts. I know there are no dedicated contact classes like those that already exist for other services, but I am using the gdata Zend_Gdata_Query class to directly retrieve contacts and gdata->updateEntry() to update a contact.
The last action is deleting a contact.
I looked at the documentation (…) but it didn’t work: I got a 403 error with the message "If-Match or If-None-Match header or entry etag attribute required".
Steps to reproduce issue:
Try to delete a contact with this PHP code:

$client = Zend_Gdata_ClientLogin::getHttpClient($login, $pass, 'cp');
$client->setHeaders("If-Match: *");
$gdata = new Zend_Gdata($client);
$gdata->setMajorProtocolVersion(3);
$gdata->setMinorProtocolVersion(null);
$gdata->delete('…('@', '%40', $user->google_email).'/full/'.$synchContacts->idgoogle);

I also tried to retrieve the entry and delete it directly; same result!

$query = new Zend_Gdata_Query('…('@', '%40', $user->google_email).'/full/'.$synchContacts->idgoogle);
$entry = $gdata->getEntry($query);
$entry->delete();
I tried with Zend Framework 1.10.2 and 1.10.5; same behavior.
I tried to downgrade to protocol version 1; it was working last month but now it isn’t anymore. How can we check with Zend or Google whether there have been changes in the APIs?
I also tried downgrading to protocol V1, but it didn’t work either. Why is the setHeaders instruction not correctly taken into account by the Google servers?
I captured the network traffic and don’t see the If-Match header I set in the code. Is it taken into account?
Expected output:
Contact deleted, with answer 200 from Google
Actual results:
403
Here is the network traffic recorded between my app and google servers :
My app’s packet:

DELETE /m8/feeds/contacts/demococea%40captivea.fr/full/308d6f698f2e8298 HTTP/1.1
Host:
Connection: close
User-Agent: MyCompany-MyApp-1.0 Zend_Framework_Gdata/1.10.2
authorization: GoogleLogin auth=DQAAAJ8AAABGra3kyGKzGO_Gpy8ULHhnr3irCWGXcIMsNFL0s5qCoS4ss3O2EHQ8oH5uVvdXI4HX6s9lNfdymnZwrmjCgtPX6KD1YAGtz2AL3cKHWYYGQjLr9xJWoy1Bg_w3x-AzJ21jDeDVltN6Im8gtZDfo0dGx5AGQ9IicTlNiqUvc_17nNOPDSTMQXpDXVoZxqX7OcK_dUJSXoUwPPVcGRlykz9H
GData-Version: 3
Accept-encoding: identity
Content-Type: application/atom+xml
Content-Length: 0
Answer from Google:

HTTP/1.1 403 Forbidden
Content-Type: text/html; charset=UTF-8
Date: Wed, 21 Jul 2010 09:56:32 GMT
Expires: Wed, 21 Jul 2010 09:56:32 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Connection: close

If-Match or If-None-Match header or entry etag attribute required
Posted by Yannick BIET (yannick) on 2010-07-21T06:24:42.000+0000
Problem is located in Zend_Gdata_App::performHttpRequest (app.php line 639)
// Make sure the HTTP client object is 'clean' before making a request
// In addition to standard headers to reset via resetParameters(),
// also reset the Slug and If-Match headers
$this->_httpClient->resetParameters();
$this->_httpClient->setHeaders(array('Slug', 'If-Match'));
As the If-Match parameter is reset during the $gdata->delete($entry) call, there’s no way for the developer to set up the If-Match header required by the Google API.
The default behavior should be the following:

Case A: if the delete call has been made using the edit URL parameter (a string), then for a delete operation we should add the header "If-Match: *", as we can’t know the etag attribute.

Case B: if we call the delete method with the entry element (from the retrieved feed), then for a delete operation we should add the header "If-Match: <etag of the entry>".
The modification should be in the prepareRequest method.

Line 500 (Case A):

if ($data == null && $method == 'DELETE') {
    $headers['If-Match'] = '*';
}

Line 531 (Case B):

if ($method == 'DELETE') {
    if ($rawData != null && isset($rawData->etag) && $rawData->etag != '') {
        $headers['If-Match'] = $rawData->etag;
    } else {
        $headers['If-Match'] = '*';
    }
}
Thus the delete calls are now working !
Posted by Tim Hemming (themming) on 2011-11-14T12:49:09.000+0000
Our company are also experiencing this bug with DELETE actions. Framework version 1.11.11.
I can confirm that the approach @Yannick mentions does work. However I have implemented it by extending Zend_Gdata_Calendar into my own class and overriding prepareRequest as follows:
Posted by Yannick BIET (yannick) on 2011-11-14T13:07:22.000+0000
Hi Tim,
Glad to see that my one-year-old work helped you fix this damned issue! I didn’t catch how you implemented it, and it may interest me, as I am now stuck on 1.10.6 because I didn’t have any time to upgrade and port the fix.
Does it change anything in the PHP code needed to use the service? Indeed, I don’t know how the lines:

$entry = $gdata->getEntry($query);
$entry->delete();

will return an instance of your code instead of the default Zend_Gdata_Calendar (OK, I’m in a rush right now so I don’t have time to think about it, but I would be interested in a solution for upgrading to the latest Zend Framework!).
Have a nice day.
Posted by Tim Hemming (themming) on 2011-11-14T17:59:03.000+0000
I'm using my overridden Zend_Gdata_Calendar class with a Symfony 2 project so it's difficult to explain the usage of it without mentioning the Symfony2 service container and that sort of thing.
However, it seems from your code snippet that you are creating an instance of Zend_Gdata_Calendar and assigning it to $gdata. So in that situation you would instantiate the child class instead:
// Create your $httpClient first as that is used for the transport layer
$gdata = new ZendGdataCalendar($httpClient);
Also I am using PHP 5.3 namespaces so my class is called ZendGdataCalendar within its own namespace. If you're not using namespacing I'd call the class something different, like: ZendGdataCalendarFixed.
Posted by RChea (renatochea) on 2012-08-07T18:13:05.000+0000
You are the man!!! Case B solved my problem :D, thank you for sharing!!
Posted by Yannick BIET (yannick) on 2012-08-08T07:54:38.000+0000
Hi RChea, glad to see the two-year-old bug report still helps a few developers! Will this be fixed one day in Zend Framework?
Posted by Rob Allen (rob) on 2012-12-22T22:16:35.000+0000
I believe that this patch solves cases A and B as referenced by Yannick.
I would appreciate it if someone could test this patch before I commit it. | http://framework.zend.com/issues/browse/ZF-10194 | CC-MAIN-2013-20 | refinedweb | 1,038 | 53.61 |
Red Hat Bugzilla – Bug 73752
anacron freezes my computer
Last modified: 2007-04-18 12:46:32 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.1) Gecko/20020827
Description of problem:
It freezes my computer totally after a few seconds. I had to start from the
rescue cd and chmod -x /etc/init.d/anacron
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. /etc/init.d/anacron start
2.
3.
Actual Results: The computer started to hang, and after a few seconds it was
totally unusable. It was impossible to gain control over my machine, even over
ssh. The only possible thing was to hit the reset button.
Additional info:
I ran strace and wrote the last lines down.
--- SIGCHLD (Child exited) ---
wait4(-1, [WIFEXITED(s) && WEXITSTATUS(s) == 0], WNOHANG, NULL) = 4500
wait4(-1, 0xbffff15c, WNOHANG, NULL) = -1 ECHILD (No child processes)
sigreturn() = ? (mask now [RTMIN])
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
rt_sigaction(SIGINT, {SIG_DFL}, {0x8075b70, [], 0x4000000}, 8) = 0
write(1, "\n", 1)
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [RTMIN], 8) = 0
read(255, "\nexit 0\n", 934)
rt_sigprocmask(SIG_BLOCK, NULL, [RTMIN], 8) = 0
rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
_exit(0) = ?
PS: I wrote all this by hand, to a sheet of paper, on back on the pc. I may have
made several typos.
anacron-2.3-22. I have exactly the same problem with cron. (The PC hangs when cron is starting; I had to boot from the rescue CD and chmod -x.)
Hrrm, can you give some more details about your install and hardware?
How much RAM, HD, swap space, CPU?
Output from top would probably be useful too.
What happens if you wait say 5-10min? Does the machine become usable again?
Mem: 448852K av, 224620K used, 224232K free, 0K shrd, 27612K buff
Swap: 594396K av, 0K used, 594396K free 93796K cached
df -h -T
Filesystem Type Size Used Avail Use% Mounted on
/dev/hdh1 ext3 7.3G 6.3G 706M 91% /
none tmpfs 219M 0 219M 0% /dev/shm
/dev/hde1 vfat 6.4G 6.5G 0 100% /mnt/windows
/dev/hde5 vfat 30G 25G 5.9G 81% /mnt/ibm
/dev/hdh5 ext3 66G 65G 863M 99% /mnt/kjempe
krs-dhcp232:/mnt/wd nfs 29G 23G 6.3G 78% /mnt/nfs
I have a dual Celeron 333 on a Tekram motherboard. AFAIK there are no hardware problems with my computer; it has been successfully running Red Hat Linux since 6.1 without problems.
Running top (even with the -d 1s flag) is useless, because it starts hanging immediately.
I have waited for at least 30 minutes, without any improvement in my computer’s usability.
Full Red Hat null installation, upgraded from 7.2 -> 7.3 -> 8 beta. 2.4.18-11smp kernel.
In RHL 8.0 and later, the start of anacron is delayed for 60 minutes after booting. Does that make things any smoother for you? Otherwise perhaps you’re better off disabling anacron altogether.
More importantly, anacron is now run with "-s" (i.e., jobs are run serially), so I suspect this is fixed in RHL 8.0 and 9.
*** This bug has been marked as a duplicate of 65870 ***
Oh, but this bug is against RHL 8.0.
Are you using anacron-2.3-23?
Closing for now. | https://bugzilla.redhat.com/show_bug.cgi?id=73752 | CC-MAIN-2017-09 | refinedweb | 566 | 75.71 |
Related Items: South Florida Sentinel. Preceded by: Daily Sentinel (Orlando, Fla.). Succeeded by: Orlando Morning Sentinel.
Full Text:
THE MORNING SENTINEL. VOLUME TWO. ORLANDO, FLORIDA, SUNDAY, NOVEMBER 29, 1914. NUMBER 250.
AERIAL BATTLE OF BIRDMEN THRILLS
SENSATIONAL BATTLE IN AIR WAR-NEWS OF DAY'S FEATURE
FLORIDA GROWERS' AND SHIPPERS' LEAGUE
LUCERNE CIRCLE NOW OPEN, CITY'S LATEST IMPROVEMENT
IMMATURE ORANGES UNDER BAN
MURDER SECOND DEGREE IS VERDICT CORONER'S INQUEST
FOUR WARSHIPS ARE OFF CHILE
TRAFFIC MANAGER HOSKINS RETURNED FROM JACKSONVILLE YESTERDAY
RULING BY UNITED STATES AGRICULTURAL DEPARTMENT
May Presage a Great Pacific Naval Battle - General Von Hindenberg Promoted
He Has Been in Consultation With Organizations - Results Shown in Following Article
Large Crowd to Drive Over the New Pavement Today - Nearly One Mile Around
To Stop Shipment of Immature Fruit - Regulation May Be Made Yet More Stringent
ARMY ELEVEN BEATS
LITTLE MARVIN WILDER DIED YESTERDAY AT
Paris, Nov. 28. - Official and unofficial reports today from the battle fronts in France and Belgium told of numerous thrilling battles in the air.
Orlando, Fla., Nov. 28, 1914.
Lucerne Circle was bubbling with
Washington, Nov. 28. - Defining the
I Announcement is made by the headquarters life yesterday for the new pavementwas minimum of sweetness that oranges Four remarkable exploits by flyers I of the Florida Growers' and thrown open to the public for must attain on the tree, if later NAVY 20 TO 0 i HOSPITAL were recorded in today's dispatches.In i Shippers' League here of the successful sweating is not to be held to conceal the vicinity of Amiens a German i accomplishment of several important the first time. The spinning, careen- inferiority, the department of agriculture FUNERAL OF LITTLE BOY TODAY flying corps swooped down upon the I cases during the week, as well as ing motor cars with the wide smilesof today announced it "considers MANY NATIONAL FIGURES I French aerodrome of a French corps I divulging information pertaining to the riders testified to the pride of oranges to be immature if the juice SAW NAVY TRAMPLED '. November 18, dropping a number of other problems which are now under all citizens in the latest Orlando improvement does not contain soluble solids to, or UPON Wilder Shot From Hehind- bombs, but the French aviators took I active investigation. Traffic Manager in excess of eight parts to every part Sits in Jail Lynn County r to the air and after a short but fierce I I Hoskins has just returned from of acid contained in the juice, the battle irj the air the Germans with- Jacksonville, where he has been in The mayor, other city officials anda acidity to be calculated as citris acid Games Yesterday Closed 1911 Repenting of His ;' drew towards the German lines. I consultation with several of the prom- large number of property owners without water: of crystalization." Football Season-Results I Rash Act Three German biplanes this after- inent shipping organizations in our rode over the new pavement yesterday This decision was made to preventthe From Other Gridiron noon descended over the French line state, in reference to numerous trans- afternoon. 
However, it is pre- interstate shipment of immature Contests FACTS IX WILDER TRAGEDY in front of Ypres... The French artil- portation subjects in which every dicted that today will witness a great citrus fruit colored by sweaing or exposure TilE VERDICTMURDER IX lery destroyed one machine and grower and shipper in Florida is crowd on the drive. in warm, moist air to an extent Philadelphia, Nov. .-The un- TilE SECOND DEGREE." forced the other two to flee. I more or. less interested. The length of the pavement is that will conceal its inferiority.The beaten army eleven gave the navy -- - Numerous houses in Dunkirk were Western Canadian Rates seven-eighths of a mile. regulation may be made more Definition of "second degree," from team 20 to 0 here this af- damaged by the bombs dropped from The result of the Jacksonville con- With the paving of the Circle the stringent after further investigation I a beating the statutes: the German airship, according to official ference was that satisfactory ar- last link in the city paving betweenthe the department announced. ternoon while 22,000 persons lookedon. "When perpetrated by any act imminently - A news from the cosat city today.. rangements were made with the flor- north and south city limits was It was a walk-away for the dangerous to another and One man was killed. ida railroads by which through ratesto completed. It is now possible to en- GENERAL FUXSTOX GIVEN TWOMONrIIS' army. The navy never had a chance. 
evincing depraved mind regardlessof The British aviators took their turn the entire western portion of Can- ter the city on the north and drive to LEAVE ABSENCE It's line was weak and its puntingwas human life, although without any in bomb dropping, flying over Ghent I(I adc hich markets were just recently the city limits on the south without bad the forward passing mis- premeditated design to effect the where the field headquarters of the de-r'oped by Florida, will be restored leaving the pavement, a distance of Washington, Nov. 28.-Major Gen- erable, and the backs couldn't gainto death of any particular individual, it German army have been ivfstab- on December 5th. In addition 3.2 miles. eral Funston who arrived at Galveston any extent through the strong shall be murder in the second degree. .,. lished three persons being injuredbut there will be included a number of The pavement its i ag reat testimonyto with the main expeditionary force army line. The army team, on the AND SHALL BE PUNISHED BY no great damage being done.A .-, promir. .. points in that section of the spirit of the present administration that occupied Vera Cruz will have other hand, was strong in every de- IMPRISONMENT IX TilE STATE million and a quarter British Reinforcements {'.Canada to which Florida has never the city engineer, the other two months' leave of absence after partment and it seemed to the spec- PRISON FOR LIFE." have arrived in the last enjoyed the privilege of through pub- officials and the property owners. December 1. tators after the second period began -- - few days at Havre and other French I lished rates, while California has always that the army gang didn't overly exert Coroner's jury: Judge WHliaQi Martin - had them. The rates to western itself to do more scoring. coast points and will be distributed any presiding; J. H."ick., John W. ;along the front immediately. i. 
Canada were eliminated from TEMPLE DEGREE RITEBig LIVE QUAIL ON PILOTColored Many notable national figures occu- Pounds, Paul Sanborn William Bum- The artillery fighting is reported Florida Orange-Pineapple Tariff No. pied box seats and showed much en- Ly, Henry Bartlett, and J. W. White. along the whole front in Belgium. 4, issued November 2, effective De- thusiasm. - She only offensive movement of note cember 5, 1914, by reason of a con- Meeting by Knights Templar Porter at A. C. L. Picks Witnesses: Dr. C. I). Christ; J. A. .- was! a fresh attack by the Germanson troversy between the Canadian and to Be Held Here Bird Off Pilot of Train Coljer, colored Henry Thomas colored - the position of the allies south of the Florida railroads over the termsin r.10NEY--OF? COURSE NOTHow December 3Initiationand No. 92-No Nature Mrs. L. S. Villeneure, and oth- Ypres and this-was repulsed by the I which the rates were to be pub- ers. French.' lished-whether by the box or per Banquet Fake . ..k.. 100 pounds. The League gained I Coud! a Delivery Boy Have - Thursday, December 3, at 3 o'clock John Ross, the faithful colored The slayer: Eric Lynn. 19 years Fmir Warships Off Chilean Coast knowledge of the affair three weeks Mono ?-All Over in a old. Father dt'ad.Iotht'r, Mrs. C. R. and since that time has strivento Olivet Commandery, No. 4, Knights porter down at the Atlantic Coast ago uique, Chile, Nov. 28.-Four warships Minute-Oh, You Agen, lives at 312 West Church I nationally unknown, have been bring about an amicable under- Templar will assemble at the asylumto Line passenger depot, wore a pro- Heart! street. Boy is said to have money in w sighted off the Chilean coast, steam- standing between the railroads so escort the Right Eminent Grand verbial smile yesterday afternoon his own name, left by his father. that Florida's interests would not be ing north. Commander Galen B. 
Seaman of that was truly characteristic of his Time-After dark, last night This amount is said to be in property in jeopardy; the result is that we will Daytona, from the train to his apart- race. When train No. 92 arrived Place-Gore avenue. amounting to about $4.000, held in have more rates to Canada than be- GMU Von Hindenberg Promoted I ments.A from the south John was seen to Characters-Marcus Boitel, deliv- trust by his mother until he comes fore. j 1 London, Nov. 28.-Gen. Von Hindenberg I Eastern Canadian Rates banquet will be served by the make a dive for the pilot of the en- ery boy for Dickson-Ives; two fear- of age. J has been promoted by the j I 1' The League also calls attention to officers' wives at 6 p.'m., after which gine and step back on the platform some "bums"Bums"Stop - Kaiser to the post of field marshal of the rates shown in the new Orange the Temple degree will be conferred |I at the station the proud possesor ofa !" The victim: Marvin Wilder, eight the German forces operating against Tariff to the eastern section of Canada upon William S. Branch, Jr. I live quail. It is supposed that the (Process of putting on brakes; rap- years old on day of his death lien *f the Rusians. This announcement which, however, were not involved A committee of L. L. Payne C. P.I train ran into a bunch of these much id heart beats; strangling feeling.) Mr. and Mrs. R. L. Wilder. His father ? was taken tonight with statements in the case above recited. An Dow, W. R. O'Neal, Jos. Dawson and ly hunted birds and one more fright Bums-"Have you any money?" is a carpenter and resides about from both Petrograd and Berlin that I examination of the new tariff reveals Geo. McCulloch have charge of the ened than the rest found a supposedly Boi"N-o-o-o-o-o-o-h.." one-half mile west of Lake Holly. He the great crucial battle in Poland I arrangements.The safe retreat on the "cow catcher: Bums-"Beat it." 
... is employed at the Orlando Novelty Works plant.

(Process of fast riding: speed limit ... broken back to the store.)

Look out for the hungry hoboes.

... a number of reductions, as well as a few slight advances, to these eastern Canadian markets. The most beneficial changes, however, are by reason of an additional route. Shipments to practically all eastern Canadian markets were confined to the route via Potomac Yards, while under the new arrangement, on and after December 5, a lot of such points may be routed via Cincinnati at the same rates. The advantage of these arrangements should be apparent to all shippers, in that shipments may be diverted from Cincinnati to these eastern Canadian points, as well as to points in the United States adjacent to Cincinnati to which rates are applicable thereby. This is gratifying to our shippers, as heretofore they have had to pay additional charges in order to place cars in eastern Canada on a basis of the through rate applying via Potomac Yards.

Diverting Charge at Potomac Yards

It is not generally known that the Pennsylvania Railroad company recently filed a tariff with the Interstate Commerce Commission providing a charge of $2 per car for diverting fruits and vegetables from Potomac Yards to other markets. The League became aware of this proposed penalty, and immediately handled the matter with the traffic departments of the Florida Citrus Exchange, Chase & Company, C. Schrader Company and the ... Growers' Company, which concerns are affiliated with the League in all matters except marketing. The League filed a ... with the Interstate Commerce Commission. (Continued on page 4)

THE GERMAN RALLY IN THE EAST

The battle continues without definite result, but the German army has rallied, and from what appeared to be an ignominious retreat and utter rout, the resourcefulness of Gen. von Hindenburg has made it possible for the Kaiser's legions to reform and enter upon a new stage in the campaign. Gen. von Hindenburg, on assuming the higher rank, issued an order to his troops in which he said that the Germans had brought to a standstill the offensive of numerically superior Russian forces.

Grand Duke Nicholas, summing up the situation, says: "On the entire front between the Vistula and the Warthe rivers the battle is progressing in favor of us." Victories at certain points are claimed by dispatches from the grand duke to the ministry of war at Petrograd, but according to an announcement from the Russian capital, no claim to an overwhelming defeat has been made officially. Progress in Galicia, from thirty to sixty miles south of Cracow, was reported in an official statement issued during the day from Petrograd. Regarding the fighting at Lodz, which both German and Russian staffs admit has been resumed with great vigor, the Russian official statement merely says that the Czar's troops have succeeded in making progress at certain points.

AUSTRIANS REPULSED

London, Nov. 28. An unofficial report from Cettinje says that during the last four days Austrian forces numbering 10,000 desperately attacked the Montenegrins defending the passes at ... The attacks were repulsed with heavy losses, and the Austrians eventually retreated.

NYE BUYS HOUSE OF DANN
Former New Yorker Has Purchased Pretty Home; Lot in Colonial Hills Addition

H. Nye, of New York, formerly of Bradentown but now a permanent citizen of Orlando, has purchased a pretty home and lot of Carl Dann in Colonial Hills, Orlando's latest and newest addition. Mr. Nye has selected one of the prettiest sites in the city, and he is welcomed back to Orlando with open arms.

GILES CONDUCTS CASE
Bankruptcy Case at Winter Garden Promises to Be an Interesting One

At Winter Garden yesterday morning a special hearing occurred in the J. N. Sewell bankruptcy case, which, before it ends, promises to be an interesting if not a sensational one. J. N. Sewell, of Winter Garden, formerly lived at Atlanta, Ga. The creditors at Atlanta are contesting his discharge from bankruptcy and are represented by Moore & Pomeroy, of Atlanta. Sewell is represented by Dickinson & Dickinson, of Orlando. The hearing is being conducted by LeRoy B. Giles, of Davis & Giles, who has been appointed special examiner by the United States district court, northern district of Georgia. The testimony of twelve witnesses was taken yesterday morning.

FUNERAL OF LITTLE MARVIN WILDER

Funeral: This afternoon at 1:30 o'clock at the residence on West Church street, Rev. Wray of the Methodist church officiating. Burial will be in the Orlando cemetery. All Knights Templar are invited, but must notify the recorder, W. R. O'Neal, in advance. The following ladies will serve: Mesdames T. P. Warlow, S. A. Johnson, Bumby, Tilden, W. ..., A. W. ..., Chas. ..., R. O'Neal, C. P. ..., Carl ..., Dow, Jansen, Chas. Jordan, DeWitt Miller and Wm. Edwards.

Little Marvin Wilder, who died on his eighth birthday yesterday morning at the Church Home and Hospital following a terrible gunshot wound inflicted Friday afternoon from the Winchester shotgun of Eric Lynn, will be buried this afternoon in the Orlando cemetery, following the services which will be held from the home on West Church street at 1:30 o'clock, the Rev. Dr. J. E. Wray, pastor of the Methodist church, officiating. And while the services are being conducted over the remains of the lovable little boy who was innocently killed, Eric Lynn, the slayer, sits in the county jail with the coroner's verdict rendered yesterday ringing in his ears, "Murder in the second degree," realizing that for a hasty act he will be punished.

At the coroner's investigation yesterday afternoon several witnesses were examined. The evidence was sufficient to cause the jury to render the verdict of murder in the second degree. It was brought out at the hearing that little Marvin Wilder was shot through the back, Dr. Christ stating that the indications were of such nature. Great interest was manifested in the proceedings, and it is thought the funeral services today will be largely attended. The killing was the most tragic that Orlando has witnessed for a long time. (Continued on page 5)

AMBASSADOR HERRICK LEAVES FOR UNITED STATES

Paris, Nov. 28. Myron T. Herrick, the retiring American ambassador to France, accompanied by Mrs. Herrick and the members of his family, left Paris for Havre this morning, where they will take the steamer Rochambeau for New York.

THE WEATHER

Florida: Rain Sunday; Monday fair.

MILITARY HONOR IS CONFERRED UPON GEN. JOFFRE

Paris, Nov. 28. President Poincare has conferred on General Joffre the Medaille Militaire, the highest honor that can be conferred on a French soldier. The presentation was made in the presence of Premier Viviani, Minister of War Millerand, the presidents of the senate and members of the general staff. In making the presentation, President Poincare said the simple medal, which was the emblem of the highest military virtues and which was worn with equal pride by illustrious generals and humble soldiers, was a mark of the nation's gratitude to its commander-in-chief. Poincare added that he associated with him in his congratulations Gen. Joffre's devoted collaborators of the general staff and the armies of France.

The Medaille Militaire was established in 1852 exclusively for non-commissioned officers and men of the army and navy. It is only awarded to a general or an admiral for valorous work after such officers have already attained the highest rank in the legion of honor. It is considered the greatest honor that can be conferred on a general or admiral.

N. Y. STOCK EXCHANGE REOPENS

New York, Nov. 28. The New York stock exchange resumed operations in a tentative way today for the first time since July 30 last, when the foreign situation caused the cessation of business of all the leading financial markets of the world. Trading was restricted to bonds, for which, in all instances, minimum prices were established.

GOLD TO CARRY ON WAR
GERMANY MAKING PLANS TO RAISE OVER A BILLION MORE MONEY

Berlin, via The Hague and London, Nov. 27. The reichstag has received a draft of the second supplementary imperial budget for the year 1914. This empowers the imperial chancellor, for the purpose of meeting the extraordinary expenses, again to raise $1,250,000,000 in the form of credit. Furthermore, the chancellor is empowered to issue treasury notes up to $100,000,000 above the amount prescribed by the budget for the temporary strengthening of the ordinary working capital of the imperial treasury. The Austro-Hungarian war loan up to the present has reached a total of $450,000,000.

"SAVE QUAIL," SAYS BARNUM
Orange County Citizen Writes to The Sentinel Urging That Quails Be Saved

Have you heard of the Western farmer whose crops were being destroyed by the chinch bug, and who went out one afternoon and shot a dozen quail on his own field, only to be informed by a United States official who, after examining the stomach of one of the quail later in the day, reported that his analysis showed that that quail had the remnants of 1,200 chinch bugs in its stomach? You may imagine the consternation of that farmer to know that a female chinch bug lays five hundred eggs each month during the summer.

It has been shown again and again that birds are the greatest factor of all in checking insect pests, and that without birds to destroy them, the globe would soon become uninhabitable, for all vegetation would soon be destroyed. Knowing that birds are so valuable to us, why do we allow their slaughter? We spend thousands upon thousands of dollars in trying to develop agriculture, and then allow hunters to go out and kill the very things that do most for the farmer. It is to be hoped that every Florida farmer will post his land with large signs reading, "No hunting allowed on these premises." L. N. BARNUM.

STREETCARS FOR ORLANDO
"Ding, Ding," "Pine Street, Sentinel Office in Arcade Building, Watch Your Step"

J. V. McGrew writes to The Sentinel proposing that a street car line be installed in Orlando. It is by no means a dream, and some day it will be "Ding, ding," "Pine street, Sentinel office in the Arcade building," or words to that effect. The letter follows:

"Mr. Editor: As the columns of your paper are open for discussion of public improvements, I would suggest that we boost for a street car line, as it would be a boon to the citizens, and a paying proposition for the company. They could run out Orange avenue north to the ice factory, and east on Central avenue around Lake Eola, and south around Lake Lucerne, and west on Church street to Bartlett's gardens. And a little later they could take in Winter Park and Maitland. I believe that the electric light plant could furnish the power by installing another dynamo. Daytona is not much larger than Orlando, and they have street cars. It gives the town a more businesslike appearance. They are bound to come poco itempo. J. V. McGrew."

GUNS TO KILL WITH
Schwab Gets Large Order From England; Large Naval Loss Sustained by British

To Help Kill Them

South Bethlehem, Pa., Nov. 27. It was learned today from a good but unofficial source that part of the contract for war material which Charles M. Schwab obtained on his recent visit to England, at the request of Lord Kitchener, calls for 200 field guns, 600 caissons and 800 limbers. This is in addition to the large submarine contract, which will be partly fulfilled at the Bethlehem steel plant and at the Fore River shipbuilding plant.

EAST COAST PRESS ORGANIZED

The editors of the East Coast papers met in Fort Pierce Monday and organized the Florida East Coast Press Association. The following officers were elected: President, A. K. Wilson, of the Fort Pierce Tribune; vice-president, F. B. Shutts, of the Miami Herald; secretary, C. S. Emerson, of the Fort Pierce News; treasurer, J. J. Birch, of the New Smyrna Breeze. The executive committee will consist of the following: C. J. Johnson, Delray Progress; Harry L. Brown, St. Augustine Record-Herald; and Arthur Green, of Jacksonville. The next meeting of the association will be held on December 16, and an invitation has been extended to all the newspaper men of the East Coast.

BELGIAN WOODEN SHOE HOLDS $25 RELIEF FUND
Baptist Clergyman Uses One He Bought to Take Up Collection for War Sufferers

Appleton, Wis. While passing through the streets of a Belgian town four years ago, Dr. W. J. Pearce was stopped by a nine-year-old girl who tried to sell him postcards. Instead he offered to purchase her wooden shoes, and she accepted the offer. He made a pin cushion out of one of the clogs, and yesterday displayed its mate in the pulpit of the Baptist church. He spoke of the sufferings of the Belgian women and children, and following his sermon he stood at the door with the shoe in hand, receiving offerings for relief of the women and children of that country. Twenty-five dollars was collected. One traveling salesman contributed $3.00.

We do not think we have yet been guilty of advising you how many days there are until Christmas, but in the hope that we can't be convicted without a chance to offer an alibi, we are going to say there are just twenty-four shopping days until the glad event.

Magazines, daily papers, music, artists' materials and Christmas novelties. Curtis & O'Neal, next to postoffice.

BEST SELECT OYSTERS, 50c. quart. The Waldecker Fish Co., for fish and oysters. Phone No. 5. 206 Boone Street.

You ought to see our calendars: the beautiful, much advertised "..." line, in ... form and with handsome covers and decorations. Longfellow Calendars, Stevenson Calendars, Kipling Calendars, Household Calendars, Bed Time Stories Calendars, Calendar of Dinners, Calendar of Salads, Calendar of Luncheons, Calendar of Golden Thoughts, etc. Only 50c. each, and nothing can be a ... Christmas gift at a moderate price than these. You get the cream of the lot if you make ... W. S. BRANCH, The Arcade Book and Music Store.

Dr. F. F. Thompson, DENTIST. Will open an office at 115 S. Orange Ave., over Curtis & O'Neal's store. 11-6-tf

Wood, $1.50 a strand up. Hand's Coal Yard, phone 566. 9-31-tf

FINE ICE CREAMS AND ICES. Telephone 664 and we will deliver your ICE CREAM, for Thanksgiving or Sunday dinner. New York ice cream my specialty; Philadelphia and fancy ice creams, wholesale and retail, bricks or ... Neapolitan creams always in stock. ... HOEFLE... Residence 205 Liberty street, corner Mariposa. Factory around the corner. Prices right.

Get the news in The Sentinel first.
THE FLORIDA REAL ESTATE AND INVESTMENT COMPANY

The President of this Company left Saturday for SEBRING and will be at his ... Wednesday, November 25th, to transact his business. We again invite property owners to list with us their propositions, and are confident, if they give us an opportunity to explain how we guard their interests, to get their business.

INVESTORS

To all who intend to invest we would like to be consulted, not only as real estate ... but also as your agents, after you have established the fact that our firm is absolutely financially responsible. We have connected with us a party fully acquainted with ... an authority in agricultural investments, and ... for the last fifteen years, which should be of some benefit to you.

Phone 570. Over Peoples ..., Orlando. Incorporated. $23,000.00 paid up ... Surplus $...,000.00.

THE BIG STORE

Again we surpass all previous value-giving and selling records with our unprecedented sale of Ladies' Winter Suits and Wraps.

REDINGOTE MODEL SUITS

You are taking no chances in buying at THE BIG STORE, for we have carefully gone over the leading designers' stocks, and as a result you have the most approved styles for the coming fall and winter. The materials are broadcloths, serges, wool poplin, gabardine, diagonals and novelty effects, in all the wanted shades.

$15.00 models specially priced at $12.75
$20.00 and $22.50 models specially priced at $17.75
$25.00 models specially priced at $19.75
$30.00 models specially priced at $24.75
$35.00 models specially priced at $28.75
$40.00 models specially priced at $32.50
$50.00 models specially priced at $39.75

COATS AND WRAPS

All ultra smart, the coats with the long military lines being extremely popular. The most exclusive models, the best values, the largest and most comprehensive exposition it has been our pleasure to present at the commencement of the season; the widest choice and the most fascinating styles in coats and wraps for the particular woman as yet offered at this popular woman's store.

$15.00 Coats, special for this sale, $12.75
$20.00 Coats, special for this sale, $17.75
$25.00 Coats, special for this sale, $19.75
$30.00 Coats, special for this sale, $24.75
$35.00 Coats, special for this sale, $28.75
$... Coats, special for this sale, $32.50
$50.00 Coats, special for this sale, $39.75

NEMO SELF-REDUCING CORSETS

Every model has some particular function which it perfectly performs. All give healthful physical support. They have the hearty indorsement of practically all physicians the world over. Some of the Self-Reducing models not only control the surplus flesh but, by a sort of automatic massage, actually drive away the fat, so that the figure is permanently smaller. Nemo corsets are priced from $2.00 to $10.00. We also carry other well known makes, which provide graceful, natural figure lines. "As fits the corset, so fits the gown." The Modart corsets, priced from $3.50 to $8.00. LaVictoria corset, priced from $3.50 to $9.00. Redfern corset, priced from $3.00 to $6.00. Thompson's Glove Fitting corsets, priced from $1.00 to $3.00. W. B. corsets, priced from $1.00 to $3.00.

MEN'S AND YOUNG MEN'S CLOTHING

Clothing from the House of Kuppenheimer. We are showing one of the best and most complete lines of clothing that has ever been shown in Orlando, at prices that are right. Suits, overcoats and raincoats, priced from $15 to $25. THE HOME OF BETTER CLOTHES. Let us show you. Hats, caps and furnishings: all the latest styles and colors in hats and caps; our men's furnishing and clothing department is complete. Men's smoking jackets, a beautiful line for the Xmas trade, $4.00 to $12.00. Men's bath robes, made of terry cloth, and the blanket robes, an exceptionally large line to choose from, $4.00 to $10.00.

LITTLE MEN'S AND CHILDREN'S SHIRTS AND BLOUSES, KAYNEE BRAND

Department, second floor. Each day brings something new. We are making an effort to give the best merchandise obtainable, consistent with good workmanship and the styles of the season, and always at moderate prices. We want you mothers to see the materials we show in our 50 cent boys' shirts and blouses (colors guaranteed). Other styles priced up to $1.00.

BOYS' SUITS

See the new suits we are showing, in blue, tan and fancy effects, in plain and Norfolk styles, priced at $3.50, $5.00 and up to $12.00.

BOYS' HATS

You will not find anywhere else the styles we show in hats for boys and children, some beautiful effects. 50c., 75c., $1.00 up to $3.00.

OUR SHOE DEPARTMENT IS COMPLETE

Everything new that's good in footwear, for men, women and children. THE FAMOUS MEN'S SHOES, $3.00 TO $10.00: just received our fall and winter stock of men's high shoes, in all the best styles and all leathers. Nothing better in men's footwear than the well-known Edwin Clapp and Florsheim shoes; these are our leaders. MISSES' AND CHILDREN'S SHOES: the little folks have not been forgotten; we have a full line of shoes for all, at prices to suit everyone. Special attention given to the children. LADIES' SHOES, $3.00 TO $6.00: we are showing the most complete line of ladies' fine shoes ever shown in Orlando, all the most up-to-date styles in all leathers, with silk brocade, cloth and kid tops. The new one-strap ... ALL NEW HEELS.

FALL AND WINTER GLOVES

Our stock is perfect, in spite of war conditions. No advance in prices. Niagara Maid double-tipped pure silk gloves, guarantee ticket in every pair, 50c., $1.00, $1.25 and $1.50. Ladies' kid gloves, two clasps, in black, tan, navy, gray and white, $1.25 and $1.50.

LADIES' HOSIERY

Ladies' hose, the best that money can buy for the price, in black, tans, white and colors, 25c. the pair. Ladies' hose, Onyx brand, in pure silk and silk lisle, black, tans, white and fancy colors, 35c., 50c., $1.00, $1.50 and $2.00. Silk stockings, Niagara Maid, in black, white and fancy colors, $1.50.

HANDSOME LEATHER NOVELTIES

Party cases, an unrivaled selection in crepe seal, morocco, moire and frogskin, lined with plain moire and Dresden silks, gilt and nickel fittings, $2.00 to $30.00. Leather bags in black, beautifully lined and fitted, $1.50 to $10.00.

The Yowell-Duckworth Co., Orlando's Largest Store. "Quality Did It."
A TALK WITH YOU

An article to and for the interest of every man, woman and child who is fortunate enough to live in this city and favors progressive and foremost business principles. The following, in a sense, is an advertisement, but may truly be classed as a "heart-to-heart talk" with those mentioned above. We desire to place before you a few facts, and trust that you will consider well our efforts in this matter. Frankly, we want YOU to know exactly what we are trying to do for YOU, and ask that you will regard this, as previously mentioned, as "a heart-to-heart talk."

During the time the present management has had control, we have left nothing undone in our efforts to strike within your desires; yet we find there are still some who are not well enough acquainted with our methods, hence we are using this means to reach the entire populace, and will ask you to entertain this seriously and in the same light that it is meant. We appreciate the fact that certain business transactions in the past proved to be unsatisfactory to you, but we want to impress upon you that the present management has thus far been clear of all such transactions, and is leaving nothing undone in an effort to cast off this prejudice and restore the proper and due relationship.

We want you to feel as though you are a part of our little theatre, and that we are here to serve you, your wants and desires. We want your ideas and suggestions; in fact, we urge you to tell us and cooperate with us, and in this way we feel confident that we are bound to please.

THE GRAND THEATRE

Has a reputation throughout the entire South as being one of the most perfect and attractive theatres in the country. However, regardless of this fact, the present management was not satisfied, and spent hundreds of dollars in order that it might be placed on a par with any theatre in the larger cities. In following up this statement, we might add that the GRAND THEATRE was the ... one south of the Mason-Dixon line to be equipped with the new motor generator, and was a close second in the purchasing of the new motion picture machine; all of which proves that we are doing our part to please you and uphold the standard of progressive Orlando.

Following is an itemized statement of expenditures for the new equipment installed by the present management:

One motor driven Simplex machine, $355.00
One motor generator, $300.00
Special machine equipments, $30.00
National Cash Register ticket machine, $210.00
Total expenditures for equipment, $895.00

The above statement does not include the amount spent for decorations and other improvements PURCHASED AND MADE FOR YOUR COMFORT AND PLEASURE. We might add that NONE OF THESE PURCHASES WERE ABSOLUTELY NECESSARY, but we went into this proposition with a feeling that NOTHING WAS TOO GOOD FOR ORLANDO AND ITS PEOPLE, and we mean to retain this spirit, knowing that YOU WILL APPRECIATE OUR EFFORTS.

We have been complimented time upon time on our perfect projection and clear pictures. May we add that a great many of these compliments came from exhibitors from other sections. We have employed the best people to operate their respective departments, and are PAYING THE HIGHEST SALARIES of any theatre in the city, hence the superior results. We are the only people in the city to engage an orchestra to assist in making your money bring more ... and entertainment at The Grand than any other local playhouse.

THE ORCHESTRA has pleased thousands, and we are quoting many when we say "THE MUSIC AT THE GRAND IS WORTH TWICE THE PRICE OF ADMISSION."

Last, but not least, comes the Special Feature Proposition. During a recent visit to Atlanta by the management, in behalf of this department, contracts were made for the showing of the BEST PRODUCTIONS IN THE MOVING PICTURE WORLD, featuring the greatest and most famous actors and actresses on the stage today, namely: MARY PICKFORD, MARGUERITE CLARK, BERTHA KALICH, ADELE FARRINGTON, DUSTIN FARNUM, GABY DESLYS, ROBERT EDESON, HOBART BOSWORTH, EDWARD ABELES, ANNETTE KELLERMANN.

Our regular feature days go into effect on TUESDAY, and will continue throughout the entire season on every Tuesday and Friday. The first production will be "SUCH A LITTLE QUEEN," with MARY PICKFORD playing the leading role.

We are frequently asked, "Why do you advance your prices on these days?" In reply to this we will state that our reason is this: the receipts are not greater on these days than they are on the ... for our special features, and furthermore ... of such productions by exhibiting them at the usual prices ... and this is the only way in which we can give it to you.

NOTICE: Owing to the fact that Tuesday and Friday have always been the recognized feature days, we are compelled to change the day for the showing of Pauline from Tuesday to Monday. Tomorrow is the day!

"PERILS OF PAULINE," Monday. The 18th episode of this great serial, and the greatest one thus far. As there are ONLY THREE MORE INSTALLMENTS REMAINING, you should not fail to be present. See the interior of a submarine while in action, and see Pauline passed through the "tube." All in this installment.
REMEMBER: OUR FEATURE DAYS WILL BE EVERY TUESDAY AND FRIDAY, with the admission price advanced to 10c. and 20c. Also, this is not a scheme concocted to MAKE MONEY, as we are doubling our expenses; it is the one way in which we can give you the best. THINK THIS OVER AND BE WITH US. Are You Willing?

SUNDAY SCHOOL WORK
NOTABLE PAPER READ BEFORE W. M. S. OF PRESBYTERIAN CHURCH
Paper by Hetta F. Douglas Is Interesting, Elevating and Highly Instructive

Probably all of us believe that there is one God, but three manifestations of God: Father, Son and Holy Spirit. Some people do not believe in the doctrine of the Trinity because they can not understand it. I do not understand it, but I believe in it, because the Bible teaches it, and because there are so many things that I know are facts which I do not understand. You and I are trinities, born in the image of God. I do not understand it, but we have a body, we have a mind, we are a soul: one personality, but three manifestations of yourself.

A perfectly developed person is one whose physical, mental and spiritual powers have all been trained, educated, cultured.

Fifty years ago, when I was a girl, very little was said about the physical training of children. Physical culture was hardly thought of, but nevertheless we received a physical training. Each child had its task, its work to do, not with the thought of building up the body, but for the building of character. We were taught that we must do our share of the world's work, that we must work that we might play; and that made up in some measure for the lack of physical culture.

Times have changed. Now they begin with the kindergarten and go on to the gymnasium, the manual and industrial departments in public schools, and the high school polytechnic. Every college and university now has its athletics. I read in one of our papers that there was a new law at nearby Rollins college: every student must belong to some department of athletics. Now to be a leader in athletics is greater than to be a valedictorian.

Fifty years ago the spiritual education of the child was not altogether neglected in the public schools. In the morning we read a chapter from the Bible, usually each child reading a verse, the teacher often giving enlightening comments. The teacher prayed, closing with the Lord's Prayer, in which the children joined. Then we sang a hymn. The child's thoughts had been turned up toward God. Now it is possible in many states for a child to go from kindergarten through the high school, and through the university or technical school of his state, and to find that a view of life has been trained into his heart and mind in which God is never named by those who are officially recognized as the state's educators of its youth. ...hood. Sacred, holy, blessed memories: the Bible and its teachings shut out of our public schools! What a tremendous responsibility rests upon the Sunday schools.

We are all agreed that the public school system should seek to train what are called good citizens. We know that public schools have, and must continue to have, a tremendous influence in the shaping of character. And we agree that public school teachers, in the spirit of their work and in their moral instructions, are to aim at building up the character of their pupils, that they may become good men and women. At the same time, in many states of this republic, it is insisted that this shall be done without laying any religious basis for morality, without appeal to the name of God, or without any teaching of responsibility to Him. It is a difficult task that our country is undertaking. We call ourselves a Christian nation. And yet, because of the bugbear of sectarian instruction which Roman Catholics and Jews and infidels have set up before our school boards, the Bible has been driven from our public schools, and the state can take no direct responsibilities in religious instruction. The Ten Commandments and the Sermon on the Mount are no more sectarian than the multiplication table or geography. If a change does not take place, we shall have a generation of infidels.

Now, my friends, I say this with some reservations, with some beautiful exceptions: generally speaking, religion is not taught in the home now. Whatever religious instruction the children and youth of our land receive, instead of coming from many sources, as in the past, must now come largely from the one source: the Sunday school.

But there are ten million boys and five million girls between the ages of four and twenty in these United States who attend no Sunday school, and so have but little, if any, religious instruction. It is estimated that only fifteen per cent of the population of these United States are in the Sunday schools. That presents a large work for every church and Sunday school.

What is the Sunday school doing to meet this awful responsibility? The Sunday school was never so well organized as at the present time. The International Sunday School Association has every state, territory and province in North America, except Alaska and Labrador, carefully organized, and most of the states have their State Sunday School Associations and employ a state secretary who is devoting his full time to the work. All denominations have also secretaries for Sunday school work, and they co-operate with the International Association. Twenty thousand Sunday school conventions were held in this country last year, which were attended by at least three million pupils, who received instruction in new methods, loftier ideals of service for the Master and His children, and a wider horizon of the onsweep of His kingdom.

There is one department in the Sunday school work which is making marvelous progress: the Men's Bible Class movement. At the World's Sunday School Convention held in Washington two or three years ago, more than six thousand men from the men's Bible classes marched in a pouring rain in parade to the nation's capitol. Congress adjourned to view the inspiring sight. If the men of the churches will attend the Sunday school, the boys will. These organized Bible classes for both men and ... such a church for some years in California, and we found that it paid in the increased interest and membership in Sunday school and church.

The women of missionary societies are usually practical women, women who do things. Aside from gifts to Sunday school extension, what can we do? All of us could undertake systematic Bible study. If it is impossible for some of us to attend the Sunday school Sunday mornings, we might join the home department; or we might have a woman's Bible study club, to meet some week day afternoon. Such a club is delightful, and has its influence in home and church and Sunday school. Those of us who are in Sunday school, especially teachers, could give one evening a week to a teachers' meeting, where the interests of the Sunday school are discussed and the lesson for the next Sunday outlined. Such a meeting, led by the superintendent, ought to be both inspiring and helpful. We could, in connection with the missionary societies of other churches here, appoint a committee to meet the school board, and ask them to consider some plan for religious instruction for the children of our public schools. These things would lighten the Sunday school's responsibilities.

From my own personal observations while soliciting members for the home department and for the Sunday school last fall, I should judge that there were very few children in Orlando who were not in some Sunday school. But a large proportion of the children who are in the Sunday school do not attend church. When we have our large, new church, why not bring all the children into the morning service? Jesus said, "Where you gather together in my name, there am I in the midst." What an opportunity for spiritual culture, for the children to meet Jesus Christ and worship Him! It would be one way of introducing them to Jesus, of putting their dear hands into His blessed hand: the strongest, the most loving, the tenderest hand in all the world; the hand that never lets go.

... had been traveling back to the old days when she gathered the children in at night. She had almost reached the foot of life's hill, and was just breathing faintly. Suddenly she opened her eyes and said: "Is it dark, father?" "Yes, Janet, it is dark." "Is it night?" "Yes, dear, it is near midnight." "Oh, are all the children in?" Dear mothers, and grandmothers, and neighbors of some mother's dear ones, are the children all in, safe in the fold of the tender Shepherd?

NOTICE: THE LADIES OF ST. CECILIA'S GUILD will hold a bazaar, afternoon and evening, at the Chapter House, on Wednesday, December 2nd. Supper will be served from six until seven.

One lot 100 by 240. One lot 50 by 225. One lot 50 by 125. North ... avenue. Prices right. You know what Orange avenue is. CARL DANN.

T. F. HOURIHAN, Licensed Plumber and Sanitary Engineer. Estimates furnished; jobbing promptly attended to. Phone 629. 106 Court Street, Orlando, Florida.

Jams and Jellies.

CHAS. VATER, JEWELER, EXPERT WATCH MAKER. I make a specialty of fine watch, clock and jewelry repairing; diamond setting, ring and spectacle repairing. Bring me repairs others have failed to do, and I will repair them satisfactorily or no charge. My prices are reasonable.
women\'are increasing the demand for There was an old, old woman lay JEWELEREast Surely we ought to be a nation of But, fortunately Christian peopleof Bibles and Bible study; and they are dying and her husband, who had giants, perfectly developed phys- this land are being aroused at the doing a vast amount of work along ; Pine Street, at Young's taken the with her long journey sat ically. threatened danger. philanthropic and lines Repair Shop Received a car load of missionary i by her side. Her youngest child had Fifty years ago our fathers did not Three years ago, while I was livingin one class often supporting a missionary ll-4-d&w-lm "Jim Jams," Preserves, Jel- neglect the culture of the mind, if California, the superintendent of in some foreign field. I died twenty years before; but she had lies and }4 r uits. they did overlook the training of the the city schools of Berkeley made A woman in Topeka, Kansas, hasa ---- -- --- - ody. As soon as our Pilgrim Fath- what was then a startling request.lIe class of women numbering 400; ers landed in this wilderness country, asked the school board to agree and they support both a home and and had provided a shelter for their that a certain number of school foreign missionary. When the mothers THE OLD RELIABLEDANN'S The DelicatessenPies dear ones, they builded schoolhousesand hours in each week should be set and the fathers begin the sys- colleges. apart for religious instruction, that tematic study of the Bible, then we Fifty years ago fathers and mothers instruction to be given by a teacher shall the TRANSFERDr. Cakes, Salads and come again to time when worked and saved, not to buy new appointed and paid by the different religion will be taught in the home, Cooked Meats. carpets and fine furniture, but to send churches; that every child in the public - and that will relieve the Sunday B. D. WIENENGA, Prop. their children to college. Now thereis : schools should go to the church school of a part of its burden. 
Eightyper a college at door. The of his choice your very parents' on Friday af- cent. of the present day church state give one's children a univer- ternoon, there to meet a well trained membership came up through the Livery Draying and HaulingWE , Bran BiscuitHome sity education; and, if by chance teacher chosen by that church, and Sunday school. there are some that can not avail there to be taught the religion of the The Sunday schools of this country themselves of that opportunity, thena Bible. The work thus done by the made or we supply have increased, during the last five university extension course comes churches would be credited to the GUARANTEE GOOD SERVICE the at of years, average 200 each you bran to make them. to them; and all one need do is to general course of the child's educa- week; and during the same time the sit and listen while a learned pro- tion in the public school records. r average daily additions to the Amer- fessor discoveries fills your mind full of wonderful Just recently I read that the pro- ican churches from the Sunday AUTO, AUTO BUS AND TRUCK and scientific facts. fessor of literature in the North Da- schools has been 755 persons. Any CARUT1HRS & GORE No one need have an untrained mind kota State University, lamenting the business firm would encourage the these days. We ought to be a nationof tfac that it was impossible to give cause that fed their business as the Country Driving for Eight or Ten Solicited intellectual giants. adequate instruction in the chief Sunday school feeds the church, and New Modern Grocer y Fifty years ago the spiritual cul- source of inspiration for all literaturethe so many churches now have a paid HORSES FOR SALE ture of the child was not neglected in Bible because the universitywas Sunday school superintendent who the home. If the father did not giveit a state institution, at a meetingof devotes his whole time to the inter- 'Phone 257. 
Orlando, Florida Phone 200 quite as much prominence as the he State Educational Asosciation, ests of the Sunday school.It . training of the mind, the mother did. address gave an on Bible Study for I I There was my privilege to belong to were family in which prayers High School and University Students.A . Pine St 6 and 8 East. each child was to have a share. Then committee . was that appointed at ----- -.: the father or the mother talked with ! .convention to draft a syllabus of Bi- God. God was a real person to chil- ble study for which students may receive - Wood Wood Wood dren then, a loving Father watchingover credit. This syllabus was pre- . them. Then the church filled a sented at the of I Ilnmbing meeting the high ]Do not about Excellent Core Pine Wood Sup- large place in the thinking of the school council, composed of all high worry your: people. There were family pews, and school superintendents and principalsin plied in any quantity every child went to church. the state, and by them unanimously - Margaret Sangster wrote of those recommended to the state high ! times: school board for approval. The board, SPECIAL PRICES ON FIVE "In the morn of the holy Sabbath, after careful consideration, voted to and I like in the church to see Sewering STRANDS OR MORE The dear little children .clustered, approve the syllabus, and voted thata Worshiping there with me. half unit credit should be givento ; Faces, earnest and thoughtful, any high school student success- ELDREDGE BARLOW[ Innocent, grave and sweet, fully passing a state examination They look in the congregation, upon j Telephone 375 Like lilies among the wheat; this syllabus.The . And I think that the tender Master, Bible thus becomes an electivein We will do it for and when South Street, near A. C. L. Whose mercies are ever new, North Dakota high schools. But, you you pay you Has a special benedictionFor Tracks dear little heads in the pew." 
so far as I know, North Dakota ready 9-l-lm There was never any question, in stands alone, ahead t>f any other state get our home, whether we wanted to go on this subject. It is not enough to church. We went, as we went to that high school scholars should Real Estate Boughtand school Monday morning. The pleas- study the Bible, it should be taught I 1 r antest memories of my childhood life in every grade. Prof. Phelps of Yale center round the Sunday afternoons s University said to the school board Call Sold. I' on F. JOSEPH RAEHN PLUMBING .and HEATING CO. -j' at home. There was the catechism, in New Haven, Connecticut: "I believe - made interesting to us because each L that the Bible should not only City Building Lots, $100; $1.0Q \' \ of us was required through the week be taught in every public school, but 'j; and $1.00 per week buys a LAJl: d\work out the puzzle of finding a that it should have the first place, RENTS COLLECTED. .V* ,,"in the Bible to prove the doc- and every other sudy should be made Cor. Graves and Irvin Streets 1 Notary Work Promptly and N* about which we were to talk subordinate. That time may not Done. ,/ next Sunday. One child each soon come; but, when it does, our I -.nday repeated a hymn committed public schools will turn out boys and =ORLANDO= N. P. HATCHEi\ .,'ring the week, usually along that girls of more character than are now II IIJ1 ,, 'line. Then there was the singing of graduated from them. ! ORLANDO,FLORIDA ,, hymns to the accompaniment of or- And so these, my friends,: are Office at Pinehurst Park / gan, violin and flute. Then, last and conditions and problems that confront i d& P. O. Box 281 / best, there was our mother's story, a the Sunday schools of our land: No w-i mo.1115I i PHONE 403; 10-8-ltn Bible or missionary story told in the religious instruction in most of theo ! -V- twilight. H mother's can tell st homes, children not expected to attend i Get the news in the Sentinel first* ries. 
It gift God gives to moth- public worship in the churches, : , i J Jl9FX x. 1) t Ji .. '1- i......._ > ),,...,.. .........:.. i .... ,...-.--". -, - ....,. "' . . 'ff.' ea. - ) } i . -- - -- -- THE DAILY FLORIDA GROWERS' AND SHIPPERS 1 I SEVERE GULF STORM ! South Florida SentinelMORNING LEAGUE TROUBLE ATJAMPICO I Washington, Nov. 28.-The gulf AND WEEKLYW. (Continued from page 1)) storm still remained today in the vicinity ! M. GLENN .....................Editor Disturbances Are Reported to '' W. C. ESSINGTOV.. .Biiunru Manager Commerce Commission and pointedout of New Orleans. It has caused J] 3 --Entered--at the postoffie at Orlando.---Flor-- the unjustness of a diverting Washington From the 'I heavy rains and easterly gales along ! ida. M Second Class Mail Matter. charge in addition to the transportation Oil Regions in the coast particularly at Pensacola, i iI SUBSCRIPTION RATES rates. Other action was taken Mexico I where the wind attained a speed of .. ; 4 Us Daily.+ly. 6 1 Months Year .. $5.00 2.50 unnecessary to mention here and I 60 miles an hour from the southeast. t tOUR Daily. 3 Months 1.25 word has just been received that the Washington Nov. 28.Trouble Except in the South and in the Daily. 1 Month ..50L . Commission has granted permision to broke afresh in coast states the weather has been { t I Year .................... .. $1.00 I I again out the oil re- SHOE xkfr. 6 Months ......:...;:...-...... .75 the Pennsylvania Railroad to with- gions at Tampico where vast posses- fair. ALL SUBSCRIPTIONS PAYABLE IN draw the charge on less than statutory -I sions of Xetherland, Great Britain, Storm warnings are displayed on ADVANCETO notice which means that it will i and Germany are located according the gulf coast from New Orleans to DEPARTMENTS ADVERTISERS: not go into effect on December 5. I to news received today by the state I I Apalachicola.TOO . 
Copy for display and all other htnd-Mt Reconsignment of Perishable Freight department from Vice-Consul Bea- advertiunc must be handco in at the buuneu Where Indirect ServiceIs I veane, at Tampico who locates the LATE TO CLASSIFY office .f the Sentinel not later than 2 p. m. the day fcvfore publication. This rule applies Involved 'I trouble at Panuco, which is up the ! .. chanc for contract. adrertiiunc: it. is This is another matter of general river of that name, beyond Tampico, ; "TED-Store., or part of store; Invariable and will be strictly adhered to.S must be good location. AddressK. interest to Florida which the Leaguehas I and the indicated disturbances there TI;;; 3 rinK The Sentinel discontinued ., care Sentinel. 11-29-lt should notify the office in writing on been working on for several were caused by bandits and other ma- the date of at expiration the regular otherwise subscription it will rates.beeontiwed months and now that all arrange- rauders. FOR SALE-Barred Rock Roosters. until notice to stop is received. ments have been perfected with the Fine birds for breeding. Pure I Telephone Numb . ... I breed. W. II. Hazlett 614 North . various southeastern railroads an- METHODIST LADIES TO GIVE Orange avenue. 11-29-lt SUNDAY, NOVEMBER---.- -29-, 1914.-, nouncement is made of the fact that.I NATIONAL BAZAARThe . where cars become in distress whenin FOR RENT-Up to date furnished The postoffice has moved. I house; all conveniences; 211 Jack- market" sections of the "pocket Ladies' Aid Society of the . -0--- son street. Also two light house- . The Courier congratulates the Tam- country they may be diverted on to I Methodist church will give a "national keeping rooms. 11-29-St - pa Times on the suspension of its more profitable markets at the bazaar" on Thursday, Friday and When we talk shoes we eulogize upon Too much through rate from point of origin to Saturday December 3, 4, 5, in the The New Pocket Billiard Parlor, Sunday morning paper. No. 
SILENT
The publishers of The Sentinel refuse to say anything; the … pages constituting this morning's issue is somewhat of an achievement in Orlando newspaperdom.

THE CIRCLE
The pavement encircling beautiful Lake Lucerne was completed and opened to traffic yesterday. It is an achievement to which all citizens may point with a great degree of pride. In the days when Lake Lucerne lay in the embrace of wild and rampant Nature it was undoubtedly a fairyland. The boulevard encircling this gem has crowned it with modern, progressive and enlightening ideals of elevating civic advancement.

FIRE ARMS
The tragedy of Friday afternoon resulting in the death of an eight year old boy, the subsequent verdict of the coroner's jury, the positive knowledge that punishment for the slayer will ultimately result, should be a lesson for the community. Boys should not possess fire arms of any sort, kind or description; the indiscriminate sale of them should be stopped; the community should come to realize that carelessness pays the price. A boy budding into young manhood is behind the bars; there is no doubt that he is in a sorrowful condition. Until a trial is accorded him and all the facts brought out, public sentiment should not be too severe and premature.

PROMISING OUTLOOK
During the last few months this country has passed through the strain and stress of a most critical period. The passage of a new tariff law always brings unrest and uncertainty until the business of the country becomes adjusted to the new rates and conditions. The new banking and currency law is in a sense an experiment and must be given a fair test before its merits are known. But while these problems were being worked out, along came the great European war like a clap of thunder out of a clear sky. This upset the plans and the business of the world. This country, the most favored and most fortunate in the world, suffered along with the rest. Our experience proves that nations do not live to themselves alone any more than man lives to himself alone. A bright outlook is looming large in the business horizon. The markets of Europe are opening up to our exports. These exports are growing by leaps and bounds. Our credit is being maintained and our gold is being kept at home. Our balance of trade is fast reducing our obligations abroad. Our finances are in a safe condition. Business is becoming normal. A panic that seemed certain a few months ago has been averted and things industrial are coming our way. The outlook is bright. Our crops are large and must be in demand. The thing for us all to do is to sit steady in the boat and wait for the storm to blow over. Courage and hope are essential at this time. Be conservative but steadfast in making the best of our opportunities made possible by the great war. There is a great future for America, and Florida will be in line for its share of the rich rewards to industry and intelligence.

DEATH OF RICHARD CROOKS
Many friends in Orlando will regret to learn of the sudden death of Mr. Richard Crooks, which occurred last week at the home of his parents, Mr. and Mrs. Thos. P. Crooks, in Hamilton, Ontario, Canada. The deceased was a bright, manly young gentleman, still in his teens, and spent last winter in this city at the family residence on East Central avenue.

POPULAR WINTER VISITOR DIES
News has been received in this city of the death of Mrs. James D. Biggs, at Riverton, Ky. The deceased had many friends in Orlando, where she had been coming to spend the winter months, always making her home at Duke Hall. She was a magnificent woman and came from one of the first families of the Blue Grass State.

They say that the automobile tourists in the eastern states vastly increased in number this year. Americans will learn to like this country when they get acquainted with it.

After waiting an unusually long time for Central to come in on the wire, he said to her: "It took an awful long time to raise you." "Yes, twenty years. Number, please," said she. (Chicago Tribune.)

If you have subscribed for the Sentinel and are not receiving your paper you will do us a favor by notifying …

AT YOUR OWN GARAGE-Let a first class mechanic put your car in running order. Watch him work. Leave call 407 West street. Phone 563. 11-19-6t
FOR RENT-Pleasant sunny rooms, completely furnished. Breakfast if desired. 605 North Orange avenue. 11-29-1w
FOR RENT-To desirable people, well furnished suite, bed room with private lavatory. Also single rooms, and a garage. Splendid location. No. 9 East Washington street. 11-29-1mo
FOR RENT-Two or three unfurnished rooms, either upstairs or downstairs. All modern conveniences. Close to two good lakes. Inquire at 50R W. Amelia street. 11-29-6t
TOURISTS wanted at private boarding house near Lake Standish; plenty of fishing; real country life; Northern cooking; reasonable rates. Address Mrs. L. Smoot, Plymouth, Fla. 11-29-6t
NOTICE-Go to Mrs. Cornish's Tea Room for turkey dinner today. Price 50c. Dinner from 12 to 2 o'clock. 11-29-1tc
FOR SALE OR TRADE-One No. 5 Oliver typewriter, with splendid leather case. Machine in excellent condition, used but very little. Call between 12 o'clock noon and 2 p. m. for terms. Inquire at Sentinel office. 11-28-tf
WANTED ROOMERS-Rooms and board in delightful country residence three-fourths mile from Orlando in orange grove. Plenty fresh air, fruits and congenial surroundings. Address A. B. C., Sentinel. 11-29-1wk
GIFTS THAT ENDURE
The very air is now becoming charged with the spirit of giving, for Christmas will soon be … The choice and preferred gift is the one which … because it suggests the continuity of love and friendship. Such a gift is a watch or piece of jewelry. You will find at our store everything that distinguishes a first-class jewelry store: gifts of rare value for the … spender, and an almost unlimited choice of less expensive charming gifts to suit the limited income. We are sure our selections will please. T. H. EVANS, Jeweler, Orange Avenue, Orlando.

Biggest per cent of profit ever is derived from the use of the Sentinel's want ad columns.

FAVORITE, AT ALLEN'S, TODAY
Open 2 p. m. and 6:30; admission 5 cents.
MONDAY'S PROGRAM: Imp, Wm. Shay in "The Old Rag Doll." Universal, "Ike Jr. and His Mother-in-Law," a scream; cowboy comedy. Sterling, Ford Sterling in "Papa's Boy." And another good reel. Come and have a good laugh.

JEWELRY FOR CHRISTMAS
We have on display now a big assortment of jewelry for the holidays. It is not too early to select your presents now; nothing has been picked over.

Glassware and cooking utensils. Watch window for Monday specials. 128 S. Orange Avenue.

…this afternoon. The entertainment will be furnished by the Cathedral school. McIntosh's orchestra will play. A silver offering will be taken at the door. Two reels of hand colored religious pictures will be shown.

S. Y. WAY, Insurance and Real Estate Agent. Phone 172.

OSTEOPATHY
DR. J. C. HOWELL. Hours 10-12, 3-5, and by appointment. No extra charge for house treatment except night calls. Telephone connections. Office cor. Orange and Central Ave.

Advertise in The Sentinel.
VENEER CORES, $1.25 per strand up. Fire-place chunks and oak wood. Orlando Wood Yard, B. W. Kingsley, Proprietor. Pine Street and R. R. Telephone 344.

We have employed an expert radiator and body man who is ready to stop that leak in your radiator or make those crushed fenders look like new. Drop in and see him; we guarantee satisfaction.

Lumber, Laths and Shingles.

SENTINEL WANTS
The Morning Sentinel Want Directory is the Greatest Want Ad Medium in Orlando. Terms cash in advance.
RATES: For one day, 25c; for three days, 50c; for six days, 75c; for one month, $2.50. This rate applies on all ads of 25 words or less. If the ad contains more than 25 words the rate will be one cent per word for each additional word, for each insertion. If you can't bring your ad to the office, telephone No. 24. Accounts opened by telephone will be payable the following day.
Every Home, Every Business, Every Man has use for a Sentinel Want Ad.

FOR RENT-Desirable parties can find a few well and handsomely furnished rooms with sleeping porch and bath at 615 North Orange avenue. 11-29-3t
FOR RENT-Furnished bungalow; five rooms and bath; all modern conveniences; lake front location; attractive price. Address Box 33, Winter Park, Fla. 11-27-6t
FOR RENT-Furnished rooms in upstairs apartment with private family. The rooms are nicely furnished, close in, and board can be obtained next door. Apply 101 East Robinson avenue. 11-24-6t
FOR RENT-Two furnished bungalows two miles south of the city on Lake Jennie Jewel. Everything complete. A party from New York will be here in a few days to occupy one of the four we have, with a touring car that will make access to the city easy. See CHADWICK for further particulars. 19 West Church street. Phone 398. Open evenings. 1-24-6tc
FOR RENT-Suite of rooms now occupied by Dr. Christ. Long term tenants only desired. Jno. M. Cheney. 11-13-tfc
FOR RENT-Large, attractive room. Inquire No. 8 West Washington St. 11-8-tfc
FOR RENT-Beautiful dwelling with garage; elegantly furnished; on Lake Lucerne. Jas. A. Knox. 8-27-tf
FOR RENT-Seven room house; water, gas, lights, bath, etc.; barn, stable, chicken house; lot 100 by 200 feet. Apply at 735 North Orange avenue. 10-31-tf
FOR RENT-Small modern flat, conveniences, unfurnished. Jas. A. Knox. 11-3-tf
FOR RENT FURNISHED-Three large airy rooms and sleeping porch; near Lake Lucerne. Mrs. J. E. Foster, 411 South Boone street. 11-11-tfc
FOR RENT-Large convenient rooms for light housekeeping. …

FOR SALE-The attractive 8 room dwelling 202 Hillcrest avenue; modern and complete in every respect. If interested call up phone 565. 11-24-tf
FOR SALE-Ten acres fine orange land; about 3 acres cleared; 3-4 of a mile south of Gill's store and 1-2 mile east of Lake Conway on terminal of proposed electric brick road. Address Anna H. Piatt, R. R. No. 1, Box 43, Orlando. 11-24-1m
FOR SALE OR TRADE-I have one 7-room house nicely located, modern in every respect. Will consider auto in part payment. Address C. E. P., Conway, Fla., Box 43, R. R. No. 1. 11-21-8t
FOR SALE-Nice, comfortable home for small family; grapefruit, guava and orange trees; six squares of postoffice. Apply to W. D. West, 510 Marion avenue, or Hotel Astor. 1-18-1mo
FOR SALE-Good work horse at a bargain; perfectly sound; young; good condition. Address R. D. Waring, Room 4-6, Watkins Bldg. 11-20-tfc
FOR SALE-Five room house, one mile from court house, $650. One Liberty Brush automobile, $75. Inquire seven blocks from railroad on West Lemon. 11-17-1mo
FOR SALE-One seven passenger, six cylinder automobile, 1 year old, first class condition. Don't answer unless interested. Address Bargain, care of Sentinel. 11-14-tf
FOR SALE-Cabbage plants, Charleston Wakefield, $1.00 per thousand. Frank Phillips, Pine Castle. 10-28-tf
FOR SALE-Fine cypress fence posts. Inquire of Frank Phillips, Pine Castle. 9-18-tf
FOR SALE-Forty acres of fine hammock land; eight in grove; the present crop of oranges will be from 750 to 1,000 boxes; 500 trees all told ready for setting; 10 acres in lake; one and a quarter miles from Apopka. Price $5,000. Address W. H., Box 178, Orlando, Fla. 9-23-tf
FOR SALE-30 acres, citrus and vegetable land, 20 fenced, 10 cleared, crop on it now, ready for planting trees this winter and crop next spring; near school and church; half mile of railroad station; good neighborhood; four miles from court house. Special price if taken in next 30 days. Box 4, Orlando. 10-25-tf
FOR SALE-At Winter Park, desirable building lot, 50x150; 18 bearing orange trees; two minutes from the bank and postoffice. Price $450. … House, Orlando. 11-22-6t
FOR SALE-Nursery stock, one, two, three and four year old buds; Foster Early, Pineapple, Java, Homosassa, Washington Navel. Address Mrs. M. Kahl, Apopka, Fla. Phone 2222. 3-13-tf
FOR SALE-20 acres splendid farming land, 7 acres pine timber, remainder clear; on north shore of Rock lake, one-fourth mile from city limits. Price reasonable. Address P. O. Box 74, Orlando, Fla. 11-28-3t

WELL FURNISHED APARTMENT of 4 rooms to rent till April 1st; kitchenette, bath, fireplace, large piazzas. In Concord Park, 615 Lexington street. 11-24-6t

WANTED
WANTED TO RENT-A horse by the month; will take care of horse and feed. Address X, care Sentinel. 11-28-6t
WANTED-At once, young man; $250 bond required; good proposition for right party; no boozer or cigarette smoker need apply. Call at Room 7, Mrs. Rooney's, about seven o'clock in the evening. 11-27-2t
WANTED-At once, white woman with first class references to cook for whites only. 9-25-tf
WANTED-… wash and iron for small family, in city. Address P. O. Box 299. 11-22-tf
MULES WANTED-One pair of good large mules wanted for the winter months. Give weight and terms. Box 604, Orlando postoffice. 11-18-tfc

ROOMS FOR RENT
NICE LARGE FURNISHED ROOM, to adults only. 409 Delaney street. 10-25-tf
THREE ROOMS AND BATH, furnished or unfurnished, for light housekeeping; electric lights; new house. Phone Johnson 67. 11-28-6t

LOST AND FOUND
LOST-Somewhere on Orange avenue, Church street or Hughey street, a dark blue silk waist, in package. Return to Sentinel office. Suitable reward. Mrs. Helen Wright. 11-28-3tc

MISCELLANEOUS
ART PUPILS can make beautiful Christmas gifts. Come and join the class under Miss Echols. Room 43, Watkins building. Phone 218 after 8 p. m. 11-25-6t
I MAKE A SPECIALTY of planting orange and grapefruit trees; all trees guaranteed to live. References furnished. Charges $6.00 per acre. Address Box 222, Ocoee, Fla. 11-10-1mo
THE WILLOLA HOTEL, 406 South Orange avenue, is now open. Beautiful location near Lake Lucerne. Home cooking. Spacious veranda. Mrs. W. T. Walker. 11-19-1moc
USE TEDDER'S DEAD SHOT-It will cure your chickens of sorehead and prevent them from having the dreaded disease. One or two applications is sufficient. For sale at my residence, 309 America street, Orlando. Price 50c and $1 per bottle. 11-28-6t
BUYERS AND SELLERS-I buy, sell, rent, and exchange real estate. Have resided in Orlando, Fla., over 30 years. Have handled over $2,000,000. Call and see me. F. A. LEWTER. 10-6-3m

NOTICE-ROSES
Owing to the cool weather our roses are now ready for transplanting. All orders booked for first shipment should be here by the 10th of November. W. H. BROKAW, Salesman. 11-3-tf
Get your roses now and have lots of bloom through the winter. Roses now ready for shipment. W. H. BROKAW, Salesman. 11-3-tf

BUSINESS DIRECTORY FOR BUSY MEN
T. F. HOURIHAN, Licensed Plumber. Conscientious work at fair prices. Phone 629. 106 Court St. 8-23-tf
SHOE HOSPITAL-Bouton Brothers, corner Church and Orange avenue. Shoes repaired while you wait. Best rubber heels 40 cents. Best English tan leather. Phone 487. 11-15-1mo
SHOE HOSPITAL, 8 East Church street. 11-17-1mo
LAUNDRY-Your general laundry work, French dry cleaning and pressing will be appreciated. Satisfaction and service guaranteed. Open September 28th. Phone 654. Reel's Steam Laundry.
Up to date painting, papering, etc., is the kind you want. Where to get it? That's easy, phone 636. W. T. Entrican. 1-20-tf
Don't wait for every one in town to have Entrican the painter before you try. Am too busy to look you up. Phone 636, and I will be on the job. 1-20-tf

Eat all you want. Eat what you want. Take a postscript and forget it. For sale by all druggists. 1-7-d&w-1yr

Cement, Perfection Lime, Royal Agatite Plaster and …

Important friendships result from the right introduction, and the most important ones come through the right banking connections. Association is the keynote of all business success. We invite you to open an account with us, where a personal interest is taken in every depositor. ORLANDO BANK & TRUST CO.

A NEW HOUSE. A CORNER LOT is always the buyer's choice. There are but few corners, and they are generally sold to particular people. Business and homes for particular people. We have a house on a good corner lot, all modern and up-to-date. The price is very low for this grade of property. Call in and let us take you out and show it to you today. ORLANDO BUILDING & LAND CO., North Orange Avenue, Phone 608, opposite San Juan Hotel. FRED B. DALE, C. H. BRYANT, J. E. DAVIS, A. GILES, Counsel; F. A. PEPPERCORN, Contractor.

CLOTHING-Our stock of new Fall and Winter Clothing is made by SCHLOSS BROS., and we are anxious for you to see our styles and high class tailoring. The best place in the city to buy clothing.

We have received a car load of Foster Bros' famous Iron Beds. There is nothing that will compare with the finish that is on Foster Beds, and our prices for these beds are far less than others charge for the same style bed that has nothing near the finish. We would like you to call and see this line; nothing like it in the city. Kincaid & Snowden.

Who is your painter and decorator? If you are satisfied, keep the one you …
Also t have; if not, phone W. T. Entrican ant sunshine front bed room on pleas-first BLASTING, Subsoiling and Dynamiting Priced Furniture House. Ne. 636. Low 1-20-tf floor. Private family and modern of all kinds done on short no- I conveniences. 315 West Central. tice. A full stock of dynamite in E T. F. HOURIHAN Licensed Plumber. 11-24-tfc. magazine at all times. See L. N. Orlando. Florida Conscientious work at fair prices. Lewis, 406 S. East street Orlando. Cor. Main and Pine Sts. Phone 629. 106 Court St. 8-23-tf FOR RENT-One bed room, sleeping 11-13-lmo porch and kitchen with all conveniences . ___ __ Eat what you want. ; close in in a nice neighbor- JUNIOR AUXILIARY SALE I' u t Eat all you want. hood, at 307 East Pine St. E Take a postscript and forget it. 1126-6ts The Junior Auxiliary of the Cathe- f For sale by all druggists. dral parish will hold the annual sale ICORDON'SCARRIAGE' , i l-T-d&w-lyr FOR RENT Fine furnished rooms. at Bishopstead on Saturday afternoon \ 1 r I for houeskeeping; also bed rooms; November 28th beginning at 2 ANNOUNCEMENT i They run almost anything and everything -i bath, hot and cold water; all modern; o'clock. Tea and home made cakewill I -AND-1 . ; on wheels these days. new house; private family; large be served. 11-25-4t I I. r ri iTZ HAUER has opened a I sunny porch. 211 Ridgewood avenue. i : ------- ----- --- -- --- --- ll-26-6t Try the Sentinel for your next job BAGGAGE TRANSFERL. I DR. L. H. RAMSDELu Violin printing. Prices StudioROOM NEIL McDUFFEE FOR RENT-Five room cottage part- very reasonable. I OPTOMETRIST I ly furnished. Near Workmanship the best. Call or phone ' I __ Price reasonable. Apply Lake 502 Cherokee.Osceola. when in need of job work.If T. HARRELL, Prop. I (Successor to Dr. o. F. ...W.- 8 ROCK BLDG. t t . 11-24-6t. 
NEIL McDUFFEE, Contractor and Builder. Bungalows a Specialty. Arcade Building, Orlando.

Piano and Furniture Moving a Specialty. 8 E. Church St.

FOR RENT-Apartment of four nicely furnished rooms; gas; separate entrance; new house; adults only. 604 Magnolia avenue. 11-10-tf

If you have subscribed for the Sentinel and are not receiving your paper you will do us a favor by notifying us at once.

NEWEST NOTES OF SCIENCE

Glycerin will help to dissolve fruit stains from linen.

Great Britain and Ireland consume 30,000,000 rabbits as food annually.

Technically speaking, a hair's breadth is 17 ten-thousandths of an inch.

A large restaurant in Hamburg is housed in a building made of compressed paper.

Nearly all of the work in one of Germany's greatest breweries is done by electricity.

A new type of electric meter is designed to be read from outside a building using it.

The Norwegian government maintains an agricultural college and three experiment stations.

Under normal conditions human beings perspire about twice as much when asleep as when awake.

In China an oil well has been drilled to a depth of 3,600 feet with the most primitive native tools.

Sections of reinforced concrete pipe, each weighing 61 tons, recently were made in New Jersey for a sewer.

The asphalt deposits of Cuba, when developed, are expected to prove superior to all others throughout the world.

A new strength testing machine for the arms and legs registers the efforts of a user in fractions of a horsepower.

From 1,325,000 tons of tar annually produced in Great Britain from coal are recovered about 10,000,000 gallons of benzol, etc.

Tires made of wooden blocks have given good service on motor trucks to convey heavy loads over rough mountain roads.

Work is under way on a tunnel more than three miles long through the French Pyrenees to enable a railroad to enter Spain.

The top of a recently patented table for use on shipboard is kept level by an ingenious combination of weights and levers.

English chemists have made a synthetic turpentine at what is said to be one-third the cost of the genuine American article.

Apparatus has been invented for accurately testing the hardness of metals by showing their resistance to the teeth of files.

By crossing certain fiber plants in the Philippines an excellent grade of artificial silk of much strength has been produced.

Long fringe has been patented to be suspended from garters to save a feminine wearer embarrassment as she climbs upon car steps.

With English engineers doing the work the Russian city of Baku will obtain a new water supply from mountains 120 miles distant.

A device with supports long enough to reach solid ground has been invented for raising automobiles that are caught in mud.

A hollow wooden ball six feet in diameter, which is moved by the flow of the sewage, is used to remove obstructions from sewers in Paris.

Oiling the end of the grain of a block of wood and rubbing emery powder into it will make a fairly good knife hone for household purposes.

An American expert has been engaged by the Australian state of Victoria to reopen a long closed factory and revive the beet sugar industry.

In a new electrical hair drying comb air, heated by electricity, is forced through hollow teeth by squeezing a bulb at one end of the handle.

An Australian has patented trousers for men with four more than the usual number of pockets but with the ordinary number of exterior openings.

For trimming the grass on railroad embankments there has been patented a mowing machine to be attached to a car and take its power from an axle.

In Mexico there is a 150-foot bridge over a river that is composed entirely of mahogany, worth, at the present price of the wood, almost $2,000,000.

WHAT THEY HAVE TO SAY

In Orlando the city has established a "tourist headquarters," and in the large reception room provided, visitors are invited to register and make themselves at home. The idea of having such a place is a good one, and several other Florida cities have also a place of the kind. The Lakeland Telegram suggests that Lakeland would do well to follow the example. "Lakeland wants to make it pleasant for all tourists of the right kind," says the Telegram, "for we are fond of such people and we want to assimilate them as permanent citizens." Tourists everywhere like to compare notes, talk of the things they have seen, plan for excursions, and especially are they glad to find people from their home towns and home states. With a reception room and register they can meet many congenial spirits, and the place that has no such meeting point for visitors will often lose those who might otherwise linger long and possibly become regular visitors or citizens.-Times-Union.

ARMS HELPLESS ON LAND

UNITED STATES HAS NOT YET HAD A REAL TEST

China Is Only Great Nation That Is Less Prepared to Battle on Land Than Our Own Country

(By Victor Elliott)

Washington, Nov. 28.-Only one of the great nations of the earth is more helpless to defend itself on land than the United States. This is China. We have not even such a mobile army as Belgium, while compared with Serbia's military establishment, ours looks pitiable.

The speaker denouncing our military state is embarrassed by the wealth of material. The most extreme statements can be borne out by careful statistics, and the greatest anxiety is apparent in the minds of the best informed.

The navy, of course, is vastly better off than the army. But even here, ex-Secretary of the Navy George von L. Meyer declares the efficiency has fallen alarmingly in the last two years. The general board of the navy, which has fought unsuccessfully for four battleships a year, has just met a further disappointment. It decided 18,000 more officers and men are needed to man the ships now building, but Daniels, secretary of the navy, refused to make this recommendation.

The navy, even if no longer the second strongest in the world, is formidable compared to the army. According to the recent report of the chief of staff, Major General W. W. Wotherspoon, a document which is likely to become historic, the "actual fighting strength of the army with the colors and without deductions for officers and men sick, on furlough, detached service, etc.," is 2,738 officers and 45,968 men.

Rudyard Kipling, in a letter to a friend living in Virginia, recently showed that he realized fully, as does every well informed man in Europe, the military impotence of the United States. He warned his American acquaintance that the United States would some day be trampled under foot by a strong enemy if preparations to resist were not made while there is yet time.

There are those who point to the records of the United States in previous wars as showing what we can do to defend ourselves now. Millions of men would spring to arms, they say. But a more careful appeal to historical records shows that even Washington denounced in severe terms the unstable state levies which nearly wrecked his army during the Revolution. The raw and undisciplined mobs which for the most part composed the army in 1812 were driven hither and thither by much smaller British forces. It was only after the volunteers were drilled and under discipline that they were able to make such a good record in the Mexican war, while the untrained militia was practically useless in that conflict. In the civil war the raw troops on both sides demoralized armies, and it was not until after they had become seasoned that they made their name. The Spanish war skirmish never afforded a real test, but army officers knew that the militia which assembled in the different camps were absolutely unfit for service in real war.

We have never had a real land war with a real first class power, and for this emergency a great body of Americans are coming to think we should prepare.

4,000,000 GERMANS NOW UNDER ARMS

Official Figures Are Published by the Nord Deutsche Allgemeine Zeitung

London.-The official Nord Deutsche Allgemeine Zeitung publishes a list of army corps and military sections, which shows the strength of the German armies now in the fields of battle.

Apart from the railway service, the marines and garrisons in Belgium, there are 98 army corps. If 40,000 men are taken as a fair average for a German army corps, this gives a total of nearly 4,000,000 men in arms. According to a French bulletin, 50 army corps are fighting against the allies in Belgium and France. This would leave 48 army corps for the eastern theatre.

Germany has more than 2,000,000 men in the west and about the same number in the local landsturm in East Prussia and Poland.

CAPTURE BABOON IN TEXAS

Animal Thought to Have Escaped From Traveling Show

Temple, Tex.-While crossing Leon river about ten miles southwest of Temple, A. L. Miller of this city and a companion, L. Weaver, were attracted to a curious looking animal taking a drink from the stream. Closer investigation showed that it was a baboon, which they decided to capture. This they succeeded in doing after a severe struggle in which they suffered considerable damage to clothing and cuticle. The animal was brought to Temple, where it is being confined. It is thought to have escaped from a show that exhibited at Belton a short time ago.

FAKER FLEES FROM THE FROST

Just like they follow the circus, the riffraff follows the crowds to Florida each winter, and from this time on there will be all sorts and kinds of fakers, grafters, tourist tramps, people on so-called pedestrian tours, advertising fakers, and tramps of just about every kind. It behooves the police to get the rock pile ready for the list of undesirables and put them at work. It behooves the business man to keep his eyes open and cut out the fake advertiser, and every citizen should look into the case of the individual who may really be in hard luck and who really is looking for work, to work, and not in order that he may keep away from it. There is work for these men, but Florida is overrun in the winter and they should not come down here with the idea of living off the people.-Sanford Herald.

TWELVE MILLION DOLLAR CITRUS FRUIT CROP IN 1913

Florida realized at least $12,000,000 from her crop of citrus fruit last season, and it should be $15,000,000 this year. Whatever the amount may be, it would have been much more had not the worthless green fruit been forced upon the markets earlier in the season, and the stuff brought the shippers practically nothing, as it should have done. The markets did not want it. The transportation companies got what was in it. In some instances the shipper was called on to pay the freight. This green stuff is now about out of the way, leaving the market for a ripe fruit that only Florida can produce.-Volusia County Record.

YOU HAVE ONE CHANCE IN 60 OF BEING IN A FIRE

"Fires in our homes are so frequent that the insurance companies tell us that we have about one chance in sixty of being burnt out sometime in the course of a lifetime. But in the same breath they tell us that more than half the fires could be prevented if people understood the commonest causes of them and knew just what to do when a fire starts.

"Smokers are responsible for thousands of fires, and rats and mice cause many others by nibbling at sulphur-tipped matches. Fires start from matches being left in clothes or from oily cloths which have been stored away. Many of the floor polishing mixtures contain highly explosive oils, and spontaneous combustion may start from a nest of these cleaning cloths if placed in a closet near the chimney."

WHY EDUCATE THE CHILDREN AWAY FROM THE FARM?

"What is wanted in the rural districts is the kind of school that will meet the demands of today. If we want to educate our boys and girls away from the farm our course is plain, for we can send them to the city schools. I don't believe we want our children educated away from the farm. What we do want is a broader conception of what rural education means. We do not want our boys and girls educated to think there is nothing but hard work on the farm. Rather do we want them taught to see and appreciate their wonderful advantages."

GERMANY

Reichstag Expected to Vote Five Billion Marks at Short Session

Berlin.-The session of the reichstag, which opens December 2, is expected to be of short duration. It is probable that it will only vote a new 5,000,000,000 marks war credit and then ratify the emergency laws promulgated by the bundesrath. This is to be accomplished, if possible, without debate. Neither the budget nor new taxation proposals are expected to be submitted. The budget will be laid before the reichstag at its session in February.

WIFE QUITS HIM 51 TIMES

Last Time Was for Good and Husband Gets Divorce

Brainerd, Minn.-Albert R. Adkins has been granted a divorce from his wife, Elizabeth Adkins, and the custody of the minor children. The case was heard by Judge W. S. McClenahan.

It was a regular thing for his wife to desert him and remain away from home over nights, the complaint states. Fifty times she packed her clothing, squandered his money in telephone, telegraph, livery bills and railway fare, and would go to her parents or relatives and stay for days and months at a time. In June, 1913, he says, she left him for good.

FRANCE TO SHOW AT FRISCO DESPITE WAR

Cabinet Decides to Exhibit Objects of Art of Modern and Olden Times

Bordeaux, France.-The French cabinet decided today that France shall participate officially in the Panama exposition at San Francisco, regardless of the war.

The exhibit will be a reproduction of the Legion of Honor palace. In it will be displayed historic objects of art, French tapestries, furniture, porcelain and examples of contemporaneous art and manufactures.

ENGLAND EXPECTS BEEF FROM SOUTH AFRICA

"Speaking of the meat supply, beef eating England expects in a few years to receive a great deal from South Africa. Great areas of that country can grow from five to eight crops of alfalfa per annum, and irrigation is expected to increase the acreage enormously. The farmers are already getting into the live stock business in an up to date way, shipping in great numbers of the best breeding stock from the British Isles and elsewhere."

THE DARN FOOL AND THE WISE GUY

The darn fool acquires more useful information in a day than the wise guy does in a week. You never pour water into a full pitcher. The man who has information to impart is always shy about giving it to the man who looks as though he had all the knowledge of the world packed away in his cranium. There is no harm, of course, in telling secrets to a darn fool who doesn't know what you are talking about. The real wise guy, who wants to find out anything, will look as much like an idiot as he can, and most any smart man can look half-witted.-Clearwater Sun.

136 APPLES A YEAR IS YOUR SHARE

"America's apple crop, at a reasonable estimate, this year will approximate fifty million barrels. This sized crop would furnish one-half barrel, or 150 apples, for each member of our population. An apple a day eaten out of hand by Uncle Samuel's family from October to March would consume our entire crop. This makes no allowance for pie, apple sauce, and baked apples.

"Our normal export of apples is about two million barrels, so should no apples go abroad this year we can each be allowed a half dozen more, 156 apples per capita. Really our apple market should not suffer if those six apples are kept at home."

REMEMBER THIS ON WASH DAY

"For fruit stains on table linen or other white goods boiling water is the best and safest remedy. Stretch the stained portion, before it is wet, over a pan or pail and pour boiling water through it until the stain disappears. It will not take long, but the water must be actually boiling when it is taken from the fire."

HIS REWARD

A Hoosier lad was industriously at work upon a pile of wood in his mother's back yard when he was approached by a playmate. "Hello, Ben!" said the youngster, "do you get anything for cutting wood?" "Well, I reckon I do," replied Ben. "Ma gives me a cent a day for doing it." "What you going to do with your money?" "Oh, she's saving it for me; and when I get enough she's going to get me a new axe."-Christian Register.
"What you going to do with t water through it until the stain dis- will be displayed historic objects of : your money?" "Oh, she's saving it 'S.' *. ..-, ** The harvest season is fast ap appears. It will not take long, but art, French tapsetries, furniture, ,' for me; and when I get enough she'sgoing t f' - ; proaching. Try a want ad. in the the water must be actually boiling porcelain and examples of contemporaneous to get me a new axe."-Chris Sentinel. Results guaranteed. when it is taken from the fire. art and manufactures. tian Register. ' 4 ... __.--' .- .;'-; -, 0....:. \;.';.c..."", .' ;- : ........... .. ...< ""' < ,--", _. =.' .... .-1 ..<.........,. -, ........:...... "F... ..,.........". ''' -.::::: -- -.,- "'-.1. -- .r A -- Tr -.1I -' "" \1 _ I' f VISITING- NEWSPAPER MEN- ENJOY cloth. OLD WIGWAM CHIEF IS NOT TO .e f vent.Attention AGENT A. OL TOM FISHING- TRIP- OX AT TODAY'SServices CHURCHES hour of Sunday is called school tQ and the midday changein "And the mafKet for it will be BECOME A SQUAW MAN I ST. JOHNS found. Millions of men are on the service, also that the evening lectureis at 7:30 instead of 7:45. battlefields and they are the greatest Washington.-Richard Croker will I Will Make an Address in the George S. Rowe, of the reportorialstaff at Orlando Churches First Presbyterian Church consumers of cotton. They wear not become a squaw man unless he I Board of Trade Rooms of the New York World; Harry Today Have Appeal to All Rev. Jno. W. Stagg, D. D. pastor. khaki-all cotton. A soldier wears goes to an Indian settlement to resic I Sunday school at 9:45 W. R. December 10th in Jenkins, of the Evening Telegram, of Classes-Orlando Is a O'Neal, superintendent. a. m.; out about a suit a month on the av and lives apart from all white rr**..* f I '", Morning New York; Waiter Willison, of the Religious City Services at 11 a. m. and 7:30 p. m. erage. At home, in ordinary occupa- The definition t>f a squaw r . .' New York Sun; and W. P. 
Flower, Morning sermon, "The Failing Time." [tion, he might use three suits a year, en by the bureau of Indian ai ! i-- Mr. C. A. Maul, advertising agent of the Atlanta Georgian, spent sev- I Christian Science change.Evening sermon, "The Wise Ex and those mostly wool at that. Now "A man who marries an Indioaman I of the Atlantic Coast Line railroad, eral hours in the city yesterday as .I Sunday services, 11 a. m.; subject, Men's Bible class at 9:45 a. m. I he becomes consumer of four times separates himself from jts / will visit Orlando, Thursday, Decem- the guests of Ted Dickson, Jr., on "Ancient and Modern Necromancy, Prayer meeting, Wednesday, 7:30p. as many clothes, and those largely of white friends and goes to live with ! Mesmerism and ber 10th, at 10:30 a. m., to address I their way to Havana, Cuba, via nounced.Alias Hypnotism, De m. cotton." the Indians as an Indian." Mr. Forrest Dabney Carr, choir director I r the citizens on the best methods of Tampa. While in the city the visit- Sunday school, 9:45 a. m. ; Miss Roberta Branch, organ- Croker will be on an equal footing e community advertising and agricul- ing newspaper men went fishing on Wednesday evening service, 7:45. ist.A I NEEDLE IX BODY 33 YEARS with Senator Poindexter and other tural economics, with special refer- the St. Johns and landed a numberof Third floor, corner South Orange most cordial welcome' to visitors prominent men who married womenof and Church street. Entranceon ence to the development and improve- I the finny treasures, the largest, avenue Church street. and strangers. Caused Xo Pain and Was Removed Indian blood. , of this section. a black mouth bass, falling to the rod First Baptist ChurchAt Catholic Church ment I First of Advent. From Woman's Leg Mr. Maul is of wide experience and of Mr. Flower. ( the Lucerne) Sunday Sunday school, 9 a. m. DOGS ..XU' DAUGHTERS 1 1If This church extends its cordial hos- gives valuable information. 
The meet- After leaving Havana, the party Mass and sermon, 10 a. m. pitality to all who come within its Wenahatchee, Wash.-Thirty-three ing will be held at Board of Trade will sail on a Ward liner for New doors. If have no church home Rosary, instruction and Benedic- 1 you when Mrs. A. F. Frantz York, where Mr. Flower will join the here, we want you with us. tion, 7:30 p. m. years ago, a man's dog (pedigreed, of f rooms. staff of the Evening Journal.Palatka Rev. Edward T. Poulson, D. D., All welcome. was a girl, she swallowed a needle. course) happens to be out late at I Dastor. Unitarian Church Recently the needle made its appear- night he'll raise thunder about it amigo LIOXS IX CALIFORNIAIN Morning Post. (Corner Central Ave. and West St.) t KILL 2,099 Mr. Samuel A. Newell, superintendent in her left and Rev. Eleanor E. Gordon, minister. ance leg was removedby out and walk himself to death to . YEARS SEVEN of the school Mr. Geo. Sunday ; Sunday school at 10 Dr. Congdon. During all these find the missing that harm PLAZA RESTAURANT OPENS a. m. one so no i I W. Phillips, director of the choir; Mr. . William S. Branch, Jr., organist. Church service at 11 a. m.; subjectof years the needle has been traveling can come to the animal. But you ', ! Report of Fish and Game Commis- school will meet morning sermon, "Getting Readyto through her body. She felt the don't Sunday promptlyat never often find him chasing around \, _ sion Shows State Has Paid The new Plaza restaurant in the Live. I . $41,980 in Bounties I south wing of the Arcade building on Public 9:30 a.worship m. at 10:45 a. m. and Unity Circle on Wednesday after- presence of it until a few days ago. the t'wn to find out where his son or .'1 Church street opened Thursday for 7:30 p. m., with the pastor in charge. noon; "Current Events," by Mrs. A black and blue spot, accompaniedby his daughter is, do you? And, you I' Morning subject, "Union With Stanley. 
soreness, was the first indication know, that oftentimes that son or Sacramento, Cal.-Two thousandand the first time. It is one of the most Story Hour for the older childrenon " Christ. that anything was wrong. Next the that daughter might be in a damsite modern and afternoon for theyounger up-to-date Friday at 2:15 ninety-eight mountain lions have thoroughly} ; Subject of evening sermon, A sharp point of the needle showed it- children, 3 o'clock. worse company than the pedigreeddog. sanitary cafes in this section of Flor- I " been killed in California since 1907, I Note of Warning. The meeting of the Round Table self. It was then that she sought And that's no dream.Thorne , according to a statement by the state ida, and is backed by expert restaur- I B. Y. P. U. meeting at 6:30 p. m.; postponed one week. j ' medical attention. in Palm Mr. Eugene Reid, president. Beach Post. ? fish and game commission. Of this anteurs. It is artistically decorated Good music. All seats free. number 118 have been killed in the with flowers and palms' and is neat Don't miss the popular evening ser- WilY ENGLAND NEEDS MORE state for the six months ending June and splendidly equipped. The man- vice COTTON IX WAR TIME THAN ::1 20. The state pays a bounty of $20 agement is making an effort to fur- Methodist; Church IX PEACE /, for each lion killed. The total cost ther improve it and success will sure- Rev. James E. Wray, D. D., pastor. THE CHASE & CO Preaching by the pastor at 11 a. m. of killing lions has been $41,980. ly be theirs. and 7:30 p. m. Judson C. Welliver writes a most I Humboldt is the banner county for I Sunday school, 9:45 a. m.; Mr. C. interesting article on the present cot- { PACKING HOUSE AT ORLANDO .. this year, and for ev ry year. In the Subscribe for The Daily Sentinel. r IE. Howard, superintendent. six months ending with June Mendo- Epworth League, Junior, 3:30liss; ton situation. His article is opti- '! 
Is modern, fully equipped with the best up-to- i Eliza Wright, president. mistic. An extract follows: | date machinery, and gives the citrus grower the sino, Trinity and Siskiyou are next Epworth League, Senior, 6:45; Mr.J. ' in order named. Los Angeles county T. A. MANN N. Burden president. "We must remember that the British j i PROPER PACKING claimed bounty on 15 lions in seven Woman's Missionary Society, Mon- manufacturing capacity is almost' FIRST AID TO day, 3 p. m. half that of the whole world. The HIGHEST PRICES years. Mission Ctudy class, Monday\ 7:30p. Grocery Pine St.SPECIALS m. British and American capacities to- which, however would not accomplish the end desired i BEFORE YOU DO YOUR CHRISTMAS Sewing Circle, Tuesday 3 p. m. gether are considerably over two- by the grower, if he did not also have available the services - SHOPPING Prayer meeting, Wednesday, 7:30p. thirds of the world's. British mills m. are not going to be shut down; the be sure you know what merchantsare Prof. Wade, organist; Mr. Curry, ofTHE offering in their advertisementsin director. war will not draw away their opera- CHASE & CO The Sentinel. Come and hear plain gospel preach- tives because not over 10 per cent. of ing and join with us in singing the these SALES AND , This is to be a practical Christmas. are subject to military de DISTRIBUTION SYSTEM '. clorious old hymns in the power of More useful presents will be made the mands. The country is full of expe- which, with selling forces thoroughly organized, 'ixpe- than ever before. spirit.St.. Luke's Cathedral rienced operatives who can be drawn rienced, competent and reliable. e ' And The Sentinel will be alive with I The Very Rev. II. R. Remsen, dean. back to the mills if they are needed.On . suggestions of useful Christmas gifts.Is LOOK FOR SOMETHINGBIG I First Sunday in Advent. the whole, it is confidently to be GIVES THE GROWERHIGHEST The Communion 7:30 AVERAGE NET RESULTS Holy a. m. 
expected that there will be increase a reading lamp? a piano? a Sunday school, 9:45 a. m. a big - rug? a chair? a go-cart or doll? a Morning Prayer and sermon, 11. in the output of the British CORRESPONDENCE SOLICITEDGROWERS' Evening Prayer, 4:30. mills they will be after the trade ; dog, bird, cat or pony? a gas heater MARKETING AGENTS An illustrated talk for children on that the Germans have controlled. new electric appliances or new fix- mission Alaska work in 5 m. tures for the home? Instead of the usual evening p. service "That same is true of the American I OHJ: SIS & CO., - These and many other equallypractical. IN THIS SPACE an illustrated lecture will be given mills. As soon as things get adjusted - the dean in the House Main Office, Jacksonville Florida will be offered in this news- by Chapter to the new conditions the demandfor there several 'n Alaska missions, at 7:30. These cotton will be limited only by the 9-30-d&w A. R. BOGUE, Agent, Orlando, Fla paper. Already are illustrated mission talks will be givenon Christmas advertisements appearing. TOMORROW each of the four Sundays in Ad- I capacity of the mills to turn it into . I - . ROOMS 1 and WATKINS BUILDING PHONE 333: : WALTER W. ROSE . Real Estate . . Personal attention given to all business intrusted to my care: e. . . MR. \\ WALTER. ROSE, . Orlando, Florida, Dear Sir: 1 1I am mighty well pleased with the property and partic- /.r " ularly the kindness and advice which you gave to me in the -' --. ',- .....-- purchase of Florida property. I believe you are one of the men whom I have met in selling Florida property who can be ' absolutely relied upon, and you are at perfect liberty to refer I any prospective purchaser to me and I can assure you that ;you will have no regret for having done so. Very truly yours, J, J. W. WILSON, Mgr. Prudential Ins. Co., ' Cleveland, Ohio. TO WHOM IT MAY CONCERN: e, I have made investments through Walter W. 
Rose, of TO WHOM IT MAY CONCERN: I am glad to that the investments made in Flori- e Orlando, Florida, and have found them very satisfactory, and very say just as represented. da, through the advice of Mr. Walter \V. Rose, have proven I. Very respectfully, very satisfactory. : J. MURPHY, Kansas, City, Mo. Very truly yours, _, C. L. VanVRANKEN, Kalamazoo, Mich. ;'; /Jr'' TO WHOM IT MAY CONCERN: We buy all lands through the advice of Walter \V. Rose. Yours very truly, . :::_ LAKEVIEW HEIGHTS CO., Inc. .Jf e . S . : ' .. ," . .-, ". t-: l 7j Z ; r\: ' f--'-- ,, : - ,': ; .,- d", \ ' b \-r ..;,' ,- ,. ) - j - t tf Ir I f J"I ,'" . L I . :. l' --- . , } -- , """"" 4 r---...., ...". .. ." ." s.... ..... =- ..........-- -,,-- ....,== 'n.- 0- _. ) j e f o . ---------- 'r" -- -'- i ;Ells GOOD AROUND THE STATEPalatka I LAPS SET UP GOVERNMENT lllillliliMiliniilMIMIMIIIillllMMIilllillllliilllilliltilM IMMMiM X I is to have a meeting of S--- ESSi. Islands of the South Pacificby the Grand Lodge of K. of P.'s in Nipponese Are Being ! Nothing like advance March. news. OF FLORIDA Governed Rapidly by QaYIOSOII -Palatka wants commissionform a ; JDID TALK JapaneseTokio : s of government.The ! o ; LAND editor of the Lakeland Tele- Nov. 27.-The Japanese are t11 / /I 1 gram takes" a pot-shot with his blunderbuss -I rapidly establishing administrative : . ion Was Doing at The Sentinel for not run- governments in the islands which < -. k-In Sym- ning more editorial, but we notice he they have captured from the Ger- rr ith All is good with\ the scissors and pub- mans in the South Pacific. Within a 4IJjfl \ lishes our local and county stories un- week after news had been received rk . -- +-- - der a date line, not giving us credit here of the capture of Jaluit, several I it enthusiastically where credit belongs. 
But it's all officials were dispatched to the Marshall - boosting for Florida, after all, so we Islands to investigate their i >f the many deliv- .5 : are not caring, worrying or sobbing.The trade and development possibilities, .. ion of the Florida editor of the Telegram has a and since the Caroline Group has ..' -. f Women's Clubs punch back of his little pen that few been added other officials have been veek was that of editors in the .state possess. dispatched. 1915 HARLEY-DAVIDSON Horsepower 3-Speed Sliding Gear Transmission.Automatic I 1, the first lady of.s -Newspapers up and down the Several steamers have been taken Mechanical Oil Pump Step-Starter and 66 Refinements, $275.00. : follows: much harassed A. C. L. are all taking off other runs and a steamship service - ' -( nt, Ladies of the editorial flings at the road becauseof established between Yokohamaand v. WHITNEY WRIGHT 129ord l siting Frien.I the recent annulment of trains. A all the islands now under Japanese "II1111 : :. . he opportun 4IJ to little investigation would ease the sovereignty. Cargoes of Japanese - I1111 ill 11111 11111 11111 11111 11111 II II I"11111 III' I 1..1111111111111111111111. occasion and greetIt dripping pens, and in most cases the merchandise are already on their - is a delight too corporation would be in the right. A - way.The come as repre- corporation must retrench as well as most interesting feature of women of Florida other business houses.St. this industrial occupation of the .. alents are engaged Augustine is having sewer islands, which the Japanese foreign / TEN REASONS WHY YOU SHOULD better things for trouble. office declares were taken for mili- and our state. I -The K. of C. has installed a coun- tary uses only, is the sending of 1,000 '- .. .... rith you and know- cil at Sanford. Japanese laborers to work the phos- tave not before had -The Trinity church in St. Augus- phate mines, and the inclusion in the COOK WITH GAS eting. 
tine has a new organ.A budget which will come before the I glad that I can be "book shower" for the public diet in December of an appropriationfor :. ; j state federation library in St. Augustine was a good the investigation of the mineral JIoII' ikeland' my home resources of the islands. 0 ;Dent many happy thing.A woman of Marion county has The expansion of Japan to the re so many friends just finished a life of eightyeightyears South Pacific has created a great deal 1. Convenience 6. Speed ere made and en- filled with beautiful service to of rejoicing among the Japanese, who t him who was to her neighbors and her own large fam- look on it as another step towards 2. Comfort 7. Efficiency .u, ion, and where my ily. And the motto that she kept her Japan's domination of the Pacific. lunched our little household living up to all these years 3. Cleanliness 8. Safety ul sea of a happy was: "No unkind word may bfe said JOO BELGIAN GUNS GO TO SCRAP the in this home." HE.l'I I 9. Perfect where people Could anything be I Regulationof '.. to us both and soS fiiner?Miami Metropolis.The I 4. Simplicity efforts and seem- Democrats will have a clear Worn Out by Use of French Shells, Heat with any successes majority of fourteen in the senate Which Were Not Suitable 5. Accessibility p ?. glows with glad- and a majority of twenty-three in the 10. Economy : rivileged to attend house. That is as large a majorityas Dunkirk, France.-The depressionof . -. it it is as the 'full any party ought to have. The more the Belgians quartered here was i over' because the evenly parties are divided the better increased a few days ago when a file eland. legislation can be expected.OcalaBanner. of nearly 400 field guns was dragged 7. EFFICIENCYEfficient 1 her rapid growth, through by horses that looked as .:hr er splendid church- -Florida now wants a big crop of weary and melancholy as their riders. '. 
st creditable daily winter tourists, but worse than all a These were the guns that had defended because the fire need not be lighted until all the foodjis lie pride and enthu- big crop of permanent settlers who Antwerp as wel las they could. magnificent public know how to make the soil bring For lack of ammunition, bought from ready to be cooked. Because the moment the cooking :is done t embodiment of that forth an abundance. The soil will do the Krupps for delivery in June, but ates our club work its part if industry and intelligence delivery of which was deferred, they the fire is turned off and all expense ceases. Because there is no 't: ay receive our state i are applied to it intelligently.OcalaBanner. were obliged to use French shells . '" her gates with much that were not fitted to them, and waste as the fire is concentrated on the place where it is needed o Six short years -An interesting test of the child which tore the rifling out. The guns *' isband's election as labor laws of Florida might be madein were going to the scrap head, and the and none of the heat escapes up the chimney. Because when .' took us to Tallahasa the instance of the boy preacher horses, after being rested, will draw season our little who is said to be talking every night back French 3-inch field' guns in their you cook with Gas you utilize every cent's worth of fire iyou.buy. t progressingcouldwith to crowded houses in the western places. what it is to- part of the state. Should "Charley"be it is among its mirh allowed to preach when a vaude- CHURCH WOMEN: HUSK CORNOn a live, progressive ville child is forbidden to sing and ys alert in every dance?Miami Metropolis. Farmer's Offer They Add to Aid ORLANDO WATER & LIGHT CO. 
ovement-it is an -Instead of starting a movementto Society Fund ur federatiorr to as- go away across the sea to induce immigration from among people who Marshalltown, la.-When Charles GAS DEPARTMENT too, to attend the speak a different language from our Miller, a Jasper county farmer, of- , state federation be- own, why not try to lure a few thou- fered the Ladies' Aid Society of the , at work it is doing sand good Americans from the Christian church at Kellogg an acre 1 . '- nents it represents. North?-Tarpon Leader. of his best corn if the women would in the past I expect -The camphor farm at Satsuma pick it, he found he could not run a'' . Beautiful Lake Eola I come from the ef- began the distilling of camphor today "bluff." I clubs and this fed- and will continue until all the The women snapped up the offer ina ippy to be with you leaves fit to harvest go through the hurry. Attired in overalls or in old Home Only LET ME BEAUTIFY YOUR GROUNDS you of my most mill. Prof. Richtmann is also getting clothes they in two hours picked and f in your delibera- ready to put out 500 more acres of cribbed the entire acre, which yielded $6,000. WHETHER A COUNTRY ESTATEOR trees.-Palatka Daily Item. forty-eight bushels. A citizen who was interested in the society's work A SMALL CITY LOT :OOK AT GUESTS' GERMAN: SPIES WORK IN LUNATIC -. offered the women eighty cents a This is an ideal home and a great .BOWS ASYLUMUse : bushel for the corn, and an addiionalfive bargain. Owner is a non-resident, BLUE PRINTS AND DETAILED PLANTING IN- cents if they husked it. This and must sell. Corner lot 75x150 :es to Use_ Chafing Red Cross Flags to Direct Fire, they did.Three. feet. Beautiful new eight room STRUCTIONS AT REASONABLE PRICES ynAni "Tables Vj Till 13; Are Executed other citizens agreed to do- house, elegantly finished and modern. 
- ago Hotel London.-The Standard's Paris cor- nate five cents for each bushel eafter two girls, col- respondent says that German spies husked, so that the forty-eight bush- New California Bunga- Werner F. NehrIinLANDSCAPE . will prepare your have been stationed in the most unlikely els brought the church women $1.05a the table, "while you places. In Lorraine the Ger- bushel or $50.40, which goes into : lows, Five Rooms appen to dine at one mans used a lunatic asylum as a reg- the society's treasury. ARCHITECTAND and Bath Only ago hotels. ular spy depot. All the doctors and - e selected for beauty, most of the attendants deserted the BUTTON ON CAP SAVES BOY $2000These ment and ability to institution with the approach of the CONSULTING HORTICULTURIST pie and not feel em- French army, and their places were Janesville, Wis.-Clarence Hogan, are undoubtedly the greatest N If contemplate planting a citrus grove taken by spies.Fighting aged nine years, owes his life to the bargains in the city. Located near it will pay you to get my advice. ss Kathleen Galavan, proceeded around the fact that he had a good, old fashioned Lake Eola, and only ten minutes walk I idan road, and :Miss place for several days, and by a clever button sewed on the top of his cap. from the Court House. These little Local Agent for BUCKEYE NURSER- iti, No. 316 South use of Red Cross flags the spies His brother, John, aged ten years, homes are beautifully finished and S I IES, Tampa, Fla. . laywood. were able to direct the German artil- found a loaded revolver. Placing the modern in every respect. Can be k bbsters, frogs' legs, lery fire, with deadly effect. weapon directly on top of his little bought on reasonable terms. Office Above PEOPLES NATIONAL BANK ewed: chicken and sim- Fifteen wer executed in the asylum i' brother's head, he pulled the trigger. 
PHONE 570 n-io-tf ,1 rith a say of the in- the early hours of Friday morn- younger child suffered but slight in- I be brought to the ta- ing a man was shot by a sentry while jury. John got off with a whipping.ORLANDO I I 114 South Orange Are, Ground Floor Ji prepar d. climbing the wall of the Woolwich Phone No. 311 1 arsenl. The wound was not fatal and I 1I 7 SETS Ot! TWINS the man now is in the hands of the : 1 military authorities. Furnished Homes for Sale nsecutive inTJirth and 1-- __ __ ..., ni: im/mmp/ Atlantic Coast Line CO. 5-TOURIST from TRAINS ... NORTH AND WEST Special for 10 DaysNo. ,, . ., ...,.... -' .. ..k. o < "VJ >V. STREET "Dixie Flyer" 14 Rooms with two baths; No. 3. 4 Room cottage; modern "Dixie Limited""Seminole centrally located. Owner's price, throughout; several bearing , orange 0'1': .; : Limited""Montgomery !' ..,. .. ."+- ... > .:- -L: r 122 Route" $5,000. and grapefruit trees. Owner's price, 2500. .1 .t. '''I; "South Atlantic Limited" No. 2. 5 Room bungalow, on beau- Then homes are all ready to step I .. Pullman cars from Chicago, St. Louis, tiful lake, with dock and boat; lot into and hang up your hat. I have .r '-. 'i-:" :g at Low- Indianapolis, Cincinnati, Louisvilleand Jacksonville 75x450 feet, all set to young budded particular properties for particular intermediate points to citrus trees. Owner's price, $3,200. people, city and county. See- Fla. r ces Cars -. Dining ,s via CHADWICKThe r : :ombine. ,, 1 ". .'-': ",!" .". -. :, t .1'l :.*. '., inventions usedistruetion Standard ATLANTIC Railroad COAST of the LINE SouthJ. "_ i -. G. KIRKLAND, D. P. A., Man That Sells Real Estate. 19 W. Church Street. Hillsboro Hotel, Tampa, Fla. A. W. FRITOT, D. P. A., Watch this space for change in 10 day?. 38 West Bal St., Jacksonville, Fla. - .I' \ ._.,... ....,.,_ ....... ___ ...... -27 --' A.-"-' ,. _ ,.,. ..".'."'v "", 'c. ..- "" -- "' .... "..: '.:r ........... ._ V'.l. ""f"' "rr'"t. :' : I .s : 4''It"" f &. "r--;r'i -. .... ij"r_' -' r-i . 
' _ - It 2. ,. ' r'II' . ,r h. .: ; ! i \ .of !; I _. c' ! _: '.. NOTES FROM HIGH SCHOOL !I I a Many Events Transpire to Make I II [fIo the. Week a Notable and II Interesting StudentsBy One for I! J")' ; # ( Arthur hey, Class 1915): BIG is Not The last school day of the week was :'.-:::> ...... .. 'I Starting not a red letter day for brilliant class '. . RightIf I room work. Everybody seemed to be 1".J , Starting ' "t . thankful that one day's respite from \.. J. " our wealthiest men had waited to study had been allowed by the"powers I I start their bank accounts with .a BIG that be." But the general sentiment I . f amount,-they would still be waiting. seemed to be that every one would ftf:1.7 Little deposits grow to big sums. It's have felt a great deal thankfuller if '. ;r.5't.: :. " / the little business that has the best chance they had not been obliged to returnto ; ; t. . to become big. It is the young man who school on Friday. o can take care of the small sums who will --0-- . be able to handle a big business. Various parties of school boys . '.:.-. .'.-.. ; Y ' A We encourage small deposits.Ve wel-. sought the quiet, sequestered shadesof rUfI',";".( '.,,\X\.J rf) \ a..... , .,; come new accounts of $1.00.Ve' pay 4 the sylvan forests on the day of $ " '. t per cent. thanksgiving, hoping there to find anil! Sneak Thieves hold no dread] ; I that peace and quiet which would 4 'J.1 STATE BANK soothe their troubled spirits and calm toy those who keep. their. ,-,.l-" I . their chaotic thoughts, thus enabling " , "'"" * ales in of Orlando them to return with philosophic res- our vault. *. ., ignation to the dull tasks and monot- .. onous round of their scholastic duties. Safe Deposit odes ,: c. -- -- E . ... Many hunting stories, large and :. $1 per year and up. ; i small, have been going the rounds of I .\v ' the High School, relating the marvel- ::. :.. .. lous bags of game and catches of fish ! on Thanksgiving. Sufficient reliable ... . 
data has not yet been collected to . : . prove who 'has put out the best line - . of stories.A . - '.-, Mrs. J. F. Halliday and two chil- ---v- s : V ;: ' LOCAL AND PERSONAL : : the dren have arrived in Orlando from strictly senior affair was \ hunting trip made by Kimble Hughes, Atlanta, to spend the winter, and are - lIre and Mrs. L. F. Pugh of Jack- pleasantly located at the Oakhurst.Mr. Willis Rogers and Carl Henderson. / i sonville, spent yesterday in Orlando. Bright and early Thanksgiving morn " M. P. Lipe, the clever traveling these seniors hied them to the woody J Mr. J. B. Hinson, of St. Paul, Minn., representative of the L. C. Smith & wilds and dingly dells of the neigh- D 1 I; the where F :' Iiib3lj ::0' those at forests #r was among registering Bros., typewriters, was in Orlando boring they presum- : ; San Juan yesterday. yesterday on business.Mr. ably spent the day quoting poetry and admiring the landscape, baskingin Mr. Russell Wilson, of Fairmont, the bright sunlight and revelling .. .. . and Mrs. John Sellers will return ( .' ': .:. .. ..,.. :. . '.', W. Va., is located at the San Juan fora from Ocala in the balmy breezes. : V V " today ; having mo- .. ... stay of several days in Orlando. --0of . tored over Thursday to spend the i .. - weekend.Mr. One the solitary hunters who Mrs., Cary Hand left for Jacksonville I bore the High School colors into the Miss Ruth Hendricks went over to I Messr. Allan Cohoon, A. 1.. Beck Mrs. J. Mortimer Smith and little c. where she will spend a coupleof I John McCamy left last night forests was Archie Braddock, a jun- DeLand yesterday where she will attend and D. J. Dykes returned yesterday daughter, Jacquelin, who motored, to weeks with relatives and friends. for Plant City, where he will tarry ior, who went far afield with his the Miller-Bielby wedding which afternoon from a very successful Orlando from Miami to visit Mts. for a day or two and then go to Jacksonville faithful gun in search of game. 
takes place in that town Monday hunting trip down in the southern Smith's mother, Mrs. A. R. Daniels, Before having pictures see "l emotion Archie thinks one of the chief bene- evening.Mr. part of the s ate. They report a on East street, returned, to their pictures" at Evans', 100 North I fits to be derived from hunting aloneis "stem-winding" time-which, plainly home in the Magic 'City, Friday. Main. 11-25-tf If it is good printing that is want- that no one can dispute your sto Carl T. Kuhl, the clever young interpreted, means plenty of game. . aviator, whose biplane flights are the ries. ed, see us about it. Makers of print- Messrs. George Beard and E. "W. of the fair is Flor- novelty , Ellerbe greatest a Mr. and Mrs. D. R. came --0A - ing that makes good. The Sentinel The work of cleaning out the hyacinths -I Henderson have joined the Pennsylvania - ida boy, born and raised in Orlando. . down last night from Sanford to very delightful fishing excursionwas v. phone 24. and other obnoxious growth ranks of the. winters visitors to He has been in the . spend Sunday with relatives. enjoyed by Fildes Thresher and flying game only from Lake Hardeman, in the eastern the City Beautiful, having arrived . two and shows great profi- brother, who early on the day of years I The band concert last night on the part of the city, has just been completed Thursday from Erie. This is ''their The Sentinel is the shopping guide thanks sought out the placid shoresof ciency.-Ocala Star. I corner of Orange avenue and Pine by he contractors, Messrs. J7TI: first visit to this part of Florida,-an for Orange county. Read the adver- the sun-kissed lakes, there unlim- street drew a large crowd who greatly Culbreth and James Stafford, and they seemed very favorably improssed The box for the Church Home and tisements carefully. bering their rods for the of . enjoyed the excelelnt music. 
tempting the finny denizens purpose of the Hospital at Orlando is being packedby urged over to the city in an excel- with Orlando in every respect TBey / 108 Sunmerlin Place. 't located lent condition. are at l' Col. and Mrs. T. J. Watkins forth from Mrs. Huguenin this month. As re- Mr. and Mrs. F. E. Moore, of Chi- murky depths to come turned yesterday afternoon from a cago, are among the late arrivals at their aquatic coverts and learn civil- this is the Thanksgiving box, it is desired - visit to their old home in Arcadia. the San Juan hotel. They are here ization from associating with human to make it a generous one. Per- In this issue we publish an adver- Mr J. B. Mills received the sad intelligence - beings. sons who will contribute to this box tisement of Mr. Walter W. Rose. Mr. last night of the death of for an indefinite stay. Bishop Cameron Mann left last (Continued in Tuesday's Sentinel.) are urged to send in their donationsto I Rose is one of our successful dealers Mr. Herman Meislahn, a former well night for Kissimmee. 'From thence Mrs. Huguenin, 315 South New in real estate. He has made many known resident of Orange eo..ty, Mr. and Mrs. Charles Tremaine arrived - ,. he will go to Tampa. last evening from Poughkeepsie Messrs. Claude Nolan, C. S. Rob- York avenue, as promptly as possi-I large sales during the past two seasons which occurred last Sunday night in N. Y., and registered at the San- ertson, Cecil C. Robertson, J. T. Leon- ble. People of all I and recently sold the "Alabama" San Antonio Texas, where he ka4 resided .:. Mrs. II. K. Perry, of Sanford, is in Juan hotel for the season. ard and Ed. Wilson arrived in Orlando urged to contribute, as this institution j property of' Mr. William Chase Tem- since leaving this section. Tie Orlando visiting Mrs. C. V. Rowland, last night in a big Cadillac Eight, is doing a splendid work, entirely un- ple at Winter Park. By all means deceased leaves a wife, daughter: and . South Main in Orlando. 
denominational.-Lakeland Telegram.' I read his advertisement. several grandchildren. on street. Don't let anybody tell you that you the first of its kind seen They are registered at the San Juan 1 . are getting something for nothing. Mr. Eugene Rose,of Vicksburg,Mo., Pay for what you get and see that hotel and will remain over Sunday.Mr. I t.ai . traveling out of New Orleans, spent you get it, or you will live to regret ." yesterday in Orlando on business. that and Mrs. T. B. Gillespie, of Or- you were stung. I lando, arrived in the city yesterdayto Mr. IIerbert Wichtendahl, after a Mrs. Adele Irwin, Mrs. M. B. Tes- spend Thanksgiving with Mrs. Gil- McElroy's PharmacyOrlando visit to relatives and friends, returnedto son and their friend, Mrs. Potter, arrived lespie's parents, Mr. and Mrs. W. S. Tampa yesterday afternoon. last nigh from Washington, D. Fry. Mr. Gillespie returns today, but j his wife will remain for several days. C., to spend the winter. They are Mrs. W. T. Moore, of Mt. Sterling, happily located at the Summerlin.Mr. I I -Palatka Daily Item. Established 34 Years Ky., has arrived to visit the familyof I I Fla. \ } __ :Mr. J. E. Groves, on DeLaney and Mrs. John Jenkins and two inter- ,' --J- Mrs. Fred Wise arrived in street. 'I esting little daughters, Misses Virginia . Orlando yesterday from Davis, Ill., I and Helen, who have been visit- " and are guests of Mr. and Mrs. D. S. Mrs. A. B. Valentine and mother I ing Mrs. C. V. Rowland at her homeon GIFTS : Benage at their home in Concord PRACTICAL of Pittsburg, Pa., passed through Or- i Park. Mr. and Mrs. Wise are on South Main street, returned yesterday - lando yesterday en route to Lakeland. : their honeymoon, having been mar- afternoon to their home in { 'Lt They made the entire trip alone and I i ried on Thanksgiving day, and are Sanford.Mr. FOR XMAS. .* . without a mishap in their Buick making a tour of the South and decided r ,- and Mrs. Southgate arrived .;. .. model 25. 
will return later to They t to pay a visit to the City Beau- : DeLand for the winter. 1! tifuk yesterday on the Tampa Special from ' ---- "Newport, Ky., to spend the winter, as French Ivory Mirrors Kodaks V '4 Ideal Fountafn.. Pens I i has been their usual custom for a i.' I number of years. They spent a couple French Ivory Combs i Perfumes Thermos Bottles For Christmas of weeks in Jacksonville with ' ; I Carafes Brushes Stationery - 1 French Ivory t 'v. . their daughter, Mrs. J. T. Craig.Mr. . Let us send for you by parcel post a Perfume Bottles i-; Card, Gases Safety Razor Outfits " . I BOX OF FRUIT ' and Mrs. A. G. White and maid ..; : l' :. : back home for Christmas. It " will ' and Mrs. Geo. S. Northrup have arrived . Outfits and Gentlemen. - < make a nice rememberance, at a small in Orlando from Montclair, N. Traveling, four Ladies b"/ : . cost- Y., to join the ranks of the winter :Mail Orders Promptly Filled. No Char;e for Mailing Wrights Seed Store visitors now sojourning in this sec- :- , tion from their state. They have , Phone 179 South Orange Avenue pleasant apartments at the New Lu- 4 - cerne. I ,- - . 4'I I : i I BRASS AND IRON BEDS VV I. 'V. I . ' % = That we guarantee not to tarnish or become dull are included :i !; JJ1ffiV [ V I ;. in the BIG SALE now o'r 9 :! \ ;. 7 :; 1 1I. ,: I I. s . \ . \ . " t. V lJ . '. tt t 7 .. ,. . ... . : .f .-- .. t ... ': ; -_ .' ... .., 1 ... i. --.:.. ".. Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2011 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Powered by SobekCM | http://ufdc.ufl.edu/UF00079945/00197 | CC-MAIN-2015-35 | refinedweb | 24,761 | 75.1 |
This list contains the issues raised during the Candidate Recommendation phases of DOM Level 3 Core and Load/Save.
Comments or additions regarding this list must be sent to the DOM public mailing.
Color key: error warning note
.
Node).
The specification is fine since our definition qualifiedName allows an optional prefix. This is a serialization issue and is already handled by our namespace normalization algorithm. We will add the test however.
add a test for this case.
Java binding of DOM L3 LS contains unclear javadoc for getNewLine and setNewLine methods of DOMSerializer class.
From the javadoc it is not clear that what should getNewLine return in case setNewLine is not called before ( default value ). Whether "null" should be returned or the "platform default end-of-line sequence"?
The javadoc for setNewLine is exactly same as getNewLine? There is need for changing the javadoc for getNewLine and setNewLine and make it different to define specific purpose of each method.
Whether the intention is that when "null" value is passed to setNewLine, then the end-of-line sequence need to be defaulted to platform specific one? And getNewLine should never return "null".
Here is snippet of javadoc for getNewLine/setNewLine.
/** *): * <dl> * <dt><code>null</code></dt> * <dd> Use a default * end-of-line sequence. DOM implementations should choose the default * to match the usual convention for text files in the environment being * used. Implementations must choose a default sequence that matches one * of those allowed by section 2.11, "End-of-Line Handling" in [XML 1.0], if the * serialized content is XML 1.0 or section 2.11, "End-of-Line Handling" * in [XML 1.1], if the * serialized content is XML 1.1. </dd> * <dt>CR</dt> * <dd>The carriage-return character (#xD).</dd> * <dt> * CR-LF</dt> * <dd> The carriage-return and line-feed characters (#xD #xA). </dd> * <dt>LF</dt> * <dd> The * line-feed character (#xA). </dd> * </dl> * <br>The default value for this attribute is <code>null</code>. */ public String getNewLine();.
Why does this method, when no second argument is
specified, append the child?
Node.appendChild
already does this - why not make
Node ] );?
This is a Level 1 change and would break existing DOM implementations, so it can't be done.
The namespace for LS events is. not 2003?
It's already in the implementation now. No reason to change this, so no change.
.
The.
We renamed NameList.contains parameter to "str".
Shouldn't the DOMConfiguration parameter 'canonical-form' set 'discard-default-content' to false?
changed.
Should 'canonical-form' have an effect on 'xml-declaration'? There seems to be some potential interaction here, esp. regarding XML 1.1.
changed. if XML 1.1 -> fatal error..the.. | http://www.w3.org/2003/12/22-dom-core-issues/ack_sort.html | CC-MAIN-2013-48 | refinedweb | 450 | 61.83 |
Modeling the relationships between types is a fundamental part of the process of object-oriented design. This chapter shows you how to model those relationships in Java using composition and inheritance. It describes many facets of inheritance in Java, including abstract classes and final classes.
As you progress in an object-oriented design, you will likely encounter objects in the problem domain that contain other objects. In this situation you will be drawn to modeling a similar arrangement in the design of your solution. In an object-oriented design of a Java program, the way in which you model objects that contain other objects is with composition, the act of composing a class out of references to other objects. With composition, references to the constituent objects become fields of the containing object.
For example, it might be useful if the coffee cup object of your program could contain coffee. Coffee itself could be a distinct class, which your program could instantiate. You would award coffee with a type if it exhibits behavior that is important to your solution. Perhaps it will swirl one way or another when stirred, keep track of a temperature that changes over time, or keep track of the proportions of coffee and any additives such as cream and sugar.
To use composition in Java, you use instance variables of one object
to hold references to other objects. For the
CoffeeCup example, you could create a field for coffee within the definition of class
CoffeeCup, as shown below:
// In Source Packet in file inherit/ex1/CoffeeCup.java
class CoffeeCup {

    private Coffee innerCoffee;

    public void addCoffee(Coffee newCoffee) {
        // no implementation yet
    }

    public Coffee releaseOneSip(int sipSize) {
        // no implementation yet
        // (need a return so it will compile)
        return null;
    }

    public Coffee spillEntireContents() {
        // no implementation yet
        // (need a return so it will compile)
        return null;
    }
}

// In Source Packet in file inherit/ex1/Coffee.java
public class Coffee {

    private int mlCoffee;

    public void add(int amount) {
        // No implementation yet
    }

    public int remove(int amount) {
        // No implementation yet
        // (return 0 so it will compile)
        return 0;
    }

    public int removeAll() {
        // No implementation yet
        // (return 0 so it will compile)
        return 0;
    }
}
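One plausible way to fill in these method bodies, together with a small driver that exercises the has-a relationship, is sketched below. The method bodies and the CompositionDemo class are illustrative assumptions rather than part of the source packet; they simply track milliliters of coffee moving between objects.

```java
// Illustrative implementations (not from the source packet): Coffee
// tracks milliliters, and CoffeeCup delegates to its contained Coffee.
class Coffee {
    private int mlCoffee;

    public void add(int amount) { mlCoffee += amount; }

    public int remove(int amount) {
        int removed = Math.min(amount, mlCoffee); // can't remove more than is there
        mlCoffee -= removed;
        return removed;
    }

    public int removeAll() { return remove(mlCoffee); }
}

class CoffeeCup {
    private Coffee innerCoffee; // the has-a relationship

    public void addCoffee(Coffee newCoffee) {
        if (innerCoffee == null) {
            innerCoffee = newCoffee;                // the cup was empty
        } else {
            innerCoffee.add(newCoffee.removeAll()); // pour into the existing coffee
        }
    }

    public Coffee releaseOneSip(int sipSize) {
        Coffee sip = new Coffee();
        if (innerCoffee != null) {
            sip.add(innerCoffee.remove(sipSize));
        }
        return sip;
    }

    public Coffee spillEntireContents() {
        Coffee contents = innerCoffee;
        innerCoffee = null; // the cup no longer has-a Coffee
        return contents;
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        CoffeeCup cup = new CoffeeCup();
        Coffee pot = new Coffee();
        pot.add(250);
        cup.addCoffee(pot);                 // the cup now contains 250 ml
        Coffee sip = cup.releaseOneSip(25);
        System.out.println(sip.removeAll());                       // prints 25
        System.out.println(cup.spillEntireContents().removeAll()); // prints 225
    }
}
```

Note how spillEntireContents sets innerCoffee back to null, so after a spill the cup once again contains no Coffee at all.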
In the example above, the
CoffeeCup class contains a reference to one other
object, an object of type
Coffee. Class
Coffee is defined in a
separate source file.
The relationship modeled by composition is often referred to as the
"has-a" relationship. In this case a
CoffeeCup has
Coffee. As you can see from this
example, the has-a relationship doesn't mean that the containing object must have a constituent object at
all times, but that the containing object may have a constituent object at some time. Therefore the
CoffeeCup may at some time contain
Coffee, but it need not
contain
Coffee all the time. (When a
CoffeeCup object doesn't
contain
Coffee, its
innerCoffee field is
null.)
In addition, note that the object contained can change throughout the
course of the containing object's life.
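To make the has-a relationship concrete, here is one hypothetical way the empty method bodies above might be filled in. The isEmpty() helper is not part of the book's CoffeeCup class; it is added here only to show that the contained Coffee reference may be null some of the time.

```java
// Hypothetical sketch only: the book leaves these bodies empty at this point.
class Coffee {

    private int mlCoffee;

    public void add(int amount) {
        mlCoffee += amount;
    }

    public int remove(int amount) {
        int removed = Math.min(amount, mlCoffee);
        mlCoffee -= removed;
        return removed;
    }

    public int removeAll() {
        return remove(mlCoffee);
    }
}

class CoffeeCup {

    private Coffee innerCoffee; // null whenever the cup holds no coffee

    public void addCoffee(Coffee newCoffee) {
        innerCoffee = newCoffee; // the cup now "has" coffee
    }

    // Illustrative helper, not in the book's class: the has-a relationship
    // holds only some of the time, so the field may be null.
    public boolean isEmpty() {
        return innerCoffee == null;
    }
}
```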
[bv: need to add UML diagram for composition, and explain the difference between composition and agregation and why I draw my diagrams like I do.]
As you partition your problem domain into types you will likely want to model relationships in which
one type is a more specific or specialized version of another. For example you may have identified in your
problem domain two types,
Cup and
CoffeeCup, and you want to
be able to express in your solution that a
CoffeeCup is a more specific kind of
Cup (or a special kind of
Cup). In an object-oriented design, you
model this kind of relationship between types with inheritance.
The relationship modeled by inheritance is often referred to as the "is-a" relationship. In the case of
Cup and
CoffeeCup, a "
CoffeeCup is-a
Cup." Inheritance allows you to build hierarchies of classes, such as the one shown in
Figure 5-1. The upside-down tree structure shown in Figure 5-1 is an example of an
inheritance hierarchy displayed in
UML form.
Note that the classes become increasingly more specific as you traverse down the tree. A
CoffeeCup is a more specific kind of
Cup. A
CoffeeMug is a more specific kind of
CoffeeCup. Note also
that the is-a relationship holds even for classes that are connected in the tree through other classes. For
instance, a
CoffeeMug is not only a more specific version of a
CoffeeCup, it is also a more specific version of a
Cup. Therefore,
the is-a relationship exists between
CoffeeMug and
Cup: a
CoffeeMug is-a
Cup.
Figure 5-1. The is-a relationship of inheritance
[bv: mention this is a UML diagram]
When programming in Java, you express the inheritance relationship with the
extends keyword:
class Cup { }

class CoffeeCup extends Cup { }

class CoffeeMug extends CoffeeCup { }
In Java terminology, a more general class in an inheritance hierarchy is called a
superclass.
A more specific class is a
subclass.
In Figure 5-1,
Cup is a superclass of both
CoffeeCup and
CoffeeMug. Going in the opposite direction, both
CoffeeMug
and
CoffeeCup are subclasses of
Cup. When two classes are
right next to each other in the inheritance hierarchy, their relationship is said to be direct.
For example,
Cup is a
direct superclass of
CoffeeCup, and
CoffeeMug is a
direct subclass
of
CoffeeCup.
The act of declaring a direct subclass is referred to in Java circles as
class extension. For
example, a Java guru might be overheard saying, "Class
CoffeeCup
extends class
Cup." Owing to the flexibility of the English language, Java
in-the-knows may also employ the term "subclass" as a verb, as in "Class
CoffeeCup
subclasses class
Cup." One other way to say the same thing is, "Class
CoffeeCup descends from class
Cup."
An inheritance hierarchy, such as the one shown in Figure 5-1, defines a
family of types.
The most general class in a family of types--the one at the root of the inheritance hierarchy--is called the
base class. In Figure 5-1, the base class is
Cup. Because every class
defines a new type, you can use the word "type" in many places you can use
"class." For example, a base
class is a base type, a subclass is a
subtype, and a direct superclass is a direct supertype.
In Java, every class descends from one common base class:
Object. The
declaration of class
Cup above could have been written:
class Cup extends Object { // "extends Object" is optional
}
This declaration of
Cup has the same effect as the earlier one that excluded the
"
extends Object" clause. If a class is declared with no
extends clause, it by default extends the
Object class. (The
only exception to this rule is class
Object itself, which has no superclass.) The
inheritance hierarchy of Figure 5-1 could also have shown the
Object class hovering
above the
Cup class, in its rightful place as the most super of all superclasses. In this
case, class
Object remained invisible, because the purpose of the figure was to focus
on one particular family of types, the
Cup family.
In Java, a class can have only one direct superclass. In object-oriented parlance, this is referred to as single inheritance . It contrasts with multiple inheritance , in which a class can have multiple direct superclasses. Although Java only supports single inheritance of classes through class extension, it supports a special variant of multiple inheritance through "interface implementation." Java interfaces, and how a class implements them, will be discussed in Chapter 7.
Modeling an is-a relationship is called inheritance because the subclass inherits the
interface and, by default, the implementation of the superclass.
Inheritance of interface
guarantees that a
subclass can accept all the same messages as its superclass. A subclass object can, in fact, be used
anywhere a superclass object is called for. For example, a
CoffeeCup as defined in
Figure 5-1 can be used anywhere a
Cup is needed. This substitutability of a subclass (a
more specific type) for a superclass (a more general type) works because the subclass accepts all the same
messages as the superclass. In a Java program, this means you can invoke on a subclass object any method
you can invoke on the superclass object.
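A short sketch of that substitutability follows. The String parameter and return value are simplifications for illustration only; the book's addLiquid() takes a Liquid and returns void.

```java
class Cup {
    // Simplified signature for illustration; the book's version
    // takes a Liquid parameter and returns void.
    public String addLiquid(String liq) {
        return liq + " added";
    }
}

class CoffeeCup extends Cup { }

class SubstitutionDemo {

    // This method asks for a Cup...
    static String fill(Cup cup) {
        return cup.addLiquid("water");
    }

    public static void main(String[] args) {
        // ...but a CoffeeCup can be used anywhere a Cup is called for,
        // because it accepts all the same messages.
        System.out.println(fill(new Cup()));
        System.out.println(fill(new CoffeeCup()));
    }
}
```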
This is only half of the inheritance story, however, because by default, a subclass also inherits the entire implementation of the superclass. This means that not only does a subclass accept the same messages as its direct superclass, but by default it behaves identically to its direct superclass when it receives one of those messages. Yet unlike inheritance of interface, which is certain, inheritance of implementation is optional. For each method inherited from a superclass, a subclass may choose to adopt the inherited implementation, or to override it. To override a method, the subclass merely implements its own version of the method.
Overriding methods is a primary way a subclass specializes its behavior with respect to its superclass. A subclass has one other way to specialize besides overriding the implementation of methods that exist in its direct superclass. It can also extend the superclass's interface by adding new methods. This possibility will be discussed in detail in the next chapter.
Suppose there is a method in class
Cup with the following signature:
public void addLiquid(Liquid liq) { }

The addLiquid() method could be invoked on any Cup object. Because CoffeeCup descends from Cup, the addLiquid() method could also be invoked on any CoffeeCup object.
If you do not explicitly define in class
CoffeeCup a method with an identical
signature and return type as the
addLiquid() method shown above, your
CoffeeCup class will inherit the same implementation (the same body of code) used
by superclass
Cup. If, however, you do define in
CoffeeCup an
addLiquid() method with the same signature and return type, that implementation
overrides the implementation that would otherwise have been inherited by default from
Cup.
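The two outcomes just described can be sketched side by side. The class names InheritingCoffeeCup and OverridingCoffeeCup are invented here so both variants can coexist in one file, and the methods return the swirl direction as a String (rather than printing) purely to make the difference easy to see.

```java
class Cup {
    public String addLiquid(String liq) {
        return "swirling clockwise";
    }
}

// Declares no addLiquid() of its own, so it inherits
// Cup's implementation unchanged.
class InheritingCoffeeCup extends Cup { }

// Declares a method with the same signature and return type,
// so this implementation overrides the one from Cup.
class OverridingCoffeeCup extends Cup {
    @Override
    public String addLiquid(String liq) {
        return "swirling counterclockwise";
    }
}
```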
When you override a method, you can make the access permission more public, but you cannot make
it less public. So far, you have only been introduced to two access levels, public and private. There are,
however, two other access levels that sit in-between public and private, which form the two ends of the
access-level spectrum. (All four access levels will be discussed together in Chapter 8.) In the case of the
addLiquid() method, because class
Cup declares it with public
access, class
CoffeeCup must declare it public also. If
CoffeeCup attempted to override
addLiquid() with any other
access level, class
CoffeeCup wouldn't compile.
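A sketch of the rule, using a hypothetical rinse() method that is not from the book: widening access when overriding is legal, while narrowing it is a compile-time error.

```java
class Cup {
    // Hypothetical method, declared protected in the superclass.
    protected String rinse() {
        return "rinsing a cup";
    }
}

class CoffeeCup extends Cup {
    // Widening protected to public is allowed when overriding.
    @Override
    public String rinse() {
        return "rinsing a coffee cup";
    }

    // Narrowing would not compile:
    // private String rinse() { ... }  // error: cannot reduce visibility
}
```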
For an illustration of the difference between inheriting and overriding the implementation of a method, see Figure 5-2. The left side of this figure shows an example of inheriting an implementation, whereas the right side shows an example of overriding the implementation.
The method in question is the
familiar
addLiquid() method. In the superclass,
Cup, a
comment indicates that the code of the method, which is not shown in the figure, will cause the liquid to
swirl clockwise as it is added to the cup. Liquid added to an instance of the
CoffeeCup class defined on the left will also swirl clockwise, because that
CoffeeCup inherits
Cup's implementation of
addLiquid(), which swirls clockwise. By contrast, liquid added to an instance of
the
CoffeeCup class defined on the right will swirl counterclockwise, because this
CoffeeCup class overrides
Cup's implementation with one of its
own. A more advanced
CoffeeCup could override
addLiquid() with an implementation that first checks to see whether the coffee cup
is in the northern or southern hemisphere of the planet, and based on that information, decide which way
to swirl.
Figure 5-2. Inheriting vs. overriding the implementation of a method
In addition to the bodies of public methods, the implementation of a class includes any private
methods and any fields defined in the class. Using the official Java meaning of the term "inherit," a
subclass does not inherit private members of its superclass. It only inherits accessible members. Well-designed
classes most often refuse other classes direct access to their non-constant fields, and this policy
generally extends to subclasses as well. If a superclass has
private fields, those
fields will be part of the object data in its subclasses, but they will not be "inherited" by the subclass.
Methods defined in the subclasses will not be able to directly access them. Subclasses, just like any other
class, will have to access the superclass's
private fields indirectly, through the
superclass's methods.
If you define a field in a subclass that has the same name as an accessible field in its superclass, the
subclass's field hides the
superclass's version. (The type of the variables need not match, just
the names.) For example, if a superclass declares a public field, subclasses will either inherit or hide it.
(You can't override a field.) If a subclass hides a field, the superclass's version is still part of the
subclass's object data; however, the subclass doesn't "inherit" the superclass's version of the field, because
methods in the subclass can't access the superclass's version of the field by its simple name. They can
only access the subclass's version of the field by its simple name. You can access the superclass's version
by qualifying the simple name with the
super keyword, as in
super.fieldName. (More on
super in the next section.)
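A sketch of field hiding with a hypothetical label field. Note that, unlike method invocation, field access is resolved at compile time by the type of the reference, so the hidden field is also reachable by upcasting.

```java
class Cup {
    public String label = "generic cup";
}

class CoffeeCup extends Cup {

    public String label = "coffee cup"; // hides Cup's label

    public String ownLabel() {
        return label;        // the subclass's version, by its simple name
    }

    public String hiddenLabel() {
        return super.label;  // the superclass's version, via super
    }
}
```

Given a CoffeeCup c, the expression ((Cup) c).label also yields "generic cup", because field access, unlike method invocation, is not dynamically bound.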
Java permits you to declare a field in a subclass with the same name as a field in a superclass so you can add fields to a class without worrying about breaking compatibility with already existing subclasses. For example, you may publish a library of classes that your customers can use in their programs. If your customers subclass the classes in your library, you will likely have no idea what new fields they have declared in their subclasses. In making enhancements to your library, you may inadvertently add a field that has the same name as a field in one of your customer's subclasses. If Java didn't permit field hiding, the next time you released your library, your customer's program might not run properly, because the like-named field in the subclass would clash with the new field in the superclass from your library. Java's willingness to tolerate hidden fields makes subclasses more accepting of changes in their superclasses.
[bv: See Behind the Scenes in this chapter for a description of object images on a JVM heap?]
As you perform an object-oriented design, you may come across classes of objects that you would
never want to instantiate. Those classes will nevertheless occupy a place in your hierarchies. An example
of such a class might be the
Liquid class from the previous discussions. Class
Liquid served as a base class for the family of types that included subclasses
Coffee,
Milk, and
Tea. While you can
picture a customer walking into a café and ordering a coffee, a milk, or a tea, you might find it unlikely
that a customer would come in and order a "liquid." You might also find it difficult to imagine how you
would serve a "liquid." What would it look like? How would it taste? How would it swirl or gurgle?
Java provides a way to declare a class as conceptual only, not one that represents actual objects, but
one that represents a category of types. Such classes are called abstract classes.
To mark a class as abstract in Java, you merely declare it with the
abstract
keyword. The
abstract keyword indicates the class should not be instantiated.
Neither the Java compiler nor the Java Virtual Machine will allow an abstract class to be instantiated. The
syntax is straightforward:
// In Source Packet in file inherit/ex6/Liquid.java
abstract class Liquid {

    void swirl(boolean clockwise) {
        System.out.println("One Liquid object is swirling.");
    }

    static void gurgle() {
        System.out.println("All Liquid objects are gurgling.");
    }
}
The above code makes
Liquid a place holder in the family tree, unable to be an
object in its own right.
Note that the
Liquid class shown above still intends to implement a default
behavior for swirling and gurgling. This is perfectly fine; however, classes are often made abstract when it
doesn't make sense to implement all of the methods of the class's interface. The
abstract keyword can be used on methods as well as classes, to indicate the method
is part of the interface of the class, but does not have any implementation in that class. Any class with one
or more abstract methods is itself abstract and must be declared as such. In the
Liquid class, you may decide that there is no such thing as a default swirling
behavior that all liquids share. If so, you can declare the
swirl() method abstract
and forgo an implementation, as shown below:
// In Source Packet in file inherit/ex7/Liquid.java
abstract class Liquid {

    abstract void swirl(boolean clockwise);

    static void gurgle() {
        System.out.println("All Liquid objects are gurgling.");
    }
}
In the above declaration of
Liquid, the
swirl() method is
part of
Liquid's interface, but doesn't have an implementation. Any subclasses that
descend from the
Liquid class shown above will have to either implement
swirl() or declare themselves abstract. For example, if you decided there were so
many varieties of coffee that there is no sensible default implementation for
Coffee,
you could neglect to implement
swirl() in
Coffee. In that case,
however, you would need to declare
Coffee abstract. If you didn't, you would get a
compiler error when you attempted to compile the
Coffee class. You would have to
subclass
Coffee (for example:
Latte, Espresso,
CafeAuLait) and implement
swirl() in the subclasses, if you wanted
the
Coffee type to ever see any action.
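If, on the other hand, Coffee does implement swirl(), it can be instantiated directly. In this sketch, swirl() returns a String rather than printing, purely so the result is easy to inspect.

```java
abstract class Liquid {
    // No implementation here: each non-abstract subclass must supply one.
    abstract String swirl(boolean clockwise);
}

class Coffee extends Liquid {
    // Coffee is not abstract, so it must implement swirl().
    String swirl(boolean clockwise) {
        return "Coffee swirling " + (clockwise ? "clockwise" : "counterclockwise");
    }
}

class AbstractDemo {
    public static void main(String[] args) {
        // new Liquid() would not compile: Liquid is abstract.
        Liquid liquid = new Coffee(); // but a Liquid reference may refer to a Coffee
        System.out.println(liquid.swirl(true));
    }
}
```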
Most often you will place abstract classes at the upper regions of your inheritance hierarchy, and non- abstract classes at the bottom. Nevertheless, Java does allow you to declare an abstract subclass of a non- abstract superclass. For example, you can declare a method inherited from a non-abstract superclass as abstract in the subclass, thereby rendering the method abstract at that point in the inheritance hierarchy. This design implies that the default implementation of the method is not applicable to that section of the hierarchy. As long as you implement the method again further down the hierarchy, this design would yield an abstract class sandwiched in the inheritance hierarchy between a non-abstract superclass and non- abstract subclasses.
Most Java programmers have two hats on their shelf, both of which they wear at different times. Sometimes they wear their "designer" hat, and build libraries of classes for others to use. Other times they wear their "client" hat, and make use of a library of classes created by someone else. Some Java programmers even wear both hats at the same time, completely oblivious to the rules of fashion.
When you put on your "designer" hat and work to build a library of classes that will be distributed to
people you don't know and don't necessarily trust, you will likely encounter situations in which you want
to prevent a client from declaring a subclass of one of the classes in your library. Or you might want to
allow a client to declare a subclass, but you want to prevent them from overriding specific methods of the
superclass. The reason you'll feel the need for this kind of control is that a client could take advantage of
polymorphism to effectively change the behavior of the classes in your library. For example, a
swirl() method of a hot beverage object could be redefined to swirl right out of the
cup and dampen or possibly even scald a customer. Fortunately, Java gives you the
final keyword to prevent just such nightmarish scenarios as that.
If you declare a method
final, no subclass will be allowed to override that
method. If you declare an entire class
final, no other class will be allowed to extend
it. In other words, a class declared
final cannot be subclassed. In an inheritance
diagram, a
final class is the end of the line. No other classes will appear below it.
Subclasses can appear below a non-
final class that contains a
final method, but every subclass will inherit the
final
implementation of the method.
Because marking a class or method
final is so restrictive to clients of the class,
you should use it with caution. Only if you are certain you want to absolutely prevent clients from
declaring a subclass or overriding a method should you use the
final keyword on a
class or method.
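A sketch of both uses of final (the method and class names here are illustrative, not from the book):

```java
class Cup {
    // No subclass may override this method, so no hot beverage
    // can be made to swirl out of the cup.
    public final String swirl() {
        return "safe, gentle swirl";
    }
}

// A final class is the end of the line: it cannot be subclassed.
final class SealedMug extends Cup { }

// Neither of these would compile:
// class TravelMug extends SealedMug { }            // SealedMug is final
// class TrickCup extends Cup {
//     public String swirl() { return "scald!"; }   // swirl() is final
// }
```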
Initialization and inheritance
When an object is initialized, all the instance variables defined in the object's class must be set to proper initial values. While this is necessary, often it is not enough to yield a fully initialized object. An object incorporates not only the fields explicitly declared in its class, but also those declared in its superclasses. To fully initialize an object, therefore, all instance variables declared in its class and in all its superclasses must be initialized.
You can see the inheritance hierarchy for class
Coffee, as
defined above, in Figure 1. This figure, as well as the code above,
shows
Object as having no instance variables. But it is
possible that
Object could have instance variables. The
actual internal make-up of class
Object is a detail
specific to each Java platform implementation. It is extremely likely,
however, that
Object will have no fields in any given Java
platform implementation. Because
Object is the superclass
of all other objects, any fields declared in
Object must
be allocated for every object used by every Java program.
Figure 2 represents just one of many possible schemes for
storing objects on the heap inside the JVM.
Figure 2 shows that the instance data for a
Coffee object
includes each instance variable defined in class
Coffee
and each of
Coffee's superclasses. Both of
Liquid's fields,
mlVolume and
temperature, are part of the
Coffee object's
data, as well as
Coffee's fields:
swirling
and
clockwise. This is true even though
Coffee doesn't actually inherit the
mlVolume and
temperature fields from class
Liquid.
A note on the word "inherit"
In Java jargon, the word "inherit" has a restricted meaning. A subclass inherits only accessible members of its superclasses -- and only if the subclass doesn't override or hide those accessible members. A class's members are the fields and methods actually declared in the class, plus any fields and methods it inherits from superclasses. In this case, because
Liquid's
mlVolume and
temperature fields are private, they are not accessible to
class
Coffee.
Coffee does not inherit those
fields. As a result, the methods declared in class
Coffee
can't directly access those fields. Despite this, those fields are
still part of the instance data of a
Coffee object.
Pointers to class data
Figure 2 also shows, as part of the instance data of the
Coffee object, a mysterious 4-byte quantity labeled
"native pointer to class information." Every Java virtual
machine must have the capability to determine information about its
class, given only a reference to an object. This is needed for many
reasons, including type-safe casting and the
instanceof
operator.
Figure 2 illustrates one way in which a Java virtual machine implementation could associate class information with the instance data for an object. In this figure, a native pointer to a data structure containing class information is stored along with the instance variables for an object. The details of the various ways a JVM could connect an object's data with its class information are beyond the scope of this chapter. The important thing to understand here is that class information will in some way be associated with the instance data of objects, and that the instance data includes fields for an object's class and all its superclasses.
Initializing fields in superclasses
Each class contains code to initialize the fields explicitly declared in that class. Unlike methods, constructors are never inherited. If you don't explicitly declare a constructor in a class, that class will not inherit a constructor from its direct superclass. Instead, the compiler will generate a default constructor for that class. This is because a superclass constructor can't initialize fields in the subclass. A subclass must have its own constructor to initialize its own instance variables. In the class file, this translates to: every class has at least one
<init> method responsible for initializing
the instance variables explicitly declared in that class.
For every object, you can trace a path of classes on an inheritance
hierarchy between the object's class and class
Object. For
the
Coffee object described above and shown in Figures 1
and 2, the path is:
Coffee,
Liquid,
Object. To fully initialize an object, the Java virtual
machine must invoke (at least) one instance initialization method from
each class along the object's inheritance path. In the case of
Coffee, this means that at least one instance
initialization method must be invoked for each of the classes
Coffee,
Liquid, and
Object.
During initialization, an
<init> method may use one
field in calculating another field's initial value. While this is
perfectly reasonable, it brings up the possibility that a field could
be used before it has been initialized to its proper (not default)
initial value. As mentioned earlier in this chapter, Java includes
mechanisms that help prevent an instance variable from being used
before it has been properly initialized. One mechanism is the rule,
enforced by the Java compiler, forbidding initializers from directly
using instance variables declared textually after the variable being
initialized. Another mechanism is the order in which the fields from
each class along an object's inheritance path are initialized: the
"order of initialization.":
Object's fields (this will be quick, because there are none)
Liquid's fields (
mlVolumeand
temperature)
Coffee's fields (
swirlingand
clockwise)
This base-class-first order aims to prevent fields from being used before they are initialized to their proper (not default) values. In a constructor or initializer, you can safely use a superclass's field directly, or call a method that uses a superclass's field. By the time the code in your constructor or initializer is executed, you can be certain that the fields declared in any superclasses have already been properly initialized.
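This base-class-first order can be observed directly by recording when each constructor body runs. The InitLog class here is a hypothetical helper added for this sketch.

```java
// Hypothetical helper that records the order in which constructors run.
class InitLog {
    static StringBuilder log = new StringBuilder();
}

class Liquid {
    Liquid() {
        InitLog.log.append("Liquid ");
    }
}

class Coffee extends Liquid {
    Coffee() {
        // By the time this body executes, Liquid's constructor
        // (and Object's) has already completed.
        InitLog.log.append("Coffee ");
    }
}

class OrderDemo {
    public static void main(String[] args) {
        new Coffee();
        System.out.println(InitLog.log); // prints: Liquid Coffee
    }
}
```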
For example, you could safely use the
temperature variable
declared in class
Liquid when you are initializing the
swirling variable declared in class
Coffee.
(Perhaps if the temperature is above the boiling point for coffee, you
set
swirling to false.) If
temperature were
not private, class
Coffee would inherit the field, and you
could use it directly in an initializer or constructor of class
Coffee. In this case,
temperature is private,
so you'll have to use the
temperature field indirectly,
through a method:
// In source packet in file init/ex15/Liquid.java
class Liquid {

    private int mlVolume;
    private float temperature; // in Celsius

    public Liquid() {
        mlVolume = 300;
        temperature = (float) (Math.random() * 100.0);
    }

    public float getTemperature() {
        return temperature;
    }

    // Has several other methods, not shown...
}

// In source packet in file init/ex15/Coffee.java
class Coffee extends Liquid {

    private static final float BOILING_POINT = 100.0f; // Celsius

    private boolean swirling;
    private boolean clockwise;

    public Coffee(boolean swirling, boolean clockwise) {

        if (getTemperature() >= BOILING_POINT) {
            // Leave swirling at default value: false
            return;
        }

        this.swirling = swirling;
        if (swirling) {
            this.clockwise = clockwise;
        }
        // else, leave clockwise at default value: false
    }

    // Has several methods, not shown,
    // but doesn't override getTemperature()...
}
In the example, the constructor for
Coffee invokes
getTemperature() and uses the return value in the
calculation of the proper initial value of
swirling and
clockwise.
getTemperature() returns the
value of the
temperature variable; thus, the constructor
for
Coffee uses a field declared in
Liquid.
This works because, by the time the code inside
Coffee's
constructor is executed, the instance variables declared in
Liquid are guaranteed to have already been initialized to
their proper starting values.
[bv: want to have description of object image on the JVM heap here?]
[bv: want to mention that Object can be redefined?]
The no-arg.
Could perhaps show how class vars can be used to keep track of
all the instances of the class and then how a
gurgleAllObjects()
class method can send
gurgle() methods to all objects.
The CD-ROM contains several examples from this chapter, all of which are in subdirectories of the
inherit directory. The files for example one are in the
ex1
subdirectory, the files for example two are in
ex2, and so on.
Example one is simply the
CoffeeCup and
Coffee classes,
shown above, that illustrate composition. In this version of
CoffeeCup, the
innerCoffee instance variable is a reference to an object of type
Coffee. The files are in the
inherit/ex1 directory.
Example two is the polymorphism example. All of the code for this example is shown above as part of
the text of this chapter. The files are in the
inherit/ex2 directory. In this example,
the
addLiquid() method of class
CoffeeCup uses
polymorphism to call the appropriate
swirl() method on an object that either is or
descends from class
Liquid. If you execute the Java application,
Example2, it will print out the output:
Liquid Swirling
Coffee Swirling
Milk Swirling
Example three is an example of poor design that doesn't take advantage of polymorphism. Only the
UglyCoffeeCup class from this example is shown in the text of this chapter. (The
rest aren't shown because this example doesn't provide a positive role model.) All the files exist in the
inherit/ex3 directory of the CD-ROM, however, so you can run the application
Example3. When you run
Example3, you get the same output as
Example2 gives you. The example works, it just doesn't take advantage of
polymorphism. Try to avoid this style of program design.
Example four illustrates the difference between static and dynamic binding. The code for all the files
in this example is shown above as part of the text of this chapter. The files are in the
inherit/ex4 directory. You can see the difference between static and dynamic
binding by running
Example4a, which doesn't yield the desired behavior of gurgling
all milk objects.
Example4b shows one way to gurgle milk; however, the preferred
way to gurgle milk is shown in
Example4c. When you run
Example4c, it will print out:
One Milk object is swirling.
All Milk objects are gurgling.
Example five illustrates adding behavior to one member of a family of types, and using
instanceof to access that behavior. All of the code in this example is shown above
in the text of this chapter. The files are in the
inherit/ex5 directory of the CD-
ROM. In this example, two of the source files,
Example5a.java and
Example5c.java, don't compile. These files illustrate that you can't access a
method defined in a subclass,
Tea, if you have a reference to a superclass,
Liquid. You can, however, run
Example5b and
Example5d. When you run
Example5d, which illustrates the
proper way to access the
readFuture() method defined in subclass
Tea, the application will print out:
Tea Swirling
Reading the future...
Examples six and seven are simply the two
Liquid classes, shown above, that
illustrate abstract classes and methods. In example six, which is in the
inherit/ex6
directory, class
Liquid is declared abstract even though it doesn't contain any
abstract methods. In example seven, which is in the
inherit/ex7 directory, both
the
swirl() method and the
Liquid class itself are declared
abstract. | http://www.artima.com/objectsandjava/webuscript/CompoInherit1.html | crawl-003 | refinedweb | 5,292 | 52.29 |
The Internet Relay Chat (IRC) #rdfig channel is now available, offering real
time collaboration to Semantic Web Developers at irc://irc.openprojects.net:6667/rdfig. The IRC channel
extends t...
[Apr. 23, 2001]
The Basic Semantic Web Language (BSWL) is a proposal for a "stripped down RDF-in-XML
syntax" that uses a simpler abstract data model than the RDF Model and Syntax specification. BSWL also uses a sim...
[Jul. 16, 2001]
Abstract:"In this note we describe a method for using RDF, the Resource Description
Format of the W3C, to create a general, yet extensible framework for
describing user preferences and d...
[Nov. 30, 1998]
This spec explains the requirements for web-based micropayment fee systems including how to initialize micropayments and embed micropayment wallets into web pages....
[Aug. 25, 1999]
This is a specification for eXtensible Programming Language (XPL), an XML-based
compiled programming language. XPL's "source code" is expressed using XML
documents. This object-oriented programming...
[Aug. 3, 2000]
Meaning Definition Language (MDL) is intended to bridge the gap between
structure and meaning by modeling a definition of the meanings (such as the
properties or objects) conveyed by the structure o...
[Sep. 6, 2001]
This is a large and growing collection of news feeds available free for use as
site content. The feeds are available in a variety of formats, including
Resource Description Framework (RDF) files. Th...
[Aug. 14, 2000]
This is a DTD for RDF based on the W3C Recommendation....
[Feb. 26, 1999]
The Resource Description Framework (RDF) is a foundation for processing metadata that provides
interoperability between applications that exchange machine-understandable information on the Web.
RDF ...
[Feb. 22, 1999]
This document provides a precise semantic theory for
Resource Description Framework (RDF) and RDF Schema (RDFS). This gives an abstract,
mathematical description of the properties of RDF. The primar...
[Sep. 25, 2001]
This is a Web-based demonstration of Redland, a library that provides
an
interface for Resource Description Framework (RDF).
Any public RSS 1.0 feed may be entered into the online form for
dis...
[Jan. 21, 2001]
This document gives a detailed description of changes to the XML syntax grammar of the Resource Description Framework (RDF)
model, as originally specified in RDF Model
& Syntax. Confusion caused by ...
[Sep. 6, 2001]
This Note describes an alternative Resource Description Framework (RDF) encoding for the vCard personal profile
format.
A vCard profile contains information about an individual (such as mail addre...
[Feb. 21, 2001]
The W3C's Resource Description Framework (RDF) activity page. Link groupings
include events, publications, an overview section, frequently asked questions,
projects, applications, articles, presenta...
[Mar. 9, 2000]
This document describes RDF Site Summary (RSS), an
extensible metadata description and syndication format. RSS started out as RDF Site Summary, then was changed (by Netscape) to "Rich Site Summary," ...
[Aug. 14, 2000]
The Open Healthcare Group is an organization devoted to the promotion and
distribution of a free, open source health record (XChart, an XML-based open
source electronic healthcare system). Openhealt...
[Sep. 12, 2000]
DC-dot is a form-driven, CGI-based Dublin Core metadata generator that functions just as simply on the user side as the META-tag generators we are all accustomed to. In fact, it is the information contained in th...
[May. 3, 1999]
This is a Resource Description Framework (RDF) representation of Wordnet
(), an on-line lexical reference for the
English language, organized by synonym sets,...
[Feb. 8, 2001]
XChart is an XML-based open source electronic healthcare system designed to
combine the ease, speed and portability of paper systems with the efficiencies
of computerized records, in a format that i...
[Sep. 11, 2000]
Extensible Graph Markup and Modeling Language (XGMML) is an XML application
based upon Graph Modeling Language (GML) that uses XML to describe graphs rather
than GML's text format. A graph is define...
[Oct. 2, 2000]
Excerpted from "Introduction": "The World Wide Web provides an information delivery infrastructure for various types of digital contents
used in daily life. Payment infrastructures such as digital c...
[Mar. 29, 1999]
XMLNews is an XML and RDF-based news industry format developed by David Megginson for WavePhore for its NewsPak product....
[Apr. 23, 1999]
This page posts a short working description of XSet. XSet is an XML property set
description of XML 1.0 and XML namespaces. The description is a result of
translating the Extended Backus-Naur Form (...
[Aug. 16, 2000]
#include <nrt/Core/Blackboard/details/ModulePortHelpers.H>
A Subscription is a unique binding of a received Message type and a returned Message type to a port class.
Module objects must implement an onMessage() callback function on a given Subscription port if and only if they derive from the corresponding MessageSubscriber of the Subscription. Typically, Msg and Ret should derive from nrt::MessageBase, with a special case allowed for Ret being void. See MessageSubscriber for how to define onMessage(). A convenience macro is provided (in details/ModuleHelpers.H) to easily declare a subscription class:
Where PortName is the name of the class that will embody your Subscription, MsgType is the type of posted message (must derive from nrt::MessageBase), RetType is the type of message returned by your subscriber (callback) that will respond to posts on any matching Posting (must derive from nrt::MessageBase or be void), and Description is a plain C-style string describing your Subscription. For example:
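The macro invocation itself did not survive in this extract. Purely as an illustration, a declaration along the lines the text describes might look as follows; the macro name, message types, and port name here are assumptions, not the actual NRT API:

```
NRT_DECLARE_SUBSCRIPTION_PORT(ImageInput,   /* PortName: class embodying the Subscription */
                              ImageMessage, /* MsgType: derives from nrt::MessageBase */
                              void,         /* RetType: void means no return message */
                              "Receives input images to process");
```

The module would then derive from the corresponding MessageSubscriber and implement onMessage() for this port.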
The reason for declaring a new Subscription type for each port is to allow one to have several subscribers and callbacks with identical message and return types, but different descriptions and different port classes. Although these deal with identical messages, they may subscribe to them in different namespaces and on different topics (see the definition of MessageSubscriber) and thus achieve different functionalities.
Definition at line 303 of file ModulePortHelpers.H.
Inherits shared_ptr< Msg const >.
Allocate a return message and return a unique_ptr to it, to be used in a callback as the return value.
All given args are forwarded to the return message constructor.
In many circumstances, you need a chance to do some clean-up when the user
shuts down your application. The problem is, the user does not always follow
the recommended procedure to exit. Java provides an elegant way for programmers
to execute code in the middle of the shutdown process, thus making sure your
clean-up code is always executed. This article shows how to use a shutdown hook
to guarantee that clean-up code is always run, regardless of how the user
terminates the application.
You may have code that must run just before an application completely exits.
For example, if you are writing a text editor with Swing and your application
creates a temporary edit file when it starts, this temporary file must be
deleted when the user closes your application. If you are writing a servlet
container such as Tomcat or Jetty, you must call the destroy
method of all loaded servlets before the application shuts down.
In many cases, you rely on the user to close the application as prescribed.
For instance, in the first example, you may provide a JButton
that, when clicked, runs the clean up code before exiting. Alternatively, you
may use a Window listener that listens to the
windowClosing event. Tomcat uses a batch file that can be executed
for a proper shutdown. However, the user is king; they can do whatever they want
with the application. They might be nice enough to follow your instructions, but
could just as easily close the console or log off of the system without first
closing your application.
In Java, the virtual machine shuts itself down in response to two types of
events: first, when the application exits normally, by calling the
System.exit method or when the last non-daemon thread exits;
second, when the user abruptly forces the virtual machine to terminate, for
example by typing Ctrl+C or logging off from the system before
closing a running Java program.
Fortunately, the virtual machine follows this two-phase sequence when
shutting down:
1. In the first phase, it starts all shutdown hooks that have been registered with the Runtime, and lets them run concurrently until they complete.
2. In the second phase, it runs any uninvoked finalizers (if finalization-on-exit has been enabled) and then halts.
In this article, we are interested in the first phase, because it allows the
programmer to ask the virtual machine to execute some clean-up code in the
program. A shutdown hook is simply an instance of a subclass of the
Thread class. Creating a shutdown hook is simple:
1. Write a class that extends the Thread class.
2. Provide the implementation of your run method; the code in this method is what the virtual machine executes during its shutdown sequence.
3. In your application, instantiate your shutdown hook class and register it by invoking Runtime.getRuntime().addShutdownHook.
As you may have noticed, you don't start the shutdown hook as you would
other threads. The virtual machine will start and run your shutdown hook when
it runs its shutdown sequence.
The code in Listing 1 provides a simple class called
ShutdownHookDemo and a subclass of Thread named
ShutdownHook. Note that the run method of the
ShutdownHook class simply prints the string "Shutting down" to the
console. Of course, you can insert any code that needs to be run before the
shutdown.
After instantiation of the public class, its start method is
called. The start method creates a shutdown hook and registers it
with the current runtime.
ShutdownHook shutdownHook = new ShutdownHook();
Runtime.getRuntime().addShutdownHook(shutdownHook);
Then, the program waits for the user to press Enter.
System.in.read();
When the user does press Enter, the program exits. However, the virtual
machine will run the shutdown hook, printing the words "Shutting down."
package test;

public class ShutdownHookDemo {

    public void start() {
        System.out.println("Demo");
        ShutdownHook shutdownHook = new ShutdownHook();
        Runtime.getRuntime().addShutdownHook(shutdownHook);
    }

    public static void main(String[] args) {
        ShutdownHookDemo demo = new ShutdownHookDemo();
        demo.start();
        try {
            System.in.read();
        }
        catch (Exception e) {
        }
    }
}

class ShutdownHook extends Thread {
    public void run() {
        System.out.println("Shutting down");
    }
}
As another example, consider a simple Swing application whose class is
called MySwingApp (see Figure 1). This application creates a
temporary file when it is launched. When closed, the temporary file must be
deleted. The code for this class is given in Listing 2 on the following page.
Vue 3 is just around the corner, and I've been building some apps from the app-ideas GitHub repository to practice. If you're not aware of it, this repository is a collection of ideas to build an app and practice your skills. Each app comes complete with a description, a list of user stories and bonus objectives, and all the resources you'll need to achieve your objective. It even has an example app, so if you get stuck at some point you can check out how it's done. In this article we'll start to build the recipe app.
Until late April, the best way to try out one of the hottest new features, the composition API, was to use it in a Vue 2 project by executing a vue-cli command on an already created project. You can find many articles on the Internet on how to do it, like this one:
What I Have Learned So Far about 'Vue-Composition-API'
If you don’t what the composition API is, maybe you should read the Vue team documentation about it before we start. As always, the documentation is very clear and concise:
API Reference | Vue Composition API
On April 20th, Evan You introduced Vite, a tool to generate a Vue 3 app template, serve it for dev with no bundling, and bundle it for production using Rollup. I started using it on the first day, and I have to say I'm really impressed with what they've achieved so far. The server starts immediately, since it doesn't need to bundle the application (the components are compiled on the fly and served to the browser as native ES modules), and it even has Hot Module Replacement, so whenever you change your code the changes are instantly reflected in the browser. You can check their repository below to read the documentation and start coding right now:
vuejs/vite - An opinionated web dev build tool
Enough talking, it’s time to get our hands dirty and write the code.
Getting started
To start our Vite project, all we need is to run the following command:
# with npm/npx
npx create-vite-app vite-recipe-book
cd vite-recipe-book
npm install
npm run dev

# or with yarn
yarn create vite-app vite-recipe-book
cd vite-recipe-book
yarn
yarn dev
Open your browser, point it to the address printed by the dev server, and we're ready to go.
The routing
Our app will consist of a simple recipe book. We have two parts, the ingredients and the recipes. As you may know, a recipe is composed of many ingredients.
Since we got two separate parts, the best way to change between them is to use vue-router, the official vue routing solution.
For Vue 3 we can use the vue-router 4 version. It's still in alpha, but since we're not building a production app, that's fine. The repository of this upcoming version is listed below:
Let’s install the latest version as the time of writing this article, v4.0.0-alpha.11, by using the commands bellow:
npm i --save vue-router@v4.0.0-alpha.11
# or
yarn add vue-router@v4.0.0-alpha.11
Then we have to create our router.js file. It's a little bit different from the previous version: we create the history object and the routes array, and use them to create our router.
import { createWebHistory, createRouter } from "vue-router";
import Home from "./components/Home.vue";
import Ingredients from "./components/Ingredients.vue";
import Recipes from "./components/Recipes.vue";

const history = createWebHistory();

const routes = [
  { path: "/", component: Home },
  { path: "/ingredients", component: Ingredients },
  { path: "/recipes", component: Recipes },
];

const router = createRouter({ history, routes });

export default router;
We haven’t created the components we are importing, we’ll get there soom.
To make use of our newly created router, we have to make some changes to the main.js file, importing our router and telling the app to use it:
import { createApp } from "vue";
import App from "./App.vue";
import "./index.css";
import router from "./router";

createApp(App).use(router).mount("#app");
The other file we’ll have to change is App.vue to include the router-view component, so that the current router gets rendered:
<template>
  <router-view />
</template>

<script>
export default {
  name: 'App',
}
</script>
And that’s it. Now let’s build our components.
Since we have routes, the first thing we'll create is …
The Nav Component
Our simple nav component will be a list of the 3 routes we created earlier. To make this, we’ll use the composition api and the useRouter hook provided by vue-router. Although we don’t need the composition api for simple components like this, we’ll use it everywhere to practice. So just create a Nav.vue file in your components folder and write the code:
<template>
  <nav>
    <router-link to="/" class="brand-logo">Vite Recipe Book</router-link>
    <ul>
      <li v-for="route in routes" :key="route.to" :class="{ active: isActive(route.to) }">
        <router-link :to="route.to">{{ route.text }}</router-link>
      </li>
    </ul>
  </nav>
</template>

<script>
import { computed } from "vue";
import { useRouter } from "vue-router";

export default {
  setup() {
    const routes = [
      { to: "/ingredients", text: "Ingredients" },
      { to: "/recipes", text: "Recipes" }
    ];
    const router = useRouter();
    const activeRoute = computed(() => router.currentRoute.value.path);
    const isActive = path => path === activeRoute.value;
    return { isActive, routes };
  }
};
</script>
As you saw, we only return from the setup method the parts that will be used outside.The router object and the activeRoute computed value are only used inside the setup method, so we don’t need to return them. The activeRoute value is created as computed so that it’s automatically updated whenever the router object changes.
I haven’t found any documentation about useRouter hook, but if you’re using VSCode (I hope you are), you can control + click it to inspect it’s declaration. As you’ll see, there are plenty of exported methods and properties in it, including programmatic navigation (push, back, replace, etc). Hope that helps you to understand what we have done to check the current route.
Now all we need to do is include the Nav component in App.vue.
<template>
  <Nav />
  <router-view />
</template>

<script>
import Nav from "./components/Nav.vue";

export default {
  name: "App",
  components: { Nav }
};
</script>
One good change you'll notice here is that Vue 3 doesn't have the one-root-element limitation anymore (well done, Vue team). The next step is to build the simplest of the components …
The Ingredients Component
Our ingredients component will be composed of a filter text input, a text input and an Add button to add new ingredients, and a table with delete and update buttons. When you click the delete button, the ingredient is removed; when you click update, the item is removed from the list and placed in the text input, so the user can change it and reinsert it. Since we have more than one reactive value that needs to be used in the template, we'll use the reactive method to group them in one object. We could use the ref method too, but then we'd have to create them one by one. The other thing that would change is that we'd have to use the ref's .value property to access the current value inside the setup method. With reactive we don't need to do that.
The other things we need to create in the setup method are a computed property, to put our filter to work, and the add, remove and update methods. Easy peasy, right? So let's create an Ingredients.vue file in our components folder and start coding:
<template>
  <section>
    <input type="text" v-model="data.filter" placeholder="Filter" />
  </section>
  <section>
    <input type="text" v-model="data.newIngredient" placeholder="New ingredient" />
    <button @click="add">Add</button>
  </section>
  <section>
    <template v-if="!filteredIngredients.length">
      <h1>No ingredients found</h1>
    </template>
    <template v-else>
      <table>
        <thead>
          <tr>
            <th>Ingredient</th>
            <th></th>
          </tr>
        </thead>
        <tbody>
          <tr v-for="ingredient in filteredIngredients" :key="ingredient">
            <td>{{ ingredient }}</td>
            <td>
              <button @click="update(ingredient)">Update</button>
              <button @click="remove(ingredient)">Delete</button>
            </td>
          </tr>
        </tbody>
      </table>
    </template>
  </section>
</template>

<script>
import { reactive, computed } from "vue";

export default {
  setup() {
    const data = reactive({
      ingredients: [],
      filter: "",
      newIngredient: ""
    });

    const filteredIngredients = computed(() =>
      data.ingredients
        .filter(ingredient => !data.filter || ingredient.includes(data.filter))
        .sort((a, b) => (a > b ? 1 : a < b ? -1 : 0))
    );

    const add = () => {
      if (
        !data.newIngredient ||
        data.ingredients.some(ingredient => ingredient === data.newIngredient)
      )
        return;
      data.ingredients = [...data.ingredients, data.newIngredient];
      data.newIngredient = "";
    };

    const update = ingredient => {
      data.newIngredient = ingredient;
      remove(ingredient);
    };

    const remove = ingredient =>
      (data.ingredients = data.ingredients.filter(
        filterIngredient => ingredient !== filterIngredient
      ));

    return { filteredIngredients, data, add, update, remove };
  }
};
</script>
As you’ve noticed, we’re changing the ingredients array in a immutable way, always attributing to it a new array instead of changing the current value. That’s a safer and always recommended way to work with arrays and objects to ensure reactivity works.
If you think about the next component we have to create, Recipes, maybe you'll figure out that we have a problem with the Ingredients component: the state is local, and the recipes will be composed of ingredients, so we'll have to figure out a way to share the state between them. The traditional way of solving this is to use Vuex, or maybe a higher-order component that controls the state and passes it as props to both components, but maybe we can solve this the Vue 3 way, using the composition api. So let's move on and create our ...
Store
To create the store that will be responsible for controlling and sharing the application state, we'll make use of the reactive and computed methods of the new composition api to create a hook that will return the current state and the methods used to update it. This hook will then be used inside the setup method of the component, like we did with the useRouter hook, and we'll be good to go.
For this example we'll control both lists (ingredients and recipes) in one reactive object. It's up to you to do it like this, or maybe create separate files for each one. Enough talking, let's code:
import { reactive, computed, watch } from "vue";

const storeName = "vite-recipe-book-store";

const id = () => "_" + Math.random().toString(36).substr(2, 9);

const state = reactive(
  localStorage.getItem(storeName)
    ? JSON.parse(localStorage.getItem(storeName))
    : { ingredients: [], recipes: [] }
);

watch(state, (value) =>
  localStorage.setItem(storeName, JSON.stringify(value))
);

export const useStore = () => ({
  // copy before sorting so the computed doesn't mutate reactive state
  ingredients: computed(() =>
    [...state.ingredients].sort((a, b) => a.name.localeCompare(b.name))
  ),
  recipes: computed(() =>
    state.recipes
      .map((recipe) => ({
        ...recipe,
        ingredients: recipe.ingredients.map((ingredientId) =>
          state.ingredients.find((i) => i.id === ingredientId)
        ),
      }))
      .sort((a, b) => a.name.localeCompare(b.name))
  ),
  addIngredient: (ingredient) => {
    state.ingredients = [
      ...state.ingredients,
      { id: id(), name: ingredient },
    ];
  },
  removeIngredient: (ingredient) => {
    // recipes store raw ingredient ids, so compare against the id itself
    if (
      state.recipes.some((recipe) =>
        recipe.ingredients.some((ingredientId) => ingredientId === ingredient.id)
      )
    )
      return;
    state.ingredients = state.ingredients.filter(
      (i) => i.id !== ingredient.id
    );
  },
  addRecipe: (recipe) => {
    state.recipes = [
      ...state.recipes,
      {
        id: id(),
        ...recipe,
        ingredients: recipe.ingredients.map((i) => i.id),
      },
    ];
  },
  removeRecipe: (recipe) => {
    state.recipes = state.recipes.filter((r) => r.id !== recipe.id);
  },
});
As you saw from the code, we're using the computed method inside the useStore function so that our ingredients and recipes arrays cannot be updated from outside the store. In the recipes computed value we're mapping the ingredients array to its ingredient objects. This way we can store just the ingredient ids and still get the id and the name in our recipes list. The computed arrays are then sorted by name using the sort and localeCompare methods.
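As a quick aside on why localeCompare beats the < and > operators for names (sample data is my own): plain comparison orders by character code, so every uppercase letter sorts before every lowercase one.

```javascript
const items = [{ name: "Tomato" }, { name: "basil" }, { name: "Onion" }];

// localeCompare applies real alphabetical collation...
const byLocale = [...items].sort((a, b) => a.name.localeCompare(b.name));
console.log(byLocale.map(i => i.name)); // ["basil", "Onion", "Tomato"]

// ...while raw comparison puts "basil" last, because "b" > "T" by char code.
const byCharCode = [...items].sort((a, b) => (a.name > b.name ? 1 : -1));
console.log(byCharCode.map(i => i.name)); // ["Onion", "Tomato", "basil"]
```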
We’ve added a method (id) to generate an unique id to every ingredient and recipe, and created the name property in the addIngredient method to make ingredients an array of objects. Another important point is that the removeIngredient method checks if the ingredient is included in a recipe before removing it. This is important to keep our recipes safe.
Another bonus is the use of the watch method to make the store state persistent in the localStorage of the user's browser, and the initial configuration of the state as the saved localStorage data or an object with empty ingredients and recipes arrays. This kind of approach can be used to persist the data in a remote API too.
I think now we can move on and
Refactor Ingredients Component
Now that our store is ready, it's time to refactor the ingredients component to use it. This can be easily achieved by replacing the data.ingredients array with our store's ingredients array and rewriting the add, update and remove methods to use the store's addIngredient and removeIngredient. Another thing we'll change is to reference ingredient.name instead of just ingredient, since it's now an object with id and name properties. Let's do it:
<template>
  <section>
    <input type="text" v-model="data.filter" placeholder="Filter" />
  </section>
  <section>
    <input type="text" v-model="data.newIngredient" placeholder="New ingredient" />
    <button @click="add(data.newIngredient)">Add</button>
  </section>
  <section>
    <template v-if="!filteredIngredients.length">
      <h1>No ingredients found</h1>
    </template>
    <template v-else>
      <table>
        <thead>
          <tr>
            <th>#</th>
            <th>Name</th>
            <th></th>
          </tr>
        </thead>
        <tbody>
          <tr v-for="ingredient in filteredIngredients" :key="ingredient.id">
            <td>{{ ingredient.id }}</td>
            <td>{{ ingredient.name }}</td>
            <td>
              <button @click="update(ingredient)">Update</button>
              <button @click="remove(ingredient)">Delete</button>
            </td>
          </tr>
        </tbody>
      </table>
    </template>
  </section>
</template>

<script>
import { reactive, computed } from "vue";
import { useStore } from "../store";

export default {
  setup() {
    const store = useStore();
    const data = reactive({
      ingredients: store.ingredients,
      filter: "",
      newIngredient: ""
    });

    const filteredIngredients = computed(() =>
      data.ingredients.filter(
        ingredient => !data.filter || ingredient.name.includes(data.filter)
      )
    );

    const add = ingredient => {
      store.addIngredient(ingredient);
      data.newIngredient = "";
    };

    const update = ingredient => {
      data.newIngredient = ingredient.name;
      remove(ingredient);
    };

    const remove = ingredient => {
      store.removeIngredient(ingredient);
    };

    return { filteredIngredients, data, add, update, remove };
  }
};
</script>
Everything is working fine; now it's time to move on to a more complicated component
The Recipes Component
Our recipes component will be composed of a form where you can add a recipe by entering the title and selecting the ingredients in a select input. These ingredients will be shown in a list with a delete button. For simplicity we'll not implement ingredient quantities in our recipes, but feel free to do it as an exercise. Besides this form, we'll have the filter input and the recipes list, which will work just as in the ingredients component, adding a view button to preview the recipe and its ingredients right below the table. It's not much more complicated than what we already did in the ingredients component. Time to code:
<template>
  <section>
    <input type="text" v-model="data.filter" placeholder="Filter" />
  </section>
  <section>
    <input type="text" v-model="data.newRecipe.name" placeholder="Recipe name" />
    <br />
    <select v-model="data.newIngredient">
      <option value></option>
      <option
        v-for="ingredient in data.ingredients"
        :key="ingredient.id"
        :value="ingredient.id"
      >{{ ingredient.name }}</option>
    </select>
    <button @click="addIngredient(data.newIngredient)">Add Ingredient</button>
    <table>
      <thead>
        <tr>
          <th>#</th>
          <th>Name</th>
          <th></th>
        </tr>
      </thead>
      <tbody>
        <tr v-for="ingredient in data.newRecipe.ingredients" :key="ingredient.id">
          <td>{{ ingredient.id }}</td>
          <td>{{ ingredient.name }}</td>
          <td>
            <button @click="removeIngredient(ingredient)">Remove</button>
          </td>
        </tr>
      </tbody>
    </table>
    <button :disabled="!canAddRecipe" @click="add(data.newRecipe)">Add Recipe</button>
  </section>
  <section>
    <template v-if="!filteredRecipes.length">
      <h1>No recipes found</h1>
    </template>
    <template v-else>
      <table>
        <thead>
          <tr>
            <th>#</th>
            <th>Name</th>
            <th></th>
          </tr>
        </thead>
        <tbody>
          <tr v-for="recipe in filteredRecipes" :key="recipe.id">
            <td>{{ recipe.id }}</td>
            <td>{{ recipe.name }}</td>
            <td>
              <button @click="view(recipe)">View</button>
              <button @click="update(recipe)">Update</button>
              <button @click="remove(recipe)">Delete</button>
            </td>
          </tr>
        </tbody>
      </table>
    </template>
  </section>
  <section v-if="data.viewRecipe.id">
    <p>
      <strong>Name:</strong> {{ data.viewRecipe.name }}
    </p>
    <p>
      <strong>Ingredients</strong>
    </p>
    <table>
      <thead>
        <tr>
          <th>#</th>
          <th>Name</th>
        </tr>
      </thead>
      <tbody>
        <tr v-for="ingredient in data.viewRecipe.ingredients" :key="ingredient.id">
          <td>{{ ingredient.id }}</td>
          <td>{{ ingredient.name }}</td>
        </tr>
      </tbody>
    </table>
    <button @click="hide">Hide</button>
  </section>
</template>

<script>
import { reactive, computed } from "vue";
import { useStore } from "../store";

export default {
  setup() {
    const store = useStore();
    const data = reactive({
      ingredients: store.ingredients,
      recipes: store.recipes,
      filter: "",
      newRecipe: { name: "", ingredients: [] },
      newIngredient: "",
      viewRecipe: {}
    });

    const filteredRecipes = computed(() =>
      data.recipes.filter(
        recipe => !data.filter || JSON.stringify(recipe).includes(data.filter)
      )
    );

    const add = recipe => {
      store.addRecipe(recipe);
      data.newRecipe = { name: "", ingredients: [] };
      data.newIngredient = "";
    };

    const update = recipe => {
      data.newRecipe = recipe;
      remove(recipe);
    };

    const remove = recipe => {
      store.removeRecipe(recipe);
    };

    const hide = () => {
      data.viewRecipe = {};
    };

    const view = recipe => {
      data.viewRecipe = recipe;
    };

    const canAddRecipe = computed(
      () => data.newRecipe.name && data.newRecipe.ingredients.length
    );

    const addIngredient = ingredient => {
      if (!ingredient) return;
      if (data.newRecipe.ingredients.some(i => i.id === ingredient)) return;
      data.newRecipe.ingredients = [
        ...data.newRecipe.ingredients,
        data.ingredients.find(i => i.id === ingredient)
      ];
    };

    const removeIngredient = ingredient =>
      (data.newRecipe.ingredients = data.newRecipe.ingredients.filter(
        i => i.id !== ingredient.id
      ));

    return {
      filteredRecipes,
      data,
      add,
      update,
      remove,
      hide,
      view,
      canAddRecipe,
      addIngredient,
      removeIngredient
    };
  }
};
</script>
The app is working well, but with a very ugly look. As homework you may add styles and implement the features that are described in the recipe app readme.
I’ll leave the final code shared in my github so you have something to start from.
Conclusion
As we can see, the composition api is very useful and easy to use. With it we can implement React-hooks-like functions to share data and logic between our components, among other things.
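The sharing works because module scope acts as a singleton: the state object in store.js is created once, on first import, and every component calling useStore closes over that same object. Stripped of Vue's reactivity, the idea reduces to this sketch:

```javascript
// Module-level state: created once, shared by every caller of useStore.
const state = { ingredients: [] };

const useStore = () => ({
  ingredients: () => [...state.ingredients],
  addIngredient: (name) => {
    state.ingredients = [...state.ingredients, name];
  },
});

// Two "components" get independent hook objects but shared state:
const a = useStore();
const b = useStore();
a.addIngredient("flour");
console.log(b.ingredients()); // ["flour"]
```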
Hope you all liked the article and maybe learned something useful to help you in the transition from Vue 2 to Vue 3.
See you next article.
Posted by: Rogério Luiz Aques de Amorim, a well-qualified full-stack developer familiar with a wide range of programming utilities and languages.
Discussion
Thanks for the great article! I have played with Vite and Vue 3, but not that thoroughly. The simple store is awesome - really shows the power the composition api has.
Last time we created a PokemonService, but we haven't used its
findOne() function until now, so let's create a page to show more information for our pokémon!
Creating our pokémon info component
Well, the first step we have to take, before implementing routing, is to create a component that can be used by our routing. For our overview, we already have a component (called
PokemonListComponent), for our info page we don’t have one yet. So let’s create one with Angular CLI:
ng g component pokemon-info
If you remember, this is a shorthand for writing:
ng generate component pokemon-info
This command generates four files, but since I won't be writing any tests or CSS soon, you can leave those alone or you can delete them. If you're deleting them, make sure you don't forget to edit pokemon-info.component.ts to remove the
styleUrls
property in our
@Component decorator.
Applying the route configuration
If that’s done, it’s time to configure our routing. Sadly, support for routing is currently not available within Angular CLI, so I guess we’re on our own now!
To configure routing, we have to open app.module.ts and find the
imports section of the
@NgModule decorator. Normally, this contains three modules being
BrowserModule,
FormsModule and
HttpModule. Now, we want to add a fourth module called
RouterModule:
RouterModule.forRoot([
  { path: 'pokemon/:id', component: PokemonInfoComponent },
  { path: '', component: PokemonListComponent }
])
So, as you can see, we also provided some configuration. The configuration we’re going to use is quite simple. We have our default route (having an empty string as path) that’s leading us to the
PokemonListComponent and we have another path called
pokemon/:id leading to our newly made
PokemonInfoComponent. The
:id part here is a placeholder that tells Angular that there will be a parameter given here. In our case, that parameter will be the ID/number of the pokémon.
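To make the placeholder idea concrete, here is a toy matcher in plain JavaScript: segments starting with a colon capture whatever appears at that position. This is only an illustration of the concept, not Angular's actual matching code:

```javascript
// Match a path against a pattern; ':'-prefixed segments become parameters.
const match = (pattern, path) => {
  const ps = pattern.split("/");
  const ss = path.split("/");
  if (ps.length !== ss.length) return null;
  const params = {};
  for (let i = 0; i < ps.length; i++) {
    if (ps[i].startsWith(":")) params[ps[i].slice(1)] = ss[i];
    else if (ps[i] !== ss[i]) return null; // literal segment must match
  }
  return params;
};

console.log(match("pokemon/:id", "pokemon/25")); // { id: "25" }
console.log(match("pokemon/:id", "berries/1"));  // null
```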
Also, don’t forget to import the
RouterModule:
import {RouterModule} from '@angular/router';
Using <router-outlet>
Now, we have defined our routing config already, but right now, we have fixed our app.component.html so that it uses
<app-pokemon-list>. If we’re using routing, we actually want to show the component that should be activated by the routing here, and not the list of pokémons.
To change this, we simply have to replace the
<app-pokemon-list></app-pokemon-list> with the following:
<router-outlet></router-outlet>
Now, if we look at the application now, we can see that nothing changed, which is a good thing. This means that we’re now doing the same thing as before, but with routing!
Linking to our info page
Now, we can also test our our info page by going to. This should work fine and as well and show the default “pokemon-info works!” message.
So if I want to link to the detail page, I can simply put this link as the
[href] attribute and it should work, right? Well, it would work, but it’s not the right way to do this. If you loaded the page, you might have seen it, it does a serverside request and it has to load the application again to show the info page.
The power of single page web applications is to not have to do that and to be able to see different pages without actually leaving the webpage.
To do this with Angular, we have a directive called
RouterLink. So let’s use it!
Open up pokemon-entry.component.html (because this is the component where we will add a link) and below the
<div> with the class
.card-content, we add another
<div> like this:
<div class="card-action">
  <a href="#" [routerLink]="['/pokemon', pokemon?.id]">View</a>
</div>
As you can see here, we’re using the
[routerLink] directive and we’re passing an array as an argument, containing the path to the info route, but rather than using
:id here, we’re using
pokemon?.id.
If we take a look at the application now, you’ll see that every pokémon now has a view link that can be clicked on:
if you click these links, you’ll see that we get to the pokemon info component waaay faster than we did before, so it appears to be working as it should. However, how can we go back to our list now?
Adding a navigation bar
Now, to get back to the list of pokémons, I want to create a new component for our navigation bar, which will have a link that allows us to go back to the home page:
ng g component shared/navbar
Now, let’s edit the navbar.component.html template to look like this:
<nav>
  <div class="nav-wrapper red darken-2 row">
    <div class="col s12">
      <a routerLink="" class="brand-logo">{{title}}</a>
    </div>
  </div>
</nav>
Once again we’re using the
routerLink directive here, but as you can see I’m no longer using square brackets around it, because I’m referencing to the default route, which requires no dynamic property binding at all.
I’m also going to show a title here, but to do that we have to create a new
@Input in our navbar.component.ts:
@Input() title: String;
Don’t forget to import it either:
import {Input} from '@angular/core';
With this done, it’s time to add the navbar to the application by editing app.component.html. On the first line, we simply have to add our navbar component:
<app-navbar title="Pokédex"></app-navbar>
If you’re wondering why we’re not using
[title] here like we did in our previous tutorials, well, just like the
routerLink we used last time, we’re not binding to a property here, so we don’t have to use square brackets.
If you take a look at the application now, you can see that it has a proper navigation bar now, so it looks like that’s working as well.
If you now click on the “View” link of any pokémon and you wish to return, you can now click the “Pokédex” title in the navigation bar and you will see the list of pokémons again.
Using route parameters
Great, we can now properly navigate, but how do we use parameters like the
:id we’ve provided?
To use it, we have to go back to our
PokemonInfoComponent (pokemon-info.component.ts) and change some things. First of all, we have to change our constructor to get the
ActivatedRoute:
constructor(private _route: ActivatedRoute) { }
This contains a lot of information, but in our case we’re mostly interested in the parameters. To get these parameters, you have to subscribe to
this._route.params, for example:
ngOnInit() { this._route.params.subscribe(params => console.log(params['id'])); }
Don’t forget to import the
ActivatedRoute though:
import {ActivatedRoute} from '@angular/router';
Now, if we run the application now, and we look at the console, we can see that the ID is now logged when we open the info of a pokémon. Great, but let’s use our
PokemonService now to retrieve the actual info of the pokémon and show something already!
First of all, we have to change the constructor again, to include the
PokemonService:
constructor(private _route: ActivatedRoute, private _service: PokemonService) { }
Now, we also have to add this service as a provider, so we have to change the
@Component decorator a bit to look like this:
@Component({ providers: [PokemonService], selector: 'app-pokemon-info', templateUrl: './pokemon-info.component.html' })
Also, don’t forget to import the
PokemonService:
import {PokemonService} from '../shared/services/pokemon.service';
Now all we have to do is to change what we do with the
this._route.params observable a bit:
this._route.params .map(params => params['id']) .flatMap(id => this._service.findOne(id)) .subscribe(pokemon => this.pokemon = pokemon);
So, first of all we’re using the
params and mapping it to the ID using the
map() operator of RxJS. After retrieving the ID, we want to use the
findOne() of our service. However, since this will actually return another observable, we can use the
flatMap() operator, to actually flatten the observable to one level, rather han having an observable within another observable.
The last step is to actually put the result in a field called
this.pokemon. Since we didn’t make that one yet, let’s add it:
pokemon: Pokemon;
Now that we have our pokémon, we can change the pokemon-info.component.html to show some information about the pokémon. Actually, we can re-use the
PokemonEntryComponent here, since we want to show an image of the pokémon together with its name and number anyways:
<div class="row"> <div class="col s6"> <app-pokemon-entry [pokemon]="pokemon?.baseInfo"></app-pokemon-entry> </div> </div>
Since we will show a lot more information than just that on this page, I’m going to use the grid system of Materialize here.
However, I don’t want to show the “View” link here since we’re already on that page, so I’m going to change it a bit to not be visible when I’m on this page. To do that, I have to change the pokemon-entry.component.ts a bit to add another
@Input field called
withLink:
@Input() withLink: boolean = true;
By default this will be
true so we don’t have to change anything to the
PokemonListComponent. However, for our pokemon-info.component.html template this will be
false:
<app-pokemon-entry [pokemon]="pokemon?.baseInfo" [withLink]="false"></app-pokemon-entry>
You might think that we don’t have to use property binding here either and can just use
withLink="false", but that isn’t true. If we would have used it that way, the
false would be passed as a string, and not as a boolean
false.
Now all we have to do is to just add an
*ngIf to the pokemon-entry.component.html template:
<div class="card-action" * <a href="" [routerLink]="['/pokemon', pokemon?.id]">View</a> </div>
If we take a look at the application now, we can see that nothing changed at our pokémon overview, and if we take a look at the pokemon info of one of the pokémons, we can see that it shows the same component, but this time without a link to the view page.
That means that the routing is working fine and that we now have both an overview of all pokémons, and a more detailed page. Next time we’ll define a page title for every route using the
Title service. | http://g00glen00b.be/routing-angular-2/ | CC-MAIN-2017-26 | refinedweb | 1,756 | 58.62 |
Hi guys,
I have TODO items in my AsciiDoc documents, and I'd like to add them to IntelliJ's todo Panel. How can I accomplish such a feat?
(In the meantime, I checked the Markdown plugin, and copied some of their work. Unfortunately, it doesn't work. What I did was the following:
- Add a todo indexer to the plugin.xml
- Create a AsciiDocTodoIndexer
- Create a AsciiDocIdIndexer
- Create a AsciiDocFilterLexer
plugin.xml
<todoIndexer filetype="AsciiDoc" implementationClass="org.asciidoc.intellij.todo.AsciiDocTodoIndexer"/>
AsciiDocTodoIndexerpublic class AsciiDocTodoIndexer extends LexerBasedTodoIndexer {
@Override
public Lexer createLexer(OccurrenceConsumer consumer) {
return AsciiDocIdIndexer.createIndexingLexer(consumer);
}
}
AsciiDocIdIndexerpublic class AsciiDocIdIndexer extends LexerBasedIdIndexer {
public static Lexer createIndexingLexer(OccurrenceConsumer consumer) {
return new AsciiDocFilterLexer(new EmptyLexer(), consumer);
}
@Override
public Lexer createLexer(final OccurrenceConsumer consumer) {
return createIndexingLexer(consumer);
}
}
AsciiDocFilterLexerpublic class AsciiDocFilterLexer extends BaseFilterLexer {
public AsciiDocFilterLexer(final Lexer originalLexer, final OccurrenceConsumer table) {
super(originalLexer, table);
}
@Override
public void advance() {
scanWordsInToken(UsageSearchContext.IN_PLAIN_TEXT, false, false);
advanceTodoItemCountsInToken();
myDelegate.advance();
}
}
I'm not sure why, and if, I need them, but the Markdown plugin has it, so I thought it would be a good idea to copy. However, none of the TODO's in my AsciiDoc are added. Comments in AsciiDoc can look like this:
//TODO: this is my comment
This is also possible:
//// TODO: another comment here! This part is hidden and not important ////
It would be nice to see these comments in the TODO panel.
Thanks!!
Erik
I don't see anything wrong with the code at first glance (it's a hack because it parses the entire contents of a file as a single token, but this should still work).
Can you verify whether your TODO indexer is actually called?
Note that, since you don't have a lexer that would be able to distinguish comments from non-comments, your code would detect TODOs anywhere where such a string is encountered in a file, not just in comments.
Hi Dmitry,
Thanks for the quick response. I've added debug points to all methods of the TODO parser, and IntelliJ breaks at none of them, so it seems like it's indeed not called.
Could it be that the line with the todoIndexer might be wrong? I'm not sure what the filetype points to exactly, and what it should match with...
But my AsciiDocFileType looks like this:
So I thought that would be okay.
Any idea how I can debug this?
PS: thanks about the remark about TODO's, but this is the first step. Implementing the full Lexer would be very time consuming, especially since I have never done anything like that before, so I think I'll stick to the poor man's version for now. If I can get it to work, that is... Thanks again!
Hi guys,
I'm still stuck with this. Anyone got a suggestion?
Erik
Try to use this as todoIndexer implementation (just for checking) for your file type:
This should add some TODO occurrences to TODO tool window. Let me now if it's working.
Hi Marcin,
Thanks for the reply. I've done what you said, and my plugin.xml now looks like this:
Are You sure that your plugin is up to date while you are debugging it? Did You try to refresh indices (File > Invalidate caches)? I'm pretty sure that this should collect every todo/fixme phrase in file. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206761425-How-to-add-items-to-the-TODO-panel | CC-MAIN-2019-35 | refinedweb | 556 | 56.55 |
Declaring members for Interfaces
Discussion in 'Java' started by vamsee.maha@gmail.com, Apr
Can nested class members access private members of nesting class?CoolPint, Dec 13, 2003, in forum: C++
- Replies:
- 8
- Views:
- 971
- Jeff Schwab
- Dec 14, 2003
Declaring only public class members - doesn't workChee Liang, Apr 12, 2004, in forum: C++
- Replies:
- 4
- Views:
- 587
- Christopher Benson-Manica
- Apr 16, 2004
Templates: Members Vs. non-membersDave, Aug 10, 2004, in forum: C++
- Replies:
- 3
- Views:
- 362
- tom_usenet
- Aug 10, 2004
Declaring and implementing exceptions inside interfaces?josh, Dec 17, 2006, in forum: Java
- Replies:
- 6
- Views:
- 433
- Ed Kirwan
- Dec 19, 2006 | http://www.thecodingforums.com/threads/declaring-members-for-interfaces.607127/ | CC-MAIN-2014-35 | refinedweb | 106 | 60.04 |
First you need to install Odlyzko's database of zeta zeros. In a terminal, type:
$ sage -i database_odlyzko_zeta
The command
sage: zeta_zeros()
will then give you a list of the imaginary parts of the first 100,000 non trivial zeros of zeta. Note that as usual, the list is indexed from 0.
The information page about this package also warns: Note that only the first 9 digits after the decimal come from the database. Subsequent digits are the result of the inherent imprecision of a binary representation of decimal numbers. So you should have that in mind and check how this affects precision in your product.
Then you can do the following:
sage: def rho(k): ....: return CC(0.5,zeta_zeros()[k]) ....: sage: def a(x): ....: return (CC(1.,0.)-x)*exp(x) ....: sage: def p(s,n): ....: return prod(a(s/rho(k))*a(s/(CC(1.,0.)-rho(k))) for k in (0..n)) ....: sage: p(0.5,2) 1.00221605640651 sage: p(0.5,20) 1.00403653183532 sage: p(0.5,200) 1.00527298745769 sage: p(0.5,2000) 1.00567828272459
If you plan to use large values of
n, you might want to use Cython to speed up computation, but you should probably first check what precision the computations really give you. | https://ask.sagemath.org/answers/15279/revisions/ | CC-MAIN-2019-43 | refinedweb | 216 | 75.81 |
As an OpenStack Swift dev I obviously write a lot of Python. Further Swift is cluster and so it has a bunch of moving pieces. So debugging is very important. Most the time I use pudb and then jump into the PyCharms debugger if get really stuck.
Pudb is curses based version of pdb, and I find it pretty awesome and you can use it while ssh’d somewhere. So I thought I’d write a tips that I use. Mainly so I don’t forget 🙂
The first and easiest way to run pudb is use pudb as the python runner.. i.e:
pudb <python script>
On first run, it’ll start with the preferences window up. If you want to change preferences you can just hit ‘<ctrl>+p’. However you don’t need to remember that, as hitting ‘?’ will give you a nice help screen.
I prefer to see line numbers, I like the dark vim theme and best part of all, I prefer my interactive python shell to be ipython.
While your debugging, like in pdb, there are some simple commands:
- n – step over (“next”)
- s – step into
- c – continue
- r/f – finish current function
- t – run to cursor
- o – show console/output screen
- b – toggle breakpoint
- m – open module
- ! – Jump into interactive shell (most useful)
- / – text search
There are obviously more then that, but they are what I mostly use. The open module is great if you need to set a breakpoint somewhere deeper in the code base, so you can open it, set a breakpoint and then happily press ‘c’ to continue until it hits. The ‘!’ is the most useful, it’ll jump you into an interactive python shell in the exact point the debugger is at. So you can jump around, check/change settings and poke in areas to see whats happening.
As with pdb you can also use code to insert a breakpoint so pudb will be triggered rather then having to start a script with pudb. I give an example of how in the nosetest section below.
nosetests + pudb
Sometimes the best way to use pudb is to debug unit tests, or even write a unit (or functaional or probe) test to get you into an area you want to test. You can use pudb to debug these too. And there are 2 ways to do it.
The first way is by installing the ‘nose-pudb’ pip package:
pip install nose-pudb
Now when you run nosetests you can add the –pudb option and it’ll break into pudb if there is an error, so you go poke around in ‘post-mortem’ mode. This is really useful, but doesn’t allow you to actually trace the tests as they run.
So the other way of using pudb in nosetests is actually insert some code in the test that will trigger as a breakpoint and start up pudb. To do so is exactly how you would with pdb, except substitute for pudb. So just add the following line of code to your test where you want to drop into pudb:
import pudb; pudb.set_trace()
And that’s it.. well mostly, because pudb is command line you need to tell nosetests to not capture stdout with the ‘-s’ flag:
nosetests -s test/unit/common/middleware/test_cname_lookup.py
testr + pudb
Not problem here, it uses the same approach as above. Where you programmatically set a trace, as you would for pdb. Just follow the ‘Debugging (pdb) Tests’ section on this page (except substitute pdb for pudb)
Update – run_until_failure.sh
I’ve been trying to find some intermittent unit test failures recently. So I whipped up a quick bash script that I run in a tmux session that really helps find and deal with them, I thought I’d add to this post as I then can add nose-pudb to make it pretty useful.
#!/bin/bash n=0 while [ True ] do clear $@ if [ $? -gt 0 ] then echo 'ERROR' echo "number " $n break fi let "n=n+1" sleep 1 done
With this I can simply:
run_until_failure.sh tox -epy27
It’ll stop looping once the command passed returns something other then 0.
Once I have an error, I have then been focusing in on the area it happens (to speed up the search a bit), I can also use nose-pudb to drop me into post-mortem mode so I can poke around in ipython, for example, I’m currently running:
run_until_failure.sh nosetests --pudb test/unit/proxy/test_server.py
Then I can come back to the tmux session, if I’m dropped in a pudb interface, I can go poke around. | https://oliver.net.au/?p=302 | CC-MAIN-2017-17 | refinedweb | 776 | 78.18 |
So I’m trying to get some more precise movements with my BrickPi but I cant seem to get the motors to move and the encoders to spit out data at the same time. I can get the encoders to work by themselves, measuring simply how much I turn the wheel. But I cant get the motor to turn and the encoder to measure them at the same time.
If I try to run the motor at say a speed (power = 100) and tell the motor encoders to print every second or so, the motors don’t turn.
But if I try to run the motors without the encoders on, the motors work.
from BrickPi import * BrickPiSetup() BrickPi.MotorEnable[PORT_A] = 1 #Enable the Motor A BrickPi.MotorEnable[PORT_B] = 1 #Enable the Motor B BrickPi.MotorEnable[PORT_C] = 1 #Enable the Motor A BrickPi.MotorEnable[PORT_D] = 1 #Enable the Motor B BrickPiSetupSensors() #Send the properties of sensors to BrickPi BrickPi.Timeout=10000 #Set timeout value for the time till which to run the motors after the last command is pressed BrickPiSetTimeout() #Set the timeout print "Note: One encoder value counts for 0.5 degrees. So 360 degrees = 720 enc. Hence, to get degress = (enc%720)/2 " power = 100 BrickPi.MotorSpeed[PORT_A] = power BrickPi.MotorSpeed[PORT_B] = power BrickPiUpdateValues() time.sleep(0.1) while True: result = BrickPiUpdateValues() Encoder_A_2 = BrickPi.Encoder[PORT_A] Encoder_B_2 = BrickPi.Encoder[PORT_B] Encoder_C_2 = BrickPi.Encoder[PORT_C] Encoder_D_2 = BrickPi.Encoder[PORT_D] print "Encoder A: " + str(Encoder_A_2) print "Encoder B: " + str(Encoder_B_2) print "Encoder C: " + str(Encoder_C_2) print "Encoder D: " + str(Encoder_D_2) print "___________________" | https://forum.dexterindustries.com/t/motor-encoder-and-motor-rotation-wont-work-in-parallel/1741 | CC-MAIN-2019-09 | refinedweb | 261 | 59.9 |
Opened 6 years ago
Closed 3 years ago
#16074 closed enhancement (fixed)
Metaclass syntax changed in Python 3
Description (last modified by )
The tool 2to3 changes the code to the new Py3 syntax.
But the code has to depend on the Python version!
There are 30 affected modules.
This ticket is tracked as a dependency of meta-ticket ticket:16052.
REFERENCE:
Change History (28)
comment:1 Changed 6 years ago by
- Milestone changed from sage-6.2 to sage-6.3
comment:2 Changed 6 years ago by
- Milestone changed from sage-6.3 to sage-6.4
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
comment:5 Changed 4 years ago by
comment:6 Changed 4 years ago by
comment:7 Changed 4 years ago by
comment:8 follow-up: ↓ 15 Changed 3 years ago by
- Cc tscrim jdemeyer added
The expected python3 replacement is
from six import add_metaclass @add_metaclass(name_of_metaclass) class something
But this does not work for the three mentioned files.
comment:9 Changed 3 years ago by
For reference, can you at least push the non-working branch?
comment:10 Changed 3 years ago by
- Branch set to public/16074
- Commit set to c9bb275589209c5da4b16c475c076f12a2557816
- Milestone changed from sage-7.6 to sage-8.0
here it is
New commits:
comment:11 Changed 3 years ago by
failing as follows
File "/home/chapoton/sage/local/lib/python2.7/site-packages/sage/algebras/clifford_algebra.py", line 2163, in <module> class ExteriorAlgebraDifferential(ModuleMorphismByLinearity, UniqueRepresentation): TypeError: Error when calling the metaclass bases metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
comment:12 Changed 3 years ago by
@add_metaclass probably does something with the metaclass. Unless it is because of our heirarchy of metaclass...but I doubt it is that. It might be a decorator that needs to be added to the subclasses as well.
comment:13 Changed 3 years ago by
The problem is the decorator, because it works in 2 steps:
- The class is created the usual way.
- The decorator modifies the class to have a new metaclass.
The problem is that step 1. already fails, so the decorator cannot do anything.
comment:14 Changed 3 years ago by
I'll look into this.
comment:15 in reply to: ↑ 8 Changed 3 years ago by
There are some other files with
__metaclass__, such as
src/sage/structure/unique_representation.py. Should I try to fix these on this ticket too or is there a different ticket?
comment:16 Changed 3 years ago by
Please fix the metaclass issue in all places where you can. There is no other ticket. I only listed the algebraic use cases. The rest is more like "sage infrastructure".
comment:17 Changed 3 years ago by
Hmm, I agree that it's not easy :-) I'm making some progress, but I'm not quite there yet.
comment:18 Changed 3 years ago by
- Commit changed from c9bb275589209c5da4b16c475c076f12a2557816 to 36009e4e7fef4cf8136695279af1e8b17cc7cd5c
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
comment:19 Changed 3 years ago by
- Commit changed from 36009e4e7fef4cf8136695279af1e8b17cc7cd5c to 9635fb408451c3a6aca1d26deafed6461ac62f01
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
comment:20 Changed 3 years ago by
- Status changed from new to needs_review
comment:21 Changed 3 years ago by
Done. The main effort here is fixing
sage.misc.six.with_metaclass (the first of 3 commits).
After this ticket, there is no remaining Sage code using
__metaclass__. Some doctests still use
__metaclass__, but I suggest to fix that later.
comment:22 follow-up: ↓ 24 Changed 3 years ago by
I don't agree with this change (and subsequent changes from this):
src/sage/misc/six.py
diff --git a/src/sage/misc/six.py b/src/sage/misc/six.py index 8273d91..5f03a96 100644
as later we require the first argument to be the meta class. So if someone happened to pass nothing, it would get an error message starting much later in the code. IMO, it also obfuscates the code a little bit too.
comment:23 Changed 3 years ago by
- Commit changed from 9635fb408451c3a6aca1d26deafed6461ac62f01 to f1ca05f39db7d8ca8afc379808858c568619b9eb
Branch pushed to git repo; I updated commit sha1. New commits:
comment:24 in reply to: ↑ 22 Changed 3 years ago by
comment:25 Changed 3 years ago by
I wanted to submit these fixes to
with_metaclass upstream to
six. However, when looking at the original
six code, I understood what was going wrong and it is really simple.
I also realized that it was #18503 which is the cause for the metaclass breakage. In other words, #18503 did the wrong thing and actually made everything more complicated.
It turns out that #18503 can be fixed in a much simpler way, see and then this issue (#16074) simply does not occur.
comment:26 Changed 3 years ago by
That being said, I think the current solution on this ticket is structurally better than upstream's
with_metaclass: upstream is overriding
__new__ with
__call__ which is a bit fishy. Instead, I am overriding
__call__ with
__call__ which makes more sense.
comment:27 Changed 3 years ago by
- Reviewers set to Travis Scrimshaw
- Status changed from needs_review to positive_review
Makes sense to me. It's somewhat less pretty than the decorator, but this works for now and gets us closer to Python3.
comment:28 Changed 3 years ago by
- Branch changed from public/16074 to f1ca05f39db7d8ca8afc379808858c568619b9eb
- Resolution set to fixed
- Status changed from positive_review to closed
first half in #22474 (combinat folder) | https://trac.sagemath.org/ticket/16074 | CC-MAIN-2020-40 | refinedweb | 934 | 64.41 |
/>
.
Project Overview
This project consists of a file called spritemaker.py for the module, and another file called spritemakerconsole.py containing a main function to try out the module. The program really needs a GUI which I'll write for a future project.
The download ZIP file and GitHub repository also contain a few social media icons which we'll combine into a single image although of course you can substitute your own. This is the combined image.
/>
And this is a sneak preview of one of the CSS classes generated by the program.
.youtube { background: url('sprites.png') no-repeat; width: 36px; height: 36px; display: inline-block; background-position: -180px 0px; }
In the last line note that the x-coordinate is negative. This is because we need to specify how much to move the image relative to its default 0,0 position, not the position of the image within the file. The YouTube icon's x coordinate is 180px so we need to move the image -180px (ie. to the left) to bring the YouTube icon to 0.
You can download the source code and sample images as a zip file or clone/download the Github repository.
Source Code Links
This is the source code for spritemaker.py.
spritemaker.py
import os from pathlib import Path import PIL from PIL import Image def create_sprites(imagepaths, spritefilepath, cssfilepath): """ Creates a sprite image by combining the images in the imagepaths tuple into one image. This is saved to spritefilepath. Also creates a file of CSS classes saved to cssfilepath. The class names are the original image filenames without the filename extension. IOErrors are raised. """ size = _calculate_size(imagepaths) _create_sprite_image(imagepaths, size, "sprites.png") _create_styles(imagepaths, "spritestyles.css", "sprites.png") def _calculate_size(imagepaths): """ Creates a width/height tuple specifying the size of the image needed for the combined images. """ totalwidth = 0 maxheight = 0 try: for imagepath in imagepaths: image = Image.open(imagepath) totalwidth += image.width maxheight = max(image.height, maxheight) except IOError as e: raise return (totalwidth, maxheight) def _create_sprite_image(imagepaths, size, spritefilepath): """ Creates a new image and pastes the original images into it, then saves it to spritefilepath. """ sprites = PIL.Image.new("RGBA", size, (255,0,0,0)) x = 0 try: for imagepath in imagepaths: image = Image.open(imagepath) sprites.paste(image, (x, 0)) x += image.width sprites.save(spritefilepath, compress_level = 9) except IOError as e: raise def _create_styles(imagepaths, cssfilepath, spritefilepath): """ Creates a set of CSS classes for the sprite images and saves it to spritefilepath. 
""" styles = [] x = 0 try: for imagepath in imagepaths: image = Image.open(imagepath) classname = Path(imagepath).stem style = ["."] style.append(f"{classname}\n") style.append("{\n") style.append(f" background: url('{spritefilepath}') no-repeat;\n") style.append(f" width: {image.width}px;\n") style.append(f" height: {image.height}px;\n") style.append(" display: inline-block;\n") style.append(f" background-position: -{x}px 0px;\n") style.append("}\n\n") x += image.width style = "".join(style) styles.append(style) styles = "".join(styles) f = open(cssfilepath, "w+") f.write(styles) f.close() except IOError as e: raise
Imports and Pillow
We need a couple of imports for file handling, and also two more for Pillow, the Python imaging library which is used for creating the images. The Pillow usage in this project is very simple and self-explanatory but if you want to learn more I also have a full article called An Introduction to Image Manipulation in Python with Pillow.
create_sprites
This is the sole "public" function in the module which takes the three arguments necessary for the whole process of creating the sprite graphic and CSS, and then calls the three "private" functions to actually do the hard work. The arguments are:
- imagepaths - a tuple of the individual files
- spritefilepath - the path to save the combined image to
- cssfilepath - the path to save the CSS file to
_calculate_size
This function calculates the width and height of the combined image. The width is the sum of the widths of the individual images, and the height is that of the highest individual file. (I made the arbitrary decision to arrange the files horizontally.)
Here we see the first use of Pillow to open images and retrieve their widths and heights.
_create_sprite_image
Firstly we create a new Pillow image of the required size and then iterate the input images, pasting each into the new image at the appropriate x-coordinate. Finally the new image is saved to the specified path; note compress_level = 9 which I'll discuss later on.
_create_styles
Firstly we create a list to hold the CSS classes, and initialize the offset x coordinate to 0. Then we iterate the input images again, creating the individual lines of the class and adding them to another list which is joined to form a single string.
The class list is then also joined into a string which is then saved to the specified path.
Now let's look at spritemakerconsole.py.
spritemakerconsole.py
import spritemaker def main(): """ A simple console application to test the spritemaker module """ imagepaths = ("icons/facebook.png", "icons/github.png", "icons/linkedin.png", "icons/pinterest.png", "icons/twitter.png", "icons/youtube.png") try: spritemaker.create_sprites(imagepaths, "sprites.png", "spritestyles.css") except IOError as e: print(e) main()
This is a very simple program to try out the module, so all it needs to do is create a tuple of paths and call spritemaker.create_sprites.
That's the coding finished so we can run it with this command:
Running the Program
python3.8 spritemakerconsole.py
You will now find the sprite image and CSS file in the locations specified.
Usage Within HTML
To use the sprites all you need do is set the relevant classes on the elements you want to show the images. This is a simple HTML page I put together as an example.
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>CSS Image Sprite Maker</title> <link href="spritestyles.css" rel="stylesheet" type="text/css" /> </head> <body> <div> <span class="facebook" title="Facebook"></span> <span class="twitter" title="Twitter"></span> <span class="linkedin" title="LinkedIn"></span> <span class="youtube" title="YouTube"></span> <span class="pinterest" title="Pinterest"></span> <span class="github" title="GitHub"></span> </div> </body> </html>
The HTML file is included in the ZIP and GitHub repository, and if you open it you'll see this.
/>
A Few Words About File Sizes
The PNG or Portable Network Graphics format is lossless but as you saw in the code it is possible to specify a compression level when saving files. This affects the file size and speed of encoding/decoding but with no effect on image quality. The compression level is an integer between 0 (no compression, high file size, fast encoding/decoding) and 9 (maximum compression, low file size, slow encoding/decoding). The original files used in this project were all saved with a compression level of 9 as was the output sprite file so that the sizes could be meaningfully compared.
The total file size of the six input files is 5,252 bytes, and the file size of the output file is 4,365 bytes or about 83% of the individual sizes. Every little helps!
What About HTTP/2
While researching this project I found an article on the Mozilla site called Implementing image sprites in CSS. It includes the note "When using HTTP/2, it may in fact be more bandwidth-friendly to use multiple small requests", but fails to explain or expand on this. Looks like I'll need to research this further and possibly write an article on the topic...!
"But I Prefer JavaScript..."
If you prefer JavaScript I have a NodeJS version of this program in the pipeline. Watch this space. | https://www.codedrome.com/css-image-sprite-maker-in-python/ | CC-MAIN-2021-25 | refinedweb | 1,286 | 55.54 |
Materialism as a requisite for scientific merit is an unfortunate consequence of a misconstrual of the principle of uniformitarianism with respect to the historical sciences. Clearly, a proposition – if it is to be considered properly scientific – must restrict its scope to categories of explanation with which we have experience. It is this criterion which allows a hypothesis to be evaluated and contrasted with our experience of that causal entity. Explanatory devices should not be abstract, lying beyond the scope of our uniform and sensory experience of cause-and-effect.
This, naturally, brings us to the question of what constitutes a material cause. Are all causes with which we have experience reducible to the material world and the interaction of chemical reactants? It lies as fundamentally axiomatic to rationality that we be able to detect the presence of other minds. This is what C.S. Lewis described as “inside knowledge”. Being rational agents ourselves, we have an insider’s knowledge of what it is to be rational – what it is to be intelligent. We know that it is possible for rational beings to exist and that such agents leave behind them detectable traces of their activity. Consciousness is a very peculiar entity. Consciousness interacts with the material world, and is detectable by its effects – but is it material itself? I have long argued in favour of substance dualism – that is, the notion that the mind is itself not reducible to the material and chemical constituents of the brain, nor is it reducible to the dual forces of chance and necessity which together account for much of the other phenomena in our experience. Besides the increasing body of scientific evidence which lends support to this view, I have long pondered whether it is possible to rationally reconcile the concept of human autonomy (free will) and materialistic reductionism with respect to the mind. How does mind interact with matter? Such a question cannot be addressed in terms of material causation because the mind is not itself a material entity, although in human agents it does interact with the material components of the brain on which it exerts its effects. The immaterial mind thus interacts with the material brain to bring about effects which are necessary for bodily function. Without the brain, the mind is powerless to bring about its effects on the body. But that is not to say that the mind is a component of the brain.
We have further independent reason to expect a non-material cause when discussing the question of the origin of the Universe. Being an explanation for the existence of the natural realm itself – complete with its contingent natural laws and mathematical expressions – natural law, with which we have experience, cannot be invoked as an explanatory factor without reasoning in a circle (presupposing the prior existence of the entity which one is attempting to account for). When faced with explanatory questions with respect to particular phenomena, then, the principle of methodological materialism breaks down because we possess independent philosophical reason to suppose the existence of a supernatural (non-material) cause.
Material causes are uniformly reducible to the mechanisms and processes of chance (randomness) and necessity (law). Since mind is reducible to neither of those processes, we must introduce a third category of explanation – that is, intelligence.
When we look around the natural world, we can distinguish between those objects which can be readily accounted for by the dual action of chance and necessity, and those that cannot. We often ascribe the latter phenomena to agency. It is the ability to detect the activity of such rational deliberation that is foundational to the ID argument.
Should ID be properly regarded as a scientific theory? Yes and no. While ID theorists have not yet outlined a rigorous scientific hypothesis as far as the mechanistic process of the development of life (at least not one which has attracted a large body of support), ID is, in its essence, a scientific proposition – subject to the criteria of empirical testability and falsifiability. To arbitrarily exclude such a conclusion from science’s explanatory toolkit is to fundamentally truncate a significant portion of reality – like trying to limit oneself to material processes of randomness and law when attempting to explain the construction of a computer operating system.
Since rational deliberation characteristically leaves patterns which are distinguishable from those types of patterns which are left by non-intelligent processes, why is design so often shunned as a non-scientific explanation – as a ‘god-of-the-gaps’ style argument? Assuredly, if Darwinism is to be regarded as a mechanism which attempts to explain the appearance of design by non-intelligent processes (albeit hitherto unsuccessfully), it follows by extension that real design must be regarded as a viable candidate explanation. To say otherwise is to erect arbitrary parameters of what constitutes a valid explanation and what doesn’t. It is this arbitrary constraint on explanation which leads to dogmatism and ideology – which, I think, we can all agree is not the goal or purpose of the scientific enterprise.
712 Replies to “Intelligent Design and the Demarcation Problem”
This premise is clearly false, as most atheists are not materialists (they are actually Buddhists) and materialism does not necessarily follow from atheism. For example, there is no inherent contradiction between substance dualism being true and theism being false. And so if you demonstrate materialism to be untenable, one can still quite rationally maintain their atheism.
If by human autonomy, you mean contra causal, libertarian free will, then I think this premise is both unsupported by empirical evidence and actively contradicted by it.
Contra causal free will necessitates that human beings are their own unmoved movers, who have the ability to enact influence upon the world, yet are themselves immune to physical and environmental factors on their behavior. Yet this is not what we observe in either neuroscience or psychology. Your environment is a major determining factor in how violent, happy, hard-working, adjusted, etc. you are.
In a universe with libertarian free will, “priming” research would be impossible, or at the very least give mixed results. Yet we see time and time again in psychological research that how you are primed determines your actions, thoughts and feelings.
JMcL:
I agree with most of your conclusions, even if my approach to the problem is slightly different.
Personally, I prefer not to derive conclusions about consciousness or other fundamental aspects of reality from purely logical deductive reasoning. My approach is more empirical: I consider consciousness an empirical fact, directly observed in ourselves and inferred in others. That is enough to include consciousness in our map of reality, and to study its laws and the interaction between its phenomena and the other observable phenomena, which is exactly what modern reductionism refuses to do. In that sense, ID is a perfectly correct scientific theory, being completely based on empirical observations and on reasonable inferences based on them.
1: If atheism is true, then so is materialism.
This doesn’t strike me as true – what about certain strands of Buddhists, spiritualists, or property dualists such as David Chalmers and John Searle? A theistic God isn’t the only alternative to materialism.
2: If materialism is true, then the mind is reducible to the chemical constituents of the brain.
This seems fair enough. There are arguments to the contrary, but they mostly seem to be that practically the mind is irreducible, not that it’s in principle irreducible.
3: If the mind is reducible to the chemical constituents of the brain, then human autonomy and consciousness are illusory because our free choices are determined by the dual forces of chance and necessity.
How does something being determined make it not autonomous? A human can be determined but still acting from internal reasons. It is fully compatible with determinism that I form the belief that I want a pie, that I believe that if I want a pie I should get a pie, and therefore I get a pie. Autonomy being illusory by no means follows from the mind being physical or from determinism.
The rest of your argument depends on these earlier premises being sound. Even if we manage to get to the conclusion that autonomy is not compatible with a material mind, there are many who would accept a lack of autonomy rather than materialism about the mind being false. Non-libertarian accounts of free-will can explain our sense of free-will and autonomy, whereas libertarian accounts of free-will have never even inched close to stating how a cause can be non-determined and non-random. These two exhaust the options.
I think we have more reason to think that the brain constitutes the mind than that it doesn’t. The effects of neurological damage, electrical stimulation or chemical stimulation to the brain seem to affect a person’s actual personality rather than any kind of breakdown of transmission of a personality. Are we supposed to think that behind a paranoid schizophrenic there’s a normal mind capable of typical deliberating? And if that’s so, how come this mind doesn’t recall its earlier rational deliberating existence when the schizophrenic is no longer having a severe episode?
As for your claim that
“It lies as fundamentally axiomatic to rationality that we be able to detect the presence of other minds”,
I entirely fail to see how this is fundamentally axiomatic – it’s certainly not anywhere in my notion of rationality. Notions of rationality such as induction, deduction, evidence etc may be normative social concepts, but this doesn’t mean that we have to know for certain that there are other minds behind the behaviour shaping the social concepts for us.
Even taken for granted that it’s fundamentally axiomatic, the materialist seems to be in a better position. “Minds are brains, x has a brain, therefore x has a mind.” The dualist can always doubt whether a brain is actually ‘linked up’ to a mind.
“Explanatory devices should not be abstract, lying beyond the scope of our uniform and sensory experience of cause-and-effect.”
Forms of physics that don’t rely on notions of causality aren’t explanatory? They’re certainly abstract. Besides which, cause-and-effect is a particularly tricky idea rather than something with the obvious simplicity you imply here.
peachykeen:
I would like to comment on some of your arguments about free will.
You say:
Contra causal free will necessitates that human beings are their own unmoved movers, who have the ability to enact influence upon the world, yet are themselves immune to physical and environmental factors on their behavior.
But that is not true. Free will does not mean that, and does not imply it.
That “range of freedom” can be very small, or great enough: that we really don’t know, and it probably depends on many variables. But the important concept is that it is there, and it can and does change our personal destiny.
So, in no way does the concept of free will require or imply that we are “immune to physical and environmental factors”. We do exercise free will “in the context” of our physical and environmental factors. But there is no doubt that those factors do influence us.
IOW, free will is about how we react to those influences, and not about being immune to them.
Peachykeen –
Let me say a few words in defence of my argument.
Premise 1 does not require that all atheists be materialists. But it does require that atheism logically entail materialism. As such, I would argue that something which is non-material cannot trace its origin to a material cause, and ultimately all must trace its origination back to a transcendent, immaterial cause. I think this conclusion is necessitated by a variety of branches of philosophy.
Belief in immaterial entities may be divorced from a belief in a transcendent deity. But I think the existence of immaterial objects is difficult to account for if you do not believe in such a transcendent intelligence – an unmoved mover.
With regards to the existence of libertarian free will, I would argue that a variety of disciplines now point strongly towards the conclusion of substance dualism. Such evidence includes the ability of psychiatric patients to make permanent changes to their neural pathways by focusing their attention in a particular direction. O’Leary and Beauregard argue from the Placebo effect to such a dualistic construct in their book, “The Spiritual Brain.” Jeffrey Schwartz and Sharon Begley argue to a similar effect in “The Mind & The Brain – Neuroplasticity and the Power of Mental Force”.
Sincerely,
Jonathan
peachykeen,
This cannot really be the case, because the “priming research” and everything else would itself have been determined, and no “objective” statement about anything could ever be made, for all would be subject to and the result of the same thing, and we couldn’t step outside of it even to determine that it is caused by anything. It’s self-referentially incoherent.
Jonathan: “Such evidence includes the ability of psychiatric patients to make permanent changes to their neural pathways by focusing their attention in a particular direction.”
I’m not familiar with the work around this area. Do you have links to any article length treatises?
Prima facie this doesn’t seem at all convincing for dualism. Focusing attention in a particular direction would involve the use of neurons, which are arranged in a complicated and constantly changing mesh. If anything, with materialism we would expect to see neural pathways changing when someone thinks about certain things.
Jonathan: “I would argue that something which is non-material cannot trace its origin to a material cause, and ultimately all must trace its origination back to a transcendent, immaterial cause.”
Why does something immaterial that isn’t a theistic deity have to have a material cause as its origin? Atheism is still perfectly compatible with non-material origins of things, just not one that fits the typical descriptions of a god.
Right. And that’s where the “humans have to be unmoved movers on libertarian free will” comes in. If it is in part “not determined,” then it is somehow able to move while being unmoved by other factors. Everything that is free from the causal chain is kind of a little God itself. It’s just difficult to make sense of such a concept, especially since there is no evidence for the existence of things that are able to move while being (even in part) not influenced by other, moving factors. And that includes humans. (Except for “I feel like I’m free,” which isn’t any kind of evidence at all.)
But if “we” can react to (and somehow override) physical and environmental influences, then they aren’t really influences at all, are they? The very fact that we can have “veto power” over such influences means that we have the capability of immunity to them. But again, that’s not what we see in research psychology. What we see is that the unconscious will is primary, and has complete influence upon the conscious will. And the unconscious will is, in turn, influenced by the environment.
Everything being determined doesn’t entail nothing objective being able to be said about the world. Equally, libertarian free will doesn’t entail agents being able to speak objectively about the world.
And nothing objective being able to be said doesn’t entail that we are not justified to various degrees in believing certain propositions. Even after giving up realism we can have a pragmatically subjective account of science as the prediction of future phenomena.
For those interested in philosophy of mind, my senior paper in seminary (kind of like a mini-thesis) talks about reasons to not believe in physicalism, and also provides a suggested way to model non-physical causation in cognitive modeling. Anyone who wants a copy can email me at jonathan@bartlettpublishing.com.
A very shortened version which only talks about the non-physical methodology for cognitive modeling is available in abstract C2 in this years’ BSG proceedings.
peachykeen:
But if “we” can react to (and somehow override) physical and environmental influences, then they aren’t really influences at all, are they? The very fact that we can have “veto power” over such influences means that we have the capability of immunity to them.
No, again that’s not right. The influences remain influences, and do determine the range of our possible responses. But still we have a range of possible responses. We have no “veto power”, but we have the power to respond in different ways, even probably slightly different ways in most cases. That’s free will, and the influences remain influences, and cannot be simply “denied”.
What we see is that the unconscious will is primary, and has complete influence upon the conscious will. And the unconscious will is, in turn, influenced by the environment.
I would not accept your distinction between “conscious” and “unconscious” will. For me consciousness expresses itself at various levels, and what we usually recognize as conscious mind is only a part of those expressions. But always it is consciousness. Conscious processes, even subconscious ones, are never completely “unconscious”. At what level free will really operates remains open to debate, but I would suggest that it usually does not operate exclusively, probably not even mainly, at the level that we usually call “conscious mind”.
And, for the reasons stated at the previous point, the subconscious mind too is certainly influenced by circumstances, and yet it can express free will.
Daguerreotype Process,
Why not? Everything said couldn’t have been otherwise. We would never even know anything objectively, as if we stood outside of the current of determinism, even to know that we were determined. Knowing that you’re completely determined is logically impossible, for you could never step outside to know anything objectively. Determinism breaks down with the problem of real knowledge and what constitutes sufficient grounds of knowledge and one’s vantage point for how this knowledge is obtained.
peachykeen,
The word influence is distinct from the word compulsion for a reason, and that reason is that things that can influence us can have an impact on us without forcing us. You might want to let the implications of the word “influence” sit with you for a bit.
There are a few points one could make, but I will limit myself to one. The constraint to exclude real design (intelligent agency as an explanation) is not arbitrary but something people would agree on prior to making any observations about the world. You would agree on it because intelligent agency could account for any possible observation. The reason why you can’t change this agreement later is the same reason why Rawls proposed the concept of the original position.
JMcL: “1: If atheism is true, then so is materialism.”
peachykeen: “This premise is clearly false, as most atheists are not materialists (they are actually Buddhists) and materialism does not necessarily follow from atheism …”
Daguerreotype Process: “This doesn’t strike me as true – what about certain strands of Buddhists, spiritualists, or property dualists such as David Chalmers and John Searle? A theistic God isn’t the only alternative to materialism. …”
Buddhists are even more immediately irrational than materialists are — for Buddhism explicitly denies that we exist … and, apparently, that anything at all exists. With materialism, the denial that we ourselves exist isn’t a premise of the -ism, but rather inescapably follows from its premises. So, while materialism *is* irrational, one must be willing to critically examine it to see its inherent and inescapable irrationality.
AND the argument presented here isn’t about ‘atheists,’ it’s about what logically follows from atheism, that is, from God-denial. The argument isn’t about whatever ad hoc mish-mash of contradictory propositions this or that God-denier may choose to graft onto his God-denial in a vain attempt to ward off its inherent irrationality, it’s about the God-denial itself.
JMcL: “1: If atheism is true, then so is materialism.”
This would be better expressed as “1: IF atheism is true, AND the material/physical world exists, THEN materialism is true.”
It’s even better expressed as: “GIVEN the reality of the natural/physical/material world, IF atheism were indeed the truth about the nature of reality, THEN everything which exists and/or transpires must be wholly reducible, without remainder, to purely physical/material states and causes.” … as I explore here: You Cannot Reason.
Determinism is compatible with foundationalism, coherentism, naturalism, externalism, internalism, empiricism, rationalism and any other epistemological viewpoint I know of. Why do we need to step outside of determinism to have our beliefs be accurate, and what element of real knowledge requires libertarian free will?
JMcL: “Material causes are uniformly reducible to the mechanisms and processes of chance (randomness) and necessity (law). Since mind is reducible to neither of those processes, we must introduce a third category of explanation – that is, intelligence.”
Actually, material causes are *not* “reducible to the mechanisms and processes of chance (randomness) and necessity (law),” but only “to the mechanisms and processes of […] necessity (law).” For, “chance” has absolutely no causal power whatsoever.
As you point out, mind is not reducible to physical/material necessity; that is, mind is itself. Minds — being agents — are able to introduce new causal-chains into the web/matrix of material causality. Agents are free to act; everything which is not an agent merely reacts.
Depends what you mean by “we ourselves”. By no extent does materialism deny that self-experiencing organisms exist. So far as I can tell you’re tying the notion of libertarian free will into the notion of self, and therefore begging the question by concluding that anything without libertarian free will isn’t a self.
Contra the most common interpretation of quantum mechanics? There’s no logical inconsistency in positing a random event having causal powers.
Daguerreotype Process, were I to tell you, and everyone reading this thread, that you are a fool and a liar (there is a bit of redundancy between the two), would it have been material necessity which caused me to say it, or would it have been a freely chosen act of will?
Now, if you deny that it would have been a freely chosen act of will, then I shall call you a fool and a liar … and, I predict that you shall respond by whining about “incivility” or some other meaningless nonsense.
I’m not all that concerned about civility.
I’m presuming you’re taking a roughly Moorean stance on free will, such that you can dismiss any argument against libertarian free will as wrong because its premises are bound to be less plausible than the existence of libertarian free will itself. I think this Moorean stance is wrong. All the data we have is compatible with libertarian free will, apart from strong convictions which can be well explained by facts other than the existence of libertarian free will.
I’m a bit late into this discussion, but just a few comments on the first 4 premises:
Why shouldn’t atheists believe in the existence of abstract objects – like numbers, for example?
(I’ll assume you mean “reducible” in the ontological sense here, not the epistemological sense.) But ontologically speaking, I don’t see why this follows. Why can’t a material entity have irreducible immaterial properties (like mental properties)? This is the majority position in the philosophy of mind today. (AKA ‘property dualism’).
You could argue that property dualism isn’t compatible with materialism. But bear in mind that property dualists are still substance monists.
Yes, I agree.
If you mean *libertarian* free will, I have to disagree. Having studied it in depth lately, I think it’s an incoherent notion that undermines human rationality. Libertarian free will requires an agent not to have his choices causally determined by anything – not even reasons. But if a choice is not determined, ultimately, by an agent’s reasons, then it is ultimately made for no reason at all. And this is irrationality at its best.
So the best way to defend human rationality is by adopting a compatibilist definition of freedom, where choices are causally determined by reasons, and agents always have the power to do what they want to do.
However, I think the above argument could be modified somewhat so that it establishes a weaker conclusion (not the falsity of atheism, but the falsity of a material conception of the mind).
1) If physicalism is true, the mind is a physical entity or a property of a physical entity
2) If the mind is a physical entity or a property of a physical entity, mental causation is impossible
3) If mental causation is impossible, then human rationality is illusory
4) Human rationality is not illusory
5) Therefore the mind is not a physical entity or a property of a physical entity
6) Therefore physicalism is false
If a lit match dropping on a piece of highly flammable wood causing a fire is explainable in terms of electromagnetic particles, does that mean that the lit match did not cause the fire? Our psychological states have as much causal power as most things we say are causal, regardless of reductionism.
Daguerreotype Process @ 24 said:
But in order for your analogy to work, mental states essentially need to be higher level descriptions of the same physical thing. So what you’re really saying in your analogy is that mental states are identical to complex brain states. There are significant problems with this, though. For example, look up the ‘knowledge argument’, the various deployments of the ‘zombie’ argument, or the ‘inverted colour spectrum’ argument.
$ 0.02:
I think a lot of the trouble on these matters traces to inadequate grasp of the conceptual and observable nature of cause.
I mean by this that cause is not a monolithic entity. We have contributory factors, which may in part be necessary for an effect to occur [absent a necessary factor, an effect is blocked], and may be sufficient [so soon as and where a sufficient cluster of factors is present, an effect WILL occur and/or be sustained].
Building on the fire triangle used by Copi in his logic, go get a box of safety matches:
1: Pull a match, and strike it — heat, oxidiser and fuel are each needed to initiate or sustain a fire.
2: They are also jointly sufficient.
3: Now, hold a burning match, and tilt it up so the flame tries to burn the already burned wood. (It will gutter down and perhaps go completely out if you don’t tilt it back fast enough.)
4: Q: Why is that?
5: A: because you were removing a necessary causal factor, fuel.
6: So we can see demonstrated the reality of necessary as opposed to sufficient causal factors. These are a strong form of “influences,” that must be present if an effect is to occur.
7: In addition, there are contributory factors that may affect but are not necessary. (Soak the match wood — not the striking head — with a drop or two of kerosene, and you probably get a much enhanced flame . . . Don’t do this one at home!)
8: With the distinction fixed in mind, we can see that a lot of the exchange above is missing the difference between influence and control.
10: We are influenced by external and internal factors, but that is not the same as being determined in our mental and volitional acts as we perceive them, on chance + necessity.
11: And, if we were determined, a la evolutionary materialism, we would have no credible foundation for thought, reason or decisions and choices.
12: In fact such evolutionary materialism plainly ends up in self-referential incoherence and it is thus reasonable to reject such materialism on that alone: we directly experience, rely on and see the credibility of what should not be so on evolutionary materialist premises.
$0.02
GEM of TKI
Daguerreotype Process,
Oh I thought you were going to make an actual argument, instead of just asserting something, as you did in the last comment, and as you’ve done here. I’ve yet to see an argument of how it isn’t self referentially incoherent to claim that determinism is absolute. In the case of those who claim to know that we’re fully and absolutely determined, HOW are you not just as determined in your thinking and “conclusions”? You show why it’s not incoherent with absolute determinism, and then you’ll be making an actual argument.
If determinism is true, then your beliefs are solely the result of outside forces, like atoms banging around inside your head. If that is the case, then why should we suppose that such reactions will produce true, reliable beliefs? As C.S. Lewis states in his book Miracles: “If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true, and hence I have no reason to believe my brain is composed of atoms.”
Whenever someone insists that I believe in determinism, I always like to ask “If someone came up with a really good argument that there is no such thing as free choice, would you freely choose to believe it?” In other words, someone tells me to believe in determinism, and I ask “Do I have to?”
Since belief in determinism undercuts your belief that your beliefs are reliable, determinism must be abandoned as a reasonable worldview.
Daguerreotype Process @ 26:
No, they don’t have anything to do with causality. But their relevance is this: the success of your argument concerning causality relies on the falsity of these arguments.
What these arguments show is that it is not legitimate to identify a brain state with a mental state. In other words, these arguments show that a mental state is not just a complex physical state. This means that your later argument, which relies on this premise, falls through, since a ‘mental state’ is no longer analogous to a ‘match’.
You also said:
Yes, this is exactly what they achieve. I think the term ‘spooky’ is perjorative, but the idea that they show mental properties to be immaterial is entirely intuitive. Mental states (joy / pain / thoughts / desires / beliefs) have no weight, no precise spatio-temporal location and a subjective ‘feel’ to them.
Why do you think the majority of philosophers of mind are property dualists? It’s precisely because they see the success of the aforementioned arguments, and they see that mental states are fundamentally different from ordinary physical properties. They see that it is not legitimate to just assert that a mental state = complex brain state. The two are like chalk and cheese (in fact more different because at least chalk and cheese are both material).
So to return to your analogy, if we start saying that ‘matches’ are analogous to ‘mental states’ (i.e. that mental states are just higher level descriptions of the brain) then we’ve stripped mental characteristics of all their defining features. We might talk about ‘mental causation’ but we’re not talking about mental causation as it is generally understood – we’re not talking about subjective qualitative experiences. Thus under your scenario we’ve saved mental causation in name only.
Daguerreotype Process,
If you can’t see the contradiction, it’s not your fault, you are determined not to.
Ok, so I really didn’t want to get pulled in to a discussion on libertarian free will, but I think Daguerreotype Process is right. Clive, please could you explain how libertarian free will is rational?
The way I see it, libertarianism seriously undermines human rationality because it leads to the conclusion that humans make choices for no reason whatsoever. Libertarianism requires that choices be indeterminate – and indeterminate in an absolutely unconditional sense. This is known as the “principle of alternative possibilities” (PAP, for short). PAP means that even given all the same antecedent conditions, an agent’s actions could have been otherwise.
Compatibilism seems vastly more rational, since it ensures that an agent’s choices will be causally determined by his reasons / state of mind / desires / beliefs / moral values / will power, and so forth. Acting in accordance with such reasons is perfectly rational.
Incidentally, does ID need libertarianism? I can’t see any reason why it does. It bugs me a little that the two are so often lumped together.
Not at all. I am referring to the causal power of mental states, whether those mental states are ‘phenomenal’ or not. We’re all happy to allow mental states that are not part of access-consciousness causal powers – I could be irritable without noticing, and have this irritability cause me to do something. I can even have thoughts that are not vocally expressed in my head. I am identifying a mental state with a brain state, with leftover epiphenomenal residue.
Jaegwon Kim summed up property dualism quite well:
Apart from the subjective ‘feel’ aspect, this is conceivably the masked man fallacy. The functional mental states of joy/pain/thoughts/desires/beliefs can supervene upon, and be fully constituted by, material properties. Only 18.3% of Philosophers of Mind lean towards zombies being metaphysically possible, which is hardly the majority. I’d agree that most of them think that mental states are fundamentally different from ordinary physical properties, which is why materialistic functionalism is so popular.
I’m not convinced that property dualism is correct anyway, but even if it is true it doesn’t get you what you want.
Daguerreotype Process:
Ok, I’ll grant that your analogy works where the mental state in question has no phenomenal quality. In such a case, a mental state could maybe be analogous to a ‘match’, and thus maybe one could say that it has causal power.
But in all the cases where mental states cause things in virtue of their phenomenal quality, your analogy breaks down because you’re stripping mental states of all their defining characteristics, and like I said, saving mental causation in name only.
And Kim’s functionalism doesn’t help much here either. Firstly, what are functions? Functions are human concepts – they are not genuine properties of the external world. So the functionalist might talk about mental states, but in reality, they are talking about a concept, and thus the mental states have disappeared.
Secondly, even if functions were objective properties of the external world and not concepts, are mental states really just functions? ‘Pain’ might have a functional role, but can it be equated with this functional role? I don’t see how it can. The phenomenological quality of pain disappears when you start talking about functions. So whilst functions aren’t physical properties, they are not really mental properties either.
Basically, functionalism fails whenever you want to give a causal role to a phenomenological quality. And contra Kim, I don’t think it’s ok to leave all these phenomenal qualities as epiphenomenal mental residue. To save human agency, in many cases (not all, as your example above with irritation highlights) we need to save the causal efficacy of these qualities.
No, I agree. Property dualism suffers from the problem of overdetermination, and so it has problems accounting for mental causation too. But I think it’s a step in the right direction because it acknowledges the distinctness of mental states. Now it just needs to find a way to secure their causal efficacy. I’d argue that this cannot be done within the bounds of a physicalist ontology, which is why I made premise (2) of my reformulation of this blog’s argument the following:
2) If the mind is a physical entity or property of a physical entity, mental causation is impossible.
DP:
Your rebuttal is unfortunately evasive not material:
In fact, the point was and is that material processes of cause-effect driven by chance or mechanical necessity (as Leibniz pointed out long ago and Lewis more recently, who I owe more to than to Plantinga) are IRRELEVANT to issues of truth and validity.
Indeed, Plantinga’s point in essence is that that irrelevancy means that many beliefs are compatible with survival enhancing behaviour.
At most, NS — and recall the claimed [but highly dubious on config space search reasons] source of new information and organisation is chance variation on that model, not culling out based on differential reproductive success [a misdirection that is common] — would support survival enhancing perceptions and response arcs, not credibility of the mind and cognitive processes of reasoning on logic in accessing an accurate view or understanding of the world.
So, per evolutionary materialism, we have no grounds for trusting the credibility of the mind on precisely the process of reasoning that you are using or trying to use. In points for convenience:
____________
>> a: evolutionary materialism argues that the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature. [by def’n]
b: Therefore, all phenomena in the universe, without residue, are determined by the working of purposeless laws of chance and/or mechanical necessity acting on material objects, under the direct or indirect control of chance initial circumstances. [direct implication]
c: But human thought, clearly a phenomenon in the universe, must now fit into this picture. Thus, we arrive at Crick’s claim: what we subjectively experience as “thoughts” and “conclusions” can only be understood materialistically as unintended by-products of the natural forces which cause and control the electro-chemical events going on in neural networks in our brains. [by inclusion in implication.]
d: These forces are viewed as ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance shaped by forces of selection [“nature”] and psycho-social conditioning [“nurture”], within the framework of human culture [i.e. socio-cultural conditioning and resulting/associated relativism]. [elaboration on many lines of common argument]
______________________________
e: Therefore, if such evolutionary materialism is true, then the “thoughts” we have and the “conclusions” we reach, without residue, are produced and controlled by forces that are irrelevant to purpose, truth, or validity. (The conclusions of such arguments may still happen to be true, by lucky coincidence — but we have no rational grounds for relying on the “reasoning” that has led us to feel that we have “proved” them.) [First main conclusion]
f: And, if materialists then say: “But, we can always apply scientific tests, through observation, experiment and measurement,” then we must note that to demonstrate that such tests provide empirical support to their theories requires the use of the very process of reasoning which they have discredited. [self reference is inescapable on appeal to empirical data and inferences therefrom]
g: Thus, evolutionary materialism reduces reason itself to the status of illusion. But, immediately, that includes “Materialism.” [materialism is a part of the reasoned inferences made by some]
h: Should we not simply ask a Behaviourist whether s/he is simply another operantly conditioned rat trapped in the cosmic maze? And, would not the writings of a Crick be little more than the firing of neurons in networks? [self-reference on concrete examples of the problem. The Freudian case shows that this dates back to the 1980s. The Marxian one used to be dated but it is back on the table.]
__________________________
i: In the end, materialism is evidently based on self-defeating logic. [Second main conclusion] >>
_______________
DP, that is what you need to answer to, and the statement of your faith in the ability of your mind on materialist premises is not sufficient to rebut the issue. And, as Plantinga pointed out, chance variation and natural selection are about survival not truth.
GEM of TKI
PS: By way of contrast, if our mental and perceptual equipment is designed and implemented to be generally reliable, we have good reason to trust them, equally in general. Of course we sometimes err, but we have reason to believe we have the ability to detect and correct such error. (E.g. think about how a spoon in a glass of water appears bent, but running a finger along will show that something has altered the in-built interpretation of linear transmission of light.)
PPS: let me underscore again, lest a strawman misperception prevails, the point above is that cause-effect bonds and chance and necessity as drivers and controllers are IRRELEVANT to issues of truth, validity, and right or wrong. This decisively undercuts any assumption or assertion that on chance plus natural selection, we can assume generally accurate work of the mind [here an epiphenomenon of brain]. That would have to be SHOWN, on materialist premises; and thence we get into cycles of self-referential incoherence BECAUSE THE THOUGHTS AND CONCLUSIONS WOULD TRACE TO ACCIDENTS AND BLIND CAUSE-EFFECT CHAINS LINKED TO SURVIVAL, NOT TO TRUTH. (And that was what Plantinga’s example of the ape-like creature was about, and his conditional probability inference.)
The human mind is free to will, or intend, events or goals that logically correspond to their physical situation, and/or those that do not.
IOW, I might be in chains in a basement somewhere; of course I do not have “free action” in the sense that I can choose to not be in chains, sprout wings and fly off; but I certainly can intend for such a situation to occur, whether such an intention is a logical possibility or not.
I can also intend to eat a slice of blueberry pie, and intend to invent a sports car powered by popcorn in the same situation – neither of which have anything to do with my current physical state of being in chains in a basement.
Humans with free will have unfettered ability to intend, even beyond what they can specifically imagine, by simply intending an emotional or symbolic outcome, such as common themes of triumph, freedom, love, innovation, enjoyment, etc.
I think the best argument that materialism/nature doesn’t lead to truth is simply that there are so many people here arguing for contradictory truths.
If materialism/naturalism necessarily leads to truth, why do people disagree? If it doesn’t necessarily lead to truth, then how can one possibly discern what is true?
And? We have good reason to think that processes can arise which nonetheless result in producing creatures with veridical beliefs despite there being no teleological drive towards such.
And, remarkably, we find some beliefs which are compatible with survival-enhancing behaviour but are not true. Cognitive misperceptions arise everywhere: the fundamental attribution error, the above-average effect, any number of other psychological mistakes which are universal. As long as we have logic and perception being reliable (can you provide some alternatives which would have had an equal or more likely evolutionary route?), and a cluster of true beliefs, then we can work on expunging the false beliefs because they will not hold up to further testing. Asking for more, absolute solid indefeasible knowledge, is a sceptical position that gets us nowhere and applies equally to everyone. Self-reference is inescapable for everyone.
Because it leads to truth through a twisty and difficult road. If theological creation leads to truth, why do people disagree?
Or you’re defining mental states’ entire defining characteristics as phenomenal character, which I’m not fully happy to accept. Although phenomenal experience is a part, it’s not all.
Which cases? If mental states cause things in virtue of their phenomenal quality, the quality is functional. If it doesn’t cause anything, it’s epiphenomenal and therefore useless. The zombie argument relies on the thought that mental states can function without any phenomenal qualities playing a role. Inverted qualia arguments suggest that phenomenal qualities can differ while playing the same functional role, which is more of a problem, but still doesn’t lead to more than epiphenomenalism about the specific nature of the experience. And I’m not inclined towards functionally equivalent phenomenal qualities that can be different; have you read Dennett’s “Quining Qualia”? Paul Churchland’s “Chimerical Colors: Some Phenomenological Predictions from Cognitive Neuroscience” is also a convincing read for functionalising phenomenal qualities.
Functionalists either functionalise phenomenological quality, hence giving it a causal role, or simply deny that phenomenological qualities have any causal role at all. None of the thought experiments get us to more than that.
The mental and/or brain states fill in the concept. They’re a manifestation of the concept. If we talk about ‘species’ or a ‘mouse trap’, are we equally making groups of animals or a couple of blocks of wood and pieces of metal disappear?
DP:
First, take a little look at how across time materialists of the evolutionary stripe have sought to undermine the thought of those who differ from them. Then ask your self what happens when the knife cuts the other way, as I did above.
Next, I see you:
Not at all, you have simply asserted a belief that will have a lot of institutional support.
The point is that the processes of chance and necessity that you cite have no credible power to design complex life forms on digitally coded, algorithmically functional complex information and associated implementing machines, as can be seen from the state of OOL studies, much less major body plans that are embryologically feasible, much less a mind that rises above forces of chance, necessity and survival.
That we have good reason to think that we do know and think reasonably well cannot be accounted for on such supposed forces. Remember, you need to trace from physics and chemistry in a warm little pond or the equivalent to a mind with the capacities we are discussing.
And, when we see the reductionism of mind to brains and wiring of networks on chance variation and survival of what survives, we have no good basis for accounting for the intricate information involved at all levels, on evolutionary materialist premises.
This inadvertently comes out in your:
In short you acknowledge that there is an unreliability in the processes you posit, then you propose logic as the solution. It is, indeed, but it cannot be bootstrapped on chance variations and survival selection in the plains of E Africa or the woodlands thereof.
So, you are begging the question, and compounding it by erecting a strawman caricature of those who have challenged your claims, to GROUND the credibility of the mind and reasoning on evolutionary materialistic premises.
Nor am I shut up to undirected chance + necessity as causal mechanisms, so I have no need to supply an alternative evolutionary strategy as such: that we were designed to be reasoning creatures is more than good enough, whatever mechanisms were used being of little account.
It is your side that purports to explain all — including reasoning and consciousness — on chance and mechanical necessity.
So, you need to do it, without ending up in self referential incoherence, and so far, no joy.
GEM of TKI
PS: A few words on the consequence of the view that phenomena are driven by chance + mechanical necessity, from the case of Crick in The Astonishing Hypothesis, 1994:
Philip Johnson’s rejoinder was richly deserved.
Let me continue on Kairosfocus’s post about Plantinga’s argument against naturalism in 38.
Plantinga’s argument against naturalism could be rebutted if:
1. Let A be the set/namespace of propositions that ensure the species survives.
2. Let B be the set/namespace of propositions that are true.
3. Plantinga’s argument against naturalism holds if A != B in even some cases.
4. A rebuttal of Plantinga’s argument would therefore have to show that A = B always.
It’s easy to demonstrate that 4 does not hold, since completely and obviously fallacious propositions could still ensure that a species survives.
For example, there could be an evolved belief: “When you are good, Santa Claus will bring you presents on Christmas.” It could then be possible that because of this belief there would be no wars etc. and the species would survive, even though the belief is obviously false.
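The set comparison above can be sketched in a few lines of Python. The belief contents are hypothetical placeholders invented purely for illustration, not drawn from the discussion:

```python
# A: beliefs that enhance survival; B: beliefs that are true.
# All belief strings below are illustrative placeholders.
survival_enhancing = {
    "fire burns",
    "tigers are dangerous",
    "Santa rewards good behaviour",  # false but (per the example) survival-enhancing
}
true_beliefs = {
    "fire burns",
    "tigers are dangerous",
}

# Premise 3: the argument holds if A != B in even one case.
assert survival_enhancing != true_beliefs

# The false-but-useful residue that blocks the rebuttal demanded by premise 4:
residue = survival_enhancing - true_beliefs
print(residue)  # {'Santa rewards good behaviour'}
```

A single element in the set difference is enough to break the required identity A = B, which is the whole force of the example.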
Innerbling:
If Plantinga’s argument were that silly, it wouldn’t be worth rebutting. But Plantinga’s argument is more subtle. He says that the probability that our cognitive faculties are reliable given evolutionary selection is “low or inscrutable.”
Green,
It is enough to show that determinism is self defeating when it is absolute, and if it is not absolute, then we’re back to free will.
Daguerreotype Process,
What is it that determines everything? Is it something physical moving about, causing everything to occur, even thoughts?
Clive:
Your main argument for this seems to be that if we were determined, we would never be able to know it (thereby making determinism self-referentially incoherent).
But why would we never be able to know it? Why can we not know it by introspection? For me, one of the most compelling pieces of evidence for determinism is my introspective experience. I know that my actions are determined by a combination of my beliefs, desires, moral values, long-term goals etc. I know that I always act for the most compelling reason at the time, and that I could not act differently unless I had a desire to act differently.
So I fail to see the force of your argument. Were you perhaps referring to physical rather than mental determinism? I could see how that strips humans of all rationality. But how could you argue against a determinist who is a substance dualist (like myself)?
Well, I wouldn’t say you’re back with any sort of free will worth wanting. I’d say you’re either back with pure randomness, or with irrationality. And why is that worth having?
Green, I wonder if you believe in libertarian free will and just call it compatibilism because you define the two terms differently.
Libertarian free will is a type of determinism. It states that outside forces combined with an internal cause (immaterial self) produce a choice. Under libertarian free will, it is coherent to say that any given agent under any given set of circumstances will determine one and only one choice. It could not be otherwise. The difference between this and other types of determinism is that the immaterial self is a factor.
Think of a choice you made in the past. Now if right before you made that choice, I swapped out your immaterial soul and swapped in a different soul, might that soul make a different choice, even though circumstances are absolutely the same?
I think you’re confusing libertarian free will with simple indeterminism. In the latter, our own free actions are simply uncaused. Some might even extend this idea beyond human actions into the natural order, such as invoking Heisenberg’s Uncertainty Principle. The problem with simple indeterminism is that if our choices (or a part of our choices) are simply random, then how can we justifiably control our behavior? There seems to be no basis upon which any responsibility can exist if our actions are random.
But my whole point was that functions don’t have a phenomenal quality! By functionalising things, this is exactly the feature of mental states that you lose. You seemed to concede this further down when you wrote:
What part of “giving it a causal role” retains the phenomenal quality of a mental state? To functionalise something you basically find the conditions under which the mental property (e.g. pain) is instantiated, and the effects that it typically causes. This is essentially what “giving pain a causal role” amounts to. Where in such a description is the feeling of pain, though? It is nowhere to be found. It has been lost entirely.
Jaegwon Kim himself says at the end of Physicalism, or Something Near Enough that phenomenological properties can never be functionalised or reductively explained. This is why he tries to argue that the mental states involved in agency either (a) have no phenomenological quality, or (b) also have a causal role, thereby enabling them to be functionalised. But my whole point is that (contra Kim) there are some mental states that are causally efficacious in virtue of their phenomenological properties. Like itches that make you scratch, for example.
J.P. Moreland has also argued (cogently, I think) that mental-to-mental causation requires that mental states be causally efficacious in virtue of their phenomenological quality. He argues that if the thought “George Washington is president” and the thought “Ben Franklin invented bifocals” do not have any phenomenological difference between them, then how would we ever know it?
So it seems that contrary to Kim, we do need mental states that are causally efficacious in virtue of their phenomenal quality – the phenomenal quality being the very feature that can’t be functionalised. Thus functionalism can’t save all mental causation.
Green,
I see that your version of what determines us is different than mine. When I reply to a statement that we are determined, I have in mind that every single thing, without exception, obeys something that we have no control over whatsoever, and which determines every single aspect of anything ever thought or imagined or dreamed or seen, or wished for or hoped, etc., including introspection.
Ref:
Kane, R. (2002) Introduction, in The Oxford handbook of free will, ed. R. Kane (Oxford, Oxford University Press)
Adel in 46:
If Plantinga’s argument were that silly, it wouldn’t be worth rebutting. But Plantinga’s argument is more subtle. He says that the probability that our cognitive faculties are reliable given evolutionary selection is “low or inscrutable.”
Yes, the way I presented Plantinga’s argument was rather silly, but I think I caught the essence of it. To my limited knowledge, the argument essentially says that cognition and/or consciousness which develops through a process aimed only at survival benefit has a low chance of recognizing truth; when it does, it does so by accident, i.e. when the survival-enhancing proposition and the true proposition fall in the same set, when A = B. And if we were to draw a Venn diagram of the space of finite propositions in which A overlaps B, we could see that the probability of hitting the right mark is low indeed, and in some situations zero. As such my silly elaboration of the argument stands if we add one more evolutionary/naturalistic assumption:
1. Consciousness’s primary goal is survival, and cognition is primed to select the choice that has the most survival benefit.
From this it follows that if survival is primary, even one belief where A != B would mean that the true proposition is always ignored and the fallacy selected. Thus it might be that I am really wearing pink socks, not black, but thinking that I am wearing black socks also gives a survival benefit. Or it might be that there are pink aliens in the world, but not sensing or seeing the aliens gave our ancestors a survival benefit, so my consciousness does not see them. Or it might be that counting correctly has always been detrimental to propagation because of the “nerd” effect, and thus our consciousness has evolved to always count wrongly. And so on.
Clive:
Yes, I agree, ‘control’ is definitely a key component of agency.
I would want to distinguish between ‘proximate control’ and ‘ultimate control’ too, since I think both compatibilists and libertarians have a problem with the latter.
‘Proximate control’ is synonymous with ‘self-determination’, and an agent can be said to be self-determining (and thus in proximate control) if she acts upon her own desires and wants. Compatibilists capture this aspect of control. Libertarians sometimes do, but not always (it depends which libertarian theory you’re talking about).
The other sense of the word control, though, is ‘ultimate control’. ‘Ultimate control’ is synonymous with ‘self-origination’, and it captures the notion of an agent being in control of the ultimate source of her choice. Both libertarians and compatibilists struggle with this sense of the word control. Whilst an agent’s desires are his own desires, I don’t see how either party can give an account of how the agent is in control of the ultimate source or origin of these desires. Given determinism, these desires are the result of antecedent conditions (previous desires and so forth). Given ‘event-causal’ theories of libertarianism, these desires are ultimately random, and given ‘agent-causal’ libertarianism, these desires are ultimately inexplicable. So I don’t think the compatibilist is any worse off than the libertarian when it comes to ultimate control. And I think they’re often better off when it comes to proximate control:)
Anyhow, is ID tied to any particular account of freedom, agency, or control?
Stating that agent causation is inexplicable makes no sense to me.
If an electron is shown to be basic (not composed of parts), then one might ask why an electron has a negative charge. And the answer would be that it is a property of the electron. Period. No further explanation exists, nor is needed.
Same thing with agent causation. The agent is the origin of the desire. It is self-caused. It is not inexplicable, but self-explanatory.
If self-determinism exists, and only if self-determinism exists, is rational thought possible. It provides the necessary gap between stimulus and response where the immaterial soul or self can act.
If compatibilism is true, then the soul has no causal power. Hence, under any given set of circumstances, all souls would make identical decisions. This means that if at conception, or birth, or whatever, your soul was taken out and another soul put in your body, that other soul would live an absolutely identical life to the one you have lived. So your choices are not really your own, because your desires are not your own. What room is there for praise or blame? It would be like faulting an Alzheimer’s patient for forgetting something.
If self-determinism is true, then different souls would act in slightly different ways, so your desires and choices really are your own. We do not have to understand how and why this works in order for it to be true, or for us to affirm it rationally.
Green (#57):
No, I don’t think that ID is “tied to any particular account of freedom, agency, or control”. In a sense, ID is not necessarily tied to the concept of free will.
The fact is that ID gives central importance to the process of conscious design as the only possible source of that property of designed objects which we call CSI (or any equivalent definition).
Obviously, it is possible that conscious design requires free will in the designer, and indeed I believe that way, but such a recognition is not really necessary to develop the theory of ID.
Obviously, once we finally admit (and it is certainly time we do) that consciousness and its processes are a part of reality and must therefore be a part of science, then an objective analysis of the intrinsic laws of those processes becomes naturally part of science too.
It is however my personal conviction that, of all the properties of conscious processes, free will will probably remain the most elusive, the most difficult to define and understand. Indeed, according to the interesting sub-definitions you have offered, I would certainly describe myself as a believer in “‘agent-causal’ libertarianism”: I think that the ultimate source of free choice is related to the transcendental nature of the I, and therefore eludes a final understanding.
In that sense, I prefer to use the word “free choice”, and not “desire”, because IMO the concept of “desire” does not completely capture the essence of free processes: the free choice could be better described as the ability to choose one desire against another.
Finally, I really can’t understand and accept compatibilism. As I understand that you have a deep knowledge of the subject, and I have appreciated your very clear remarks on it, I would like to understand your position better, using exactly your terms. So I will give some specific questions, and I am sure that you can clarify your personal thought to me.
You say:
As I do believe in the PAP, are you saying that you don’t? (should be so, as you define yourself a compatibilist).
‘Proximate control’ is synonymous with ‘self-determination’, and an agent can be said to be self-determining (and thus in proximate control) if she acts upon her own desires and wants.
That is not clear to me. I would think that the only real control is what you call “ultimate control”.
Again, the emphasis on “desire” can be misleading. I agree that a desire is a conscious representation (in feeling) that is connected to the process of choice. The problem is whether the desire, as an internal state, is determined or not. I am not sure I have understood your position about that.
As I see it, we have more or less the following possibilities:
1) Some entity acts in a purely deterministic way, its behaviour is in principle completely understandable in terms of previous states, and the entity is not conscious. That’s what I would call a physical machine. That’s probably what many reductionists think we are. Consciousness has no role in that. I reject completely such a model of human beings (as probably you do too).
2) Same as in 1, but conscious representations are constantly connected to those processes. Even if we do not affirm a specific explanation of consciousness (independent principle, emergent property, or whatever), the fact remains that those conscious representations (“inner states”), while being in relation to the input-output of events, cannot influence it: they can only be influenced by it.
Is this your position? Is this what you intend as “compatibilism”?
But, in this model, what you call “desires” are inner states which must be considered a consequence of previous states, be they viewed as outer or inner (indeed, in this model, there is no real difference between the two things, because inner states are a necessary consequence of outer conditions).
So, if this is your position, could you please explain to me in what sense it is objectively different from simple determinism? I really can’t see it, and that’s why I have never had any esteem for compatibilism.
3) Finally, my position. Consciousness has to be described as an existing principle, obviously connected bidirectionally to the physical world through the body and the brain. A correct description of conscious processes requires recognition of the intuitive sense of free choice. I accept and affirm that such an intuition is true, not because I think it can be proved externally, but because nothing in our map of conscious realities would make any sense otherwise.
Free choice remains vastly a mystery, but it certainly means that our final behaviour cannot be completely explained according to previous circumstances, both outer and inner. In that sense, it cannot even be explained according to our “desires”, which should be considered as “previous internal states”. So I think that this model naturally includes both the concept of PAP and that of what you call “agent-causal” libertarianism.
Now, I suspect (but would like you to confirm and explain) that in reality you stay somewhere between 2) and 3). If that is the case, could you please clarify in what sense, possibly referring to the models I have given? I am really interested in that, because I cannot understand how a position intermediate between those two models is really possible (avoiding, obviously, mere word games).
“If an electron is shown to be basic (not composed of parts), then one might ask why an electron has a negative charge. And the answer would be that it is a property of the electron. Period. No further explanation exists, nor is needed.”
Not true. If you can think of the question then there is further explanation required.
Late atheism.
p.p.p.s. I didn’t read past the article (7. Therefore, atheism is false.) so sort of flying blind here. I hope I didn’t completely miss the point. Regrets if I did.
tgp out.
TGP,
“I’m sure, is that free will is required to generate information.”
In the end, this is precisely the central contention in Abel and Trevors’ “Three Subsets of Sequence Complexity”:
The fundamental contention inherent in our three subsets of sequence complexity proposed in this paper is this: without volitional agency [actual, free will] assigning meaning to each configurable-switch-position symbol, algorithmic function and language will not occur.
TGP:
Great post. I completely agree with all you say.
I would sum up this way:
1) CSI (language and similar) cannot be explained in any way by purely deterministic models.
2) In particular, the specification component of CSI requires the concept of consciousness to be even merely defined.
3) As non conscious entities cannot generate CSI (which is both empirically verified and theoretically inferred), it’s easy to infer that some properties of consciousness are necessary for the generation of CSI.
Which is exactly your point.
Now, if we want to specify better which properties of consciousness are actively implied in the generation of CSI, we have to look at our subjective experience of producing designed objects, such as strings of language, machines, software, and so on.
Here I would partially differ from your point in the sense of expanding it. Indeed, I believe that many different conscious processes are necessary to generate CSI, and all of them are exclusive of conscious experience, and have no counterpart in purely algorithmic processes. I will try to list the most important ones:
1) Perception and representation of meaning. Meaning is a purely mental experience. All cognition is possible only because we attribute, recognize and consciously represent meanings.
2) Feeling of purpose. Purpose is a function of feeling, more than cognition. It is indispensable to the definition of function, which is the basis for the concept of FSCI. In a sense, meaning and purpose have some similarities, but I believe that there is some difference between the two concepts.
3) Free will: the intuitive perception of agency. I believe that, without free will, we would have no concept of conscious agency. Free will is the necessary consequence of the output connection of consciousness to reality, just as perception and cognition are the necessary consequence of its input connection. As you very correctly say, free will is required to generate information.
That’s one of the reasons why ID is so fundamental, not only to get rid of darwinism, but also to get rid of many other fundamental follies of contemporary thought, first of all reductionism and strong AI theory.
Finally, as a very good collateral demonstration that human knowledge is not merely algorithmic, and therefore a demonstration of the fundamental role of consciousness in it, I would strongly suggest Penrose’s argument based on Godel theorem.
TGP & GP:
Very well said.
My view is that we need to also look at these things “forward” on the evolutionary materialistic model, using Mrs O’Leary’s 4 unexplained big bangs:
When you look above, you will see that the defenders of the evolutionary materialist thesis consistently assume what they have no right to, the reliability of their minds, and try to shift the burden of proof to those who point out that they are building on a foundation that cannot support that assumption.
Notice how they twisted Plantinga’s argument that on evo mat premises the reliability of mind is low or inscrutable, i.e. the mechanism is irrelevant to the result.
But neurons firing away in mV of impulses, and connected in chains that embed a lot of FSCI, are inexplicable on materialistic evolutionary grounds. Worse, mV, ion gradients across membranes, and pulse repetition rates in Hz are utterly irrelevant in themselves to meaning, inference, implication and warrant for knowledge. Brains are not self-explanatory on reason.
The meaning is from somewhere else, imposed on the physics, chemistry and neuron network architecture.
But if you are ideologically committed to mind emerging from brain by some materialistic poof-magick [I deliberately added the k, to point out that this is not necessarily benign . . . ], you will staunchly defend your system until it breaks down utterly.
Remember the utterly true believer, bitter end Marxists of our youth?
This thing will not be decided on mere arguments, but by the collapse of institutions and movements that build on the worldviews.
And the amorality and irrational radical relativism of evolutionary materialism will be a big part of that collapse, on the example from Plato when this last was a major movement.
Unfortunately, if institutional science allies itself too tightly to such a doomed movement, it too will take a terrible blow when the collapse comes.
GEM of TKI
PS: to read Plantinga in his own voice, cf here, and this is the actual argument he made, in 58 pp as a supplement to a book.
Upright @ 63 cool. Great paper. It will bear rereading a time or three. Thanks for the confirmation.
GP @ 64. Excellent points. Thanks.
GEM @ 65 “This thing will not be decided on mere arguments, but by the collapse of institutions and movements that build on the worldviews.”
Nail – head.
“Afraid” US gov’t will be in vanguard…
What physicist was it who said that science only changes when the old guard dies off? Or words to that effect.
Somewhat related and maybe of interest,
As Upright pointed out in the paper he referenced @63, Shannon’s definition of information is seen as inadequate to explain the generation of functional information. Even so, Claude Shannon’s work on ‘communication of information’ actually fully supports Intelligent Design, as is illustrated in the following video and article:
DNA and The Genetic Code Pt 3 – Perry Marshall – video
Skeptic’s Objection to Information Theory #1:
“DNA is Not a Code”
TGP, it was Max Planck who said that. Planck was the father of quantum mechanics and a devout Christian;
“A new scientific truth does not establish itself by its enemies being convinced and expressing their change of opinion, but rather by its enemies gradually dying out and the younger generation being taught the truth from the beginning.”
BA’s quote at 71.
Nail – head.
This whole “responsiveness” of science to critique is a great deal of hot air. As Berlinski points out, scientists run from valid objections just like everyone else.
Gpuccio @59:
In answer to your questions, yes, as a compatibilist, I deny PAP. I think the idea that you can make decisions completely contrary to your beliefs, desires and so forth is absurd. With regards to your 3 categories… as a substance dualist, I deny (1). I also deny (2), since you describe this position as entailing that “conscious representations (“inner states”), while being in relation to the input-output of events, cannot influence it: they can only be influenced by it”. I deny this because I think that causation runs both ways: i.e. physical states causally affect conscious states, and conscious states also causally affect physical states. So I would accept your position (3) but without the “free will” part. So I think that consciousness is a fundamental part of reality, and that it causally affects our brains, and vice versa. However, I think that “free will” is a hopelessly incoherent notion, that either entails that decisions are arbitrary, or that they are irrational. As I said above, I think that our decisions are causally determined by the combination of our beliefs, desires, long term goals, moral values, and so forth, and that there is no ‘arbitrary’ or ‘irrational’ part to decision making – which libertarianism seems to lead to.
Let me just briefly explain why libertarianism leads to either arbitrariness or irrationality. Under ‘event-causal’ theories of libertarianism, PAP is satisfied by injecting an element of indeterminism into the causal chain leading to an agent’s action or decision. Some have argued that injecting indeterminism into the causal chain actually undermines human agency because it takes away the agent’s control over the action. However, this does not have to be the case. Mele (2006) has developed an ‘event-causal’ libertarian theory where agential control is still exercised. He proposes that the beliefs and thoughts that come to mind during the process of deliberation come to mind in an indeterministic fashion, but that from there the agent takes over and his/her decisions are causally determined by the mental states in question. Here you’ve got PAP and proximate control, but no ultimate control. This is the best that event-causal theories of libertarianism can do. Ultimately the source of decisions is arbitrary. The indeterminism injected to gain PAP in this account thus seems to add nothing, and one might as well just be a compatibilist.
Agent-causal theories of libertarianism attempt to overcome this problem (and gain PAP, proximate and ultimate control) by positing that an agent is a distinct ontological entity – specifically a substance rather than an event. (However, they’re quite clear that their view does not require substance dualism, and agent causationists such as Timothy O’Connor and Robert Clarke both hold that the ‘substance’ in question is a physical substance – specifically the human animal. Having said this, I think it could be consistent with substance dualism if you wanted.)
Most have rejected the ‘agent-causal’ view of libertarianism because the idea that a substance, rather than a property of the substance, can cause anything is unintelligible. I think it fails for about 4 other reasons, too. But I’ll just list one of them here, namely the fact that it only gives agents the power to make irrational decisions (and who wants that?). Here I’ll just paste in an edited version of the thought experiment that I wrote to Clive above:
Agent-causal libertarianism seriously undermines human rationality because it leads to the conclusion that humans make choices for no reason whatsoever. Agent-causal libertarianism requires that choices be indeterminate – and indeterminate in an absolutely unconditional sense. Under the agent-causal theory of libertarianism, an agent is causally influenced by his/her mental states, but is not determined by them. The ‘agent’ (as a substance) is meant to have the final say. I don’t think there is anything rational about this, and thus I think that all the agent-causal theory of libertarianism gives you is the power to make irrational decisions*.
*In fact, I’d argue that it doesn’t even get you this, because substances can’t be causes.
** I would also argue that acting for no reason is the same thing as not being in control, and thus that agent causal libertarians fail to give an account of both proximate and ultimate control. – but this post is already long enough so I’ll stop there.
Sidenote: GP, you are no doubt the most patient and thoughtful ID proponent on earth. Thanks go to you.
BA @ 71. Thanks. I’ll save the quote.
On tgpeeler’s comments @61:
“Beware when the great God turns loose a thinker on this planet.”
Ralph Waldo Emerson
Green:
Start from your experience of thinking and deciding for yourself.
If that experience is not real — per your worldview — it leads you to self-referential incoherence.
Without real freedom to decide and think and act for ourselves, our whole thought world disintegrates.
Labels and dismissive arguments don’t help.
GEM of TKI
Kairosfocus:
“Per my worldview”?? Kairosfocus, with all due respect, have you read anything that I’ve said? I am a substance dualist. I do not think that consciousness is illusory; it is a fundamental feature of reality. My determinism (or compatibilism, whatever you want to call it) entails that all my decisions are determined by my mental states – my beliefs, desires, moral values, long term goals, etc. Please tell me why knowing that my decisions are causally determined in this way is self-referentially incoherent?
Wow. I’m the one using labels and dismissive arguments? I haven’t seen a single cogent argument for libertarianism from you – in fact I haven’t seen a single argument full stop. I took the time to clearly lay out the arguments of the two of the main camps in the libertarian literature (‘event-causalists’ and ‘agent-causalists’), showing why they both fail. You have shown me no argument to the contrary. In fact, I doubt you are even familiar with the libertarianism literature, since surely if you were, you would have tried to counter my argument, rather than just simply asserting that free will is real. Please show me an argument. And don’t give me intuitions because intuitively I can equally say that I know my decisions are causally determined by my mental states, hence compatibilism.
Drew at #50 asserts that libertarian free will is a kind of determinism. This is odd.
My previous understanding of libertarian is *same* soul + *same* conditions = *different* outcome each time. But that would be simple indeterminacy.
Whereas my understanding of compatibilism says *same* soul + *same* conditions = *same* outcome, as Drew does. It does NOT say, *different* soul + *same* conditions = *same* outcome. Some compatibilists might, but that treats the soul as something ineffectual and contentless, and that’s not what compatibilism is.
Most of the ‘libertarians’ above forget that the makeup of a soul is part of the prior conditions of any decision it makes, and that every soul makes decisions based on reasons and prior inputs, including its own experience, current mood, biochemistry, etc. To the extent that a soul’s decisions are not based on prior inputs, it is just making irrational, random decisions.
Once that is considered, there is again a straight choice between compatibilism and simple indeterminacy.
Freely made choices are made by a deterministic process. It’s weird, but that’s the way it is, if you reason far enough. Thank you Green for trying to get them there.
*
As for CSI, we shoot ourselves in the foot if we insist on attaching it to this ill-defined kind of agent-causation. It annoys me muchly. It is much better to say simply that CSI is the result of a particular kind of process, rather than to try to make these arguments based on fundamental philosophical presuppositions. Otherwise we escape from a christian scriptural fundamentalism into just as narrow a christian philosophical fundamentalism (although I appreciate it doesn’t seem like that to those I am criticising).
CSI results from a particular kind of process, algorithmic or not. Specifications do require intelligence to be well defined, but why attach the consciousness and libertarianism to this? Those are actually extraneous.
As for strong AI, I think that is still an open question, with the caveat that the term may be as ill-defined and useless as libertarian free will (with the PAP thing). It may be that even humans do not display strong AI in its strongest sense, but are merely conduits of information, learning from other humans, from designs in nature, and possibly even from God directly. In that case all that is good and rational and ordered comes ultimately from God: sola gloria deo, nihil sine deo.
Actually, please read the literature on libertarianism before replying, or else I’m wasting my time. This SEP article is a good place to start:
For the purposes of CSI, it is enough to simply assume that intelligent agents effectively share a dictionary of concepts that supervene mere physics, concepts such as ‘rotary’, ‘motor’, ‘tail’ with the modifier ‘long’. That gets you Dembski’s tractability condition for specifications. That’s all you need for CSI.
Green:
Re:
Determinism normally refers to a dynamical process that starts from initial conditions and produces outcomes mechanically, more or less. That turns “dualism” into self-referential incoherence.
No, I offer no proof that we make real decisions. I simply note that once we set up a criterion that turns decisions into the mere mechanical playing out — pardon that sort of language — of prior states, freedom to choose has gone. We may have the illusion of choice, but not the substance.
And, absent real choice we end up in a self-referential reductio: do you hold your view because you have chosen to follow the facts and implications, or because a prior state just happened to be so, and had it been otherwise your view and thought as subjectively experienced would have been necessarily different, not because of truth but because of mechanical necessity or an analogue — let’s call it a software version — thereof?
If so, then it is merely a matter of who or what manipulated or programmed initial states that drive outcomes robotically, or if you will, the happenstance of initial conditions.
Instead, as I suggested earlier, we need to distinguish between influences or constraints and those sufficient factors that once present WILL make an outcome happen.
And I knew a man — a professor in my Uni — who would occasionally deliberately do something utterly diverse from what he would otherwise wish, sometimes by doing a bit of dice throwing to determine his decision.
It once saved his life by “making” him choose to be late for a flight that then crashed.
GEM of TKI
—Green: “I think the idea that you can make decisions completely contrary to your beliefs, desires and so forth is absurd.”
Tell that to every smoker who knows that he should stop but chooses not to.
—Green to kairosfocus: “Please tell me why knowing that my decisions are causally determined in this way is self-referentially incoherent?”
You exhibit incoherence every time you complain about someone else’s behavior.
StephenB:
Yes, and WHY does the smoker continue to smoke? Because his desire to smoke outweighs his will power, moral values and so forth. He is simply acting on his strongest desire. Compatibilism in action.
Kairosfocus:
This just simply doesn’t follow. Please explain how determinism >> self-referential incoherence. I am still in need of an argument.
You’ve set up a false dichotomy. I hold the view I do because I have followed the facts and implications, and these facts and implications, combined with my prior knowledge and my current mental states, causally determine that I’ll reach the decision I do. If they don’t causally determine my decision (read: libertarianism) then what does? Nothing. Yes, how rational.
This proves nothing. It is entirely consistent with compatibilism. Why did he choose to throw the dice? Because he DESIRED to do something spontaneous and different. Again, this is compatibilism in action.
—Green: “Yes, and WHY does the smoker continue to smoke? Because his desire to smoke outweighs his will power, moral values and so forth. He is simply acting on his strongest desire. Compatibilism in action.”
Your original statement was that an individual cannot act against [a] his beliefs and/or [b] his desires. I have already refuted [a]. Surely, you understand that I can refute [b] just as easily with another example.
In any case, I knew that you would react that way, which is why I included section II, where I wrote, “You exhibit incoherence each time that you complain about someone else’s [kairosfocus’] behavior.”
By your lights, KF’s combination of mental states prompted his behavior, an activity over which he has no control. So, why are you complaining?
[“Kairosfocus, with all due respect, have you read anything that I’ve said?”]
[“Wow. I’m the one using labels and dismissive arguments?”]
Is it the case that he could have acted differently, or is it not?
StephenB:
You’ve misconstrued my original statement by making it an “and/or” with regards to desires and beliefs. What I actually said was that an agent cannot act against his “beliefs, desires, and so forth.” What I meant by the comma “,” was AND. No compatibilist separates mental states and says you can act against these, but you can’t act against those. Compatibilists take ALL the mental states into consideration, and say that actions are causally determined by the COMBINATION of them.
Thinking about it, your example of the smoker is not only consistent with compatibilism, it is actually also evidence *against* libertarianism. You implied yourself that the smoker was powerless to do otherwise.
StephenB also wrote:
Ok, so now you’re talking about moral responsibility, and how this is incompatible with compatibilism. I would agree with that. Compatibilists have no good theory of what I called “ultimate control” above. So it is difficult to see how they are morally responsible. I will answer this from my personal perspective below. But first, let me point out that I can make exactly the same charge against you.
Now, onto my personal perspective.
Now let me turn it around, how are you justified in complaining against others’ behaviour? You seem to be a libertarian, and no libertarian theory has yet given a good account of it. And if you’re not a Christian, I cannot see any way that you can be rationally justified in holding man responsible for his actions either.
Green (#73):
thank you for your detailed, clear and informed answer. This kind of exchange is truly rewarding.
I think I owe you some comments. First I will comment briefly on your position, and then I will try to clarify better what I think.
1) About compatibilism, and in particular your form of compatibilism. You have been very clear, but still I have difficulties with this kind of position. In particular, it is difficult for me to see where it may really be different from strict determinism.
Perhaps, the key is in the only phrase in your reasoning which remains unclear for me:
“I deny this because I think that causation runs both ways: i.e. physical states causally affect conscious states, and conscious states also causally affect physical states. So I would accept your position (3) but without the “free will” part.”
So, if I understand well, you believe that conscious states influence physical states as much as the reverse. That is fine, I perfectly agree.
And you seem to believe that conscious states have some form of existence, and are not a mere byproduct of the physical states of the body and brain. That’s even better, I could not agree more.
But still, you don’t believe in free will (for reasons which are interesting, but which I will discuss in the next point).
But then, what determines inner states, other than physical inputs? Because determined they must be, if there is no free will. And it seems to me that there are only two options:
a) inner states are completely determined by physical inputs, and then we are back to my 1), and to strict physical determinism, which you deny.
b) Inner states are determined by some intrinsic form of inner determinism, which can interact with physical determinism, but is partially independent from it.
Is that your position? Is that what you call “compatibilism”? In that case, I appreciate the effort, but can’t understand if there is any substantial difference with physical determinism. You are just trading a simple form of determinism with a double determinism intertwined.
But the problem with strict determinism is that it creates unacceptable consequences in our conscious representation of reality: in particular, it denies any possible role to the concepts of moral responsibility, of commitment to self-improvement, and it becomes really difficult to give any sense to human ideals and hopes, and to most human values.
I really can’t see how trading one form of determinism for another can change any of that.
Moreover, it seems rather obvious that, if our inner states are determined, our intuition of being able to change our personal destiny, even if partially, can only be viewed as self-delusion.
For all those reasons I can’t accept compatibilism, and I continue to believe that it is only a way to put strict determinism in a more palatable form, at least for philosophers.
2) Let’s go, then, to my position.
From what you say, it seems even more obvious to me that my position can be described very well as “agent-causal” libertarianism. I am happy that I have learned something new about what I believe 🙂 !
I would definitely reject at first sight all “event-causal” models. I see no reason or utility for them. Indeterminism is certainly not a substitute for choice, and it does not solve any of the problems created by strict determinism. So, in that I agree with you: event-causal models are useless.
But the same is not true for agent-causal positions.
First of all, I will say that I am not too interested in debates about “substance”. If you want, I could accept that the agent is a substance, but I prefer to define it as a “transcendental subject”, perfectly existing and real, and connected to the external world through the body-brain interface (bidirectionally, as it is obvious).
Now, I will try to address your objections to that position, from my point of view:
a) .”
This describes quite well my position. But I want to state again that I would rather describe the agent as a transcendental subject, able to both perceive and represent inputs form the external world, and to output actions to it.
b) “Most have rejected the ‘agent-causal’ view of libertarianism because the idea that a substance, rather than a property of the substance, can cause anything is unintelligible”
I can’t see exactly why that should be the case. Anyway, I see the transcendental subject as the real “substance”, and the ability to represent and to act could well be defined as its “properties”, but maybe that’s only a matter of words. Anyway, there is no doubt IMO that the transcendental I acts as a cause.
c) “I think it fails for about 4 other reasons, too. But I’ll just list one of them here “.
And to that one I will answer. But I am ready for the other three 🙂
d) “the fact that it only gives agents the power to make irrational decisions (and who wants that?)”
I could argue that irrational decisions are not rare, notwithstanding free will. That’s not my whole argument, but it should suggest that, if free will exists, its results are not necessarily rational decisions.
So, I will try to clarify my point of view.
Free will is the power to choose. The transcendental subject, at each given moment, has a complex pattern of representations (the sum total of its inputs and internal states). According to those representations, it “feels” a definite pattern of possible reactions (let’s say at least two, or maybe more). It is not so important how different those reactions are. In some cases, it could be the difference between salvation and ruin, other times it could just be a slight difference in mood. The important point for free will to be real is that there is not a single, determined reaction.
And then? Then the I “chooses” its reaction. Moment by moment, any time. Is that a rational choice? Maybe yes, maybe not.
From a religious point of view, that is probably the single most important factor. Love, and especially love for God, and truth, and good, are always represented in some way in the consciousness of the I, at one level or another. And, moment by moment, the perceiving I can be loyal to that love, in any of its form, or disloyal to it. That’s the real value of free will and of responsibility.
Is that a rational choice? It is, if reason is taken with sincerity and if it is guided by love. Otherwise, reason can produce many wrong things. After all, in all religious (and even not religious) traditions, choice is more a result of feeling than of reason, otherwise moral errors would only be cognitive mistakes. I would say that, in all human activities, cognition and feeling are always intertwined, and the one has no meaning without the other.
e) So, let’s go to your “thought experiment”. Let’s say that “A” is a good decision, a liberating, compassionate, moral one, and “B” a disloyal, hate inspired, egoistic decision. The choice is open to the agent. It can choose one or the other. A is probably rational, and B irrational, but that is not the only point in the game. And yes, Joe can definitely choose B.
But he can definitely choose A, and that is the true, simple glory of free will, that no form of compatibilism or other human conjecture can ever deny.
Wow.
If you can simply give up the belief in free will (ahem), then such oh-so parsimonious explanations just percolate from the chaos – do they not?
Hi Green,
I hope you don’t mind me joining in with a question to you – I have been following this discussion from the beginning, and am in complete agreement with you on the conclusions that free will in the libertarian sense cannot exist, for all the reasons you have so nicely laid out, and many more.
My questions to you is on this statement:
.”
If I understand you correctly, you believe in ultimate responsibility for your choices, although you believe that all your choices are in fact determined and couldn’t have been otherwise (as I do believe). Do you have a reason for this contradictory assumption besides the bible teaching (which, don’t get me wrong, I do completely respect as your personal beliefs; I have my own set of beliefs)? So – I am basically asking: why are you a compatibilist, not a determinist?
As a determinist, I conclude that our sense of personal responsibility for our choices is the result of the social benefits of accountability for one’s actions. In other words, a society where beneficial behavior is rewarded and detrimental behavior is punished thrives. Thus, in my opinion, although responsibility itself is an illusion, the “sense of responsibility”, or social accountability is a real, beneficial force and that is the reason why it factors into our motivations/desires/evidence evaluations for making choices.
GP (and Green);
Right now I am taken up with a Constitutional crisis, and have to deal with a visiting expert.
That makes me much shorter than I am wont, which may cause lack of specificity and detail. (And yet the constraints do not force the outcome, I — and it is I not my attitudes or feelings [and surely not ole devil rum or new devil pill . . . ] — decide to read and to respond, even if short and perhaps too sharp even while maybe not being specific enough.)
I apologise for that.
GP has well said most of what I would wish to say.
I should add, that I am not unfamiliar with non materialist determinists, e.g. one of my theological friends across decades of discussions is a dyed in the wool hypercalvinist, thus a dualist. (Yup, such still exist!)
Every species of determinism, however, ends up undermining not only power to choose in the moral context but also power to choose in the rational one, i.e. it ends up undermining the ability to make sufficiently free decisions to think for oneself.
Without power to select towards a purposed message — however constrained by rules of communication etc, one cannot be free enough to really think responsibly. And for that matter, to love [the root of all virtue].
Such freedom entails the power to do the opposite of such things and maybe too many people ARE led around by the nose through their attitudes and feelings and perceptions.
But, to really think for ourselves, we must be able to choose for ourselves. Otherwise we become the creatures of our conditioning, down which road lie the errors of for example Marxism. [And Marxism is not just in the materialist forms we are often familiar with, at least for my generation, which I suspect is GP’s generation, and both Italy and Jamaica suffered much at the hands of such determinists and their class war ideology.]
GEM of TKI
PS: And, Calvinism is a worldview all to itself . . .
Hi gpuccio, again hoping that Green won’t mind, I’ll chime in on your 91 also:
“Agent causationists say that agents ‘survey’ the mental states and that mental states (reasons and so forth) may influence a decision, but they do not determine it. The ‘agent’ (as a substance) has the final say.”
“The transcendental subject, at each given moment, has a complex pattern of representations (the sum total of its inputs and internal states). […] The important point for free will to be real is that there is not a single, determined reaction. […] Then the I “chooses” its reaction. […]”
My question then is: do you view these factors (desires, loyalties, love, etc.) as properties of the self, that cause/make up the inherent differences among different selves?
kairosfocus,
Q. What did the hyper-calvinist say when he fell off the ladder?
A. “I’m glad that’s over with.”
—Green: “Thinking about it, your example of the smoker is not only consistent with compatibilism, it is actually also evidence *against* libertarianism. You implied yourself that the smoker was powerless to do otherwise.”
On the contrary. I know that the smoker has free will and can go either way. Your philosophy allows him to go only one way, which is, of course, inconsistent with human nature. Human nature is a drama: Every saint has a past; every sinner has a future.
Mr Arrington:
You got that one right!
Double election on strict TULIP — Abraham Kuyper is a bit “soft” — is a real mind stretcher.
(Having said that the Calvinists made major contributions to the rise of modern democracy and free self government, cf William the Silent of Orange and the Dutch Declaration of Independence 1581, which is an ideas precursor to the US one of 1776, may even be a hinted at source for ideas [recall what NY used to be . . . ]. Resented and unacknowledged by today’s world of thought, of course.)
G
—Green: “Ok, so now you’re talking about moral responsibility, and how this is incompatible with compatibilism. I would agree that. Compatibililists have no good theory of what I called “ultimate control” above.”
Yes, that is true.
—“So it is diffiuclt to see how they are morally responsible.”
You have written wisely.
—“I will answer this from my personal perspective below. But first, let me point out that I can make exactly the same charge against you.”
To be sure, you can make the charge, but I don’t think that you can argue the point successfully.
—.”
I am not proposing to defend “libertarian” free will or “ultimate control.” I am defending free will from the perspective of what some might call “self determinism.”
I can justify holding any sane person responsible for his/her behavior within the context of moral choices that are made. We are all influenced by biological, psychological, and environmental factors. So, our choices are obviously limited in that context. However, we certainly have the power to become better or worse people on the strength of the moral choices that we do make. My position will not “dissolve into irrationality.”
—“Now, onto my personal perspective. I’m a christian, and I think that the bible teaches that man is still ultimately responsible for his actions.”
You are quite correct about that.
—“So whilst no theory (libertarian or compatibilist) can account for this, I am justified in beleiving it, since I am rationally justified in believing the bible.”
You are making perfect sense, and for that reason I recommend that you abandon your position and embrace self-deterministic free will.
KF “one of my theological friends across decades of discussions is a dyed in the wool hypercalvinist, thus a dualist. (Yup, such still exist!)”
As a Calvinist, standing on the shoulders of Paul, Augustine, Luther, Calvin, Edwards, Whitfield, Spurgeon, etc., I can't tell you how many times I have been misrepresented as a hyper calvinist. I'm not saying that your friend is not a hyper calvinist, but have you asked him if he accepts that designation?
RE 97: Barry, we all know that the flower of the Calvinist is the tulip; do you know what the flower of the Arminian is? The daisy… He loves me, He loves me not.
On to the topic of this thread. Surely no one on this thread is arguing that the will is not determined by something?
Vivid
molch:
My question then is: do you view these factors (desires, loyalties, love, etc.) as properties of the self, that cause/make up the inherent differences among different selves?
Past experiences, past choices, past representations are properties of the phenomenal self, which includes all the various levels of the mind and body. They are not, in my belief, properties of the transcendental self, although they certainly influence its destiny.
I hope I have answered your question, but feel free to detail it further if you want.
Vivid:
The gentleman in question at one point used to walk with a copy of Calvin’s institutes, as others would with a Bible.
G
SB:
Well said.
GP:
Well said.
G
Vivid:
The gentleman in question at one point used to walk with a copy of Calvin’s institutes, as others would with a Bible.
G
That would not make him a hyper calvinist. Just as the definition of faith has been redefined to mean fideism, so too has it become commonplace to characterize "historic calvinism" as hyper calvinism. I assure you that William Carey, the founder of the modern day missionary movement and a calvinist (in the historic sense), was not a hyper calvinist.
Vivid
Vivid,
I take your point, but the case is specific; I chose a “slice of the cake” behaviour.
In any case I have no quarrel with Calvinists or for that matter Catholics . . . — I am a Biblical-inductive, not a systematicist [though I appreciate the validity of ST, e.g. in the context of the Nicene Creed as a contextual extension of the message of 1 Cor 15:1 – 11 etc.]
My point upthread is that there is a species of Christian determinism that can be astonishingly fine-grained; just as Marxists are determinists on matter and dialectical materialism. So, determinism and dualism are not necessarily opposites.
G
StephenB: please go and read the literature on this so you can adopt a theory of free will. Until you do this, I can’t argue against you because your position is so vague and ill-defined. At the moment, you just seem to be using ‘free will’ as a label, and I have no idea what I’m supposed to be embracing. Should I be embracing an event-causal account of free will? An agent-causal account? Or perhaps a non-causal account? Please tell me exactly what you are proposing, and then I can tell you why I think it fails. The SEP article I cited earlier is a good start: ()
With regards to GP and molchi… I’m just about to start working on a response… watch this space 🙂
Vivid:
As a sampler of what is being addressed in the thread, cf PK in no 1:
1 –> That a person with a free enough and responsible mind and will should initiate lines of action and cause-effect chains is one thing, that such a freedom is without contextual influences is utterly another.
2 –> And yet we see conflation of the two, driven in part I believe by failing to distinguish contributing influences, necessary causal factors and sufficient causal factors.
3 –> To help clear the atmosphere, I suggest a read of the SEP on the subject. Excerpting:
4 –> And on and on at length. But, one hopes that with this in mind we can take a fresh look at the original post.
5 –> Especially, bearing in mind the need for significant freedom to think for ourselves and follow logic and material facts, as opposed to conditioning:
GEM of TKI
F/N:
Perhaps a famous Biblical text will help our reflections and will help us understand where willing, controlling and causing can be very different indeed, at least as a possibility:
______________
>> Rom 8:1 Therefore, there is now no condemnation for those who are in Christ Jesus,[d] 2 because through Christ Jesus the law of the Spirit of life set me free from the law of sin and death. 3 For what the law was powerless to do in that it was weakened by the sinful nature,[e] God did by sending his own Son in the likeness of sinful man to be a sin offering.[f] And so he condemned sin in sinful man, . . . .
9You, however, are controlled not by the sinful nature but by the Spirit, if the Spirit of God lives in you . . . >>
_______________
This passage has always been a challenge to us.
Here we see the instructed mind and will consenting freely and leaning towards the good but hopelessly in bondage to sin.
Then, we see the same will and mind set free in Christ by the indwelling Spirit to move towards its true desire. With a hint that the keystone of bondage is the obsessiveness of sin: even desperate resistance to sin is focused on what sin wants and is entrapped. So the Spirit empowered renewing of the regenerated mind and its liberation to think on the things of God, multiplied by an empowerment and motivation from the Indwelling Spirit to walk in the ways of the Spirit become a hope for transformation.
And the moral responsibility of walking in the wrong is now a secondary one: not the raw ability to will and do the right by oneself, but the willingness to surrender to and receive the empowerment that transforms.
All, modelled out in the life of Paul himself.
So, perhaps a fresh perspective on the differences, distinctions and implications may help.
Later, DV, I think I will note the significance of Eng. Derek Smith's two-tier controller model for our anthropology of the mind and will.
G
gpuccio at 103:
well, you answered my question insofar as you made clear that the motivations in question are, in your opinion, contributors to the choice, but not the causes of the choice. So let me try to summarize what I think your position is:
A choice utilizes evaluations of immediate evidence and “rational elaborations of reality, past experiences, past feelings, and so on”, which I would gather under the heading “motivations”. The self then surveys these evidence evaluations and motivations and makes a choice. But what, if not the evidence evaluations and motivations themselves, is then the cause of the choice itself, in your opinion?
—Green: “StephenB: please go and read the literature on this so you can adopt a theory of free will.”
I have already read much of the relevant literature, which is why I had no difficulty explaining why your position is incoherent–a point that you have already acknowledged, and one which you were apparently unaware of until I pointed it out to you. That should have been your first clue that you should not presume to lecture me.
—“Until you do this, I can’t argue against you because your position is so vague and ill-defined.”
You just weren’t paying attention because you are all hung up on what some call the various “schools” of free will. My argument is simple: A person’s moral acts are not caused by another, nor are they uncaused. They are caused by the person. That means that, as persons, those individuals are morally self-determined, their acts freely chosen, without coercion or compulsion, and that they could have done otherwise.
I have already said this in different words. Go ahead and refute the point if you think you can.
Following up:
We immediately see that there is an addiction to the wrong that, at our best, we all struggle with. This is a big piece of the concept of moral fallen-ness and bondage of the will.
Even so, the mind and will are able to choose a different path, though not always to effect it. That is, at our best we may grow in the right, and sometimes we stumble and lapse. But the issue is persistence in the way of the right, and openness to the liberation and transformation that come from the Transcendent. (Indeed, in the NT, responsiveness to truth we know or should know is a moral issue; thence the intellectual virtues approach to epistemology.)
We also see that the picture is complex, though sadly familiar: addiction and struggle to break its bondage, with the threat and reality of occasional lapses. (Not to mention those who choose instead to give themselves over to evils.)
So, freedom is here a relative term, in the context of constraining and in some degree enslaving forces and factors: we are freer to consent to and will than we are to do the right.
But, by God's grace, we may turn to the power of God that helps us grow in the right, though "la lucha continua."
So, now is freedom to be seen as an acausal process, with neither influences nor necessary constraints nor enslaving addictions?
No.
But, in the end we do have a power of choice, with vast implications for the path of our own lives and the communities in which we live.
A difference that starts with being willing to face the truth about ourselves and our struggles. Which we are freer to do than to escape the entangling and enslaving pull of the wrong.
Going yet further, there is the point that to think and choose aright, we must have sufficient freedom to think and to choose. The past, by itself, may influence the future but it does not determine or utterly control it. The saint has a past, and the sinner [often the two live in the same body] a potentially bright future.
Which brings us back to the Derek Smith Two-tier controller servosystem model that I often use in this general context.
Smith was looking at how complex robots may be developed, and saw that an input-output loop controller may have a supervisory controller that carries out goal-setting, path imagining and general oversight of the loop. And the two tiers of control may interact informationally not simply by the sort of dynamic control that obtains in the loop proper.
With that perspective in mind, we can now think afresh about mind and body, will and control actions in light of that possibility. For instance, “reprogramming” the lower level controller may be a difficult process, especially given the neural architecture and the need to learn.
In a crude way this suggests one cause for a gap between the higher and the lower, and why it is a struggle to learn the right way. (Ever had to unlearn poor techniques in a sport? And, ever been discouraged from persisting in the corrective path? Ever had an encouraging word make a difference?)
It is also suggestive of why the higher order controller may have a greater freedom than the lower one.
All of this is not meant to be a proof [much less a doctrine!], but a means to use lateral illumination to help with opening up our thinking so we can see a little more broadly than the sort of strawmannish projections I cited above.
Freedom, choice, influence, and control are all subtler and more complex than we are often inclined to imagine.
GEM of TKI
GP:
Clarifying my position
I’m not sure what you mean by strict determinism. Do you mean physical determinism? If that is the case, then my determinism is completely different. Being a substance dualist, I completely affirm the existence of the wide variety of conscious mental experiences that we have. And being a determinist, I also believe that these conscious mental states are determined.
Yes, you're right. What determines my conscious mental states (i.e. my beliefs, desires, moral values etc.) is a variety of things. These things would include my previous mental states, my interactions with people, my encounters with God, the physical make-up of my brain, the books I read, and so on and so forth. Whilst these mental states are determined (by all the previous factors I mentioned above), you still get a robust account of agency. Indeed, as long as mental states are causally efficacious (which I think they are) then human agents can make a real difference in the world. They can still act for good reasons, they are still able to deliberate and compare alternative courses of action, they are still able to compare and evaluate different means, ends and consequences, and they are still able to act upon their own desires. So mental determinism is definitely substantially different from physical determinism.
On the consequences of compatibilism (aka determinism)
You objected to my compatibilist view of agency for the following reasons:
(1) It couldn’t ground moral responsibility
(2) It allows no ability for human self-improvement
(3) It makes it difficult to give any sense to human ideals, hopes and values
As I've already noted, (1) is also a difficulty for all libertarian accounts, so compatibilism is no worse off here (and I've already given an account of how I can be justified in personally thinking that moral responsibility still exists). With regards to (2), that doesn't follow, since as long as humans have the desire for self-improvement, they can act on it. With regards to (3), I'm not quite sure what you mean?
On the agent-causal theory of libertarianism:
Firstly: great, I'm glad we're in agreement about the event-causal theories of libertarian free will. Like you said, they don't add anything to an account of human agency.
With regards to my point that it only gives us the power to make irrational decisions:
You noted that, well, sometimes our decisions are indeed irrational. I think I should have been clearer here. Yes, we are sometimes irrational in the sense that we sometimes make illogical decisions, or in the sense that we sometimes make decisions for bad reasons, but I don't think we are ever irrational in the sense that we sometimes make decisions for absolutely no reason whatsoever – which is what the agent-causation view leads to. Even in a scenario where two courses of action are equally preferable, there is no reason to think that the decision is made for no reason at all. In scenarios such as this, a decision will be made because of a desire to choose a course of action. Thus even here, there are reasons that explain the decision. In the agent-causal scenario, not so.
My other problems with the agent-causation account of libertarianism
Ok, so herein lie my other problems with agent-causation. It’s long, but you did ask 😉
(1) Causation and explanation
The agent-causation theory does not seem to be able to give an adequate account of agential control. All those working in the field of agency (even those who are not libertarians) agree that control is one of the necessary conditions for agency. And, one of the necessary conditions for control is causation. In other words, to be in control of an event, an agent must at the very least be a cause of it. Aside from the problematic notion of substance-causation, though, it is very difficult to see how agent-causationists can justify the idea that agents are causal entities. This is because the cause in question is in no way explanatory. I illustrated this with the thought experiment with 'Joe' in a previous post. Recall that in this thought experiment there was no reason whatsoever for why Joe chose A and not B. The idea that the cause of an event might fail to explain that event, however, seems incoherent. How can positing a specific cause for an effect not also explain that effect? One philosopher working in this field (Ginet) has argued that whilst he wouldn't go so far as to say that the idea that a cause ought to explain its effect is self-evident, he does say that its denial is highly puzzling, and that it should not be accepted without sufficiently compelling reason. Agent-causationists are aware of this problem. However, their only response seems to be that it is not axiomatic that a cause ought to explain its effect. That's all well and good, but it's hardly a compelling argument.
(2) Causation and control
Secondly, even if the causal power of an agent on the agent-causation view is granted, the agent-causation theory still faces serious objections. Non-agent-causationists (e.g. event-causationists) solve the problem by saying that control is i) causation PLUS ii) acting for conscious reasons. However, this option is not available for the agent-causationist, since their theory posits that agents ultimately act for no reason at all. Given that they cannot use 'acting for reasons' as an account of control, it seems that the agent-causationist is simply reduced to the bare assertion that control is exercised simply because the cause in question is an agent. In fact, O'Connor (a prominent agent-causationist) fairly explicitly states that agent-control simply is the relation between the agent and the effect, implying that no more explanatory work is needed. Critics objected to this: the agent-causationist can't be allowed to say that agent-causation constitutes control because "it just does". What kind of a response is this?
To summarise (1) and (2), agent causationists have difficulty not only justifying the idea that the agent in question is a cause, but they also have difficulty justifying the idea that this cause is of the right sort to constitute agential control. And with no control, agency is undermined.
(3) I also think that the agent-causation theory is unintuitive
Firstly, by separating an agent from his or her mental states, I think the agent-causation theory is setting up a false dichotomy. I don't think that an agent should be distinguished from the sum of her mental states. Of course when mental states are described from a third-person perspective, they will appear lifeless and inactive. As Nagel (1995) once remarked, "[s]omething peculiar happens when we view [agency] from an external… standpoint. Some of its most important features seem to vanish under the objective gaze. Actions no longer seem assignable to individual agents as sources…" However, when we view mental states from a first-person perspective, it is clear that the distinction between an agent and his or her mental states is only apparent. The only reason one might feel a reluctance to identify an agent with a series of mental states is because there is a gulf between our experience of these states, and our conceptions of them. When given an objective description, mental states seem distinct from an agent, but when experienced from the first-person perspective, mental states (desires, motives, evaluative systems, long term plans, moral values, likes, dislikes, and so forth) plausibly constitute an agent. Also, whilst we're on the topic of intuitiveness, as I've been arguing all along, the idea that an agent could do otherwise – even if she had no motivation, no desire, no will power, etc. etc. – is absurd. Yet this is what the agent-causation theory entails. In fact, it's what any libertarian account entails.
Ok, I’ve already used up a lot of space, and I know I haven’t responded to all your points, but I hope I’ve clarified some of the reasons why I think the agent-causation view fails. If you want to see why it fails to account for moral responsibility too, check out a short section in this paper by Schlosser (2008): And if you want to see why substances can’t be causes, the SEP page entitled ‘Incompatibilist (nondeterministic) theories of free will’ has a good section on it. (I referenced it earlier:)
Molch:
No, I don’t, since I haven’t seen a good philosophical account of it anywhere.
Sorry, I was using compatibilist and determinist interchangeably. Maybe I should just use the word determinist to be clearer:)
Not to butt in, but this seems like a massive internal contradiction. Or, a simple evisceration of definitions.
Affirming a rich variety of conscious mental states (which are mechanically determined) is simply determinism in a different dress. The only illusion I see is that this is substantially different than any other determinism.
I’ll watch.
Oops…I cut off the end of the quote.
Really? I don’t see anything but determinism. But like I said, I’ll be quiet and watch.
I have been aware that determinism and moral responsibility are inconsistent for quite some time. Anyhow, with regards to the incoherence in my position: I think it's useful to make the distinction between determinism itself being inconsistent (which some here have tried to argue, unsuccessfully, I think), and the combination of determinism and moral responsibility being inconsistent (which you have pointed out). And I have freely acknowledged that this latter combination appears incoherent. Which is why I gave biblical grounds for believing in the latter and not philosophical grounds. But bear in mind, I also pointed out that libertarian accounts do no better in trying to account for moral responsibility (or 'ultimate control').
Yes, sorry, I realised that came across quite harshly. Apologies, it wasn't intended. I think I was just a bit frustrated with this whole discussion. (I never should have gotten myself into it, to be honest!)
I don’t mean to be a pain with categories here, but it is helpful to know which type of free will you are defending so that I can respond accordingly. And it seems to me that you are defending the agent-causation theory. So I’ll explain why that account fails to give an account of moral responsibility. Firstly, I’ll assume that moral responsibility requires the following:
(1) The agent must be the source of the action
(2) The agent must be in control of themselves when they do the action
I think the agent-causal theory can get you (1) but not (2). See the comments I made to GP above for why it cannot get (2) – specifically see the headings entitled "Causation and explanation" and "Causation and control". Both these sections show that the agent-causationist cannot give an account of how an agent caused, or is in control of, her action. So it can give you origination, but this alone isn't much help. Schlosser (2008) explains this much better than me, so you could check out section 6.4 of his paper if you wanted 🙂 () If you can't access it, let me know and I can post the relevant section on here, or email it 🙂
Upright BiPed:
Upright BiPed, I’m not sure what’s not clear here? The distinction I’m trying to draw is between:
(1) Determinism being true and humans being only physical
(2) Determinism being true and humans being both mental/conscious and physical (my position)
Clearly (2) is different from (1). (1) strips humans of all mental states; i.e. all desires, all beliefs, all long term goals, all reasoning faculties, all rationality, all qualitative experience, all feelings, and so forth. (2) does not; it affirms the existence of all these things. One does not have to have libertarianism to have this rich conscious experience.
Anyhow, I’ve spent too much time on this blog in the past couple of days, so I’m not going to make any more substantial comments. I can point people to references, and answer quick questions, but I need to get some work done! Ciao 😀
I was pre-destined to believe in free will.
p.s. SB. thanks
Green, I think the problem is obvious.
By your definition it would seem that (should you choose to recognize as such) the vapor physically rising from a pot of boiling water could be mental – it is certainly as determined.
Perhaps some of you will find this illuminating.
Thank you for the discussion I have enjoyed it however one question about compatibilism.
compatibilism:
1. My ultimate desire’s/alignment is primary
2. Alignment creates/determines combination of mental states and or reasons
3. At time t the strongest combination of mental states decide my action.
The question becomes then who decides alignment that determines mental states? Do I have freedom to choose where I want to ultimately align myself i.e. to love or pride, anger etc? I am in agreement with Green in a sense that I think we act according to our alignment which determines our mental states, reasons and actions. If our behavior was not caused by anything (libertarianism) then we could see a person acting like a saint for 2 weeks and like a psycho for the next 2 which is not the case.
Green:
I don't want to use too much of your time, so I will try to give brief and substantial comments. Then I think we can be happy to have explained our ideas, or we can go on discussing, according to your wish.
I’m not sure what you mean by strict determinism.
Any form of determinism where anything is pre-determined by existing conditions, either physical or inner or both. In that sense, I think it is clear from what you say that any form of compatibilism is strict determinism.
It's substantially different, but not from the point of view of determinism. The system, even with its conscious states, remains totally determined. By the way, adding possible probabilistic factors does not change anything substantial (I think we agree on that). So, I would treat random-deterministic models together with strictly deterministic models as one, let's say: no-free-will models.
Whilst these mental states are determined (by all the previous factors I mentioned above), you still get a robust account of agency.
No, you just get a robust account of two deterministic models interacting, which is the same as one deterministic model with two levels of organization. The existence of an interaction between conscious states and physical reality is no guarantee of agency, no more than the existence of software interacting with hardware is agency. A causal relationship is not the same as agency. We must be cautious with words, they can sidetrack us. Agency is a word which has always been reserved for experiences with a subjective intuition of free will, and not for merely causal models. Changing the use of the word does not change facts.
Indeed, as long as mental states are causally efficacious (which I think they are) then human agents can make a real difference in the world.
This is really nonsense (and I say that with the utmost respect for you, please believe me). It’s the same as saying that, as long as covalent bonds are causally efficacious, then they can make a real difference in the world of biochemistry. Something which is part of deterministic system, does not “make a difference”. The system is just what it is, with its parts, and could not be different in any way. The word “difference” means something else, and is not appropriate here. My idea is that compatibilists are trying to “mess things up” to be able to re-enter words and meaning which apply only to free will models into a deterministic model. From that point of view, I suppose pure determinists are better, because at least they are not trying to escape from the consequences of what they believe to be true through intellectual games.
They can still act for good reasons,
Only if they are pre-determined to do so. And in what sense would a reason be “good”, and another one be “bad”? They are just what they are: inescapable pre-existing causes. And what about people who can only act for “bad” reasons, because the flux of their mental states can only bring them to that behaviour?
they are still able to deliberate and compare alternative courses of action
Here I really can’t follow you any more: what do you mean with “deliberate”? Deliberate what? And didn’t you say that you don’t believe in PAP? So, how are “alternative courses of action” possible, least of all “comparable”?
they are still able to compare and evaluate different means, ends and consequences, and they are still able to act upon their own desires.
They are not "able" to do anything like that. They "must" go through the inner states which are inevitably already established by their condition: even if those inner states include the illusion of comparing, evaluating, desiring and acting, in no way are they "able" to do all that: they "must" do it, and they cannot do anything different. There is a world of difference, and the (pseudo-)smart use of words cannot change that difference.
More in the next post (you see, fragmenting my answers in different posts can give the illusion of brevity, but I am afraid that they remain substantially long… 🙂 )
IB:
Actually, sometimes we DO see one who is saint-like stumbling, and sometimes stumbling very badly indeed.
Also, I must again point out that cause is more complex than we tend to think — especially on mental and moral [responsible choice related] behaviour.
Causal factors come in clusters, can be contributory, can be necessary and in some cases are sufficient, and sufficient does not entail necessary. (Copi’s classic example is a fire: each of oxidiser, fuel and heat are necessary, and they are jointly sufficient. Without a necessary factor, an event cannot occur. With sufficient factors, it will occur.)
I tend to be very wary of those who speak of "mental states" as — in a world where terms are often chosen subtly — that often suggests emergentism or emanationism rooted in materialism, aka physicalism. Physicalism on the mind is immediately self-referentially incoherent, for various reasons linked to determinism [i.e. compelling sufficiency] on non-rational causal factors and how it thus undermines choice, a key component of rationality and even language. The physicalist determinist, not least, on his premises, holds his position by the chance circumstances of genetics, culture, class etc. and physical consequences that led him to be born, raised and educated [insofar as education is possible beyond mere conditioning], not by any process tracing to credible grounds and logical consequences followed by seeing good reason to do so and deciding to follow such. So, a Crick reduces mind to neuron networks firing away, a Skinner turns us into rats in a maze, and so forth. All of which turns on them.
Now, there are also dualistic determinists [or in some cases, fatalists is a better term], certain types of Calvinists and believers in controlling occultic influences being classic examples. So are certain types of Muslims.
The determinism is the downfall of such thought-systems: do they hold these views because they are warranted, or because they are caused to do so on sufficient and controlling factors irrelevant to truth, reason and right?
If we cannot really choose, if there is no difference between influence, habituation and outright control, then rationality and responsibility have evaporated. All that is left is coercion and/or manipulation, of one species or another. In short, we end up at that horror: might makes right.
Resemblance to what is going on all around us is NOT coincidental.
By contrast the view cited in Rom 7 – 8 above opens up the issues of a mind and a will that have enough transcendent freedom to reflect soberly on what one does habitually or even by stumbling or being unable to escape it. It then holds out the promise of the liberating encounter with the Transcendent, which empowers one to find the motivation and capacity to be forgiven and to overcome, however one may stumble in the path of the good.
And, it points out a key principle for the renewal of the mind and heart: the mind of the flesh [sarx] is obsessed with the things of the flesh, whilst the Spiritually empowered mind is lifted from that level to what Paul describes so eloquently in Philippians 4:8:
An indictment of our civilisation in its current befouled mindset!
GEM of TKI
Green:
As I’ve already noted, (1) is also a difficulty for all libertarian accounts, so compatibilism is no worse off here (and I’ve already given an account of how I can be justified in personally thinking that moral responsibility still exists).
I don’t see the difficulty for my kind of libertarian account. Moral responsibility is grounded in the simple fact that different possible actions have different “moral” meaning for the agent. They can be in harmony with his higher aspirations, or not. That is the basis for the universal concept of “moral conscience”, and I don’t think it is a difficult intellectual achievement: human beings of all kinds have spontaneously understood that concept for aeons, and they still do. Maybe philosophers are smarter, anyway…
With regards to (2), that doesn’t follow, since as long as humans have the desire for self-improvement, they can act on it.
No, they “must” act on it. Again you use “can”, betraying a free will model while you deny it.
And what about those who “must” act “against” their desire for self-improvement, because their inner states command that? What about those who “must” ruin their life through drug dependency, or dependency on fame desire, egotism, pettiness, or any other unpleasant human qualities? What possibilities of “self-improvement” are left to them?
With regards to (3), I’m not quite sure what you mean?
It’s simple: ideals, hopes and values are strictly connected to the concept of responsibility, and of alternative possibilities. Exactly what compatibilism denies. In a deterministic system, one cannot “hope” for anything: one can only go through some compulsive representation of hope, which has no real relationship with what can happen: indeed, if one were really “wise” about his own condition (that is, if his inner states pre-determine for him that wisdom), he would understand that nothing “can” happen, but that all “must” happen: hope is therefore just a gratuitous feeling, with no real relationship with the reality of things.
About the other points, I will try really to be brief. I think the key point is the following: for me, free will has not such a strong connection with causation or control.
I will try to explain myself better. Free will is all about how we choose to act: it is not about the real final consequences of our actions. One can choose a really good behaviour, in whole sincerity and humbleness, and still circumstances that he cannot control can determine a different outcome from the one he envisioned.
That means something that many religious followers have known for ages: we don’t control anything, except our inner choices. And for them, and only for them, we are morally responsible.
It is true, obviously, that in making our choices we have the duty to acknowledge with humbleness and sincerity any input, be it rational or of other kinds, which we have about our situation. But that is completely different from being able to control the situation, or to be the absolute cause of any event.
I think that the emphasis philosophers put into “control” and “causal power” is the sign of a basically non religious attitude. Religious experience is all about the recognition that we are not able to control anything, especially without God’s help. But we are responsible for accepting God’s help or not.
Innerbling:
What you call “alignment” is exactly the manifestation of free will, as I have tried to say in my previous post. It is an inner action, essentially transcendental, which allows us moment by moment to be receptive to truth and good, or not.
Compatibilism denies that inner alignment, or just treats it as one of the many pre-existing mental states, in fact denying free will. Only a transcendental conception of that fundamental choice allows for true free will and true moral responsibility.
That’s why I don’t really like the word “libertarian”. The point is not that we are “free” (we are not, we are influenced by so many circumstances). The point is that there is a space of freedom in our innermost reactions to those circumstances: exactly the “alignment” of which you speak.
Regarding your very interesting observation:
If our behavior was not caused by anything (libertarianism) then we could see a person acting like a saint for 2 weeks and like a psycho for the next 2 which is not the case.
Well, sometimes we do, unfortunately. But you are right, that is not usually the case, and that opens the discussion to another important aspect.
As I said before, in my model free will is not absolute freedom. It is not control, it is not really even power of causation. It is power of choice about our reactions to circumstances.
But, as I said, our range of possible reactions is not always the same, and is determined (yes, that one is determined) by our previous states. And this is the important point: our current states are influenced (not determined) by our previous use of our free will.
But there is some inertia in the way and time that our use of free will (good or bad) can change our inner states. So, a long use of free will in a good way will in time change our inner states for the better, and expand the range of our possible actions. IOW, it will give us greater inner freedom. A long, repeated bad use of free will will make us slaves of our existing conditions, and our range of reactions to them will become narrower (but will always exist).
A saint has great inner freedom: he is probably not going to lose it just for some occasional wrong use of his free will.
On the contrary, an egotist has scarce inner freedom: he can change, but he will have to struggle for some time before his good use of free will can give him greater inner freedom.
This inertial aspect is the cause of many confusions. Free will is always present, is always a resource fully available to anyone. But the range of its power (our cumulative inner freedom) changes slowly in time, according to our use (or abuse) of our free will.
PS: BTW, one consequence of the above for determinists is that we are not really having a deliberative, responsible discussion. We are only exerting controlling, manipulative rhetorical [or stronger . . .] power influences on one another, as we have been programmed to. So, it is no surprise that the foundations of civil democratic society and ethics of reasonable discussion are at stake in discussions like this; especially if a consensus builds up in power institutions that undermines respect for right reason and reasonableness, instead substituting that the point of communication [and thus, inter alia, education] is manipulation by subtle control forces. That easily explains the sort of stunts we keep on seeing from the NCSE, US NAS, teacher’s associations and unions, the media, and even text and reference books and works. If the issue is power and persuasion by whatever means are effective, then truth, fairness and moral restraint go out the window. Welcome to Star Trek world, the reality.
GP: Very well said, as usual. I particularly liked your summary of Rom 7 – 8 in a sentence or two. Your brunch break has been put to good use! G
F/N: SB will love this, from Chesterton’s essay on The Wind and the Trees:
__________________
> . . . >>
___________________
Worth a thought or two. Typical GKC, I’d say. G
KF, thanks.
Your feedback and contribution is always truly appreciated.
Ok, a couple of real quick responses:
Upright BiPed:
But I don’t see mental states as like vapour rising from the neurons of the brain. Recall that I said I’m a substance dualist, and I think that causation runs in all these directions:
1) from the physical to the mental
2) from the mental to the physical
3) from the mental to the mental
Your analogy depicts a scenario where only (1) is in place, meaning that reasons can’t influence physical action (2), and that reasons can’t influence later reasons (3). But I don’t subscribe to that view. You could say that I think causation is tri-directional.
GP:
Thank you for your interesting thoughts. I’ll just make a couple of quick comments 🙂
I’m not sure how the word ‘agency’ has been used historically, but most working in this area define agency as ‘purposeful-agent based production’, which is said to consist of the following 3 elements (none of which entail libertarianism):
(1) The ability to represent one’s own goals. (This is basically the problem of intentionality, namely how mental states come to be about other things). [We haven’t touched on this at all].
(2) The ability to achieve these goals. This can be understood as the problem of mental causation; how mental states come to be causally efficacious. [I had a brief interchange on this with Daguerreotype Process, but since then I’ve just been assuming it.]
(3) Finally, these representations and subsequent actions must be the goals and actions of the agent – in the sense that they provide the entity’s own reasons for acting, and in the sense that the agent is in control of them. [And we’ve touched on (3), but only as it relates to libertarianism, not as it relates to agency simpliciter.]
These 3 elements give you agency (purposeful agent-based production). Libertarian agency is something different: it requires something in addition to this (e.g. PAP, ultimate control). But a simple account of purposeful-agent-based production only needs (1) (2) and (3). I’m pretty sure even Tim O’Connor (an agent-causal libertarian) agrees that you can have agency simpliciter without libertarianism.
Now you could argue that agency (as defined by (1) (2) and (3) ) without libertarianism is not agency worth having. But I’ve yet to see why. Or you could argue that agency as defined (1) (2) and (3) is worth having, but that you also need something more if you are going to give an account of moral responsibility etc. I think this is Tim O’Connor’s position and is why he tries to add libertarianism to agency.
I’m sorry, I don’t understand why this is nonsense. By a “real difference” I mean that determinists still attribute to agents causally efficacious mental states. Thus agents can still make a real difference to the causal flow of a purely physical world. Things like ‘desires’, ‘intentions’, ‘beliefs’, ‘goals’ – all these things can affect and alter the physical realm. The fact that it is deterministic does not take away from this fact.
By deliberate, I mean that the agent can go through the process of weighing up the pros and cons of a decision. They can mentally compare alternative courses of action. There is nothing inconsistent with determinism here. You’re right – I deny PAP and I wouldn’t say that all these alternative courses of action are actually possible – but that is only because, in the end, the agent will have good reasons to act on one of them. This is in contrast to the libertarian, who will ultimately choose one of the courses of action for no reason at all.
N.b. someone has just pointed out to me that I shouldn’t use the terms ‘determinism’ and ‘compatibilism’ interchangeably because compatibilism embraces both determinism and human responsibility, whereas determinism does not necessarily embrace human responsibility. So, apologies to Molch for any confusion on that; I’m a compatibilist; embracing both determinism and human responsibility (the latter based on biblical grounds).
Green:
I think you need to modify what you keep saying about “libertarian Free Will” in light of the objections made above and the points in say the SEP on Free Will. Otherwise, you are knocking over a strawman.
Freedom of action does not mean a want of reasons, but it does imply a decision to follow those reasons, and not, say, the reasons for another course or impulses or whatever.
Influences and constraints are real, but that does not mean that they determine and control. The difference between contributing, necessary and sufficient causal factors has already been pointed out.
So has the significance of the personal, unified self-transcending conscious identity that integrates experiences and makes decisions etc. including the decisions implicit in the course of deliberative reasoning, individual or collective.
I fear much of what is happening above is that you are projecting a strawman onto those you have exchanged with, based on the particular schools of thought you are familiar with.
But something is going on outside your a-causal free choice straw-box.
GEM of TKI
Kairosfocus, you suggest that I am setting up straw men, but this is not the case. Let me be very explicit about what I have been arguing against. The SEP article entitled ‘Free Will’ goes into lots of different accounts of free will. However, many of these are consistent with determinism. I have not been arguing against any of these definitions of ‘free will’ (indeed, I am a proponent of one of them!).
What I have specifically been arguing against here is the type of free will that says that free will is inconsistent with determinism. These are the ‘incompatibilist’ accounts of free will (so called because they see free will and determinism as incompatible), and I have been referring to them here as libertarian accounts of free will – to distinguish them from determinist accounts of free will.
Libertarian accounts of free will fall into 3 main categories:
(1) Non-causal theories
(2) Event-causal theories
(3) Agent-causal theories
I have spent most of my energy arguing against (3) because that is the account that GP and StephenB seemed closest to in their writings. And (3), actually DOES entail that agents ultimately cause actions for no reason at all. (1) and (2) don’t necessarily entail this. But I haven’t been arguing against (1) or (2) because no-one here seems to be defending it. I will be happy to make a few quick comments on (1) and (2) if you like, but I haven’t said much about them thus far because no-one here has been defending them. In fact, GP explicitly said that he doesn’t think (2) is any use.
GPuccio @128. That was a truly wonderful post.
Green:
Thank you for your answer.
In general, what you say confirms to me my opinion: that compatibilism is only a new berbal formulation of determinism, whose purpose is mainly to “comfort” believers in determinism about the logical consequences of what they believe. I am afraid that I cannot say much more about the main points, because both you and I have explained our positions clearly enough. I could maybe remark that the concept itself of “purpose” implies a belief, maybe not necessarily explicit, in PAP, and therefore is either evidence of free will or a mental delusion, but I doubt that would be specially useful, given the general trend of compatibilist thought.
Instead, I may perhaps add one relevant point about your last remark. You say:
Your insistence about point 2), that control of action is necessary to ground moral responsibility, a condition which I feel no reason to agree with, has made me realize the possible reason of this misunderstanding.
Control of action is usually required for the concept of human responsibility, as it is usually applied in law or in social institutions. That is fine, and I certainly appreciate that. But I don’t believe that human and social responsibility are the same as moral responsibility. It is good that human laws and human reasoning be in some way inspired, at least to a certain degree, by moral concepts, but that does not mean that they are the same as those moral concepts.
So, here is the difference: human responsibility requires control of action, because human reasoning is tied to external facts: in law, you cannot be held responsible for the intention to achieve an evil result, if you don’t succeed in your intentional course of action. On the contrary, in many cases you are held responsible for some negative result of your actions, even if you never really had any inner connection with that result.
There is nothing wrong in that. Human morality, social morality, are imperfect and external. They have their reasons, but they are not perfect, and they have to rely on social conventions and on social opportunity.
But true morality is different. True morality is all about inner actions, about intentions, not about results. We are responsible for our inner actions, whatever the external result, whatever control we have, or have not, of the final outcome.
Human morality is about our relationship with others, and about their expectation about us. True morality is about God and truth, and our duty towards them.
So, I maintain that control of “outer actions”, of “outer results”, is in no way necessary to ground true moral responsibility. Control of intention is enough for that.
Errata corrige: in the previous post, “berbal” should obviously be “verbal”.
I usually don’t care too much for typos, but I did not want anyone to spend time asking himself what “berbal” may mean… 🙂
Stephen, thanks.
I have been truly enjoying this thread, which, I believe, has been unusually deep, rewarding and harmonious. Maybe it’s easier to debate free will than origins… 🙂
Anyway, I really want to thank all who have contributed to the discussion (Green first of all).
Thanks gpuccio. It’s been great discussing this with you too. And thanks StephenB too. And apologies again; I think I was a little harsh in some earlier posts.
If only there were more hours in the day. We could discuss this for weeks, I’m sure 🙂
Wow – great thread! Green, your comments were superbly clear and your position is well-argued. I don’t think substance dualism solves the problems you want it to solve, though, and it raises more questions than it answers. (The same is true of neutral monism, but at least it is more parsimonious). But I do agree with your take on libertarianism.
In any event, I’ve argued elsewhere on this forum that ontology and free will are ancient questions that still manage to resist resolution by appeal to our shared experience. I think this thread amply supports that claim, as even Christian ID proponents can’t manage to agree on what the truth is regarding these issues.
I believe, however, that ID is very necessarily tied to particular stances on these questions, and it is no coincidence that these issues constantly surface in discussions of ID. It’s been said here that ID requires only that intelligent processes be distinct from other processes, and that ID can detect those processes by their artifacts, no matter what is true about dualism or free will. I disagree.
ID claims that CSI can only be produced by intelligent processes (hereafter “IPs”). But unless one adopts a dualist/libertarian metaphysics, there is no way to characterize IPs independently of CSI production. Obviously if IPs are characterized only by virtue of their ability to produce CSI, then the definition is circular (CSI is created only by IPs simply because IPs are defined as that which can create CSI). And there is no other characterization of intelligence that can be used in the context of ID to substantiate the claim that IPs are distinct in the world.
We can obviously describe our phenomenal experience of thought and claim that is what distinguishes IPs (i.e. we experience conscious foresight). But as this thread has demonstrated, there is currently no empirical resolution to questions regarding the causal status of consciousness.
For these reasons, I believe that ID rests squarely on a set of metaphysical claims that remain controversial… even among ID proponents!
Green:
Pardon, no offense intended. But, a concern on how to best address a complex question needs to be underscored.
Just above, at 135 — and echoing 23, 32, 73, 114 and maybe more — you said:
I do not think you will find a single interlocutor in this thread who believes that we — as the presumable subjects of freedom — make choices “for no reason at all,” and at least some of us will hold, on knowledge of the difference between contributing influences, necessary causal factors and sufficient ones, that one may act on path A for reasons associated with it, while choosing not to act on path B having rejected reasons for going along with B.
Further to this, a reason or an argument is an influence, but does not constitute a sufficient cause whereby upon its being present [in adequate strength], triggers a given path. Not at all, in the normal course we have factors for path A, and factors for path B, and make a choice on values, desires, prudence etc. But none of these are causally sufficient or even necessary in most cases. They simply influence and contribute.
I actually gave the example of a prof I knew who on occasion would deliberately make a random choice to drive an A-path. He had a reason for that, which would prevail over his general tendency that might have made him go down B. In at least one case, it saved his life as the plane he missed crashed.
But, to go with path A does not thereby become acausal. It is influenced by external and internal factors, and it is in the end determined by the transcendental I as GP described so well.
From the Rom 7 – 8 excerpt — pretty autobiographical for Saul of Tarsus, and pretty accurate to commonly encountered real life experiences in the struggle of virtue — we can see where we actually have a gap between intent and desire and action under the grip of the enslaving, addicting and entangling power of vice. (And in that context of want of perfect power to live with perfect consistency by the right we consent to [hypocrisy being to pretend to be better than we actually are], the issue of responsibility becomes openness to help from the Transcendent and transforming.)
So, I am concerned that the pictures being painted are too simplistic, both of the reality we are trying to capture, and of the views and positions of those you are interacting with.
If we can cross that hurdle then we can all have — or watch — a far more productive discussion.
GEM of TKI
PS: I used “determined” in a sense that means decided, not in a sense that means mechanically controlled or the like, or the substantially equivalent. A responsible decision may be influenced by reasons but it is not caused in the sense of sufficiency.
And it is not acausal for all that: contributory influences help shape but do not determine. There is a real and responsible decision, which could have been otherwise: the man who opens up the bank vault and lets in the robbers is not guilty if he has a gun to his head, or is facing a gun to the head of a hostage.
And yet, I know of a man who is a target for terrorism, who has made a pact with significant others, that should he be so held hostage, the intimidation is to be ignored, though it cost him his life.
–Green:
You have stated several times that you can’t get #2, but you have not, by any means, made an argument for that point of view. To assert is not to argue; to allude to the Stanford Encyclopedia of Philosophy, a source that is obviously biased against theism and free will, is not to argue. You should see the hit piece its authors do on intelligent design. Just look at their list of references. Better yet, look who is missing. It’s a stacked deck.
You are claiming that agents cannot be morally responsible because they cannot control their actions. Please make your case.
F/N: AIG, simply produce a case in our observation where digitally coded, algorithmically or linguistically functionally specific complex information is produced by blind chance and mechanical necessity. We have an Internet full of examples where dFSCI comes from directed contingency tracing to the actions of recognised intelligent agents.
That you and your side cannot do that is obvious from the persistent absence of a serious example.
So, we have excellent reason to inductively infer from dFSCI as sign, to directed contingency as the causal process, and to set that in the context of the reliably known — per empirical observations — source of directed contingency.
On that, we have excellent reason to infer to the directed contingency that best explains C-Chemistry cell based life and its major forms.
StephenB, the two sections called “causation and explanation” and “causation and control” that I wrote to GP in post 114 explain why agent-causal theories cannot explain agential control. The article that I referenced by Schlosser goes into more detail, but I think what I said above will suffice. GP hasn’t rebutted these paragraphs; instead he’s said that he doesn’t think an agent needs to be in control in order to be morally responsible. I’m afraid I don’t have the time to get into a discussion on that one. Re the articles in SEP, to be fair, they’re usually quite objective. I know that the one on free will was written by Tim O’Connor, for example, and he’s a theist and a libertarian. SEP seems to be far more fair-minded than wiki, anyhow.
SB:
Let’s just say the article I could find is on Creationism, and starts by speaking of “god,” describing ID as a subset of Creationism.
Even the language and tone are wrong, indeed unprofessional and lacking in basic calm objectivity; firing off all sorts of warning flags.
SEP just lost a lot of respect from me.
If it cannot get this right, through its peer-review process, something is deeply wrong and needs to be fixed.
THUMBS DOWN!
Sad.
GEM of TKI
The discussion is taking more and more fascinating directions, but, in support of Green’s point (with which I agree), that “the libertarian will ultimately choose one of the courses of action for no reason at all”, an earlier question of mine still remains unanswered. I asked primarily gpuccio, since he has been arguing the libertarian perspective very thoroughly:
If you argue that your choices are NOT in fact, uncaused, but are also NOT the necessary result of evidence evaluations and motivations of the self, WHAT is the (necessary & sufficient) cause of the choices, in your opinion?
From your previous posts I gather that you would probably respond: the self. But that really only moves the goalpost, because then you need to explain what CAUSES the differences between different selves, that will make different choices in the same situation.
Green:
thanks for clarifying your position as a compatibilist. I completely respect your position and your justification for it on religious as opposed to philosophical grounds.
Thanks Molch 🙂 I’ve really appreciated your comments on the issue here too.
Green:
Y’day, in 110 and 113, I cited Rom 7 – 8 and discussed it.
Moral responsibility is not simply a matter of control of outcomes, even the outcomes carried out by one’s body. We have a responsibility to cultivate the life of truth and virtue — cf Rom 2:5 – 8 — even though we can and do stumble, on the grounds that help is available from the Transcendent.
Citing:
So, we see here another form of the issue of freedom and responsibility. Inability to control one’s circumstances and even bodily behaviour does not automatically undercut moral responsibility.
I already pointed to the implications of the two-tier control model. Informational influence, say through quantum level effects, is at least a possible gateway between the mind and the brain-body system. Nor do we have to have in hand a mechanical explanation to know a fact beyond reasonable doubt.
I do not need a mechanical explanation to know that I have decided to compose, type out and send this comment.
I have inner access that shows me that I am deciding and acting, and it is an I, a unit, not a concatenation of accidental or mechanically forced outcomes locked into a Laplacian determinism chain calculable on knowing some prior circumstance of the cosmos and relevant force laws or the comparable for a [proposed] calculus of the mind. There is even a transcendent reflection that allows me to see — that nagging little voice in the head that says, you’d better, or you’ll be sorry — that I needed to mention something about calculus based dynamical chains, suggesting that the underlying unstated assumption and model for explanation is that there is an analogue to the Newtonian dynamics for the mind.
Why should all the world conform to the model of Newton?
No wonder people used to speak about the difference between the mechanically governed realm and the morally governed world of moral responsibility and laws of human nature that were diverse in focus and effect from mechanical necessity.
We need to stop, and think about how we are thinking about cause, explanation and warrant.
GEM of TKI
—GPuccio: “So, I maintain that control of “outer actions”, of “outer results”, is in no way necessary to ground true moral responsibility. Control of intention is enough for that.”
StephenB (and GP):
I’d forgotten that that’s what GP said (I thought he’d said we didn’t need agential control at all). However, the agent-causationist still needs to give an account of the agential-control of ‘inner intentions’. My two objections in post 114 thus apply equally as much here.
KF,
I’m not sure what “side” you imagine I’m on, but you’ve missed my point about ID’s connection to the mind/body problem and the problem of free will.
Can we agree that all of our observations confirm that only human beings encode CSI? (Let us, to simplify argument, ignore the two other types of things in our experience that produce CSI, which would be other animals and computers).
So hopefully we agree that human beings are what we directly observe producing (digitally coded, functional, etc) CSI.
You (and ID proponents in general) take these observations of human beings creating CSI and then you generalize your observations into an abstract class called “intelligent agents”. Although we know from our experience of only a single member of that class (humans), ID posits that there could be other members of this class that are not themselves the complex life forms ID seeks to explain but still somehow retain the mental and physical abilities we observe in humans.
If dualism and libertarianism are true, then ID can say immaterial mind and contra-causal volition exist in humans independently of our physical brains, and that these things are the cause ID refers to as the best explanation of first life.
If dualism and libertarianism are not true, then ID must point to something which is within the world of material entities and physical cause, but is still somehow distinguished as being “intelligent” while all other processes are not.
Thanks Green, I understand your position better, but I still see no distinction whatsoever. If a “mental” influence is physically determined, then calling it “mental” seems to serve no purpose other than to create a category with no distinction from “physically determined”.
In any case, it has been an interesting thread.
UB,
I think you are right about this, UB. This is exactly why ID is predicated on dualism.
—Green @114:
This objection does not even begin to address the issue of free will, nor does it take into account the human faculty of judgment inherent in any free-will act.
First of all, it should be obvious that no one who denies freedom of the will lives by the same philosophy that he preaches. You complain that kairosfocus and I, for example, did not give your points a fair reading, as if we had other choices. In spite of your protests, you do, at every turn, assume and act on the proposition that everyone, including you, possesses free will. Indeed, I promise you that any expositor of determinism that you can cite will sue me for plagiarism if I write a book using his muddleheaded ideas.
As Aquinas pointed out 800 years ago, physical laws act without judgment and animals act from a primitive form of judgment known as instinct. Animals know where danger lies and they act accordingly. Humans, however, act from “free” judgment because they know some things are better than others just as they know that some things should be pursued and others avoided. The central issue is whether or not they learn to appreciate that which really is good or that which only appears to be good. Put another way, free will cannot be separated from the faculty of judgment from which it springs. Indeed, rationality itself requires free will.
In keeping with that point, it is not possible to be rational and not have free will because rationality insists that actions should tend toward that which is good and avoid things that are bad. If we had no such freedom, our rationality would be a joke. Thus, by denying your free will, you also deny your rationality and the capacity to make reasoned judgments. In effect, you are claiming that you are a slave to your mental states, which are, in turn, slaves to the elements. Why would anyone who has been given the blessing of free will want to assume the role of a slave?
Upright BiPed, I think you’re still misunderstanding my position. I don’t think that everything is ‘physically’ determined. Some things are determined by previous mental states. And these mental states are not physical.
Aiguy, your comments have been very interesting, and I think I agree with you that ID requires dualism (of some sort – probably substance dualism, given the problem that property dualism has with mental causation). I am less sure that it requires libertarianism… but you might be right, and your points are well taken.
Free will is a category of causative agency, or “explanation for effects”, just like “deterministic” causes, or “random” causes.
In order for free will to be a true third category, then of course it cannot be explained in terms of randomness or deterministic effects – because then it would just be a subset phenomenon of deterministic and random forces.
So, saying that one cannot imagine how the third category works because it cannot be explained by the other two categories is a bit incoherent. It shouldn’t be explicable by the other two categories, if it is a true 3rd category.
Green,
Whether determinism exists in single strata, or in two, or in a hundred, it is still determinism, meaning no significant free will, and no significant moral responsibility, and no significant means to escape whatever prior events or states, mental or physical, compute into, whether it is moral or immoral, sense or nonsense, true or false.
IOW, in any kind of determinism, you are a physical and mental automaton, simply computing what came before into what will come next.
We don’t see the point of your position that another layer of deterministic strata is involved. So what? You’re still just an automaton programmed by prior events and/or states to do whatever is determined as “next”.
Green has it right that we are not just talking about physical determinism. Deterministic causal agents could be immaterial. However, it could be physical determinism and we would still experience free will, and be responsible for the decisions thus made.
When i say philosophical fundamentalism, I mean (a) starting from basic philosophical beliefs that seem reasonable to us, rather than first checking them for consistency, and (b) the particular belief that our experience of free will reflects a fundamental disconnect from the rest of the universe, which we have the ability to analyse, explain and predict.
‘Libertarians’ are faced with a problem that they want a 3rd way between determinacy and indeterminacy, but there isn’t one. They need to deal with that fact and recategorise their concepts, rather than simply reasserting a fundamental belief in various different ways. I had to do that a few years ago.
*
@AIguy: ID does not require any fundamental dualism. Intelligent agents and processes are in practice very easy to distinguish from non-intelligent agents. 1st is that they treat higher-level concepts (that can be recognised by other intelligent agents) rather than mere objects or material. 2nd is that they have access to much external information and carry it into their action. There may be others. I think the 1st is the most crucial and useful.
ID is a ‘common-sense’ science that uses what we are familiar with, so it ought to be independent of these debates about the fundamental nature of things. I am not a materialist or a physicalist. I am a dualist in the sense that an immaterial reality exists, but I am open on the question of whether a human soul need be made of some immaterial substance or not. ID does not depend on any such assumptions. The problem is that it leads to *conclusions* ultimately that are incompatible with physicalism, and that is why physicalists/materialists block it out.
Green,
Again thanks for your edifying comments.
I actually think it’s more clear that ID requires contra-causal will than substance dualism (though I suppose most think the former entails the latter).
ID arguments center on the inadequacy of “unguided nature” to account for CSI, and on the explanatory power of “directed contingency” in this context. What ID fails to make clear is what is supposed to be able to guide nature or to direct contingency. If this is not libertarian will, I really don’t know what it could be.
William, I wrote that before I read your post. Interesting.
aiguy,
This is a non sequitur. Design detection does not require dualism whatsoever. The coherency of any true theory of mind might require it, and hence free will, but this is not generic light-of-day design detection by any means. Nice try though.
This tidbit may be of interest:
Scientific Evidence That Mind Effects Matter – Random Number Generators – video
andyjones,
It seems to me that Darwinian evolutionary processes carry external information from the environment into its action; these processes learn and remember. Wouldn’t it be true, then, to say that evolution meets your criteria for “intelligence”, whether or not you believe these processes account for biological complexity? As for producing “higher-level concepts” – how would you go about determining if the Designer hypothesized by ID was capable of “higher-level concepts” or not?
Clive,
In that case, what is it that directs “directed contingency”. What is it that guides nature when nature is not “unguided”? What is it that allows processes to “see” when they are not “blind processes”?
@AIguy
>>What ID fails to make clear is what is supposed to be able to guide nature or to direct contingency.
Concepts and goals. Breaking a problem down into sub-goals. Use of analogy to previously solved problems. Use of previously accumulated experience. Storing information that goes beyond the ‘average reproductive success’ of the final product, + noise, for example (that’s all evolution does; it does not record success of sub-goals etc).
The external environment does no more than judge the final product. It contains no concepts of itself. An intelligent agent can perform experiments upon the environment to form concepts about it, but evolution can only hack about. Any concepts it appears to hit upon can only be by chance.
StephenB addressing Green:
What you seem to completely misunderstand is that those counsels, exhortations, commands, etc. are evidence to be evaluated by the audience. And IF the internal motivations and additional evidence evaluations allow it, this new evidence can serve to indeed change hearts and minds. Thus, this change of heart and mind is no less determined (by all the internal motivations and evidence a particular person has, including the new one in the form of counsel etc.) than a change of heart and mind that DOES NOT occur, because the evidence was not strong enough in light of a person’s internal motivations and accumulated evidence evaluations.
andyjones,
Are you saying all of these are necessary attributes for something to warrant the label of “intelligent”? Unless something breaks a problem into sub-goals, uses analogies, and learns from experience, then it isn’t intelligent? And anything that does do all of these things is intelligent?
Aig,
ID is predicated (at least partly) on the observation of patterns in existence which are not the effect of randomness or order, but indeed are always explained by directed contingency in all cases where we know their cause. We make a valid inference from all those which are known to the one that is unknown.
Your interest in tying ID to dualism is rhetorical.
UB,
My question is what exactly is it that is supposed to direct contingency in instances of “directed contingency”.
Aig,
“It seems to me that Darwinian evolutionary processes carry external information from the environment into its action; these processes learn and remember.”
For Darwinian processes to function at all they need (as we find them) an information processing system based upon symbols and rules. Without that, there is no “learn and remember” anything at all.
“My question is what exactly is it that is supposed to direct contingency in instances of “directed contingency”.”
You’ve been given that answer a number of times, by different ID proponents in different ways on a number of different threads. Do you not remember any of them? Or, is it that you’d like to go through it all again only to argue over where in the causal chain of existence we place the “We don’t know”. You’d like to place it prior to the observation that there are such patterns in existence which can be observed, and we would place it after we have everything we do know on the table – including the observation that order and chaos do not have the capacity to create these patterns while directed contingency does.
UB,
I think we both agree that at least “microevolution” occurs, where lasting changes in information are stored in the genome of a species as a result of incorporating information from the environment. If you object to using the terms “learning” or “remembering” for this, you are incorporating other aspects of intentionality that aren’t usually associated with those concepts. Do you believe that a computer memory “remembers” data, for example?
My question was “what is it that directs contingency in instances of directed contingency“? I don’t believe you’ve answered the question here; you’ve simply re-asserted what you think “directed contingency” is supposed to account for.
aiguy asks: “My question is what exactly is it that is supposed to direct contingency in instances of “directed contingency”.”
Let’s alter the question a bit: What exactly is it that directs deterministic causes to have deterministic effects?
“Deterministic process” and “directed contingency” are categorical descriptions of certain kinds of cause and effect relationships.
Asking what determines a directed contingency is like asking what directs a deterministic result.
One might explain the fundamental difference this way: deterministic outcomes are sequential/contextual computations. The outcome, X, is determined by the factors that precede it.
Directed contingency begins with a target, X, and then develops a sequence of events to arrive at X.
Deterministic causal relationships do not begin with a target; they simply arrive wherever they arrive. Directed contingency, however, can imagine targets that do not even currently exist, and cannot even be reasonably computed by deterministic functions (universal resource bound), and begin directing materials and resources towards that end.
William,
Physics seeks to characterize specific causal relationships; there is no single thing that we know of that “directs” deterministic causes. Perhaps if we ever find a single unified Theory of Everything then we will be able to reduce all phenomena to a single cause; I’m not holding my breath.
But with regard to things we do have some empirically-grounded understanding of, science carefully characterizes exactly what it is that is supposed to be directing the effects we see. The fundamental forces of physics are axiomatic, but they are characterized in such a way that we can go about seeing if they really do exist as we describe them. Referring merely to a “directed contingency” that is capable of achieving whatever phenomena we’re trying to explain is unhelpful without somehow trying to characterize what it is that is directing these contingencies. If it is res cogitans, or contra-causal will, then ID should just say so.
William,
It is always the case that information and cause may be moving via unanticipated channels, causing phenomena that can appear to be working backwards, but which actually proceed via the same cause-and-effect we see in all phenomena. When a lightning bolt hits a church steeple, for example, it seems that as it leaves the cloud it has chosen its target and worked backwards to pick a trajectory. It was only quite recently that the deterministic mechanism was revealed that unmasked this seeming “directed contingency”.
Likewise, it appears that we consciously experience ourselves working backward from our conscious goals to our sub-goals, and from there to our algorithms for action. But it may be (and many neuroscientists believe this is the case) that “blind” (forward-acting) generate-and-test processes inaccessible to our conscious awareness are what is actually doing the work, and our consciousness is simply narrating the results.
I don’t know if this is true or not, but I do know that we have no settled science to tell us if something contra-causal (or “backwardly causal”) is operating inside our heads when we design things.
Aig,
“Do you believe that a computer memory “remembers” data, for example?”
I made my point clear, again.
I note that you feel warranted in using a human-made object to make your point, and then in another setting, you’ll argue that human-made artifacts are invalid as a reference. That is an inconsistency that ID proponents are not forced to contend with for the singular reason that they make their observations based upon the artifact, not the “artist”.
In any case, do you think a computer can remember without symbols and rules? If it is true that rules and symbols are required for function, how did they come into being instantiated into the material of a computer? Did it involve foresight? Is it even possible that chaos and order could lead to it? If it is the case that chaos and necessity could not lead to it, then what is it exactly about symbols and rules that are beyond the causal powers of chaos and order.
“My question was “what is it that directs contingency in instances of directed contingency“? I don’t believe you’ve answered the question here”
You are correct.
aiguy,
Exactly. That is the question. It won’t do as an answer to claim that it isn’t X, when we know not what X is. This hinges on what is “natural”, a question that I’ve yet to see answered except by begging the question. It may be a will, it may not be, but the leap from design detection to dualism is a non sequitur.
aiguy,
If directed contingency is an illusionary narrative, why should I believe anything you or those neuroscientists say? You’re only outputting what prior states dictate, like anyone does who outputs whatever they output, including people we call insane or delusional.
Indeed, if you and I are simply outputting what our prior states dictate, and intending a goal such as discerning the truth is just an illusory narrative invented by prior events to accompany our actions, why should I consider anything you say to be anything more than the noise made when the wind blows through the trees?
Such arguments are self refuting. Unless you can actually intend outcomes, and actually direct contingencies, then logic itself is just our illusionary narrative companion as we bark and cluck our way through existence.
UB,
What I’m trying to do is to understand what exactly you mean by “remember”. I’ll offer a definition: “To remember is to undergo a lasting change in physical state as a result of interaction with the environment.” Per my definition, people, computers, evolution, and Tempur-Pedic “memory foam” (the material my mattress is made from) all are capable of “remembering”.
If you would like to offer another definition, please do. But remember, you have asked this question: “Do you think a computer can remember without symbols and rules?” So it would not do to incorporate “symbols and rules” into your definition of “remembering”, since you would simply be answering your own question by definition rather than as some fact about how remembering works.
According to ID, things can be judged as intelligent or not without regard to their origin. For example, I presume you believe the following two propositions:
1) Human beings were designed by an intelligent designer
2) Human beings are themselves intelligent
So you don’t seem to think there is any inherent contradiction in something being a bona-fide intelligent agent even though it was itself designed by something else. Thus, it appears you have no grounds to deny that computers are themselves bona-fide intelligent agents, no matter how they were originally created.
I know.
Clive,
I’m having trouble understanding your position here. I’m saying that unless you assume dualism, the notion of “directed contingency” isn’t characterized in a meaningful way. Unless you specify what it is that you believe has the power to guide nature, to direct contingency, and to allow processes to “see” (as opposed to being “blind”), then you haven’t actually offered any specific cause at all. One answer is to posit an irreducible, immaterial, causal substance (or property) that is mental; that’s why I say ID requires dualism in order to be non-vacuous.
William,
Because we make sense… and we’re smart? 🙂
I never understood why people like this argument. Here’s my response:
1) Either our minds are reliable or they are not.
2) If our minds are reliable, then all of this talk about materialism or evolutionary processes being unable or unlikely to produce a reliably rational mind is moot… because our minds are reliable.
3) Otherwise (if our minds are not reliable), then all of this talk is still moot, because our minds are not reliable and we have no way of telling what the truth is.
So either way we can’t use the reliability/unreliability of our minds to prove anything at all.
—molch: “What you seem to completely misunderstand is that those counsels, exhortations, commands, etc. are evidence to be evaluated by the audience. And IF the internal motivations and additional evidence evaluations allow it, this new evidence can serve to indeed change hearts and minds.”
What you seem to completely misunderstand is that if the audience had not been exposed to the message, there would be no change of hearts and minds at all. If the message changes the attitudes or behaviors to any degree at all, determinism is finished. Indeed, that is why you refute your own philosophy every time you enter into the arena and try to create an impact. If you didn’t believe you could make a difference, that is, if you didn’t think you could change the course of events in a way not possible without your presence, you would not bother.
StephenB –
Aig,
If you do a word search on this page for the word “remember” you’ll see that you said evolution could “learn and remember”.
I took the words you used and saw them in the context you used them. I saw that you made a complete sentence and did so in an environment of others who would understand your words and see the context of your thought. I found no reason to parse your comment, or suggest that it was unintelligible or incoherent. I then said that without a system of rules and symbols evolution cannot “learn and remember” anything at all.
So now you’ve turned around to ask me what I meant by “remember”?
Honestly…wow.
You then go on to suggest that humans, computers, and genomes have a likeness in their ability to “remember” with other articles such as foam bedding.
I simply cannot carry on a conversation with this.
—molch:
THE cause? An act of the will is “a” cause, not the cause. There are a multiplicity of causes for any and every human event. One cannot choose without a mind and a will. Who or what caused the existence of those two faculties?
Moving past that, humans are driven by psychodynamic, biological, environmental, and cognitive factors, all of which are causes.
Moving past that, the intellect must provide the will with a target to hit. The mind produces the target, the will shoots the arrow. Without rationality, free will is impossible; without free will, rationality is impossible.
Moving past that, a human being’s nature is a cause. As Aristotle says, all men naturally want to be happy. It is that nature that informs every choice that we make–it is yet another cause.
To say that an individual makes a free moral choice by an act of the will is to acknowledge that a number of causes have already been in play and will continue to be in play. Indeed, if the creator stops sustaining the universe, all human choices will end immediately. The final act of the will is simply one more cause, only this time it is an agency cause–an immediate cause that creates an effect that never would have occurred in its absence. In some cases, God moves the will after having been asked by the agent to overcome some internal obstacle.
If humans had no free will, then they could never raise themselves beyond the level of a barbarian because they would have disavowed the one faculty which allows them to say “no” to a bad impulse. Anyone who denies free will does, by his own choice, exempt himself from the opportunity to make that elevation. Indeed, that is not a bad definition of evil–a perverted will, one which has, as a result of its previous choices, become too soft-headed to resist bad impulses and too hard-headed to acknowledge its moral duties.
UB,
That is correct. Evolutionary processes incorporate information from the environment (learn) and store it in genomes (remember).
Well, I would say you did indeed “parse” my comments; otherwise you could not have understood my sentences at all. But in any case, you seem to have been using a different definition of the words “learn” and “remember” than I was; by the definition of “remember” that I offered (and you have declined to improve upon), evolutionary processes do most clearly remember.
Right. This showed that we were using these words differently.
That’s right. I provided you a definition of what I meant, and I asked you to either accept my definition or provide one of your own. That way we could communicate more clearly about how evolution was or wasn’t capable of memory or learning.
???
According to the definition I provided, this is clearly the case. That is why computer memories are called “computer memories“, and memory foam is called “memory foam”.
Oh. In that case I take it you concede my points. Thanks for the discussion!
aiguy:
The question isn’t if our minds are reliable, but rather if one’s premise allows for one being able to tell if their mind is reliable or not.
William,
But of course it is not possible for us to determine if our minds are reliable or not, because we have only our minds with which to discern the answer. We cannot determine the reliability of our minds simply by adopting one or another belief about origins or metaphysics.
–aiguy:
You know I think you just might be on to something here. If every primary cause, every mediating cause, and every output is determined, then, by gosh, you would have determinism. Seems like a good, safe bet to me.
StephenB,
Indeed. Moreover (this is the part I think you missed) in this deterministic world, scenarios such as me attempting to verbally convince you that I am correct, or my hoping that you change your behaviors as a result of my exhortations, all make perfectly good sense.
Aig,
This observation does nothing to explain what must be explained.
When you used the word “remember” you used it in the context of a “Darwinian evolutionary process”. It is under this context that I responded that the “Darwinian evolutionary process” required a system of symbols and rules in order to operate at all.
This is just not true, in fact, it is demonstrably false. In your very next response to me you re-established the context of your use of the word, saying “information is stored in the genome of a species as a result of incorporating information from the environment.”
Only later did you change your usage of the word to include something having the likeness of foam bedding. The last time I checked, bedfoam did not hold encoded information within a carrier inside its genome. The last time I checked, bedfoam did not evolve – yet that was the context of your word use in the original comment, as well as your follow-up comment.
The definition you then provided had nothing to do with a “Darwinian evolutionary process”.
I never objected to your usage. To the contrary I commented on it in the same context that you both originally used it, and then re-established in your follow up comment.
That is a ridiculous statement. One that can only be defended by a zealot.
By all means, please do.
UB,
??? I certainly didn’t offer this as an explanation of anything. If you review the quote in context, you’ll see that we were discussing whether or not ID is predicated upon dualism. Andyjones remarked that intelligent processes access information and “carry it into their action”, and I responded that evolutionary processes do this as well. The point here was that one could not distinguish intelligent from non-intelligent processes on the basis of accessing information (or “learning” and “remembering”) unless evolutionary processes were also going to be considered intelligent.
I’m still unsure what you mean here. Do you mean that a system of symbols and rules must have existed in order for Darwinian evolution to take place? Or that Darwinian evolution itself operates according to symbols and rules?
In any event, I think it’s clear that since ID attempts to distinguish “intelligent cause” from the “random mutation + selection” sorts of processes that drive evolution, most ID folks don’t really consider evolution to be intelligent per se. So:
1) evolutionary processes are not considered “intelligent”
2) evolutionary processes acquire information from the environment (learn) and store it in the genome (remember)
3) therefore, learning and remembering must not be sufficient for warranting the label “intelligent”
Yes. This is what I meant when I said evolutionary processes learn and remember.
No, this is exactly the same sense of the word “remember” that I used in the context of evolution. In both cases, I am defining “remember” to mean “to undergo a lasting change in physical state as a result of interaction with the environment.” Evolutionary processes remember information by storing it in the genome; memory foam remembers information (about the shape of your body) by storing it in the shape of the bubbles in the foam.
No, there is no “genome” in my mattress. The physical state that changes in the mattress is the deformation of the bubbles, enabling the mattress to remember the shape of my body. The physical state change in computers that enables them to remember is electromagnetic charge. The physical state change in evolution is the sequence of bases in DNA. And so on.
??? I did not imply that my mattress evolved; rather, I pointed out that it had a memory (which is why they call it “memory foam”).
We obviously miscommunicated here; I think if you read this post it ought to become clear.
Good then. We agree that “to remember” means “to undergo a lasting change in physical state as a result of interaction with the environment”. So we ought to agree that memory foam and computer memories – as well as human brains and evolutionary processes – are all capable of remembering information.
WHAT?
AIG @ 157:
Not quite, on several aspects:
1 –> It is not embodiment as such that produced dFSCI in our observation, as already discussed.
2 –> For one instance, computer programs and systems are produced by knowledgeable, trained experts, i.e. the capacity traces to intelligent behaviour specifically.
3 –> Similarly, text strings here at UD are produced not primarily because we are embodied but because we are intelligent.
4 –> And we have no good grounds for inferring that we exhaust the possible specific or general types of intelligence; indeed the origin of a fine-tuned observed cosmos makes an extra-cosmic, necessary being intelligence credible or at least possible, and one that – per the heat death challenge – cannot reasonably be material and subject to the random transfers of motion that lead to thermodynamics effects.
5 –> Next, when computers produce dFSCI they do so as extensions of humans, their designers and programmers.
6 –> I am at present unaware of animals producing digitally coded functionally specific complex information, but would accept such cases as proof of intelligence of said animals. For instance if certain claims about certain parrots pan out, I would accept them as intelligent, as I would accept a robot that passes certain tests that show originality, as I have long since said.
7 –> What is relevant is of course that in cells, we find dFSCI systems, and we are not a credible cause.
8 –> On the known observations and the challenge of the resources to sample an appreciable fraction of the relevant search spaces beyond 500 – 1,000 bits storage capacity, it is a reasonable inference that dFSCI is an empirically reliable sign of intelligence as key causal factor, however it was brought to bear.
9 –> Which is of course the inference that you and others object to.
10 –> But we note that you have been unable to show a case where dFSCI credibly arose from processes of undirected chance and mechanical necessity, in our observation. And, that is what would have been required to disestablish the generality of the observed pattern.
11 –> Onlookers, kindly observe this.
GEM of TKI
Stephen (#155):
Unfortunately, work kept me from following the last developments, and now it’s hard to catch up!
You say:
Well, what I meant is that we can be morally responsible of our action only in the measure that we can really control them. There may be situations where there can be a great difference between the intention and the actual action.
This is a field where I don’t think we can really judge, but only try to understand with humbleness. Many actions are compulsive, and probably the agent, in his present state, is not able to control them much. Think of many states of dependency, both physical and psychological, for instance. In that case, a moral behaviour could just be the effort to fight against that state, even if at first unsuccessful. The outer action can still be apparently evil, but a sincere inner intention to go upstream can be the premise for future redemption.
So, I would stay very open and flexible in this field: it is important to know that we have the inner power to change, and that such a power will increase if we apply our free will in a positive way now.
F/N: UB cite fr AIG at 185:
Memory registers, as you know, mechanically STORE states based on a designed organisation, e.g. a JK flip flop or a D latch acting as a storage register.
Remembering is a CONSCIOUS act [when we have forgotten, we cannot recall to consciousness . . . try as we might (especially in an exam!)], which we routinely observe and experience as intelligent, conscious creatures.
We note again the repeated attempt to blur key and manifest distinctions; in service of undermining confidence in what we do know and are personally aware of.
That fires off a lot of warning flags.
GEM of TKI
AIG:
Your “evolution produces” claims are based on an equivocation and a conflation of what is observed with what is imposed by Lewontinian a prioris.
We observe minor small scale changes in existing complex functional living systems, sometimes called microevolution. These are essentially irrelevant to the origin of such systems based on dFSCI well beyond the 1,000 bit threshold.
We do not observe — and have not answered the challenges connected to — the purported origin, on blind chance plus necessity, of first life [100+ k bits of info] and novel body plans, the very heart of macroevo. Cambrian revo: for main body plans, 10’s + mns of new bits of info. Unaccounted for, but often assumed per imposition of a priori materialism. Perhaps in the guise of so-called methodological naturalism.
Kindly, stop arguing in circles.
GEM of TKI
PS: Foam mattresses and other memory effects in materials are the result of not symbolic storage but physical and/or chemical changes that partly lock in a former state. E.g. a reel of fishing line may curl like how it was wound on the reel [and you should load a spinning reel over the side of the spool, so you do not reverse the sense of curl], and heated hair stretched while heating will straighten, or wrinkled clothes heated on an ironing board will flatten out or crease [until you crush them again].
GP: as is highlighted in Rom 7 – 8, and urged in Rom 2:5 – 8 and Eph 4:17 – 24. G
aiguy (#157):
Well, that’s another good aspect I would like to discuss; now it’s very late here, but I will try at least to give some ideas.
I have partially followed some of your posts on this matter, but I had not the time to comment on them. I will try briefly to outline my position.
My fundamental concept is the concept of consciousness. Not a concept, indeed, but more a fact, directly experienced by each of us.
We must always remember that, for each of us, consciousness has a double status: it is a fact directly experienced (our personal consciousness), and a very reasonable inference by analogy (the consciousness of other human beings).
The set of conscious experiences, both perceived and inferred, must necessarily be an important part of our map of reality, because otherwise we would exclude from that map the fact itself which allows us to know and think and feel (consciousness).
You are interested in some clarification about intelligence and intelligent processes. What we know about intelligence derives directly from our conscious experience. There are facts which are undeniable:
1) Conscious representations have a double aspect: a cognitive aspect and an aspect of feeling.
2) Cognition is based on some fundamental conscious representations, such as the sense of meaning, the processes of deductive reasoning and of inference, the concept of purpose and of function. Many of these representations have also a “feeling” component. All our maps of reality use some or all of these processes.
3) Intelligence is a way to describe our cognitive representations. It implies usually abstract thinking, and always the concepts which I have cited at the previous point (meaning, inference, and so on).
4) Design is an intelligent conscious process where those cognitive intelligent representations create a purposeful output, the designed object.
5) I don’t believe that the concept of intelligence exists out of consciousness. Intelligence is a kind of activity of consciousness, whose main purpose is to understand reality. Non conscious realities cannot be intelligent. Obviously, intelligent outputs can be “written” in non conscious supports, like machines, software, books and so on. But intelligence is always a conscious activity. You cannot define meaning or purpose or truth if not in the context of conscious representations. Those concepts have no realities for non conscious entities. And intelligence cannot exist without those concepts.
6) So, we can easily “distinguish intelligent processes from non-intelligent processes”: intelligent processes are all those in which a conscious agent has obvious conscious cognitive representations. If those representations generate purposeful objects as an output, we can call that process intelligent design. There is nothing difficult or problematic in that. We witness design everyday, both in ourselves and in others, both directly and inferentially.
7) CSI is only a way to infer that some object is designed, when we have no direct evidence of the above processes (we have not witnessed the design process). CSI is not necessary for the definition of design, nor for its recognition in most cases: if I look at a child drawing some simple picture, I am sure that the child is designing, even if the design is simple enough not to be defined as CSI. So, there is no circularity in ID theory.
That’s for a start. I hope we can go on more deeply tomorrow.
molch (#151): But they are not without meaning or value. The self chooses according to an independent will which can be in tune with truth, or evade truth. That does not depend on his previous states or representations, but the way his choice interacts with reality does. That’s what I mean when I say that all of us have free will, in the same degree, but that we have different ranges of inner freedom. Our inner resources are different, according in part to our previous use of our free will, but the ability to act, in some way, for good or for bad, is present in each one of us. But that does not mean that our free choice does not exist, or that it has no value, only because we cannot force it into categories which are not appropriate for it.
That does not mean that two selves are the same. The previous history of each of us, and other factors, make us different. Our representations are very different, even in similar outer circumstances. But our inner free will is the same. Even if A and B act in similar circumstances, and even if both act in the best possible way for them, making the best possible use of their free will, their actions can just the same be very different, because, as I have said, their phenomenic self is different, and it’s their phenomenic self which determines how their good free choice can manifest outwardly.
So, to sum up, the differences between phenomenic selves are determined by their previous states, including their previous use of free will, and they will influence very strongly the form that their free choices can assume, while the reason why two different selves act for good or for bad, in any situation, cannot be explained in a cause and effect scenario. It is rather a transcendental manifestation of the self, and it has the power to change reality.
—aiguy: “Moreover (this is the part I think you missed) in this deterministic world, scenarios such as me attempting to verbally convince you that I am correct, or my hoping that you change your behaviors as a result of my exhortations, all make perfectly good sense.”
No, not really. I think you missed the humor of the situation. In effect, you were arguing that if everything is determined, then everything is determined.
In any case, your efforts to change minds refute every word that you write. More importantly, your assertion that you are a slave to conditions that dictate your every move does nothing to win the confidence of those whom you would try to influence.
GPuccio, thanks for your comments and clarification at 202. Since I have already briefly indicated my position @155, there is no need to repeat it.
StephenB, Kairosfocus and others:
This objection does not even begin to address the problem of free will? I don’t understand how you can say this. You seem to be arguing for agent-causal libertarianism and claiming that it can ground moral responsibility. I said that moral responsibility required two things: (1) agential origin (2) agential control. I then said that the agent-causal account could not ground (2) because it cannot give an account of causation (one of the pre-requisites for agential control), and even if it could, agential-causation doesn’t automatically equal agential-control. What am I missing here? I have clearly shown how on the agent-causal theory of libertarianism, moral responsibility cannot be grounded. Please counter my objections if you want to hold the contrary.
With regards to other comments, such as:
I’m surprised you can make claims like this which are so obviously fallacious. Thank you to the others on this thread who have pointed out that none of the above follows from determinism.
I have yet to see a libertarian account of free will that can ground rationality better than determinism. As I’ve already pointed out, there are three types of libertarian free will:
(1) Noncausal theories
(2) Event-causal theories
(3) Agent causal theories
(1) and (2) end up giving you decisions that are arbitrary, whilst (3) ends up giving you decisions that are irrational, since they are made, ultimately, for no reason at all. Kairosfocus, you have several times made claims to the effect that “no-one on this thread thinks that decisions are ultimately made for no reason at all”. Well, they may not, but then they are not agent-causal libertarians, since agent-causal theories inescapably lead to this conclusion.
But, maybe I shouldn’t define “rationality” as I do. Maybe this is the problem. Maybe agent-causal theories only fail because I have been defining rationality wrongly. I have been defining it as “acting for reasons”. However, StephenB suggests another alternative. He writes:
But I can’t quite make sense of this claim… Rationality “tends towards” what is good? Do you mean that rationality consists in acting in such a way that good will result? If so, how, on the libertarian view, can you then ground such rationality? To act in such a way that good will result, one must surely have to act on reasons that are morally good? Yet acting, ultimately, for reasons is exactly what the agent-causal libertarian does not have. So I don’t see how this definition of rationality helps the libertarian.
Several on this thread have also claimed that determinism entails that we are “forced” to make certain choices, that we are “slaves” to these choices. I find these terms quite pejorative. If I have a desire for a drink of orange juice, and a belief that going to the fridge will satisfy this desire, and these together determine my action to go to the fridge, am I acting as a “slave”? Am I being “forced” into my decision? I don’t think these terms are appropriate. I am acting on my desires, and my beliefs. No-one is “forcing” me, or coercing me.
On a more general note, I find that a lot of libertarians just use “free will” as a label to claim things like moral responsibility, rationality, choice and so on, but never really dig deeper to find out whether they’re really entitled to these things. I think that were libertarians to dig a bit deeper, they would see that “free will” just falls apart; libertarian theories just don’t come up with the goods. They have as much right to claim that their position justifies moral responsibility as the determinist does. And they have less right to claim that they can justify rationality. The 3 types of libertarian theory that exist today simply do not get libertarians what they say they want.
[On a sidenote: with regards to what I said earlier about ID and dualism, gpuccio has written an excellent post at 206. The reasons (1) to (6) that he lists are exactly why I think ID is tied to dualism. Materialistic / property dualist accounts of the mind cannot get you (1) through (6) which is why I think ID requires substance dualism.] 🙂
Green, I have only one question for you: what about God’s will? Is it free, or is it determined?
If libertarian free will is as incoherent as you are saying, then isn’t God also a fully determined being that lacks contra-causal power?
gpuccio,
Agreed, to everything up to this. I also agree with this except where you say consciousness allows us to think. I don’t believe we know that is the case at all.
First, it’s well known that a great deal of our thinking (making sense of our perceptions, generating plans, solving problems, etc) happens unconsciously. Even complex mathematical reasoning appears to proceed without conscious attention. Second, it’s clear that other animals think (unless you have a more restricted sense of ‘thinking’ that excludes non-human animals) even in cases where our inference to their consciousness is weak. Third, again depending upon how you define “thought”, you may even agree that computers can think, and I trust we’ll agree that it’s unlikely they are conscious.
We have no general theory of intelligence; simply put, we do not know how we think. If we did, we could either build a generally intelligent computer system or explain why we can’t. At present, we can do neither.
I agree we each experience sentience and have a comfortably strong basis for ascribing sentience to each other (and some other animals too).
But again, I do not believe we have any basis to say how consciousness is involved in reasoning. I find this description/definition of “intelligence” too nebulous to work with. First you say that intelligence describes representations. Do you really mean that intelligence is something that describes other things, including representations?
Then you say intelligence “implies abstract thinking”. Wouldn’t it be equally true to say abstract thinking implies intelligence? So aren’t these just two ways of saying the same thing, rather than one serving to help define the other?
Then you throw in “meaning” (i.e. intentionality, certainly a difficult problem) and finally “inference”. It’s clear though that at least some kinds of inference are algorithmic, and we perform them continuously without conscious awareness.
Again you have connected consciousness with intelligence, and intelligence with various and sundry mental attributes and abilities, but I can’t make out a coherent mapping here.
I agree with Green that ID requires dualism; this is precisely why I object to the claim that ID rests upon empiricism.
aiguy:
a very thoughtful post.
I agree with many of the things you say. There are some clarifications which I have to make, because I realize that I have been not completely clear in my post about those points (my fault, it was very late and I was very sleepy).
You say:
Agreed, to everything up to this. I also agree with this except where you say consciousness allows us to think. I don’t believe we know that is the case at all.
This is a simple point. What I mean is that we would not have any representations of thoughts without consciousness. The formal contents of our thoughts could be still somewhere (in our brains or elsewhere), but they would not be thoughts, no more than the content of a book is our thought unless we read the book. I think we may agree that the word “thought” describes conscious experiences. I try to make my statements as simple and empirical as I can, and to avoid non explicit implications.
You say:
First, it’s well known that a great deal of our thinking (making sense of our perceptions, generating plans, solving problems, etc) happens unconsciously. Even complex mathematical reasoning appears to proceed without conscious attention.
True. But first of all, when I speak of consciousness I always refer to the whole spectrum of conscious representations, which include at least the subconscious mind. Conscious attention is a narrower concept, usually reserved to distinct representations in the waking state. IMO, those are only the tip of the iceberg of consciousness.
Second, I don’t believe in the existence of completely unconscious “mental states”. Again, there can well be many unconscious processes, for instance in the brain or in the body, which at a certain point become conscious, more or less distinctly, like the book which is read. But nothing is “mental” if it is not in some way represented, at least in my terminology. You can take that as a property by definition, just to be clear in our discourse.
You say:
Second, it’s clear that other animals think (unless you have a more restricted sense of ‘thinking’ that excludes non-human animals) even in cases where our inference to their consciousness is weak.
True. And I have no special prejudice about animals. I just agree with you, our inferences about their consciousness are possible and legitimate, but often weak. So, I prefer to discuss humans for that reason. I don’t see how any discussion about animals should make less consistent what we know about humans.
You say:
Third, again depending upon how you defining “thought”, you may even agree that computers can think, and I trust we’ll agree that it’s unlikely they are conscious.
No, according to my clarification above, computers definitely don’t think, unless we can infer that they have conscious representations. Again I apologize for not having been clear enough in my definitions.
Obviously, some may think that they can infer consciousness in computers (maybe even in Windows Vista 🙂 ), but I don’t think they have valid arguments for that (and I suspect you agree).
You say:
We have no general theory of intelligence; simply put, we do not know how we think. If we did, we could either build a generally intelligent computer system or explain why we can’t. At present, we can do neither.
I have not tried to give a theory of intelligence. I have only tried to give an operating definition of intelligence, which is just what we need in ID. And, simply put, my definition is:
Any set of conscious representations which includes cognitive representations.
I have also given some examples of representations which we usually agree have a cognitive content. More on that in the following points.
I agree we each experience sentience and have a comfortably strong basis for ascribing sentience to each other (and some other animals too).
I agree too.
But again, I do not believe we have any basis to say how consciousness is involved in reasoning.
We can make some inferences about that, but that was not my point. My point is that we cannot have any representations of the reasoning type without consciousness. Again, only an operational definition, not a theory. I will not define “reasoning” a non conscious process. I will define “reasoning” any type of conscious representation where the representations have the form of what we usually call “reasoning” (deductions, inference, and so on). Maybe a computer, appropriately programmed, can compute a deduction, but it certainly cannot consciously represent it. I agree with that. And one of my firm methodological and epistemological principles is that, in science, “the jury is always out”. I am not a believer of final truth in science, and I am not a supporter of scientific consensus as a necessary value.
That said, I am definitely on the side of Penrose and Searle, and I am very proud of that.
You say:
I find this description/definition of “intelligence” too nebulous to work with. First you say that intelligence describes representations. Do you really mean that intelligence is something that describes other things, including representations?
No, I don’t mean that. My fault again for having been imprecise.
What I meant is:
“Our concept of intelligence (or, if you prefer, the word intelligence) is a way to describe our cognitive representations.”
I hope that’s more clear.
Then you say intelligence “implies abstract thinking”. Wouldn’t it be equally true to say abstract thinking implies intelligence? So aren’t these just two ways of saying the same thing, rather than one serving to help define the other?
No, here too, I meant:
“Our concept of intelligence implies (or probably it would be better to say ‘includes’) abstract thinking.” Again, I am using empirical things (our representations) to define concepts and words. My purpose here is to remain as empirical as possible, because that’s what we need for discussing the ID theory.
Then you throw in “meaning” (i.e. intentionality, certainly a difficult problem) and finally “inference”. It’s clear though that at least some kinds of inference are algorithmic, and we perform them continuously without conscious awareness.
Again I meant the representations. I note here that you have smartly avoided stating that meaning can be algorithmic. At least in the case of meaning (and, I would add, of purpose) the inevitable connection to conscious representations is really self-evident.
About inferences, I am not sure they can be made algorithmically, while I would definitely endorse that possibility for deductions. But again, the algorithm just computes what is computable. But the appreciation of the meaning of any deduction or inference needs a conscious representation.
Again you have connected consciousness with intelligence, and intelligence with various and sundry mental attributes and abilities, but I can’t make out a coherent mapping here.
I hope I have made more clear that I have only given simple and consistent definitions, and not a general mapping of what consciousness does or of how it does it. Clear empirical definitions are necessary for the ID inferences. A general mapping is not.
It’s the second. I have defined intelligence as a set of particular conscious representations, so it is a logical deduction that non conscious entities (correction agreed) cannot be intelligent.
Agreed. But, as I said, the ID inference does not require a general theory of intelligence or of consciousness, just a good empirical description of what they appear to us.
I agree with Green that ID requires dualism; this is precisely why I object to the claim that ID rests upon empiricism.
I don’t know. I don’t even like very much the concept of dualism. ID needs only the definitions I have given above, and nothing else in terms of theories of consciousness, intelligence and free will. IOW, ID needs the simple acknowledgement that conscious representations and processes are part of reality, and the empirical recognition of formal properties of the outputs connected to those representations and processes. Nothing more than that.
If that is dualism, then ID needs dualism. But I don’t think that is the case. After all, physics has harboured all kinds of separated principles (like matter and energy, at least before relativity), without being considered specially dualistic.
But again, it’s a matter of how one wants to use words.
Finally, I can only add that I can give you the full ID theory using only the above operational definitions, and common scientific reasoning.
Green:
Kindly, observe that over the past two days, a classic reference on degree of control over behaviour and over thought and will — in the context of moral responsibility and the path of virtue — has been in play, but has been overlooked.
I therefore suggest you scroll up to 110, 113 and 154, observing on the challenge of the entangling, addicting/ habituating, corrupting and enslaving nature of vice. On fair comment, this is an all too apt description that was originally plainly autobiographical, and is descriptive of extremely common experience with the morally tinged challenges of life.
That classical reference goes on to describe how, by transforming encounter with the Transcendent, we may be empowered to gradually overcome the entanglements and addictions that enslave us to vice.
This, too, is an abundantly common experience, with many celebrated cases beyond Paul; Augustine and Francis of Assisi, Blaise Pascal [note his November 23, 1654 “Fire” vision], Wilberforce, Chuck Colson and Mother Theresa come easily to mind.
This experience-based pattern — there are literally millions of examples, across thousands of years and spread across the whole world (some of them being pivotal to the flow of history of our civilisation and world) — shows a very different, and obviously empirically anchored view on the self, the will, the transcendent, and the question of responsibility on motives, attitudes, thoughts, intentions, behaviour and the struggle towards consistent virtue.
I find it absolutely telling that when I look at not only your repeated characterisation of “libertarian free will” [cf my remarks at 109, 137 and 145] but also when I look into too much of modern discussion, there is little or no reflection on this rich, widespread, easily accessible body of empirical knowledge. This is nothing less than en-darkenment in the name of enlightenment.
It is fully in the bull’s eye ring of the censures implicit in Plato’s Parable of the cave where people are induced to imagine that darkness and manipulation are light and truth.
In other words, I here have a vast body of experience based reason to dismiss the whole exercise of trying to find fault with “libertarian free will,” as an abject failure of scholarship to engage with patent and easily accessible facts; an altogether too common pattern in our time. And, when I find as well, the sort of caricatures of how morally and epistemically responsible freedom of will is normally or commonly understood by those who accept it, I become even more unhappy with what I am seeing.
For, this begins to look uncommonly like the fallacy of the closed, ideologised mind, locked into selectively hyperskeptical, radically secularist and often a priori materialist academic schools of thought that are unfortunately contemptuous towards and inexcusably arrogantly dismissive of whole swathes of reality as people experience, observe and reflect on it. (Just think about how, when survival of the community as a circle of the civil peace of justice is at stake, the Law has had to reckon with responsibility and action in light of responsibility. I am going to give a lot more weight to the facts that have forced the law to think like that for thousands of years, than to academic theories that simply do not match well with general experience, or for that matter, my own experience with the Transcendent.)
I think you will find it clear enough that we find ourselves more often consenting to the true and the right than living by what we assent to and even sincerely pursue. Similarly, we find it easier to decide to walk by the truth and the right than to live by same, but by encounter with the Transcendent, we can grow in the path of virtue. And so, our primary moral challenge is to decide the path we follow. As St John records in yet another classical C1 text:
So, the question is not over how much we can by ourselves control even our own impulses and behaviour, but our capacity to choose the path we will walk: light, or darkness. In that context, agency is capable of cognitive actions like awareness, perception, conscience, knowledge, judgement, decision and initiation of a path of action; however much we may stumble in the way. And, many of these actions will show themselves in directed contingency in the empirical world — purposeful arrangements and organisation of objects towards goals. For instance, I decided to type out this comment, giving intelligent direction to many entities that could as easily have been given another configuration.
Similarly, while all of this has been going on for days now, I have struggled with experts to help find a way to properly formulate Montserrat’s new constitution. My son has been busy exploring on the Internet to help guide him in experiments on designing and developing a bow and arrows using resources in our environment. Having tried a first effort using guava wood, he has concluded the bow was too weak; so now we are working on a sturdier device using Leucenia [sp?]. And as I have driven back and forth with family and friends as well as colleagues, I have been ever conscious of my duty to drive “with due care and attention,” i.e. my duties of care on the road.
None of these make sense apart from our ability to be sufficiently responsible, by virtue of being sufficiently free. Our actions, then, are not merely the passive, rigidly predictable flow of dynamics from some preceding mental and/or physical state, nor are they random impulses flowing from molecular chaos or the like, but a rational, responsible, volitional, judgement driven process of the unitary, conscious, intelligent, enconscienced deciding self. Yes, we are first, self-directing causes in ourselves, but that is not an incoherent concept.
Only worldviews that accept that as a base reality make coherent sense within themselves and match the facts of the external world. (Again: whether or no some will acknowledge it, only if I am free enough to see and follow the way of reason and truth for myself, am I able to be rational. There is a profound irrationality in all species of determinism, whether the controlling forces in view are external or internal or both.)
In that context, we do not need to have any great exploration of the metaphysics of agency to recognise that it is a real force in our world, and that intelligences often leave characteristic traces when they act. Digitally coded, functionally specific, complex information and associated organisation and conventions are a classic cluster of such signs. And, we may freely study the object and its signs to infer to the causal process per empirical observation: directed contingency.
In closing, even as I type this comment, I am painfully aware that between my dyslexia and imperfections as a typist [why is it that I so often find myself inverting letter order . . . ], even this comment probably does not entirely reflect what I would wish. But, I am still responsible for it and it still expresses dFSCI, down to my choice to use UK style spellings not US style ones; despite the annoying problem of spell checks that so often fail to deliver on the promise of recognising UK spellings.
And that example in the micro concretises and reflects the issues in the macro.
GEM of TKI
F/N: Re AIG:
This is an error maintained in the teeth of abundant correction, and serves the rhetorical purpose of a turnabout confusing accusation in the teeth of the problem of evident imposition of a priori materialism as a criterion of being “scientific.”
In fact, the design inference is patently empirical in foundation, focus and methodology. But, since it provides empirically based evidence that points in directions uncomfortable for ideological materialists and their fellow travellers, it is commonly objected to speciously based on strawman caricatures. This happens to be one of them.
1 –> It is a commonplace fact of life that causal patterns routinely trace to mechanical necessity [a heavy object falls if it is dropped], chance [if it is a fair die, it tumbles to a value essentially at random] and directed contingency [we can set a die to read what we want, turning it in our fingers].
2 –> Natural law-like regularity empirically marks out necessity, stochastic contingency marks out chance, and several signs of directed contingency mark out design or art. The UD glossary entry on ID discusses this and can easily be looked up.
3 –> Certain particular signs are commonly found in cases of design, and they can be identified as reliable, e.g digitally coded, functionally specific complex information, a particularly important subset of complex specified information.
4 –> To wit, in every case where we directly observe the causal process, dFSCI traces to directed contingency. So much so, that attempted counterexamples are often obviously strained and blatantly irrelevant. The attempt to suggest that an intelligently designed PC, using an equally designed program, is spontaneously generating dFSCI from undirected chance and blind mechanical necessity is the most obvious such illustration.
5 –> We have in hand a reliable causal pattern and characteristic signs. When we see the sign otherwise than where we directly observe the causal process, we have every epistemic right to infer to the signified process, directed contingency.
6 –> AND to use these signs as credible indicators of the presence of intelligent designers [the observed source of designs] at the relevant places and times.
7 –> Therein lieth the rub and the motive for many an objection. (Pointing to motives is appropriate when rebutting objections that ever so often indulge in motive mongering. Sauce for the goose . . . )
8 –> One strained objection is that the only designers we observe are embodied humans. But plainly the source of the dFSCI is not the embodiment but the intelligence and knowledge, as the creation of PCs aptly illustrates and more generally the widespread pattern of expertise.
9 –> Another is the attempted reduction of intelligent agency to chance and/or mechanism with deterministic cause-effect chains driving out freedom of intention and action.
10 –> This falters on the simple observation that to object, the objectors must use directed contingency to create clusters of symbols according to rules of communication in languages, which they assume will be reasonably accurately understood and acted on by readers they hope to persuade.
11 –> In short the objectors only succeed in providing further evidence that dFSCI is produced by design, and that it lives in a context of an assumed world of understanding intelligence that is not simply being programmed like a PC. The objection is inescapably self-referentially incoherent.
12 –> Not that that will stop determined objectors, but it will expose their reductio ad absurdum, assuming, exemplifying and using what they would so earnestly dismiss.
_________________
GEM of TKI
PS: Ouch on a mangled link, the PC cuold not understand what I INTENDED ot say, unlike teh intelligent reader [the typos are deliberate]. Also, at certain times and places I have used US style spellings; i.e. this is a choice that is partly habitual.
PPS: I think Plato’s remarks in making his design inference in The Laws Bk X, are highly relevant. I excerpt, inviting us to watch the exchange between the Athenian Stranger and Clenias as the former answers the evolutionary materialists circa 400 BC:
___________________
>> . . . [Hence of course, first cause]
Cle. Exactly.
Ath. Then we are right, and speak the most perfect and absolute truth, when we say that the soul is prior to the body, and that the body is second and comes afterwards, and is born to obey the soul, which is the ruler?
[ . . . ] >>
____________________
I am of course not citing Plato as though he were a decisive authority — as I was once unjustly accused of in this blog by a commenter.
Instead, I invite us to look afresh at what we may have overlooked.
GEM of TKI
—Green: “I’m surprised you can make claims like this which are so obviously fallacious. Thank you to the others on this thread who have pointed out that none of the above follows from determinism.”
I don’t know what others have said about that point, but it should be obvious to anyone who will think it through.
[A] A determinist argues that both determinists and non-determinists are determined to believe what they believe. However, determinists also argue that advocates for free will are wrong and ought to change their view, which implies that they have the free power of will to do so.
[I think it was C.S. Lewis (someone you should read along with G.K. Chesterton) who made the following point]
[B] If determinism is true, there would have to be a rational basis for that position. On the other hand, if determinism is true, then there cannot be any rational basis for thought since all thought would be controlled by non-rational forces. So, if determinism claims to be true, it must be false.
[Indeed, rationality itself requires free will]
—“I have yet to see a libertarian account of free will that can ground rationality better than determinism.”
Here you have simply ignored the point [rationality requires free will and free will requires rationality] and reverted back to your talking points.
In any case, I made a bit of an extended argument @193, inspired much by the work of Aquinas (someone else you should read.) There are plenty of other good accounts that you seem not to know exist, none of which have been included in your sources for convenient reasons on their part.
If you haven’t found good arguments, it may just be because you don’t know where to look. In keeping with that point, you are basing your judgments on the wisdom of anti-free will partisans and have not sufficiently explored the work of better writers who are capable of refuting the nonsense of compatibilist/determinism in short order.
Indeed, you seem to recognize the fact that the Christian Scriptures, which you claim to believe in, are at variance with your compatibilist/deterministic philosophy, which you also claim to believe in.
–“Clive (commenting on aiguy’s claim that ID assumes dualism)
“This is a non sequitur. Design detection does not require dualism whatsoever.”
–aiguy: “In that case, what is it that directs “directed contingency”. What is it that guides nature when nature is not “unguided”? What is it that allows processes to “see” when they are not “blind processes”?”
Clive’s point is precisely correct. ID, as science, does not speak to the issue of dualism. To detect the “effects” of design is not in any way to argue on behalf of a transcendent cause but only to argue on behalf of a specific preliminary cause, which could, in principle, be material.
To be sure, a sound “philosophy” of design does, indeed, require moderate dualism since matter cannot be its own first cause, which itself must be non-material, one, personal, eternal, and self-existent. That means that any ID scientist who rejects dualism is a terrible philosopher. ID science, as science, however, does not depend on theistic dualism, though it is clearly consistent with it. To sum up: Good philosophy requires dualism and a first cause; ID science requires only a preliminary cause but it does fit nicely with theistic dualism.
The key words are these: “Consistent with” does not mean the same thing as “depends upon.” If only Judge “copycat” Jones had understood that point.
Like Clive said, though, nice try.
SB: Again, well said. G
That should read “study” not [detect] the effects of design.
#218
However, determinists also argue that advocates for free will are wrong and ought to change their view
Not if they are compatibilists.
MF: Cf. above; it is clear they are reinterpreting what freedom means. G
StephenB & Green:
Green said: “I’m surprised you can make claims like this which are so obviously fallacious. Thank you to the others on this thread who have pointed out that none of the above follows from determinism.”
StephenB said: “I don’t know what others have said about that point.”
That’s because you obviously have not read others’ comments on this issue. Kindly read 174, as it addresses the fallacies in your claim about Green as a determinist “at every turn.”
gpuccio,
It’s fine if you would like to define “thought” as only conscious experiences. We then will need another word for what it is we do when we generate plans, solve problems, come up with ideas, and complete other mental tasks without conscious awareness.
So you are suggesting that we consider unconscious processes to be conscious? This would seem at first glance to confuse the discussion…
You speak of “representations”; I trust you are aware that this is a point of great contention within philosophy of mind. Do mental representations exist? If so, what are they? How do they come to refer to other things (this is the problem of intentionality).
Anyway, I still don’t see how we can agree that mental states can never be unconscious; it seems to me that since a great deal of mental functioning proceeds unconsciously, if mental functions involve states and representations, then these would be unconscious too.
I don’t think that equating intelligence with representations is a good definition.
Representations don’t do anything; they are used by reasoning processes in order to reason about the world (whatever part of the world is being represented). I don’t think representations can be conscious, either; rather it is the conscious mind that employs representations to think about the world.
However, we have no empirical data about mental representations. This is what I mean when I say we have no theory of intelligence. We cannot observe, nor figure out, how we think. Some people believe we should understand thought in terms of representations, others disagree.
Certainly inferences are performed algorithmically, including inductions and even abductions.
StephenB: “. . . not ‘blind processes’?”
ID is offering an intelligent cause for first life (a “Designer”). The only intelligent things we know from our experience are themselves complex living things, rich in CSI. The Designer is either this known type of thing (an intelligent life form), or it is something unknown to our experience (something that is not a complex physical life form but somehow possessed of the mental and physical abilities of human beings – and then some).
If the Designer is another life form, then ID really isn’t a very interesting theory after all. (If we posit an extra-terrestrial life form, we might as well assume we are their descendants rather than the products of their engineering efforts!). This leaves the speculation of an unknown type of being who created life in an unknown fashion.
Dualism has its own problems, of course. Since CSI doesn’t seem to arise without mind, and mind doesn’t seem to arise without CSI, one could choose to posit either mind or mechanism as the first cause. My vote is that it got started in neither of these ways, but rather that we are missing something quite fundamental in our comprehension. In any event, all this is deep in the domain of philosophy and theology, and well past anything we can support empirically.
Ok a couple of quick points:
Drew Mazanec wrote:
Recall that libertarian freedom means that there are no antecedent causes or conditions sufficient to determine an agent’s action. That means no desire, motivation, inclination or anything compels the agent to act. These things may have an influence on the decision, but they don’t determine it. So if libertarian freedom exists, then it is possible for an agent to act against all his desires, beliefs, motivations, and so forth. I think this is absurd, and I don’t think God has this kind of freedom. I don’t think that it is possible for God to act in a way that is not in accord with his character.
If God had libertarian freedom, then it would be possible for him to do evil. But Titus 1:2 says that God cannot lie. If “cannot” here means not only that God does not lie, but also that it is metaphysically impossible for him to lie, then God does not have libertarian free will.
With regards to the comments by Kairosfocus, I have been reading your posts, and the verses you cite from Romans and so forth, but I haven’t responded because I haven’t seen anything inconsistent with determinism in them. Many of the verses you cite seem to say that humans are a “slave” to their sinful nature, or that they are a “slave” to righteousness. How do these verses prove libertarian free will? I have yet to see a verse in the bible that affirms libertarian free will.
With regards to a couple of comments by StephenB. SB wrote:
Nowhere have I said that compatibilism/determinism is at variance with Christian Scripture. What I said was, determinism and moral responsibility are at variance with each other philosophically. I think the determinist definitions of freedom and moral responsibility are completely consistent with Christian Scripture. I think Scripture itself teaches them both, but it is a paradox because we don’t know how to put them together.
(Incidentally, libertarianism doesn’t help solve this paradox, because it too is unable to ground moral responsibility. No-one here has yet given me an argument to the contrary; all I’ve had from kairosfocus et al. are assertions that it can. So libertarianism isn’t a solution to this paradox; neither determinists nor libertarians can ground moral responsibility in a philosophically coherent way).
“ID is offering an intelligent cause for first life (a “Designer”). ”
ID is offering the detection of design, not a designer. If you want to argue with the design hypothesis (which you most certainly seem to want to do), then argue with the design hypothesis. Is design detectable? Is it not? What are the observable evidences that it has been detected? What are the alternative explanations which can be offered to explain what is observed?
Stephen (#219).
very well said.
gpuccio @ 207:
Thanks for your detailed description of your opinions, that illuminated the background of your position!
You basically reject determinism on religious grounds. It is your religious view that cause and effect cannot be applied to a “transcendental self” and its choices. Although, in a logical philosophical worldview, this leads to the unavoidable conclusion that these choices by transcendental selves are then in the last consequence uncaused, I respect your religious convictions. But maybe you can also see why this position is, from a logical philosophical standpoint, not very convincing, since the free will of your transcendental agents amounts to arbitrariness.
SB:
It doesn’t imply that they think opponents have “the free power of will” to change their mind in the libertarian sense. Minds can be changed in an entirely deterministic fashion. I can be exposed to evidence, that evidence can overwhelm my previous convictions, and thus I will be inclined to change my mind. All perfectly deterministic.
markf (#222):
Yes, but compatibilists are not advocates of free will. At most, they are advocates of compatibilism! 🙂
(if you are Mark Frank, we have already discussed that in the past)
aiguy (225):
So you are suggesting that we consider unconscious processes to be conscious? This would seem at first glance to confuse the discussion…
No, I am suggesting that we consider unconscious processes to be non-conscious and non-mental processes, and that we consider subconscious processes as conscious at a different level of consciousness. IOW, that we do not restrict the concept of consciousness to distinct representation in the waking state, and that we do not extend it to non-conscious processes.
More later, now I have to go…
If you consider the notion that perhaps not all humans have free will, debates that contain dialogue such as we have here make much more sense.
Generally, when a person tells me that they do not have free will, or that they have compatibilist “free will”, I accept that and move on – for all I know, it is true, and a rational assessment of the statements they utter certainly supports their assertion.
My world-view doesn’t insist that all humans have free will.
#232 gpuccio
Yes I am Mark Frank. For some reason my old ID stopped working and I had to change.
Of course compatibilists are advocates of free will – that’s what it means. You may think we are wrong – but we know what we are advocating!
Just as a side note to MF and others: I think it’s helpful to be specific when we talk about free will. Many determinists believe in free will – but free will defined as ‘the ability to act upon one’s own desires’ (my position). This is not free will as the libertarians want it, though. Libertarians want free will to mean ‘the ability to do otherwise in an absolutely unconditional sense’. So to distinguish these two types of free will, it probably saves confusion to just use the terms ‘libertarianism’ (or libertarian free will) and ‘determinism’.
William J. Murray, aiguy, and others arguing contingency and determinism — all I can say and contribute is to repeat what I already wrote last.
As far as contingency goes, before complicating and muddying the matter with new words and terms, all would be wise to ponder Aristotle’s words, especially in Book II of his Physics, where Aristotle wondered why the earliest physicists were determinists who either ignored chance, or wrote strange contradictory things about it — like Empedocles, whose statement — “that most of the parts of animals came to be by chance” — could be seen as the origin of biological evolutionism in Western thought.
Here are a few interesting excerpts. Please note how Aristotle tied chance to moral action and deliberate conduct:
“Thus to say that chance is a thing contrary to rule is correct. For ‘rule’ applies to what is always true or true for the most part, whereas chance belongs to a third type of event. Hence, to conclude, since causes of this kind are indefinite, chance too is indefinite. … ‘good fortune’ or ‘ill fortune’ be ascribed to them, except metaphorically…
The spontaneous on the other hand is found both in the lower animals and in many inanimate objects. We say, for example, that the horse came ‘spontaneously’, because, though his coming saved him, he did not come for the sake of safety. Again, the tripod fell ‘from spontaneity’. These ‘spontaneous’ events are said to be ‘from chance’ if they have the further characteristics of being the objects of deliberate intention and due to agents capable of that mode of action. This is indicated by the phrase ‘in vain’, which is used when A, which is for the sake of B, does not result in B. …”
gpuccio, not that we shouldn’t investigate and argue the nature of morality, but you have nicely summarized @202 what our human attitude in this sphere ought to be — humility. If not any other reason, but for the sake of the complexity of all this.
I would, however, add that something else is needed beyond our own inherent or God-given “inner power” to change positively, as you put it. Even this inner power or free will as some call it is often not enough, and that is what puzzles many who contemplate free will and moral change. That something extra is needed, some outside impetus, is obvious to any normal human being who is tempted by all sorts of irresistible things which one cannot resist despite his reason telling him otherwise, precisely as the Scriptures put it. This something extra is what theologians call “God’s grace,” and this topic and debate is almost a proof that such a thing must exist, or none of us would be able to change on our own.
UB,
I think everybody agrees that we see designs (i.e. complex patterns, form and function) in biology. So we don’t need to “detect” that. Rather, we’d like to figure out where these designs came from. Stephen Meyer says that the best explanation for first life is a “conscious, rational, deliberative agent” – in other words, a “designer”.
What I really argue with is not the hypothesis per se, but rather the claim that ID offers a cause known to our experience (which it doesn’t). Maybe some conscious agent created life, and maybe not. But we have no experience of conscious agents that are not themselves complex life forms, and we have no way to empirically ascertain if such a thing could exist, or existed in the past, or if it could actually create the biological systems we see.
Again, I think we all agree that there are complex functional patterns (designs) in biological systems. Again, the question is not if designs exist, but rather what caused them to exist.
I don’t think we have any good explanations at all. Obviously we could speculate that we are descendants of some other life forms – but that isn’t a very good hypothesis (it doesn’t explain first life, and we have no evidence of extra-terrestrial life forms). We could speculate that there are an infinite number (or an astronomical number) of different universes, so there will be at least some universes where vastly improbable events will happen. But that isn’t a good hypothesis either, because we have no way of telling if it is true or not. We could speculate that some unspecified, unknown type of conscious agent, or some unknown, unspecified type of unconscious process was responsible – but both of those ideas are pretty much without content and certainly untestable. There are other ideas we could come up with too, but they’re all just wild guesses… So there is really only one answer we’re left with: we do not know.
AIG @ 226:
UB correctly called you out on this insistently repeated caricature.
In fact — and as has been pointed out umpteen times, with sources etc, and just as consistently ignored — what design theory offers is a methodology for design detection from its empirical traces and consistently observed patterns of cause.
Thus, what ID actually gives us is empirically anchored reason to infer design as a credible fact from its empirical traces.
Designers do come with designs [as shoes normally come with soles], but that is again an observed fact. We do not observe designs without designers, and we do observe designers, noting that design is consistently connected to intelligence. Where, we recognise intelligence from our own capacities that can be easily enough observed.
So far, we have made no a priori metaphysical commitments, apart from that our minds — of whatever ultimate nature — function reasonably well and that the world is reasonably intelligible and not utterly chaotic.
It is equally observed that it is our intelligence and knowledge base — not the specific facts of our embodiment — that explain our ability to create complex designs.
Embodiment may be a good way to effect designs [it helps to have skilled hands], but embodiment as such is not the actual source of the design. Or, the ill-instructed would be leading software makers, and so would our friends the chimps. (Contrary to persistent rumour, trainloads of bananas are NOT a major raw material at a certain facility in Redmond!)
Unless AIG can offer good reason to confine the nature of designers to embodiments similar to ours, he needs to accept that credible evidence pointing to design is an indicator that designers were equally credibly present at the origin of C-chemistry, cell based life that uses dFSCI in the nanomachines of life.
At that juncture, we have no basis for inferring the nature of said designers, especially whether they are within or beyond the cosmos. As has been openly said by modern design thinkers ever since the very first such technical work, Thaxton et al in The Mystery of Life’s Origin.
Going beyond origin of life, and on the fine-tuned nature of the observed cosmos that suits it for C-chemistry cell based life, we have good reason to infer to design of the cosmos that we live in. Such a contingent cosmos then points to a necessary being as its ultimate causal root [even through a multiverse, cf discussion here, section b].
Moreover, such a necessary being will not be material once we allow thermodynamics to speak on the consequences of random molecular motion [the root of temperature] and the way that heat moves from higher to lower temperatures, and similar processes.
Namely, proverbial heat death.
The unity of the cosmos points to a unity as its source.
Summing up, we are looking at an intelligent immaterial necessary being designer, who built a cosmos that is suited for C-chemistry, cell based life. Not very comfortable for materialism, but that seems to be the best explanation for the evident design of the observed cosmos.
[Onlookers, observe how consistently those who object to the inference to design of life avoid addressing the inferences from evident signs of design of the observed cosmos to its credible source.]
But now, we can work forward from that. Once we see that it is a reasonable view to hold on evidence that the observed cosmos comes from such an extracosmic intelligence, then the worries over whether mind and matter can interact or whether immaterial reality is possible or causally efficacious shrink to due — vanishing — proportions.
One has fairly serious grounds to infer to an immaterial cause of the material cosmos we inhabit, so mind as a prior and initiating cause for events in matter is plausible.
But we do not need to have all of that in our background when we discuss the inference from signs such as dFSCI to the signified causal action of directed contingency.
BTW, to design you had better be able to have alternatives and the ability to reasonably freely choose among them, so that the one most likely to work can be freely selected. Otherwise, design is no longer a rational enterprise but a hit or miss affair driven by chance accidents and blind mechanical necessity. Resemblance to a certain popular worldview is NOT coincidental. But we have no good reason to believe that such an approach would give rise to functioning complex designs, on the scope of our observed cosmos.
Intelligence is the only directly observed, known effective source for complex designs.
Even just posts on this thread.
GEM of TKI
PS: Let’s put the bottom-line:
1 –> It is empirically reasonable to infer from reliable signs to the signified causal process of directed contingency.
2 –> Since we exhibit dFSCI in our cells, we are derivative designers, as the cells on which our bodies are based are credibly the result of directed contingency.
3 –> We also live in an observed cosmos that per fine tuning, is in turn credibly the result of directed contingency.
4 –> Which in turn (per the strong empirical association between designs and designers) implicates that there were designers present to originate life and its complex body plans, including ours.
5 –> Constraints on the ultimate designer of a cosmos like ours point to a radically different architecture for the designer, which also opens up the possibilities for Mind — of whatever ultimate nature — to be prior to matter.
6 –> So, we have no good reason to infer that we, or creatures more or less like us, exhaust the possibilities for designers.
7 –> Nor do we have good reason to infer that designers can operate without freedom to select among diverse contingencies towards their goals.
8 –> That is, the existence of design in the sense of directed contingency is a strong empirically based pointer against the idea that freedom to choose and to shape or organise is a delusion.
9 –> If I were an a priori materialist [which BTW entails some species of determinism] or a fellow traveller, I would be worried.
#236 Green
Libertarians want free will to mean ‘the ability to do otherwise in an absolutely unconditional sense’
I appreciate your point – but actually I don’t know what this phrase means – acting on whim? But that is a kind of desire.
Green,
“. . .”
Dostoevsky, Notes from Underground.
AIG:
Here you are redefining the term design from what it means in the context of our discussion.
We already have a cluster of terms that describe and specify what we see in biology and in the technological and literary worlds for that matter: digitally coded, functionally specific, complex information and related organisation.
Design — directed contingency — is the routinely and reliably observed causal explanation of dFSCI.
So, the design inference is that dFSCI [among other empirical indicia] is a reliable sign of design in the sense as just described.
On the strength of that, we infer from dFSCI in the cell to directed contingency as its causal explanation, in inference to best empirically anchored explanation.
So, please do not try to slip in a question-begging redefinition of “design” on us; just as we request that the NCSE and NAS, NSTA etc kindly refrain from slipping in a question-begging redefinition of science per a priori materialism.
Rhetorical slip-ups (or, where calculated, sleight of hand) do not decide matters of fact and reasonable inference on evidence.
GEM of TKI
—Green: “. . .”
According to your philosophy, irrational elements of nature cause the mental states, which in turn, cause the behavior. That means that irrational elements are calling the shots, which means that thoughts cannot have a rational basis.
Also, you have not yet addressed the fact that you claim to believe in Scripture, which acknowledges free will, and compatibilistic determinism, which does not. Do you have an answer for that?
StephenB:
As I said to Upright BiPed previously, I do not think that “irrational elements” are calling all the shots. Causation runs in all these directions:
1) Mental to mental
2) Mental to physical
3) Physical to mental
4) Physical to physical
Only (3) and (4) of these involve irrational entities “calling the shots”. (1) and (2) are cases of rational entities calling the shots.
You might respond: well, your mental faculties themselves arose from physical entities, and therefore they are ultimately irrational. Firstly, I don’t think this follows. Secondly, even if it did follow, I am a substance dualist of the opinion (at the moment) that this rational substance was imparted to me by God. Thus whilst I am not 100% decided yet, I would currently say that I don’t think that my mind is simply an emergent property of the brain.
With regards to this:
Are you saying that Scripture affirms the existence of libertarian free will, and thus asking why I don’t believe in it?
molch (#230):
I think we understand each other’s position, and that’s fine. But why do you call mine “religious” and yours “philosophical”? My position does not depend on any specific religious revelation; it’s my philosophical position just as yours is yours. My philosophy is religious, like many other philosophies. Maybe yours is not.
Why do you assume that a philosophical position should be based only on logic? Philosophy tries to understand reality, and can use any kind of cognitive instrument and of experience. Logic is only part of our cognitive approach.
aiguy (#225):
You speak of “representations”; I trust you are aware that this is a point of great contention within philosophy of mind. Do mental representations exist? If so, what are they? How do they come to refer to other things (this is the problem of intentionality).
If philosophy of mind doubts that mental representations exist, then it’s not for me. They do exist. What they are, and how they refer to other things, is all another problem.
The existence of mental representations is a fact, directly perceived by any one of us. What they are and how they refer to other things are philosophical and scientific problems, and it’s perfectly legitimate that different theories exist about those problems.
But my definitions are based on the existence of those representations, and not on theories about them: therefore, my definitions are completely empirical.
aiguy (#225):
Anyway, I still don’t see how we can agree that mental states can never be unconscious; it seems to me that since a great deal of mental functioning proceeds unconsciously, if mental functions involve states and representations, then these would be unconscious too.
Perhaps I don’t understand what meaning you give to the word “mental”. Please, specify, and maybe I can agree with you, according to your definitions.
According to my definitions, “representations” refers to conscious experiences, and to nothing else. To be clear, a picture has a form which may correspond to some other external reality, but I will not say that it is “representing” it. A conscious mind which perceives a picture is representing that form in its consciousness (that form is in that moment the object of perception of that consciousness). That’s the sense in which I have used the word in my definitions. Again, the important thing is to clarify.
aiguy (#225): “. . .”
You can choose the word you like. I will reserve “think” and “thought” for conscious processes. I think I am in good accord with the usual meaning of those words.
Of a computer, I would just say that it is performing automatic computations. That’s what it is doing, nothing else.
I respectfully disagree with Drew McDermott. Deep Blue does not think. A child who computes 2+2 is thinking, because he is doing that task via conscious representations. I believe that the difference should be rather self-evident.
—aiguy: “. . . not ‘blind processes’?”
What do you mean, “if it is true.” ID doesn’t depend on dualism, period. There is no “if” to it.
Methods are methods, and the ID method does not base its methods on dualistic metaphysics. It bases its methods on observations of facts in evidence.
Concerning the follow up questions, I gather that many ID theorists would say that a designed program directs the kind of directed contingency that you seem to be alluding to.
—Green: “You might respond: well, your mental faculties themselves arose from physical entities, and therefore they are ultimately irrational. Firstly, I don’t think this follows.”
Oh, but it does follow. That is precisely what all the fuss is about, and why I zeroed in on the point.
aiguy (#225):
Let’s try to understand each other. Computers compute. They don’t “invent novel designs”: they compute what conscious beings have programmed. If that programming can generate what will then be considered by humans a “novel design”, well, that’s fine. But computers don’t know what they are doing. They don’t know the meaning of words. They don’t design, because they have no idea of what design means. Computers perform calculations.
If you are familiar with AI theory (and I believe you are), you know that the computation in itself is independent from the machine which performs it. From all points of view, computers are not different from powerful abacuses. From all points of view, computing 2+2 is not different from running a complex piece of software. Indeed, for a computer a complex piece of software is nothing more than a long string of 2+2, or of similar tasks.
The only thing resembling thought in what a computer does is the higher level organization of those simple computations. That’s because that higher level organization is the product of conscious thoughts, and bears that mark.
And of course, when we think we too perform computations. I absolutely believe, like Penrose, that human cognitions are not purely algorithmic, but some parts of the process are certainly algorithmic. I have never denied that point. So, the algorithmic parts of a thought process can well be “written” in computing machines, but not the rest.
—Green: “Are saying that scripture affirms the existence of libertarian free will, and thus asking why I don’t believe in it?”
I am saying that Scripture affirms the existence of self-determined free will, which I defined earlier, and yes, I am asking why, as a self-proclaimed Christian, you don’t believe it.
StephenB:
(a) Firstly: you have ignored the fact that this does not apply to me, thereby conveniently overlooking the fact that a determinist can be rational even if this premise is granted.
(b) Secondly: there is no reason to think this premise axiomatic anyway. Why could God not have created matter such that, given a certain complexity and a certain configuration of neurons, irreducible mental (read: rational) properties emerge? There is nothing illogical about this idea. I think it’s an empirical matter whether or not it is the case. In fact, Timothy O’Connor – the agent-causal libertarian that I have been referencing the whole way through this talk – holds exactly this view. As do other libertarians such as William Hasker.
aiguy (#225):
I don’t think that equating intelligence with representations is a good definition.
It’s a good definition of intelligent representations: representations with a cognitive content. I am not giving a theory of how intelligence works; I am just saying that intelligent processes are connected to conscious cognitive representations, and I call them intelligent representations. If you prefer, I will avoid using the word “intelligence” and stick simply to “intelligent representations”, to make explicit that I am not giving a definition of how intelligence works.
Representations don’t do anything; they are used by reasoning processes in order to reason about the world (whatever part of the world is being represented).
Again, I have not argued about what they do. You have. I will abstain from that, for the moment. I absolutely agree that “they are used by reasoning processes in order to reason about the world”. That’s my idea too. But I definitely will add: “by conscious reasoning processes”. Are you OK with that?
I don’t think representations can be conscious, either; rather it is the conscious mind that employs representations to think about the world.
What’s the difference? When I say that a content is conscious, I mean obviously that it is a content of the conscious mind. It’s not the same meaning as when we say that an entity (like a person) is conscious. In that case, we mean that the entity has a conscious mind. Those are two different uses of the word “conscious”, but you know, human language is context dependent.
StephenB:
“Self-determined free will” – well I’m not sure how you defined that earlier, but if you mean libertarian free will (i.e. the ability to choose otherwise in an unconditional sense), then I have no obligation to believe it because I don’t think Scripture affirms it.
aiguy:
??? What do you mean? That you don’t perceive your conscious representations?
Let me see:
“What is empirical is our behavior – the observable abilities we have to plan, design, solve problems, build things, etc.”
You observe behaviours through sensations. Sensations are conscious representations. If sensations are not empirical, then the observation of behaviours is not empirical.
“Other empirical data we have is what we observe in the brain – neural activity, brain waves, and so on.”
We observe brain waves through instruments which give us sensations. Same as above.
All we know about outer reality comes through sensations. Sensations are conscious representations. But they are not the only ones. Mental states are conscious representations. Pain is a conscious representation. Dreams are (sub)conscious representations. And so on.
What do you mean when you say that “I don’t agree that representations are empirical”? That they don’t exist? That they are not facts? And what is a fact, then?
gpuccio @ 250:
Sorry if I did not define clearly enough what I was aiming at with the distinction between religion and philosophy. I was using philosophy in the sense of a science, where debates between different philosophical positions are carried out on the basis of the logical coherence and deducibility of arguments and positions.
I do agree that your religious position is just as much a philosophical position as mine. However, your position was not arrived at by “critical, generally systematic approach and reliance on rational argument” (as is the definitional case in the science of philosophy), but by religious assumptions. Which I completely respect. I work with a certain set of metaphysical assumptions myself (that is changing when it is informed by new evidence), and although I try to arrive at those assumptions mostly by “critical, generally systematic approach and reliance on rational, logical argument”, I realize that there are always some assumptions in the mix that have more to do with my desires than with rationality and logic. I merely wanted to point out the difference. No offense intended.
#248 Gpuccio
Hi Mark. I think we can agree that the free will of compatibilists is a completely different concept from the free will of “libertarians” like myself.
aiguy (#225):
However, we have no empirical data about mental representations. This is what I mean when I say we have no theory of intelligence. We cannot observe, nor figure out, how we think. Some people believe we should understand thought in terms of representations, others disagree.
We have the representations themselves. They are empirical data.
Indeed we have many different theories of intelligence, but I can agree with you that none is satisfying. That’s why I have not given a theory of intelligence, but only empirical definitions based on observables.
We can observe our thoughts. I am not trying to figure out how we think. And I am not saying how we could understand thought. I am not trying to understand it at all.
aiguy:
Certainly inferences are performed algorithmically, including inductions and even abductions.
You say it, and it’s fine for me. I just said “I am not sure” for inferences, while I am sure for deductions. That is still my position, but if you know that it is possible to perform inferences algorithmically, I have no problem with that.
What about understanding meanings?
MF, I’m not sure how you’re defining ‘free will’ – but my determinist definition (the ability to act on my desires) is certainly different from the libertarian definition. I’m just throwing that out there because I don’t want to cause confusion over anything I’ve said. 🙂
aiguy (#225):
The part you agree with is enough for my discussions on ID. I did not expect you would agree on the rest. And I agree with Penrose that only some things cannot be done algorithmically. It remains to be seen how many. Please remember that Penrose’s argument is about mathematics, indeed about arithmetic, which is supposed to be the temple of algorithmic processes.
aiguy (#225):
There is some confusion here. I agree that we have no final solution about mind body dualism. That does not mean that we don’t observe mind and body as two different contexts. That’s enough for me.
If any theory of mind and consciousness is unsatisfying (which would include strong AI and materialism, together with dualism), we can just the same describe what we observe and make inferences about what we observe. That’s all ID needs.
Conscious representations are observed subjectively. Outer events are observed objectively, but through the senses, which are subjective representations. That is some dualism, and it requires no further philosophy to be obviously true.
These are facts. These are the basis for the ID inference.
ID needs no “untestable claims regarding the nature of mind”. It just needs a simple and appropriate description of what we observe in the mind.
Mark:
I don’t want to fight with you about that. But I never said that there were definable differences in the behaviour. I said:
“I think we can agree that the free will of compatibilists is a completely different concept from the free will of “libertarians” like myself.” (emphasis added)
Since when is a difference in concepts a difference in observable behaviour?
For instance, just using Green’s terminology, we libertarians believe in PAP and incorporate PAP in our concept of free will, while you compatibilists don’t.
That’s a difference, IMO. Or do you believe in PAP?
Green (#267):
Thank you for the clarification. I appreciate it.
gpuccio,
Thank you for your comments. I think we’ve clarified our differences.
molch (263):
And no offence taken at all! I am fine with your concepts in 263 (I would state some things a little bit differently, but surely it’s not worthwhile to deal with minor points here).
KF,
If you would like to refer to what we see in biological systems as FSCI instead of “designs”, that’s fine. I don’t care what words we use as long as we agree on definitions.
If you would like to refer the cause you propose for FSCI as “directed contingency”, that’s OK, but I don’t know what you mean by that. Do you mean, as Stephen Meyer does, a conscious entity? If so, then we disagree about the warrant for that conclusion. If that is not what you mean, then the term doesn’t mean anything at all to me, but I’m not interested in pursuing it.
aiguy:
Me too. Thank you.
StephenB,
First of all, I am not the one alluding to this mysterious something that guides nature, directs contingencies, and enables processes to “see”. These are the words used with great frequency by ID advocates themselves. It is quite right for me to ask what it is ID is proposing as the explanation of complex form and function. Saying that “blind processes” and “unguided nature” can’t produce FSCI is one thing; saying what sort of process is not “blind”, and saying what sort of nature is “guided”, is quite another.
So you finally provide an answer, which is “designed program”. I have no idea what this is supposed to mean! ID proposes that FSCI in biology is the result of “directed contingency”, and when I ask what is supposed to be directing these contingencies the answer is a “designed program”. What does the “designed program” mean here? Does it have anything to do with conscious thought or not?
aiguy:
Just to be clear: for ID, the origin of biological information (and of any kind of CSI) is a conscious intelligent agent.
gpuccio,
Thank you very much! I do appreciate your clarity.
ID advocates seem to equivocate on this point (UB, GF, StephenB, etc.) but I don’t see any other way to make sense of ID’s claims aside from assuming that ID is proposing:
1) The cause of FSCI in the first organisms was a conscious entity
2) This entity was not itself an FSCI-rich organism
3) Consciousness is a causal factor in the universe and exists independently of living bodies
Do you agree that ID entails these things?
AIG, #239
I’m not certain what planet you are living on, but on this planet the observed “complex patterns, form, and function” are only allowed to be labeled as “apparent design” by the tribal power of an academic edict. These patterns cannot under ANY circumstances be the result of an actual design. Indeed, if you utter such heretical ideas, your university will erect a splash page in front of your department’s website so that they may (at the very least) malign you (if, that is, they have been so unfortunate as not to be able to run you off). The others are simply maligned and run off. Of course, the net effect is to keep your mouth shut within the high walls of academic freedom.
So for you to say with such ease “Ah Design, of course Design…” is divorced from reality. Not only that, it is a careless insult to those who have put their reputations and livelihood on the line in order to voice an educated opinion about their own professional disciplines. So now that we are clear as to which sides we are arguing from, let’s back up and put into play the reality of the situation:
Now I can answer you. My response is that you are completely correct, insofar as we don’t need to detect complex patterns, form, and function. What we need are explanations that are actually suited to explain the patterns we see driving biology. SOME of these patterns have a singular entry into our knowledge of causes. We don’t find, and have not found, multiple reasons for SOME patterns to exist. When we see them, they ALWAYS come from a singular source. Moreover, we have studied them relentlessly from a variety of disciplines, and we understand the characteristics surrounding them, and we understand why they come from a singular source.
Therefore, ID has a very narrow thesis which is causally adequate to the observable evidence. Our opponents on the other hand, arguing that design is only apparent design, have an explanation that is not causally adequate by any stretch of the imagination, or should I say, is only adequate by nothing more than a stretch of the imagination. Knowing this very well, your response so far is to not argue against the actual merits of ID, but to instead tie some garnish to it, so that you may argue against that instead. My job is to point this out each and every time you do it.
Full stop. In its primary formulation, ID does not posit any attribute to the designer other than the ability to create the patterns we observe. The reasons for this are appropriate to the evidence – because the patterns are all that is accessible to us. Anything beyond that may be interesting, but it does nothing to change the fact that the patterns exhibit the signature of purposeful design (the central ID thesis).
Since you are obviously not suggesting we have no experience with conscious agents, you are left to repeatedly insist that we have no experience with conscious agents who also happen to create life on planets like earth? For you, this means that we have no reason to suggest we have any experience with what causes the patterns we observe in biology BECAUSE we observe them in biology.
The fact that we find these patterns in biology has no bearing on the validity of our observations. We know the source of these patterns in each and every other domain in which they have been discovered. Our inference is as valid as it could possibly be.
Honestly Aiguy, it’s hard to imagine such an illogical position being taken by what is an otherwise intelligent person. Where else does such an assessment come into play? Thank goodness you are not a fire investigator or in another such discipline. Defense attorneys would love you. To hell with the evidence, you want the prosecution to request that the Judge recant himself on the basis he didn’t personally witness the crime.
If we follow your rationale to its ultimate end, we can only offer an opinion about the origin of life on this planet if we first A) observe an agent starting life on other such planets, or B) witness life starting of its own accord. Your desperate need to criticize ID extracts a heavy toll, my friend. When you say ID is wrong, you are also saying that Darwin was wrong, Sagan was wrong, Monod was wrong, Dawkins is wrong, Mayer is wrong, the NCSE is wrong, all materialists are wrong along with every biology department with standard issue textbooks throughout the world.
Is that your position? If it is – and it must be for the reasons you’ve argued here – then please tell me one thing. Make a case why anyone interested in origins should care to follow your ideas. They end before they begin.
aiguy:
Well, personally I would affirm those things, but from a strict ID point of view, I would say:
1) is true, but incomplete: the cause of new FSCI in all organisms is one (or more) conscious entity. ID is not only about OOL, but also about evolution of life.
2) is true only if the designer is (directly) a spiritual god. That is really not the only possibility, although it is certainly the simplest one.
3) is certainly true for the first part. The second part depends: it is true for a spiritual god, not necessarily for any other agent.
For instance, there could be in reality agents who have bodies which don’t correspond to our concept of physical body. I am not sponsoring the idea, just saying that we must really be open to all possibilities. From a philosophical point of view, I would say that only God exists independently of a living body.
But my personal opinion is that we must stick to the scientific method:
1) first detect the formal characteristics that allow us to infer design (CSI);
2) then verify the presence of CSI in biological information, and falsify all present alternative theories about a non-conscious origin of that biological information;
3) then infer a designer as the origin of biological information and build a design theory for that;
4) finally, try to build theories about the other important aspects: the nature of the designer, the modalities of implementation of the design, the characteristics of the design, and so on. But all these things must be done in a strictly empirical way, starting from facts and good reasoning, and without any ideological prejudice.
—aiguy: “First of all, I am not the one alluding to this mysterious something that guides nature, directs contingencies, and enables processes to “see”.”
First things first. You began this discussion by making a claim that was, in fact, wrong. ID science does not depend on dualistic metaphysics in any way. There is no way to extract metaphysical speculations from “specified complexity” or “irreducible complexity.” Do you, or do you not, acknowledge your error? Once that point is settled, we can move forward. If not, then I need to spend more time on the point until you get it.
UB,
The issue I’m concerned with here is what distinguishes “actual design” from “apparent design”. Gpuccio was forthcoming enough to state outright that ID proposes a “conscious entity” as the cause of biological systems. This is also the position of Stephen Meyer. Is this your position as well?
You are being very coy about this “singular source”, but I’ll hazard a guess at this point… you are talking about human beings. Is that right?
Now, you can generalize your findings and say this “single source” is “living things” or that it is “complex organisms” or that it is “intelligent agency” or that it is “things with brains” and so on… but in terms of our actual observations, this “source” is human beings. Whatever else is true of human beings, it is human beings that you are talking about, and nothing else.
I will repeat what you just said: In its primary formulation, ID does not posit any attribute to the designer other than the ability to create the patterns we observe.
In that case, ID is saying the following: The patterns we observe were caused to exist by the ability to create the patterns we observe.
Read that again, UB. According to your definition, the theory of Intelligent Design says absolutely nothing! It is like saying you have a theory that explains sunspots: Sunspots are caused by that which creates sunspots. Or a theory that explains protein folding: Protein folding is caused by that which can fold proteins.
Hopefully you will join the rest of us in agreeing that these sorts of theories are no more than tautologies, and do not actually say anything that adds to our understanding of anything. In order for ID to have any content at all, it actually must say something about the cause of these patterns besides that it is able to cause these patterns.
What???? You just got through saying that ID does not posit any attribute to the designer other than the ability to create the patterns we observe. But now, all of a sudden, you are making a completely different claim!!! Now you are saying that the cause of these patterns is purposeful!
That’s just fine, of course – you can make any claim you’d like to make. But if you keep changing your mind our debates will simply go around in circles. So now your statement should read as follows:
The patterns we observe were caused to exist by the ability to create the patterns we observe, and that the patterns were created purposefully.
Ok, now we can talk about what you mean by “purposefully”. Do you think that something can be purposeful without being consciously aware of it? If so, then I’m not sure what you mean by “purposeful”. If not, then wouldn’t you say you are also claiming that the cause of these observed patterns was also conscious?
The only things we have experience of to which we attribute consciousness are complex, FSCI-rich life forms (viz. human beings and perhaps some other animals). Do you claim we have experience of any other sort of conscious agent?
For me, this means that if ID posits a known type of cause for the patterns we observe in biology, and it also posits something conscious, then it must be talking about the only known type of cause that is conscious and can create similar patterns, which would be human beings. Since it makes no sense to say human beings caused the first life, then ID must not actually be offering a known cause at all. Instead, ID is speculating that there is a very different sort of entity that exists – one which is not itself a complex life form but still somehow has the mental and physical abilities that life forms have.
This is very far outside of our experience. In my experience everything that I think is conscious (or intelligent) is a complex living thing. Have you ever seen anything which you think is intelligent or conscious that wasn’t a complex living thing?
This is funny. Imagine I was a fire investigator and I reported that I had figured out what was responsible for a fire:
AIGUY: I have decided that this fire was set by something that was intelligent, but it wasn’t a living thing.
See? Nobody ever infers “intelligent agency” as the cause of anything, UB. Never. We infer specific sorts of living things. The watch was built by a human being. The hive was built by bees. The dam was built by beavers. The mound was built by termites. The nest was built by a bird. The web was built by a spider.
We have no experience with anything that isn’t a living thing that still builds these sorts of things. If you would like to imagine some other type of thing that could have caused first life, then you are hypothesizing something that nobody has ever seen.
Wow, you seem upset. Relax. I’m not desperate, and you shouldn’t be either.
Anyway, I’m not saying these folks are wrong at all. None of them think that ID is science either, so as far as that goes I’m in complete agreement with all of them.
gpuccio,
Right – that’s what I meant too.
Either (2) is true the way I put it, or ID fails to explain the origin of the first FSCI-rich organisms. Correct?
Likewise, either (3) is true the way I put it, or ID is merely saying that life on Earth came from life elsewhere (either by engineering or by biological reproduction).
StephenB,
I think it does, but it’s hard to tell because so many ID proponents have so many different ideas about what ID says.
I understand you are denying that ID is predicated on dualism. Fine. In that case, you believe that intelligence may (or may not) be another word for physical cause. If that is the case, and dualism is false…
—Green: ““Self-determined free will” – well I’m not sure how you defined that earlier, but if you mean libertarian free will (i.e. the ability to choose otherwise in an unconditional sense), then I have no obligation to believe it because I don’t think Scripture affirms it.”
I could provide a hundred examples where the Scripture advances the argument that the will is free. I will just provide ten.
—“I call heaven and earth as witnesses today against you, that I have set before you life and death, blessing and cursing; therefore choose life, that both you and your descendants may live.
[That means that one has the power to choose either life or death]
—“If you love Me, keep My commandments.”
[That means that love is a choice and one can either consent or refuse.]
—“The power of life and death is in the tongue.”
[We can choose life or death by the way we use our words.]
“If you abide in Me, and My words abide in you, you will ask what you desire, and it shall be done for you.
[The decision to abide is a free choice and is not determined]
..”but glory, honor, and peace to everyone who works what is good, to the Jew first and also to the Greek. For there is no partiality with God.
[Good behavior is rewarded. Rewards make no sense from a deterministic framework. Why reward anyone for something that they cannot.”
[Some may choose to quit or choose not to quit]
[Also, to be “temperate” is to choose the golden mean between two opposite extremes. That requires the capacity to resist one’s desires rather than act on them.]
[Why urge anyone to do anything if they cannot make choices that will affect their destiny? How can someone be blamed or be blameless without the capacity to choose good and the capacity to choose evil?]
–“Therefore if anyone cleanses himself from the latter, he will be a vessel for honor, sanctified and useful for the Master, prepared for every good work.”
[One can choose to cleanse himself and prepare, and one can also choose not to cleanse himself and not prepare.]
—“At that time the kingdom of heaven will be like ten virgins who took their lamps and went out to meet the bridegroom. Five of the virgins were foolish, and five were wise.”
[The wise virgins made the right choice; the foolish virgins made the wrong choice.]
—“Depart from me, ye cursed”
[They are cursed because they made a conscious choice to refuse love when they could have made another choice]
AIG, 274:
First, as already pointed out, we have been using “design” in a specific context; so when you introduced a different one, I pointed out that what you were choosing to term “design” has an established, adequate descriptive term in this context. That is not a matter of my idiosyncrasy; it is a matter of clarity.
Now, I see you trying to suddenly suggest that directed contingency is not a clear term.
This, after many, many examples in point have been given, and where to post a comment — even the above — YOU gave [purposeful!] direction to the possible contingencies of ASCII text strings, creating a message in English in response more or less to a context.
Directed contingency, as has been described and exemplified many many times, is about just that: especially, text strings that are:
Such strings routinely appear in language contexts, and in computer programs. You either know this, or should know this. Pardon me, long since.
(You may wish to look at Trevors and Abel here on the subject of string patterns; esp cf. Fig 4 as pointed out previously.)
However, the point of the sudden question comes in your onward remarks: you unfortunately have a persistent rhetorical agenda to infer and project that the design inference is about assuming a dualist philosophical position a priori, and from that to dismiss the design inference.
You have repeatedly been corrected on the point, in detail [including, for instance, how earlier today I again pointed out that the design inference from reliable sign to signified causal process based on directed contingency is an inductive inference to best explanation], but you keep going back to it.
That is what you need to explain, why you keep resorting to a strawman caricature of design thought, regardless of how many times it is corrected.
That suggests a fixed controlling notion — one premised on a SLANDER, BTW [kindly cf the UD Weak Argument Correctives] — and the attempted rhetoric of “gotcha.”
So, first, no: design as used in design theory is about the observed causal process of directed contingency that often — and that is a matter of massive experience including your own as a designing intelligence — creates functionally specific complex organisation and associated information.
And, as was yet again pointed out today, such inference from empirically reliable sign to signified causal process of directed contingency AKA design is an INDUCTIVE exercise on inference to best explanation per reliable and routine observation of the design process in action and its results.
Now, if you wish to reject Mr Meyer’s observation that the directed contingency process of causation that we routinely see creating dFSCI — and that is the particular kind of complex specified information he is dealing with and we are dealing with — is in our general experience empirically associated with the work of conscious intelligent designers, you are welcome to produce an empirical counter example.
Just as design thinkers have put on the table for years and years now as a decisive test and potential falsification.
When we do not see such examples coming forth, but instead rhetorical tactics that try to embroil and entangle us in debates on worldview-level assumptions [even putting definitions in our mouths that do not belong there], that says loud and clear: strawman fallacy.
That is telling.
Sorry, kindly provide well-warranted empirical counter-evidence, or concede that the inductive inference from dFSCI to its consistently observed cause, directed contingency aka DESIGN, is a well-warranted one.
I will go further than that.
Actually, the evidence and inductive inference from signs to directed contingency as causal process, leads to the inference that C-chemistry cell based life is designed. A similar inference on the significance of a complex, finetuned cosmos to support such life, points onward to ultimate design of the observed cosmos by an extra-cosmic intelligence that is credibly immaterial [matter cannot credibly be the necessary being, on heat death grounds] and a necessary being.
That suggests that transcendental Mind is a reality, i.e. that on empirical evidence and inductive inferences connected thereto, dualism is a reasonable — as opposed to irrational or ill-informed — worldview.
Even, theism is a reasonable worldview in that context.
And the recoiling in horror — real or histrionic — of the a priori evolutionary materialist magisterium that has recently tried to redefine science as applied materialism moves me and a lot of others not one bit on that.
All it shows is their ideological worldview level question-begging closed-mindedness, frankly. (Remember, I cut my intellectual eye-teeth on Marxists.)
Pardon me if I sound a bit plain-spoken, but you must realise that after unresponsiveness in the face of literally dozens of attempts at correction, little alternative is left.
GEM of TKI
—aiguy: “I understand you are denying that ID is predicated on dualism. Fine.”
I am doing more than denying that ID methods are predicated on dualism, I am stating, as fact, that they are not–a fact that can be verified very easily. Since you have not yet acknowledged that fact, it seems that I have more work to do. Do you acknowledge it?
StephenB,
Well, StephenB, I’ve already said that different people have different ideas about what ID theory says, what it entails, and what it assumes. You seem to think that there is one, single, genuine, canonical version of ID that everybody adheres to, but I’m not aware that such a version exists.
In any event, I am perfectly willing to acknowledge (as I already have) that you have a version of ID theory that is not predicated on the truth of dualism. So let’s move forward and accept your version of ID in our debate.
Now, please tell me: Since ID theory is not predicated upon dualism, “intelligence” may refer to a causal, libertarian, disembodied consciousness, or it may refer to nothing but physical cause. So I ask again:
Aig @282
And what do you think that is?
And GP is more than welcome to make an inference to consciousness. I agree with him on the point. Just like I agree with others who have argued that it entails foresight. Abel refers to it as volitional agency. I myself argued it under the banner of intelligence. Your response was to immediately pack on physicality – since we know of no conscious intelligent foresighted agent that does not also have physicality. You might as well add in hair color and a neural response to the taste of fat, correct? If not, then why not?
No, I am talking about intelligent agency. And as I have already noted you may add volition, foresight, and consciousness as well. If it suits you, you are even free to make up your own word and provide a definition. Have you ever heard of Karl Popper?
FCSI was in operation on this planet long before humans existed. We came along and noticed it later.
It may not have occurred to you, but I was not giving a definition of ID. Your reformulation was purely rhetorical (and meaningless).
It does. The first is that the “I” in Intelligent Design stands for “intelligent”. You have been repeatedly given attributes that are the causal factors that lead to CSI but you refuse them all. Let’s add it up: You’ve been told intelligence – “no good, we know nothing of what constitutes intelligence”. You’ve been told volition – “no good, we have no idea what intentionality is”. You’ve been told foresight – “no good, we have no way to measure foresight”. You’ve been told purpose, “no good, we have no idea what does and does not have purpose”. You’ve been told consciousness – “no good, we have no idea how consciousness works.”
Isn’t that about the heart of the matter? By the way, since we have no operational definition of Life, should biology class be called off until we do? If not, why not? And also, since there are philosophers who debate over the meaning of many words we use constantly, how can we use these words without confusion? What is the answer to that question?
Are you saying that I may not add an adjective to my comment, or are you saying “WOW I have a new concept to obfuscate?” Let us be honest here, you are not searching for clarity, only tools.
Cha-ching.
I never said differently, but here is what I have said:
To which you responded: “The problem of intentionality (how symbols mean things) is a difficult and contentious area of philosophy.”
To which I responded “Yet the mapping to which I am referring to is hardly a philosophical question. It is entirely observable; in fact, our entire understanding of biochemistry surrounds it being elucidated.”
And you responded with: zilch
This comment has no meaningful connection to the comment it was purporting to respond to.
No one has suggested that the physicality of a life form is a mark of intelligence.
How do you know if something is intelligent?
And you see absolutely no pattern running through your list of artifacts beyond the fact they are all the product of living things? What could it be? What if you noticed that none of them could exist without purpose, intentionality, and foresight? What if you noticed that none of them could be the product of unguided natural forces? Let us say that a new artifact comes into question that exhibits these same qualities, but its origin is a mystery. Are you going to say “we don’t know” before or after you recognize what you do know?
Can you offer any credible answers to these questions?
“FCSI was in operation on this planet long before humans existed. We came along and noticed it later.”
UB: Excellent empirical observation.
Vivid
—aiguy: “In any event, I am perfectly willing to acknowledge (as I already have) that you have a version of ID theory that is not predicated on the truth of dualism. So let’s move forward and accept your version of ID in our debate.”
Why should I move forward with you to discuss theories and opinions when I can’t even get you to acknowledge a fact?
—“You seem to think that there is one, single, genuine, canonical version of ID that everybody adheres to, but I’m not aware that such a version exists.”
No, I have not indicated anything like that.
Meyer’s conclusion comes after the facts in evidence have been considered; it is not an operating assumption that precedes the investigation, which is the mistaken point that you are trying to peddle.
I am happy that you are making the rounds with other ID bloggers; however, no one that I know of has ever indicated that they think ID depends on dualistic metaphysics. So, you are chasing the wind on that count. If, indeed, ID did assume metaphysical dualism, then an inference to design would not be an inference at all but rather a trivial tautology. To assume a conscious designer prior to the investigation is to smuggle the conclusion into the hypothesis.
So, whatever you are reading into the comments of other ID bloggers, it can’t be that ID depends on metaphysical dualism. In keeping with that point, if I can’t get you to understand basic ID, I am certainly not interested in discussing advanced ID with you.
aiguy:
In one of my previous posts I stated that I could give you the full ID inference using only the kind of empirical definitions that we have discussed in some detail.
Even if you may still have problems with those definitions, I would like to show here that such a claim is not unfounded. Therefore, I will give a very quick outline of how that is done, so that, if you want, we may discuss any single point on which you like to have more details:
1) We start by observing that we, human beings, have a kind of experience that we call “conscious representations”. We call that condition “being a conscious entity”.
2) We observe that many of those representations have a cognitive content. We call those representations “conscious intelligent representations”, and the condition being a “conscious intelligent entity”.
3) We observe that conscious intelligent representations are often associated (I am not stating that they are necessarily the cause of) to the output of purposeful objects, to which the agent imparts some meaning or function. We call that kind of output “design”, the process of the object creation “design process”, the conscious intelligent representations “conscious intelligent representations associated to the design process”, and the entity which performs the design process “conscious intelligent designer”. We call the outputted object “designed object”. It is important to notice that the designed object needs not have any special characteristics, other than being the result of the design process, which implies having received some form of meaning or function from a conscious entity through a process associated to conscious intelligent representations, which obviously include the representation of that meaning or function. For the rest, it can be anything: simple or complex, analogical or digital, and so on.
4) The meaning or function represented by the designer in the process, and imparted to the designed object, we call “specification”. The specification originates in the designer’s representations (or, to be even more rigorous, is associated to them in the process of design). But, if the circumstances permit it, it can also be recognized in the object by another conscious intelligent being. Sometimes that cannot happen (for instance, I may not recognize a string of symbols if I do not know the symbolic code used by the designer). But if and when it happens, we call that “recognition of a specification by a conscious intelligent observer in an object”.
5) Now the problem is: how do we know that an object is a designed object?
6) The general answer is easy: when we know for certain that it was designed, because we designed it ourselves, or we could witness the process of its design, or we have indirect evidence of that process. In that case, we say that the object is designed because we have observed, directly or indirectly, the process of its design. This is a simple fact, and not an inference. It does not require that the designed object have any special characteristics, other than being the output of an observable design process.
7) But what about objects for which we cannot have the information mentioned in the above point, but that we suppose may be designed? For them we must perform a procedure called “design detection”, or “design inference”.
8) The procedure starts by looking for some formal property of designed objects which empirically is specific for them (which is always associated only to designed objects, and never to non designed objects). We need a very high level of specificity for that inference.
9) We find such a property: it is CSI. Objects exhibit CSI when two conditions are satisfied:
a) A specification can be recognized in the object explicitly and clearly by conscious intelligent observers. By “explicitly” I mean that the observers, after recognizing the specification, must also be able to clearly define it, and define how the presence of that specification can be objectively verified, and if possible quantitatively measured. If the object satisfies that, we call the object “specified object”.
b) The specification must be obtained through a high level of complexity in the object. Here, “complexity” has the usual meaning of Shannon entropy, or any equivalent definition. In general, it corresponds to Kolmogorov complexity (non-compressible). We use some pre-defined threshold of complexity. If that threshold is satisfied, we call the object “complex”. The complexity must obviously be connected to the specification.
If our object satisfies both a) and b) we call it “complex specified object”, and we say it exhibits CSI.
10) For all practical purposes, we can choose (I usually do that) to work with some subset of CSI which is easy to treat, provided that such a subset can be applied to the objects we are studying. The subset I usually refer to is dFSCI, which is any kind of CSI where:
a) The specification is an observable, definable, and measurable function
b) The complexity is in digital form
From now on I will refer to dFSCI, and not to CSI in general.
11) We observe that dFSCI is a property which empirically is observed only in designed objects, and never in non designed objects. That is a completely empirical statement. There are also theoretical reasons to affirm that in principle, and they are certainly an interesting part of the ID theory, but being theoretical they are more questionable, and we will not use them here.
12) The above statement is always observed to be true, with the notable exception of a very special set of objects: biological objects. Many biological objects, more specifically all the genomes and proteomes, seem to exhibit dFSCI. As their origin is not known with certainty, we will for the moment put them apart, and we will not consider them an exception to the generally observed rule.
13) On the basis of our empirical observations in 11), we decide to use dFSCI as a tool for design detection.
14) We develop a rigorous procedure to verify if an observed object exhibits dFSCI. That is more or less the explanatory filter. We define the specification and ascertain its presence, we measure complexity, we rule out any known model based on necessity which could have originated that object.
15) If the object satisfies 14), we say that it exhibits dFSCI.
16) That empirically works perfectly: the procedure seems to exhibit 100% specificity. No exception is known to that detection rule. No single example of objects exhibiting dFSCI which are non designed is known. So, we affirm that design is the best explanation for objects exhibiting dFSCI.
17) On the contrary, the procedure is not good at all from the point of view of its sensitivity. There are a lot of false negatives. Any designed object which is simple will not be recognized by that method as designed.
18) We call the whole process “design detection”.
19) So, to conclude, through the design detection process we can classify objects in different categories:
a) Non specified and non complex objects: they are usually non designed. They can be designed, because recognition of specification is not always successful, and complexity is not needed for an object to be designed. For this class, design can only be affirmed if the design process has been directly or indirectly observed.
b) Specified non complex objects: These are often designed, but again we cannot be sure of that unless the design process has been directly or indirectly observed.
c) Non specified complex objects: these are very common, and are usually non designed. In ID we know very well that complexity is easy to find, and that it does not imply design at all. Still, some of these objects could still be designed, if they exhibit a specification but we failed to recognize it.
d) Specified complex objects: we can affirm, on the basis of empirical experience, that they are always designed.
The above summary tries to clarify with some rigour what design detection is. As you can see, I have not even started to apply that system of thought to biological information. I am ready to discuss in detail any part of it, if you or others want to do that.
I affirm that the above model of thought is completely empirical and scientific, and that it needs no special theory of anything beyond what is usually accepted in scientific reasoning.
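[An illustrative aside, not part of gpuccio’s original comment: the “complexity” invoked in point 9b is Shannon information, which is straightforward to compute for any symbol string. A minimal Python sketch, with a function name of my own choosing:]

```python
from collections import Counter
from math import log2

def shannon_entropy_bits(s: str) -> float:
    """Average Shannon information per symbol of string s, in bits."""
    n = len(s)
    # Sum -p * log2(p) over the observed symbol frequencies.
    h = -sum((c / n) * log2(c / n) for c in Counter(s).values())
    return h + 0.0  # +0.0 normalises the -0.0 produced by a one-symbol string

# Total information content = per-symbol entropy * string length.
print(shannon_entropy_bits("ABAB"))  # 1.0 (two equiprobable symbols)
print(shannon_entropy_bits("AAAA"))  # 0.0 (fully compressible, no surprise)
```

[On gpuccio’s scheme this number, scaled by sequence length, would be set against a pre-defined complexity threshold, while the specification test in 9a remains a separate, function-based judgement.]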
Gpuccio @ #270 (also Green @ #267)
Mark:
I don’t want to fight with you about that. But I never said that there were definable differences in the behaviour.
I would never want to fight with you – just debate.
I said:
“I think we can agree that the free will of compatibilists is a completely different concept form the free will of “libertarians” like myself.” (emphasis added)
Since when is a difference in concepts a difference in observable behaviour?
If you cannot define any difference in how people with your concept of free will behave and people with my concept – then how can you tell the difference between the two concepts? After all free will is about actions.
For instance, just using Green’s terminology, we libertarian believe in PAP and incorporate PAP in our concept of free will, while you compatibilists don’t.
What is the difference between PAP and any event which happens with nothing to cause it to happen in that direction in that particular time (we might call them “random” although that is a much abused word) – such events are logically possible. I define determinism to include random in this sense.
Mark:
I don’t believe that free will is at present accessible to scientific evaluation. It is a philosophical concept, and certainly an important one.
Libertarians and compatibilists have different philosophical concepts about that problem. Even if they use the term “free will” in their philosophies, the term means completely different things in the two contexts. That’s what I mean, nothing more.
I can certainly tell the difference between two concepts just by examining and understanding how they are formulated, even if I do not have an empirical way to test those concepts. Concepts are products of our mind, and two different concepts are two different concepts, even if they had no bearing on the real world.
PAP is a philosophical concept. You can accept it in your philosophy (libertarians do that) or not (compatibilists don’t). A definition of free will based on the concept of PAP is certainly different from a definition of free will which refuses that concept. I don’t think you can deny that simple fact.
You can certainly include random in your definitions. For me, old libertarian that I am, randomness has nothing to do with free will. Again, that proves that our concepts of free will are completely different.
AIG, 288:
First, name any significant academic discipline where there is 100% consensus across scholars in the field. You cannot, and indeed, anyone who knows academia will recognise that it is the nature of academics to have various views on a matter.
So, the root of this latest objection, unfortunately, is selectively hyperskeptical special pleading.
It is also distractive.
What fundamentally counts is observable facts and how inferences can be constructed relative to those facts that warrant an objective conclusion. So, we go around the loop yet again:
1 –> We are conscious intelligent observers of our world, with an insider access to the world of intention, foresight, goal-oriented interventions into the world, linguistic communication based on conventional symbols and rules for using them, etc etc.
2 –> Indeed, we access experience of the world by our conscious intelligence.
3 –> Any theory, any worldview [and per Lakatos, research programmes embed worldview level commitments in their protected cores], that requires us to willfully shut our eyes to these first facts is by that claim fundamentally self-referentially inconsistent and absurd.
4 –> In particular, experience of these first facts is prior to intellectual life, and so is prior to all exercises intended to create precising definitions. We therefore point to examples and seek to describe, but any failures of such definitions, real or imagined, do not thereby dis-establish the reality being referred to. That is absurd, as to define we have to use these very same first realities.
5 –> Next, we turn to the observed causal patterns of our world. Using simple illustrative [but not exhaustive] examples, again:
6 –> To store information, we require contingency, and to store a lot of information, we also need complexity [a large set of possible arrangements]. But the contingency has to be directed based on symbols arranged according to rules, e.g. the ASCII characters of this post.
7 –> We routinely experience and observe digitally coded, functionally specific, complex information [dFSCI], especially in linguistic contexts and in algorithmic contexts. When we do so, we see that routinely and reliably the causal process for dFSCI is directed contingency.
8 –> This is consistent with the further observation that on the vast — 1,000 bits of storage capacity has 1.07*10^301 possible configurations — configuration spaces for dFSCI, random walk search strategies on the scope of the observed cosmos [up to ~ 10^150 Planck time states of its atoms across its thermodynamic lifespan] will be unable to sample an appreciable fraction of the space.
9 –> So, islands of specific function will be practically unreachable by undirected search strategies, for want of adequate search resources relative to the gamut of the config space.
(This is the context of the Dembski-Marks discussion on the difference active information makes to getting to hot or target zones or islands in the spaces: using knowledge, intelligence and purpose, designers can cut across the space and land close to or on islands of desired function. My son’s first bow-making attempt worked well enough to project arrows a considerable distance: bringing together objects as diverse and apparently unrelated as guava tree shoots, spokeshaves, machetes, knives, volcanic boulders in our backyard, bootlaces, duct tape [flight feathers] and Youtube videos. His second attempt will improve on that performance.)
10 –> So, we have reason not only to empirically associate the observation of dFSCI with directed contingency instantiated by intelligence [as observed, without any attempt to speculate on its ultimate nature], but to see that such dFSCI is not credibly produced by stochastic contingency, and mechanical necessity is a non-starter.
11 –> Thus, we infer that dFSCI is a reliable empirically observable sign that points to directed contingency as cause, a cause equally reliably known to be instantiated by intelligence. (And intelligence is not to be equated to embodiment or even embodiment as humans, e.g. without skill and expert knowledge, you will not be a successful smith of a katana, or a PC.)
12 –> Onlookers, observe how, yet again, we do not see the offering of a credible empirical counter-example; but instead, attempts to raise rhetorical objections. That strengthens the force of the inference from dFSCI to the directed contingency as credible cause it points to.
13 –> Now, we see just such dFSCI as algorithmic information in the living cell, associated with a complex organised network of molecular nanomachines; indeed the DNA codes for the machines too. We have a metabolising entity that transforms available materials into the machines of life, and it is capable of self-replication based on the stored coded information.
14 –> That is more advanced tech than we have for now, but it is recognisable, per the von Neumann self replicator and digital information technology. So we have good inductive reason to infer on best explanation to the cell as just that: a product of art, not chance and/or mechanical necessity.
15 –> In turn such identification of credible causal process implies that the most likely underlying explanation is intelligence purposing to create life.
16 –> This does not start from the a priori assumption of intelligence in the remote past, as, if the evidence had pointed to blind mechanical necessity and/or chance, that would have been the default inference.
17 –> And when we compare those who try to argue that we must only explain on chance plus necessity when we come to matters of origins, and the manifest inadequacies of their theories or models, that highlights the strength of the inference.
18 –> On observing the increments in dFSCI to originate major body plans [10+ million bits, dozens of times over], we likewise see that directed contingency is the best explanation.
___________________
So, the design inference is not an a priori procedure based on inadequate definitions but a step by step empirical inference based on basic facts.
GEM of TKI
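[An editorial aside, not part of the original comment: the arithmetic in point 8 above is easy to check directly. A quick Python sketch, using only the figures kairosfocus cites:]

```python
import math

# 1,000 bits of storage give 2^1000 distinct configurations.
bits = 1000
log10_configs = bits * math.log10(2)
print(round(log10_configs, 2))        # 301.03 -> about 10^301 configurations

# Leading digits of the exact value: 2^1000 = 1.07... * 10^301
mantissa = (2 ** bits) / 10 ** int(log10_configs)
print(round(mantissa, 2))             # 1.07

# A search visiting 10^150 states samples 10^(150 - 301.03) of the
# space, i.e. under 1 part in 10^151.
print(round(150 - log10_configs, 2))  # -151.03
```

[So a 1,000-bit configuration space is roughly 10^151 times larger than the total number of states the cited cosmos-scale search could ever visit, which is the quantitative point being made.]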
#294 gpuccio
PAP is a philosophical concept. You can accept it in your philosophy (libertarians do that) or not (compatibilists don’t).
Surely even philosophical concepts can be described? I still feel like you have offered me no description of the difference between libertarian free will and compatibilist free will. You have shifted the difference to PAP but Green defines PAP as:
“PAP means that even given all the same antecedent conditions, an agent’s actions could have been otherwise”
To me this sounds like random.
Maybe it is not possible to define your concept of free will. But in that case does it matter? It is in some indescribable way different from mine – but mine includes everything that we care about – moral responsibility, freedom to choose, etc. Why worry about an indescribable additional factor X?
Mark:
I don’t understand why you say that my position (and in general the libertarian position) cannot be defined or described. I have amply done exactly that in this thread, see especially my posts #59, 64, 91, 126, 128 and 129. I don’t want to say all that again. Please, read my posts if you want and we can discuss any single point you find convenient.
One thing is that you don’t agree with my positions, another that you deny their existence. I don’t deny that compatibilism exists, or that it can be defined or described. I affirm that, if correctly described, it supports a conception of free will which absolutely is not the same thing as the traditional, libertarian concept of free will.
Compatibilists have just created a new concept of determinism, and they have baptized it “free will”. But playing word games cannot change things. The old, libertarian concept of free will, which has inspired for centuries most philosophies and religions, is another thing entirely. You may refuse it, you may find it inconsistent, you may consider it just a figment of imagination: but to say that it cannot be described, defined, or differentiated from compatibilism is really too much for me…
MF:
Onlookers:
Randomness is not the only credible alternative to mechanical necessity.
Where high contingency exists — and surely there are many cases where this is real, starting with the text strings that form posts in this thread — purposeful direction can shape outcomes.
Does MF intend that we should understand that text strings posted in this thread are posted by either randomness [lucky noise generating meaningful text in English . . . ] or lawlike necessity [how would we be programmed to produce these text strings?] or both in combination?
When I bring determinism in direct or compatibilist forms down to something like this, it simply does not make sense.
It does seem that as intelligent agents, we do select particular alternatives from sets of possibilities open to us.
For instance, just now I decided that since MF has an announced policy of refusing to address points I make, I would address the onlooker instead.
That the change I just made — see I actually used two alternatives, shifting from one to the other — is random or programmed by some subtle driving forces that run me like a PC, is not at all reasonable.
GEM of TKI
StephenB:
Thank you for your thoughts on those verses. You noted how rewards and punishments in heaven make no sense from a determinist perspective. I agree. But I also think they make no sense from a libertarian perspective. As I’ve noted earlier, noncausal and event causal theories of libertarianism just inject a bit of indeterminism into the causal chain leading to an agent’s decision in order to satisfy PAP. Randomness cannot ground responsibility, though. With regards to the agent-causal theory of libertarianism, for all the reasons I’ve given above, it cannot ground moral responsibility either. Thus rewards and punishments make no sense on either a libertarian or a determinist perspective.
With regards to all those verses where God exhorts and urges his people to act in a certain way: none of those are inconsistent with determinism. God’s exhortations are the means he uses to change his people. You might ask, well, if people resist, then how can God still blame them, since they are not the ultimate source of their desires? This question is parallel to the one Paul addresses in Romans 9. Paul writes:
“You will say to me then, ‘Why does he still find fault? For who can resist his will?’ … Has the potter no right over the clay, to make out of the same lump one vessel for honourable use and another for dishonourable use?”
Paul goes on to say that God does it to display all the facets of his glory. The main point to note, however, is that Paul does not appeal to man’s libertarian free will in order to solve the problem (which is good, because as we have seen, philosophical theories of free will do not solve the problem). Instead Paul insists that humans not question God’s ways; God has a right to do what he wants with creation. So Paul holds men morally responsible for their actions, but also insists that ultimately God (not man) is in control. So this is the paradox the Bible teaches, and no resolution is offered. Libertarianism should not be embraced to solve the problem (Paul does not embrace it), and even if one did embrace it, it doesn’t help anyway.
A couple of other thoughts regarding ID, dualism, libertarianism, and the points that aiguy has been making:
Firstly, I think it’s helpful to separate libertarianism from dualism. The two do not go hand in hand. It is perfectly rational to embrace the latter but not the former.
Now which does ID need? Libertarianism or dualism, both, or none?
I think Dembski might require libertarianism and Meyer might require dualism. Let me explain:
Aiguy and GP seem to have come to the conclusion that Meyer needs to posit a conscious agent to make sense of design, because intelligence requires consciousness. I’d argue that consciousness cannot be made sense of within a physicalist ontology, and thus that Meyer requires dualism. BUT, this is not a priori. If consciousness can be made sense of within a physicalist ontology, then I think it would be fine to embrace a materialist conception of intelligence, and thus a materialist conception of design. So the commitment to dualism is a posteriori here.
With regards to Dembski, he uses a different definition of design than Meyer. Dembski defines design in a negative way; he tells you what it is not, not what it is. Essentially he defines design as the negation of chance and necessity. This is not the standard agent-based notion of design that Meyer uses. And he writes quite explicitly in The Design Inference that there is no necessary connection between his negation concept and the standard agent-based concept (e.g. see p8, p227).
So the reason I think Dembski may require libertarianism is because the only thing that is not chance and necessity is libertarianism. This does not necessarily require dualism (as Tim O’Connor has been keen to point out).
I don’t know if that makes any sense?
Green:
The decisive issue, as has been pointed out already, is responsible power of choice and decision.
I do not usually wear a theology hat at UD, but kindly observe Jn 3:16 – 21 with the support of Rom 2:5 – 8:
____________________
>> Jn speaks to willful turning away from truth one knows or should know, vs willingness to seek, live by and turn to the truth one has access to.
>> 1 You, therefore, have no excuse, you who pass judgment on someone else, for at whatever point you judge the other, you are condemning yourself, because you who pass judgment do the same things . . . . . . . 11 For God does not show favoritism. . . . >>
–> Here one condemns oneself out of his or her own mouth by passing the judgement on others that they ought to have done better, then failing to live by the same standards we set for others. [This one catches us ALL; just think about how we quarrel by claiming “you unfair me.”]
–> But then it underscores persistence in the path of the truth and the right based on the light one has: we all stumble but we are obligated to get up and press on to the good and the true.
–> those who turn from the truth and the good they know or should know, condemn themselves.
______________________
These are not obscure or minor or unclear texts. And they are premised on responsible choice.
Which is intuitively highly relevant to fairness in judgement.
It is possible to construct systematic theologies that are more or less deterministic, but they struggle in the face of abundant testimony of the Scriptures.
My favourite view on this is that it is like playing chess with a grandmaster. You have freedom to move, but you are dealing with one who knows and understands far beyond your capacity, so the end is not in doubt, unless mercy is shown.
And, we know how that mercy was shown, in love, at what cost.
GEM of TKI
Green (#300):
I think you observations make some sense. And yet, to your question:
Now which does ID need? Libertarianism or dualism, both, or none?
I would still answer: none.
I am certainly a libertarian, and I am absolutely sure that, to use your words, “consciousness cannot be made sense of within a physicalist ontology”. So, if that makes me a dualist, then I am a dualist (but still don’t like the concept).
Meyer and Dembski may have different approaches, but they are saying essentially the same thing. The design approach is a very basic paradigm which has vast consequences, and can be formalized in different ways: none of that changes its essential strength and depth.
I have tried in my post #292 to show that the whole ID theory can be formulated without any necessity to make inferences about:
1) The nature of consciousness
2) The nature of intelligence
3) Free will
4) Dualism
It’s enough that we accept that consciousness exists, that its inner representations can be described, and that associations between those representations and outer facts can be traced.
I think that both Meyer and Dembski accept that consciousness cannot be explained in purely physical terms, and probabvly both are libertarians. But neither of those things is necessary for the ID theory, so each one of them chooses ( 🙂 ) to develop his personal discourse in the way he finds more congenial.
But they are essentially making the same discourse.
As I have always tried to say: ID is stronger and more important than each of its supporter’s views about it.
Green:
Why do you keep trying to inject a prioris into the design inference process?
We deal with a simple induction on facts readily in evidence. Such as, that designers exist and that when they exert directed contingency they often leave characteristic traces of that causal pattern behind. For instance, there is little doubt that the texts of posts in this thread are intelligently created, not the product of undirected stochastic contingency — lucky noise.
Similarly, when we observe the origin of digitally coded functional information and of the machines that process it, it is reliably the product of design.
That empirical reliability, per the uniformity principle so often used in origins science, then allows us to infer that similar signs tracing to the deep past credibly — on inference to best explanation of the cause of such signs — were similarly produced by directed contingency.
We then see that it is not an unreasonable inference to conclude inductively that C-chemistry cell based life is a product of directed contingency. That is not a proof, it is an induction, on warrant by the technique that undergirds the general work of science.
And on that inductive ladder, we then may further infer from the observation that, reliably, cause by directed contingency has its source in intelligent agents. So, a further rung of inference is that life had one or more designers, possibly a team.
Lifting our eyes to the observed [that’s important, we are not looking at speculative metaphysical claims . . . ] cosmos as a whole, we may then observe that it too shows signs — finetuning — that point to design, and design to facilitate C-chemistry cell based life. But in this case an extracosmic designer of a contingent material cosmos is credibly both intelligent and powerful. Further, the ultimate designer — per the association between signs and designs and designs and designers — is arguably a necessary being, and that implies an immaterial one [on the simplest grounds, an infinitely old material cosmos will have long since suffered heat death], with an active Mind, and power to create a material cosmos.
So, it is not unreasonable to use design inference empirical methods to infer to design of life, thence design of the cosmos that facilitates life, and onward to the designer of that cosmos. And, to see such a designer as immaterial, and to be Mind with power, is not unreasonable. This is now phil not sci, but that sounds a lot like the God theists speak of. That is theism is not unreasonable or irrational.
And in this context, we have inferred from the empirical to the implications and best explanations. We have not injected questionable a prioris; we have simply not ruled out relevant possibilities a priori.
And as to the notion that design thinkers and theists will ruin science, let us just observe that they are a very large part of the circle of founders of modern science. On the whole, they are still a big slice of the scientific and allied professions. So, that slander should be put out to pasture.
And so, I return to the issue that directly concerns me: where in the above chain have I (per good argument) smuggled in an a priori, as opposed to refusing to shut the door, a priori, to a possibility?
If you [or others] cannot credibly show the step where such smuggling happened [and remember design and designers are defined by known example and family resemblance, not some a priori attempted one size fits all definition], the worldview level motive mongering that I see too much of in this thread is utterly irrelevant and uncalled for.
GEM of TKI
Green @299, I have read your latest comments on the subject of determinism and free will. Unfortunately, I have come to the conclusion that, on this matter at least, you are impervious to reason. I never make personal judgments, but I will simply offer an observation about human nature.
Most of us tend to look for the easy way out, and deterministic compatibilism certainly leads down that road. It is very easy to avoid the painful process of exercising and strengthening our will, which is the cost that every human must pay when he strives for virtue, a process that always involves saying yes to our good impulses and no to our bad impulses.
In effect, the compatibilist renders virtue meaningless, since virtue, by its very nature, consists of forming the will to prefer that which it ought to prefer and disdain that which it ought to disdain. The compatibilist is inclined to act on all his desires and disinclined to resist those which he ought to resist. In the final analysis, he can use his philosophy to avoid virtue’s demands by simply saying that he has no control at all over his impulses and cannot, therefore turn toward good and away from evil.
On the contrary, he can only turn in the direction that his cravings and appetites would lead him. He is, or soon will become, a slave to his passions. Virtue, after all, requires the moral strength to turn away from evil, yet the compatibilist has forfeited his capacity to make the requisite act of the will by claiming that he has no such capacity. Thus, he cannot follow his Savior’s advice to, “Be perfect as your heavenly Father is perfect.” The compatibilist, to the extent that he lives his philosophy, is, like the materialist determinist, a slave to his lower nature.
Green,
That is not my position, no.
First of all, I think cognitive science has clearly shown that intelligence (i.e. planning and problem-solving abilities) does not require consciousness. It remains completely unclear what consciousness does; all that we really know is what consciousness feels like. Some people think that it can affect matter, and some have tried to test this hypothesis (i.e. paranormal researchers); but at present we have no empirical grounds to claim that dualist interactionism is true, so it remains in philosophical debate.
However, in order to make any sense at all, ID needs to be able to distinguish the explanation it offers, which is “intelligent causation”, from all other causes. Simply saying that “something intelligent” was the cause of some phenomenon tells us absolutely nothing about it – not one single thing. We don’t know if that means that this something can talk, or that it can read, or take an IQ test, or understand a melody or play Jeopardy or… anything else. And as we’ve seen repeatedly here, it really doesn’t help for ID to say that intelligent causes are “those capable of producing FSCI”, because that renders ID’s central premise perfectly vacuous: “The FSCI we observe in biology is caused by that which can produce FSCI”.
So what is it that will serve to set “intelligent causation” apart from all other causes, so that we can know what ID is trying to say? Unless ID can characterize this “intelligent cause” in a way we can understand in terms of our experience, then quite obviously ID has no reason to claim it is an empirical science.
The following have been offered by various people in order to address this problem:
volition: This is some variety of free will, but, as this thread has amply demonstrated, even Christian ID proponents disagree about what volition is and what it does.
foresight: When scientists test for foresight in animals, they place the animals in novel environments to see if they can generate novel solutions. They put food in jars and in tubes and under buckets, put obstacles in their way, they give them things to make tools out of, and so on. If the animal can figure out what to do to get the food, the scientists attribute foresight to them, which is generally considered to be one aspect of intelligence.
But scientists never, obviously, assume that some animal has foresight simply by observing the artifacts it makes! It would be completely mistaken to find that some animal had produced an artifact that had complex form and function and then assume that the animal had used foresight in order to build it. Unless the scientist can observe the animal in a novel situation, there is no way to assess the animal’s ability to solve problems. Termites build complex mounds with specialized chambers with irrigation for growing fungus, and shafts for efficient ventilation, along with archways that are built from both sides and meet in the middle. But termites display no ability to solve novel problems; the problems they can solve are restricted to “building a termite mound” (which they do very well). Before scientists tested termites they may have mistaken termites for general problem-solvers, but as far as we know termites can’t figure out other problems at all.
So there is no scientific way to infer what abilities something has simply by finding artifacts; we actually need to interact with something to assess its problem-solving abilities (or “foresight”). In the context of ID, then, there is simply no way of inferring what abilities the Designer had besides producing that which we are trying to explain (FSCI in biology). Just because human beings have a lot of different mental abilities doesn’t imply that the cause of life does too.
Perhaps the Designer could produce the biological structures we see, but could do nothing else at all – it couldn’t read a book, or understand a melody, or play Jeopardy, or… anything. Unless ID specifies some particular thing that the Designer is supposed to be capable of besides producing the FSCI we observe in biology, then ID is saying nothing at all about the Designer in terms of our experience.
intentionality: A specific codon causes cellular machinery to append a specific amino acid. We say that the codon encodes the amino acid, or that it has a meaning of “append this amino acid”. But anyone familiar with the problem of intentionality will realize that it is very difficult (impossible) to say where this “meaning” resides. What is it that determines what means what? Where does this meaning reside – in the DNA? In the cellular machinery? In our understanding of this system of physical causes?
So it’s clear that without some operationalized definition of “intelligent cause” – and the opportunity to perform the tests implied by that definition – ID is saying nothing about this cause that has any meaning with regard to our experience.
Now, what Meyer and I agree on is this: The claim that this “intelligent cause” is conscious actually does make a meaningful statement that we all can understand about what distinguishes “intelligent cause” from all other causes. We each know what conscious experience is, so we can inter-subjectively agree that conscious experience exists, and by inductive inferences of varying strengths we attribute this property to each other (and perhaps some other animals). If Meyer says that the cause of first life was conscious (and he says exactly that), then he really is saying something meaningful in terms of things we can experience.
The problem with this, however, is that the hypothesis that the cause of life was conscious cannot be tested, and there are no good grounds to infer it:
1) We have no scientific tests for evaluating the consciousness of unknown entities that we can’t interact with. The tests we do use to infer consciousness (such as the mirror test) are obviously not applicable in the context of ID.
2) We infer consciousness in other humans because (1) we are alike in so many other ways, (2) we know that certain brain structures are required to be working in order to support consciousness in humans and we can perform neurological tests to see similarities across human individuals (and perhaps other animals), and (3) we provide verbal reports of consciousness that are best interpretable as referring to the same thing across subjects. None of these three methods are applicable in the context of ID.
3) We do not know if consciousness is causal, and there are reasons to think that complex form and function can be produced without it. Much of our thought occurs without conscious awareness, and other animals to which we do not typically attribute consciousness (like termites) manage to produce FSCI anyway.
So that is why ID cannot scientifically explain life by appeal to “intelligent causation”: Either the mentalistic concepts involved are meaningful only for specific organisms and not in the abstract, or we have no way of ascertaining if they apply to the cause of life.
Some complain that we can’t define “life” any better than we can define “intelligence”. Exactly! That is why no scientist has ever tried to explain any phenomenon by offering “life” as the answer (except perhaps for vitalists!). “Life” is a general, hard-to-define term that describes what biologists study, not something we offer as an explanation for what we observe! Likewise, “intelligence”. I study “artificial intelligence”, but this just loosely describes the sorts of abilities we try to recreate in computers. No scientist ever explains anything by saying “intelligence is the cause!”.
Again: No scientist ever explains anything by saying “life is the cause” or “intelligence is the cause”. These terms have no operationalized definitions, and so it doesn’t mean anything in terms of our experience to say that “life” or “intelligence” caused something.
So, without an operationalized definition of intelligence, ID is forced to distinguish its cause some other way. The way ID does it is to implicitly resort to metaphysics, playing off people’s intuitive dualism without admitting it. Terms like “directed contingency” clearly refer to some variety of free will, but people accept it as if it were a perfectly clear and observable fact known to science. Green has discussed the varieties of dualism and free will that various ID proponents appeal to, but none of these assumptions can be supported or refuted by appeal to our experience; they remain in philosophical debate.
This is precisely what I object to, and all of my posts here are directed at this very aspect of ID. I’m not a materialist, and I’m not a Darwinist, but I have studied minds for my entire career and I find it clear that ID fails utterly to define its terms and lay out its case for empirical support for the existence of some life-creating entity that supposedly shares mental abilities with human beings.
I disagree. Meyer explicitly says that his hypothesis is to a conscious agent. He is using “consciousness” to define his cause; otherwise nothing distinguishes his cause from all other causes in the universe.
I agree that Dembski implicitly assumes something like dualism when he explains life by something that is “not law and not chance”, and he has even admitted that his view requires “an expanded ontology”.
aiguy,
Your protestation that science cannot make scientific findings of “intelligent design” is belied by the fact that it does, as in cases of archaeology and forensics.
Unless you wish to argue that, if we were to find what appeared to be an artificial construct of some sort on an otherwise desolate and uninteresting planet, there is no way to scientifically establish as best explanation that the object was most likely the product of intelligent design (some intelligent non-human race, perhaps), then your argument here fails, simply because we know such findings are made, and can be made, whether or not “intelligence”, “consciousness”, and “design” are poorly defined.
gpuccio:
That such findings are made, and can be made, is really non-controversial; the only real question is whether a rigorous methodology – such as the FSCI bound – can be found to specifically quantify when such a finding is appropriate, even when it is not suspected that humans (our only current bona-fide example of ID) were involved with the phenomena in question.
Gpuccio
I certainly don’t want to make you repeat all your comments. I have looked through them and I cannot find anything that describes the essential difference between my concept of free will and yours. Let me try and explain by tackling it from an epistemological view. (I don’t think Green has done that – but I haven’t read all his posts).
How do you know about your type of free will? It can’t have been by observing it in other people. We have already determined that there is no defining difference in behaviour between our two versions. So presumably you know about it by introspection of your own exercise of free will. But how does this introspection inform you that when you exercise your free will it is not determined? I guess it doesn’t come with a label saying “not determined”. Even if it did, the label might be wrong. You might say that it is obvious to you from the nature of the experience that you are choosing. But imagine this. Suppose you deliberately and consciously exercise your free will over some simple decision, e.g. whether to raise your hand or not (you can do it now perhaps). You luxuriate in your “simple glory of free will” and decide to raise your left hand after 7 seconds. But suppose that, as soon as you finish, an expert neuroscientist comes out from behind a screen and shows how he knew that you would spend 7 seconds luxuriating in your free will before raising your left hand, because he could trace the causal chain through the neurons leading to your decision. Is there anything that makes this logically impossible? Would this suddenly mean that you were not exercising true free will but only had the illusion?
William,
No, William, of course we do not do that at all. There has never been an archaeologist or a forensics expert who has determined that “intelligent design” in the abstract was responsible for anything, obviously. What they invariably infer is the action not of an “intelligent agent” in the abstract, but rather of a “human being”.
Read what I said again: Either the mentalistic concepts involved are meaningful only for specific organisms and not in the abstract, or we have no way of ascertaining if they apply to the cause of life.
We infer the activity of animals – specific animals, including human beings – and not the activity of some abstract class of things called “intelligent agents”.
Here’s what I wrote upthread vis-a-vis this issue, showing that a fire investigator infers not “intelligent agency” but rather “human beings”:
AIGUY: I have decided that this fire was set by an “intelligent agent”
CHIEF: What the heck is that?
AIGUY: It is something that is intelligent, but it isn’t a human being or complex physical life.
SETI seeks intelligent life forms, not intelligent non-life forms. I’ve been through this many times here. If we found a TV set on Jupiter, we would be justified in assuming something with eyes, ears, and hands existed there. If the artifact were similar enough to what humans build that we would assume it was a human-like life form, we might infer that it was sufficiently human-like that it would share other attributes – like a powerful brain that could read, write, understand a melody, play Jeopardy, and so on.
If, however, the object we found was something we find in nature and not something that reflects a human-like origin, none of these inferences would be justified.
The difference between SETI and ID:
SETI looks for things that are not found in nature and tries to infer intelligent life.
ID looks for things that are found in nature and tries to infer intelligent non-life.
These are very, very different sorts of endeavors.
aiguy:
I think cognitive science has clearly shown that intelligence (i.e. planning and problem-solving abilities) does not require consciousness.
Why do you say that? Can you show me an example of intelligence that can plan and solve problems, and which does not originate from a conscious being? I am not necessarily saying that consciousness is the cause (although I firmly believe it), but I affirm that it is always associated with any form of intelligent process. Which would anyway be a very good reason to infer that it is the best explanation for all intelligence.
Regarding your other points, I again disagree. ID does not resort to metaphysics, as I have tried to show in my post #292. Design detection needs no metaphysics.
But I see your problem. If design detection, which never fails when we deal with supposed human artifacts, tells us that biological information is an artifact (and there is nothing metaphysical in that), what model can we choose to explain that?
The problem is that some of us think that the only conscious intelligent beings are humans. And we usually don’t believe that humans were present at OOL, or can have caused it, or the following evolution of it.
But, while I can agree with the second statement, I would say that the first is only a prejudice, and implies a definite view of reality, which need not be shared by all, which has never been shared by all, and which has definitely been shared only by a minority in the past.
It is definitely possible that other conscious intelligent beings exist. If design detection tells us that biological information has all the properties of designed artifacts, and if we agree that humans are not a likely explanation of that, it is simply reasonable to consider that other conscious intelligent beings may be responsible, and to try to understand if that is true by scientific inquiry. That’s not only a possibility, it is a scientific duty. There is nothing metaphysical in that.
Otherwise, we would remain with the only alternative that the origin of biological information cannot ever be explained. Which is not a very satisfying conclusion for science, or for human thought in general.
Because I am absolutely sure that any theory which does not include consciousness and design can never explain the origin of biological information. And that this can be demonstrated, indeed that it has already been demonstrated, in the context of the ID theory.
aiguy,
SETI doesn’t seek intelligent life-forms; they seek evidence of intelligent life forms – IOW, a signal that can be deduced to be artificial – meaning, it must be similar enough to “what humans deliberately produce that is quantifiably different from what nature produces” so that we can recognize it.
Of course a design inference requires that the intelligence of the designer of the phenomena in question is similar enough, or produces product similar enough, to humans so that it is available to a method of quantification that works in differentiating human ID product from natural processes.
ID doesn’t claim to be able to identify ALL cases of ID, or even most – only those which are similar enough in kind to the quantifiable human baseline as to be recognizable products of human-like intelligence.
ID doesn’t purport that all products of ID will be discernible by the FSCI bound, or even by any means of investigation; only that some of them will be, like some of what is produced by humans and perhaps other human-like (in intelligence) entities.
The FSCI bound quantifies what is available to known natural law and chance to produce, and shows that human intelligent design easily and regularly exceeds that bound by huge amounts on a daily basis, and is the only known commodity to do so.
Which means it is a reasonable inference that if we find a phenomenon which exceeds that bound by a considerable margin, a human-like intelligence might be responsible.
Again, it is a pretty straight-forward inference to best explanation.
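The bound-comparison reasoning above can be sketched in a few lines of Python. Note that the 500-bit threshold and the 27-symbol alphabet below are purely illustrative assumptions for the sake of the sketch, not figures established in this thread, and the quantity computed is raw configuration-space size, not functional specificity.

```python
import math

# Illustrative threshold only: a figure often cited in ID writing for what
# undirected chance could plausibly reach; treat it as an assumption here.
FSCI_BOUND_BITS = 500

def config_bits(alphabet_size: int, length: int) -> float:
    """Bits needed to single out one string from alphabet_size**length possibilities."""
    return length * math.log2(alphabet_size)

def exceeds_bound(alphabet_size: int, length: int,
                  bound: float = FSCI_BOUND_BITS) -> bool:
    """The inference sketched above: flag configurations whose raw
    information content exceeds the chosen bound."""
    return config_bits(alphabet_size, length) > bound

# A 200-character text over a 27-symbol alphabet carries ~951 raw bits,
# comfortably over the assumed 500-bit bound.
print(exceeds_bound(27, 200))
```

A genuine FSCI-style estimate would further discount for the size of the functional target space rather than using the raw sequence space alone.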
aiguy said: “ID looks for things that are found in nature and tries to infer intelligent non-life.”
False. ID leaves the vehicle for intelligence entirely undefined.
aiguy:
ID looks for things that are found in nature and tries to infer intelligent non-life.
Why non-life? Are you saying that our designer, whoever he is, a conscious intelligent being, would be dead?
Even if the designer is God, he can well be a living God…
William,
The Designer that ID posits is either a complex life form or it is not.
If the Designer is itself a complex life form, then it cannot logically be the cause of the very first living cell (Stephen Meyer claims that ID explains the creation of the very first living cell). And once you posit the existence of extra-terrestrial life, you may as well hypothesize that we are the descendants of that life form, rather than the products of its advanced engineering efforts.
That leaves only the possibility that ID is positing something that is not itself a complex, physical, FSCI-rich life form. That is why I say (truly) that ID must posit non-life in order to explain what it claims to be able to explain (at least in Stephen Meyer’s view).
gpuccio,
By “life” here I mean “a complex physical organism rich in FSCI”.
–aiguy: “SETI looks for things that are not found in nature and tries to infer intelligent life.
–ID looks for things that are found in nature and tries to infer intelligent non-life.”
Define “nature.”
William,
You are completely wrong about SETI. I can provide links if you wish, but I assure you that SETI hires astrobiologists and performs analyses regarding encephalization quotients that all assume (in their own words) that they are seeking life as we know it. They assess the probability of finding living things with complex brains on other planets, and focus their search on looking for places where life as we know it may have evolved.
Like I said I can find the links, but SETI researchers are quite clear that they are looking for life forms in outer space.
You are again mistaken. Nobody knows if human brains do anything that is not by “law and chance”. If we do, that would mean dualism is true, and that is a metaphysical speculation that is not supportable scientifically.
MathGrrl:
But I do like simple and clear questions. We have debated these points many times. I will try to sum up my answers, and then if you want we can go into further details.
1) CSI is quantitative, but it also requires a qualitative part. To be clearer, I will refer from now on to the specific subset of CSI which is dFSCI, as defined in my post #292.
2) The qualitative part is the recognition of the functional specification. That means that a conscious intelligent observer must be able to recognize a function in the supposed designed object, to define it explicitly so that any conscious intelligent observer can verify its presence, and if possible to give an explicit way to measure that function, either as present or absent through some threshold, or even quantitatively through a definite numerical measurement.
So, as you can see, this part is qualitative because an observer has to recognize and define the function, but in the end it gives us a quantitative result. The measurement of the function can be used as a coefficient for dFSCI.
Let’s take an example. We want to measure dFSCI in a protein, an enzyme. We recognize that the enzyme accelerates a specific reaction. So, we define that as its function. Then we define an arbitrary, but reasonable, threshold for that function (for example, that the reaction must take place at least at a certain rate in standard conditions), and if that condition is verified we give a value of 1 to the specification coefficient, otherwise we give it a value of 0. In that way, for any molecule tested for that function, the function will be present or absent.
3) Then comes the measurement of the complexity of the protein. That’s the most difficult part. There are at least two ways to do that. One is valid in principle, but can be applied only with some approximation to proteins, at least until we have better understanding of them.
The general principle is that the complexity is the ratio between the functional space and the search space. The search space for a protein is easy to calculate: it is 20^length of the protein in AAs.
The functional space, or target space, is the difficult part: it can be defined as the number of sequences of the same length which, if tested, would exhibit the function according to our definition.
Obviously, the measurement of the target space cannot empirically be made that way. So, we have to make reasonable inferences based on what we know of proteins and of the relation between structure and function. This is a subject of research and debate, and we are certainly making progress towards a better understanding.
If we have a reasonable assumption about the size of the functional space, the complexity of that protein can be easily calculated and expressed in bits, exactly like any other complexity (Kolmogorov complexity, Shannon entropy).
4) If the specification coefficient is 1 for our original molecule (that is, if we confirm that it has the function), then the measurement of its complexity is also the measurement of its functional complexity, expressed in Fits (functional bits). We have measured dFSCI.
5) Luckily, there are indirect methods to make that calculation. We have many times discussed the important paper by Durston, which makes that calculation for many protein families, using the concept of Shannon’s entropy, and gives the values in Fits. This is the title:
“Measuring the functional sequence complexity of proteins”
and this is the URL:
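The arithmetic in steps 1–4 above can be sketched in Python. The 150-AA length and the target-space fraction below are purely illustrative assumptions, since, as the post notes, the true target space can only be estimated indirectly.

```python
import math

def dfsci_fits(length_aa: int, target_fraction: float,
               spec_coefficient: int = 1) -> float:
    """Sketch of the dFSCI measure described above.

    length_aa       : protein length in amino acids (search space = 20**length_aa)
    target_fraction : assumed ratio of target (functional) space to search space
    spec_coefficient: 1 if the defined function is observed, else 0 (step 2)
    """
    if spec_coefficient == 0:
        return 0.0  # no recognized function, so no functional complexity
    # Steps 3-4: complexity in bits of the functional/search space ratio
    return -math.log2(target_fraction)

# Illustrative numbers only: a 150-AA protein whose functional sequences are
# assumed to make up 1 in 10^40 of the search space.
raw_search_bits = 150 * math.log2(20)   # full sequence space, ~648 bits
fits = dfsci_fits(150, 1e-40)           # ~133 Fits under this assumption
print(round(raw_search_bits), round(fits, 1))
```

The hard empirical work, as the post says, lies entirely in justifying the `target_fraction` estimate, which is what the Durston paper attempts via Shannon entropy across protein families.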
StephenB,
Fair enough: “Nature” here means “anything that is NOT the product of HUMAN activity.”
“Nature” here means “anything that is NOT the product of HUMAN activity.”
How can you tell?
Mark:
You are equivocating my position.
I don’t believe that we can have any empirical proof of free will (neither verification nor falsification). At least, not with our current knowledge.
So, I am not treating free will like I treat design. It is not a subject for which we have any real scientific or empirical argument.
In that sense I agree with you, empirical arguments cannot distinguish between different conceptions of free will. That means that neither mine nor yours can have empirical support.
That’s why I say that free will is a philosophical problem, and not a scientific one.
But that does not mean that we cannot have philosophical conceptions about it, or that those conceptions cannot be different. Those conceptions exist, and are different.
We don’t build our conception of reality only from empirical data. At least, I don’t believe that. You can believe as you like.
I believe in free will for many reasons, including my intuition, and a very clear consideration of what its negation implies for all our conception of reality and of ourselves. None of these reasons is really empirical, but all of them are very valid for me.
Compatibilists have a different conception, but I reject their conception not only because it is different from mine, but also because I find it internally inconsistent and confused. For instance, I don’t believe that compatibilists have found any way to preserve moral responsibility under a non-libertarian view of free will. They may think they have, but I continue to think that they are only using their words and their reason very badly.
From that point of view, I really prefer strict determinists. At least, they have the courage to face the cognitive consequences of what they believe.
GP,
aiguy:
I’m not sure how your appeal to causal regress of life rebuts or even correlates to my comments in #315.
Intelligence – however it exists, even if discorporate and immortal – either produces products “like” those human intelligence produces, or it does not; if it does, such products can reasonably be inferred to be the work of a human-like intelligence, even if it isn’t housed in a human-like body.
UB,
gpuccio:
Thanks for the quick and detailed response. I’ve seen similar calculations to yours in my reading here:
The problem with this calculation is that it assumes that the protein came into existence all at once, in its current form. That’s not what biologists see.
In fact, we know that various mutational mechanisms and the effects of natural selection and genetic drift can iteratively improve the performance of a particular protein for a given function and can generate proteins that have new functions (nylon-eating bacteria and Lenski’s citrate-consuming E. coli being two well-known examples).
Unless I’m missing something, that shows that, by your definition, CSI can be created without the intervention of an intelligent agent.
William,
My point was that SETI does not consider intelligence in the abstract; rather, it looks for life-as-we-know it. Since life-as-we-know-it cannot logically be the cause of life-as-we-know-it, SETI is doing something very different from ID.
In our constant, uniform, and repeated experience, FSCI is produced by complex physical FSCI-rich organisms. You may imagine that something else is capable of producing FSCI, but you have no scientific evidence of such a thing. As far as we can tell, complex information processing requires complex physical mechanisms – there are no known exceptions. So if ID is going to hypothesize that the first FSCI-rich organism was created by something with a human-like mind, they must provide some reason to believe that such a thing is possible.
The only scientific endeavor to investigate such claims is paranormal psychology, which hypothesizes that mental cause can act independently of physical mechanism. I am actually open to the possibility of paranormal events, but I think the evidence to date is exceedingly weak. If ID explicitly started doing paranormal research to ascertain whether its hypothesis was even plausible, then I would have a great deal of interest in that! However, unfortunately, ID proponents for some reason do not involve themselves in the scientific study of mind, so they have nothing at all to show as evidence that anything but a complex, physical, FSCI-rich organism can produce FSCI.
aiguy, in order to understand where you are coming from, I’m just curious …
1. For starters how do you scientifically demarcate life from non-life and the products of intelligence from non-intelligence? I’m asking since it appears that you have no problem with SETI being scientific in its research program, so you must be able to tell us how to demarcate between the two points above. If, indeed, SETI is able to separate intelligent life from all other combinations of life, intelligence, non-life and non-intelligence, then there must be some scientific methodology and definitions of key terms [intelligence and life], no?
Basically, I’d like to get your position on whether or not we could infer intelligent life as a source if we received a specific type of radio signal — you know, something wild like instructions on how to generate life from scratch and seed planets — from our radio telescopes. If not, what is your reasoning that we presently have, or even theoretically could have, a better explanation than intelligent life?
2. Even if you disagree that SETI is a scientific research program, you seem to believe that life is indeed a well enough defined concept to be utilized in science classes. So, how do you define life?
3. Next, even though we may not know how life and intelligence are generated, can we still say that there are specific molecules and patterns that require the existence of previous life and intelligence respectively?
For example, since proteins are most likely not ever generated by only law+chance absent the structure that you may or may not have been able to sufficiently define as “life,” if we see a protein — let’s say frozen in time in amber — can we reliably infer the existence of a previous living organism as a necessary cause?
4. Finally, if you have been able to define life, and if you believe that the existence of proteins does not require the structure that you have defined as life, can you provide evidence for your position?
MathGrrl,
Here are my thoughts on CSI, if you are interested, including more in depth discussions that I’ve had on the subject. The first link contains a calculation. The other links expand and discuss different aspects with ID critics.
MathGrrl:
No, that’s not true. Your examples refer to instances of microevolution, where the function is already present and very small mutations (1-2 AAs) can tweak that function a little.
There is absolutely no example of emergence of a new complex function through microevolution. I have said many times that, if functions could be deconstructed into small functional steps, each of them selectable for a reproductive advantage, then the RV + NS model could work. But that assumption is simply not true.
Protein domains are structurally unrelated to one another, and there are thousands of them. They appear at OOL and then during evolution, and darwinian theory has no model of how even one of them appeared; they cannot be deconstructed into simple functional variations from a pre-existing different protein domain. Please refer to the recent Axe paper for that. Or, if you don’t agree, just bring your arguments.
aiguy:
“they have nothing at all to show as evidence that anything but a complex, physical, FSCI-rich organism can produce FSCI.”
First, I’m a little confused as to why you keep sneaking in the term “physical”, since you have on a couple of occasions stated that “physical” can’t be properly defined.
Other than that …
So, then you agree with ID Theory proper, that intelligence (foresight, as I have defined it) — with the caveat that it is complex and FSCI rich — is required to produce FSCI?
aiguy:
I have already commented on the subconscious minds. As for really unconscious processes, or automatic ones, they are obviously based on intelligent information already present in our body and nervous system structure, and that comes from the genome, and it is included in the set of biological information.
You still have to give one single example of intelligent processes which do not originate from conscious beings, or from biological information.
Aig,
This doesn’t sound like a rigorous delineation between what nature can accomplish versus what humans can do.
Reading your description, you make the distinction based upon the idea that we have all seen what humans do, and we can tell from that.
So if I understand your description correctly as you have stated it, should an instance come up where we find a (pick your own scenario) set of markings in a volcanic formation, we would start to assess the issue by first asking around to see if anyone had seen a human do it? Or is there something more substantial you’d like to add?
errata corrige:
“subconscious minds”
“comes from the genome, and it is included in the set of biological information”
Hi CJY,
I have no answers to this at all. “Life” is just a general descriptive label for the things biologists study, not a theoretical term used to explain anything. When I talk about FSCI coming exclusively from “life forms” here, I mean (and I’ve made this clear repeatedly) “complex, physical, FSCI-rich organisms”. So, to the extent that FSCI is an objectively clear concept, my statement about FSCI coming invariably from life forms is also objectively clear.
This has no meaning whatsoever. Things do what they do, and if you choose to call them “intelligent” then that’s fine. When I show some computer system I’ve built to somebody, it either does what I say it does or not. It doesn’t add anything to our understanding for us to try and decide if my system is “intelligent” or not!
SETI has not actually produced scientific results, but they do try to use science to inform their search. They are (as they say) looking for life-as-we-know-it, which means that in particular that they are looking for organisms with large brains. Astrobiologists at SETI compute what they call “encephalization quotients” (measures of brain evolution) in order to inform their search.
They use evolutionary biology as a foundation for astrobiology, and astrobiology as a foundation for looking for life forms in outer space. SETI is a distinctly non-ID-friendly endeavor, and SETI researchers have made that quite clear.
If we get some signal that looks like a life form sent it, we could tentatively infer some life form was behind it. To the extent that we have evidence that the life form was similar to humans in various ways, we may gain confidence that human-like physical and mental abilities could have been responsible. The conclusion would be that a complex, physical, FSCI-rich organism with sense organs, brains, muscles, and other human-like attributes was responsible.
No, it isn’t well-defined at all. Nobody offers “life” as an explanation for anything in any scientific theory. Same with “intelligence”. Or “dexterity”, or “athleticism”. These are all general descriptive labels, not rigorously defined concepts. They can’t be used in scientific theories to explain anything. In AI, we never explain anything by appeal to “intelligence”, and if somebody asks what we mean by “intelligence” we just say “you know, whatever you might call ‘intelligent’ if you saw a human being do it”. That is a distinctly subjective and vague definition, but that is all it means – which is why it can never be offered as an explanation for any phenomenon.
We do not know if there exists anything except law+chance. (In other words, we do not know if dualism is true).
I don’t really know what we would infer from a protein… if we found a protein on a meteor, maybe we’d infer it was produced some other way? I really don’t know.
No good definition for “life”, no. And no theory about how proteins form aside from where we see them synthesized in biological systems.
gpuccio:
The ability to digest citrate that evolved during Lenski’s long-running experiment seems to contradict your claim. Why do you think it does not?
“If we get some signal that looks like a life form sent it, we could tentatively infer some life form was behind it”
Again, how would you know?
“…and if somebody asks what we mean by “intelligence” we just say “you know, whatever you might call ‘intelligent’ if you saw a human being do it”.”
Again, how could this person possibly know to what you are referring?
Onlookers:
Look above.
Ask yourself: who have put forward empirical evidence with explicit steps of inductive inference therefrom, and who have raised philosophical debates on distractive side-points, for dozens of comments now?
What does that tell you of the balance on the empirical merits?
GEM of TKI
PS: MG, the problem with the incremental function of a protein claim is that proteins exist in islands of function [as studies of protein spaces show].
Until you are on the shoreline of function, discussion of incremental improvement is irrelevant. The calculation you have seen, a simple model [Durston et al have a much more serious but less comprehensible model], is premised on observed function and specificity, AND complexity beyond a reasonable threshold. For a protein, 1,000 bits of basic information storage capacity [on a flat distribution across AA’s] comes in at 2^1,000 = 20^x => x ~232.
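As a side note, the closing arithmetic here (2^1,000 = 20^x => x ~232) is easy to check numerically. This is a minimal sketch of my own, not part of the original comment, assuming the standard 20-letter amino-acid alphabet:

```python
import math

# Re-express a 1,000-bit configuration space as a chain over the
# 20-letter amino-acid alphabet.
# Solving 2**bits == 20**x for x gives x = bits * log(2) / log(20).
bits = 1000
x = bits * math.log(2) / math.log(20)

print(f"1,000 bits ~ {x:.1f} residues (i.e. roughly {math.ceil(x)} AAs)")
```

This confirms the ~232-residue figure quoted in the comment.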
In a small warm little pond, the dozens to hundreds of proteins, many of about this complexity, plus storage media, codes, algorithms etc would have to be formed to create the system in which the proteins have function. So, the threshold is already telling us that the cosmos we observe does not have the search resources to credibly at-random create just ONE of the relevant proteins. And that is before we get into issues over chirality, uncontrolled reaction environment — contrast the controlled programmed chemistry in the Ribosome, with chaperoned AA’s carefully chained — and worse.
When we look at a first organism, we then have to innovate tens of millions of DNA base pairs of information to get embryologically viable novel body plans, dozens of times over. The search space challenge just exploded far beyond the already insuperable level. It is not just one protein in isolation, but even for that, a viable protein [just on its information content] is maximally hard to get to without intelligent control, which is exactly what is going on in the cell: programmed intelligent synthesis of specific nanotech, molecular-scale machines.
The FSCI threshold is very useful for helping us see that.
aiguy:
That’s a really good argument. You’ve made some really good points worth further consideration, and I appreciate your time and patient responses.
I see your point; from your perspective, we have no good reason to consider intelligence to be transferrable outside of complex biology we find our only tangible example in; yet it seems that is precisely what we must do in order to make the case that life itself was generated by similar intelligence – regardless of whether or not our particular form of life was intelligently engineered by another human-like organic intelligence. The complex-organic buck has to stop somewhere.
That leaves human life open to explanation by ID, but not all FSCI-producing life (from the perspective of your argument).
I think the weak link in your argument is the idea that we do not have good or sufficient reason to suspect (for the purpose of justifying ID theory for the origin of life in our universe, not just human life) that FSCI-producing intelligence is, or can be, an extra-biological phenomenon.
I think we can find one such “good reason”, which you supplied yourself when you said:
—————-
“In our constant, uniform, and repeated experience, FSCI is produced by complex physical FSCI-rich organisms. You may imagine that something else is capable of producing FSCI, but you have no scientific evidence of such a thing.
———————-
Yet … what produced complex, physical, FSCI-rich organisms, if the only thing that can produce such FSCI is the thing itself?
So, “something else”, besides “complex, physical, FSCI-rich organisms” must in fact be capable of producing FSCI-rich product. On that, we must agree.
It cannot be a reasonable conclusion, then, that FSCI is generated only by complex organisms (recursive problem). Since the ability to produce FSCI is a definitionally intelligent process, then intelligence (at least to the degree that is required to produce FSCI product above 1000 bits) **must** exist outside of complex organisms, unless one wishes to refer to infinite regress.
Thus, we have logically concluded that intelligence to the degree that it is defined as the ability to produce levels of FSCI over 1000 bits must in fact exist outside of complex biological organisms, or else complex, FSCI-rich organisms wouldn’t exist.
Gpuccio – I don’t normally do this to you but I really would like to know the answer to the question I asked above. Given the scenario I outlined …Suppose you went through exactly the mental processes of making a decision that you do at the moment and someone demonstrated how it was caused.
Would this suddenly mean that you were not exercising true free will but only had the illusion?
gpuccio:
UB,
William,
Thanks very much!
Yes, part of the problem with ID is its failure to provide an empirically-grounded definition of “intelligence”. The other problem is that if they really are talking about human-like minds including conscious awareness, then it does not follow from the evidence at hand that any such thing can exist without the complex physical information processing mechanisms we see in biological systems.
I really do have an open mind regarding what might be conscious… perhaps computers will someday be conscious and perhaps not – I think that is an open question. And perhaps disembodied immaterial mind exists too; I just don’t know. I don’t say these things are impossible, but I do object to ID’s rhetorical tricks that obscure the fact that these are precisely the questions that ID needs to address, rather than (as folks like Stephen Meyer says) simply inferring a cause already known to us as the cause of first life. That just isn’t true at all.
The answer is currently this: We do not know.
I suppose; else maybe FSCI exists eternally, or maybe there is an infinite number of universes so that an infinite number of them contains astronomically improbable FSCI… or…
It would seem to be the case, yes… unless perhaps infinite regressions actually do exist…
???? No. If you define “intelligent” as “that which produces FSCI”, then as I’ve shown, ID is reduced to a vacuous tautology (FSCI is produced by that which produces FSCI). No, you need another way to define intelligence in order to make this a synthetic rather than an analytic proposition (that is, in order to make a statement about the world rather than just a statement about the meaning of the word “intelligence”).
What do you mean by “intelligence” in that sentence? If all you mean is “able to produce FSCI”, then you really are arguing a very tight circle: You define intelligence to mean “that which produces FSCI”, and then you explain FSCI by appeal to “intelligence”.
I do agree (modulo the possibility of infinite universes, infinite regress, or eternal FSCI). But I strongly object to the use of the word “intelligence” if all you mean is “whatever creates FSCI”, because “intelligence” has so many other connotations (including “consciousness”, which most people – like gpuccio here – do associate with intelligence).
It is exactly this confusion – this equivocation – that makes ID so difficult to debate. The authors of ID have done a fine job in creating a way of talking about these things that obscure these implications and commitments.
molch:
You are simply wrong. Both molecules are esterases, and share the same fold:
“Mutational analysis of 6-aminohexanoate-dimer hydrolase:
Relationship between nylon oligomer hydrolytic and esterolytic activities”
“Based upon the following findings, we propose that the
nylon oligomer hydrolase has newly evolved through amino
acid substitutions in the catalytic cleft of a pre-existing esterase
with the b-lactamase-fold”.
Excuse my obvious dumbness, but I really can’t understand what you mean here. Could you please clarify?
MathGrrl:
The ability to digest citrate that evolved during Lenski’s long-running experiment seems to contradict your claim. Why do you think it does not?
I will just mention what Behe says:
.)”
Have you any new information to show that the mutation in Lenski’s work was complex? IOW, that the changes in AA sequence required for the function change were of higher complexity than a reasonable threshold? (I don’t require Dembski’s UPB. For me, 10^-40 would be enough. I always try to be generous with my interlocutors).
aiguy:
It is not tautological to define fsci-production as an aspect of intelligence, nor to conclude that fsci-production requires intelligence.
One might argue that intelligence is poorly defined overall, but fsci cannot be accomplished, insofar as we know, without teleological decision-making; it’s the hallmark of fsci, it’s definining characteristic – functionally specified complex information – information that is specified for a particular function that cannot be arrived at without consideration of the target function being applied to the design process.
That is how humans achieve FSCI-rich designs; they consider the target function and purposefully arrange materials to acquire the target. No other search mechanism is known to produce FSCI-rich output.
I don’t think it’s reasonable to quibble that this necessarily teleological, purposeful, goal-oriented manipulation of materials and processes cannot be defined at least as one aspect of intelligence, even if it is not a comprehensive definition.
Thus, the only things we currently know to produce such FSCI are FSCI-rich intelligent organisms (ourselves); whereas other FSCI-rich organisms, which don’t appear to be as “intelligent”, or intelligent at all (even if poorly defined) when compared to humans, do not produce FSCI-rich product.
Therefore, since humans are apparently the only reliable, consistent producers of FSCI-rich product out of tens of millions of species of FSCI-rich organisms; since, unless one wishes to quibble over terms, the significant difference between humans and those other animals is our intelligence-based ability to produce FSCI-rich product; and since FSCI-rich organisms cannot be their own explanation, it is reasonable to at least provisionally conclude that FSCI-producing intelligence **might** necessarily exist somewhere besides in FSCI-rich organisms.
None of that is “tautological”;
it is reasonable to infer that the production of FSCI, at least in the case of humans, requires intelligence;
and it is reasonable to causally connect it to intelligence;
it is reasonable to infer that the presence of intelligence is more important than just the presence of FSCI-rich biology (tens of millions of apparently unintelligent species);
it is reasonable to infer that FSCI-rich, intelligent organisms cannot create themselves, pop into existence ex nihilo, and to not refer to infinite regress;
it is therefore reasonable to theorize that FSCI-producing intelligence must exist somewhere besides in FSCI-rich biology, which makes ID a reasonable theory even if one works from the “regress” argument that it must be suitable for explanation of the first FSCI-rich organism in the universe.
Aig,
My question was how would you know if you received a signal that “looks like a life form sent it”?
Again, my question was how would this person know what you were referring to?
—aiguy: “Fair enough: “Nature” here means “anything that is NOT the product of HUMAN activity.”
OK. That is a good, precise definition. That would mean, however, that both the human mind, which many believe to be non-material, and the human brain, which is obviously material, are both natural. Thus, if an ancient hunter constructs a
Mark (#343):
I have tried to express my views as clearly as possible in my previous posts. I will repeat the essence here.
To prove determinism, you should demonstrate that, given the circumstances before an action, that action and only that action was possible. If anybody can do that, I will believe in determinism (but not in compatibilism: I will simply believe that I am a complete automaton).
What I believe, instead, is that given all the previous circumstances which act on an agent (both outer and inner), a certain range of actions, even if only slightly different, is possible. The origin of those different possible actions is always in a range of different (even slightly different) possible inner reactions to the pre-existing circumstances. That’s what I call “choice”, or “free will”.
You may say that such a choice would be irrational, or random, or void of value. I say that such a choice cannot be explained in conventional rational terms of cause and effect, because it comes from the transcendental self, which cannot be understood in those terms. But that does not mean that the choice has no moral value: on the contrary, the choice consists exactly in a moral “alignment” or “disalignment” with a deep intuition which the self has of what is true and good. It is a choice which is at the same time one of cognition and feeling (or probably, beyond both). And it can change our personal destiny for good or for bad, especially through the repeated exertion of good or bad choices.
351 continued:
Thus, if an ancient hunter constructs a spear, his activity would be classified as a natural cause indistinguishable from wind, air, and water, which are also natural causes.
gpuccio:
molch said: ?”
gpuccio said: “You are simply wrong. Both molecules are esterases, and share the same fold”
I know that both molecules are esterases and share the same fold. What that means is that the two FUNCTIONS share a similar MECHANISM. The functions themselves however, are vastly different. If you can’t see that, then your interpretation of the term function must be very different from the conventional use of the term.
molch said: .”
gpuccio said: “I really can’t understand what you mean here. Could you please clarify?”
You stated that the difference between Nylonase and a previously existing enzyme, that is assumed to be the predecessor to Nylonase, is small enough to be “vastly above the threshold” of what you consider to constitute CSI.
Under this definition of CSI, your composition of an English sentence requires no CSI, because the difference of any sentence you compose from at least one of all the other English sentences in existence before this one is above the threshold of complexity for CSI.
aiguy:
Frankly, I think that the belief that there is no evidence that consciousness can exist outside of a physical framework is only a prejudice of reductionism. For centuries human cultures have believed differently. I and many others believe differently today. And I think I have much evidence, but certainly most of it would not make any sense to reductionists.
BA has often cited NDEs, for example. To me, NDEs are very strong evidence of many things, including a significant independence of conscious experiences from the physical brain. Mario Beauregard has collected many valuable arguments in favor of the spiritual nature of consciousness in his book.
But nothing of that would ever seem meaningful to a true reductionist. True reductionists are really dogmatic people. I respect their views, as I respect those of all, but I cannot accept that their views be considered a reference for what can be true and what cannot.
So I stick to my empirical position: if all that we know shows that biological information has the same properties as designed objects, those properties that no non designed object exhibits, I maintain that for me the best explanation is that some conscious intelligent being has designed that information. Reductionists may refuse that explanation a priori, but that is only evidence of their dogmatism.
So, my science will continue to tell me that we have to look for a designer, and to gather as much information as possible about him from facts.
molch:
The biochemical function of both molecules is very similar: they are both esterases. And the difference in structure is very small. So, there is no variation in CSI.
When you say that degrading penicillin and digesting nylon are different functions, you are right, but you are talking of a higher-level function: not the direct function of the molecule, but the function of an existing system in which the molecule is integrated.
Now, we see in this case that two similar molecules (two esterases) are integrated into two different pre-existing systems (defense, nutrition) through a small tweak in the structure of the first which allows a shift in affinity for a substrate.
But the information for the defense system or the nutrition system has not been created de novo: it was already there. As already there was the plasmid system, which is probably an active agent in the generation and utilization of the molecular change.
IOW, the only real change which is (probably) attained through a random search is the mutation of those few amino acids which tweak the penicillinase structure so that it gets higher affinity for nylon. This adaptive change happens in the context of already existing, highly structured systems, and requires a very limited random search, which is perfectly in the range of microevolution, and implies no creation of CSI.
If nylonase, as darwinists have declared for a long time, had originated from a frameshift mutation of a completely different pre-existing sequence, then you would be right. But that was only a false theory.
molch:
what you say about words and sentences is simply wrong. Even if you calculate the combinatorics for words, and not for letters (which is not the case in protein domains, where the single amino acid is the unit), a long enough phrase or discourse is certainly beyond the threshold I have given, even starting from existing sentences. Just to make an extreme (but not too extreme) example, could you please show me how you can get the text of Hamlet from a pre-existing text with changes simple enough not to be considered CSI?
gpuccio,
Now you are calling information intelligent, which makes no sense at all to me. How can information be intelligent? Honestly I think your terms are getting more, rather than less, confused.
In any event, information comes from heredity, yes, but it also comes from the environment.
We have no examples of FSCI that does not come from biological organisms.
Oh, come on! They would certainly make sense to me… and I have actively sought exactly that evidence! The best I can find is from people like Robert Jahn, and it isn’t very good evidence at all. If you know of any better evidence, please provide a reference. But without that, you are projecting your unsupported beliefs onto others! Human cultures have believed all sorts of nonsense for centuries… that doesn’t make it true!
I have consistently pointed out that if ID wants to turn to paranormal research that would indeed make it a scientific endeavor. You ought to notice, however, that most ID proponents deny that paranormal research has anything to do with ID! If you wish to base ID on the strength of current evidence for mind/body independence, then simply say so, and be clear that ID is only scientific to the extent that your evidence for these things is scientific.
I’ll leave it at that – but someday we can have another thread about how sadly confused Dr. Beauregard (and his cohort Ms. O’Leary) are about these issues! (hint: placebos do not in any way discount physicalism!)
I don’t care about this. This is not an argument about anything.
I suggest you begin to argue what you’ve laid out here, then. Your position is that mind can operate independently of mechanism, and your evidence comes from the “spiritual nature” of mind and from evidence like NDEs. That’s fine – I’m all for it. Just be clear where your evidence really comes from.
****************
UB,
That’s right – it isn’t. I wasn’t attempting to provide a rigorous definition of what humans can do, because nobody is attempting to explain anything by saying humans were responsible. On Earth the questions regarding who sets fires, cheats on lotteries, and commits crimes are easily solved because humans are the only animals that exist here with these sorts of abilities.
Of course – we all have a tremendous amount of knowledge and experience regarding the abilities and proclivities of human beings! How could you doubt this?
Sorry, I don’t follow. If we found some English writing inside a volcano, we would obviously imagine some English-speaking human being managed to put it there. If instead we imagined something else – like another sort of animal, or a disembodied spirit – it would be pretty ridiculous, right?
Someone may very well not know, or disagree! It happens all the time… I build a system and show it to somebody, and they say “Hey – that is really intelligent!”, while I think to myself actually this was a pretty trivial hack. Or, I might say “Look at this program – it’s really intelligent!” and my audience says “Oh, I don’t think it’s so intelligent”.
In the end it couldn’t matter less, because my systems do what they do, and they are either useful or not, and whether we subjectively label them “intelligent” or not makes no more difference than whether we choose to call them “cool” or “interesting” or “awesome”.
***********
William,
Correct. However, it is tautological to define intelligence solely by reference to its ability to produce FSCI, and then attempt to explain the existence of FSCI by appeal to intelligence.
Human beings manage to produce FSCI by using their brains. We don’t know how brains work. Maybe they operate according to laws we already understand (physics and chemistry) or maybe they operate according to laws we do not understand, or maybe there is some irreducible mental substance involved with libertarian free will too. Who knows?
Perhaps this process you describe does not occur as you think it does. Perhaps (this is a theory of many neuroscientists, including Nobel-winning Gerald Edelman) the brain operates by random-variation and test, in a massively parallel search. As our consciousness and narrative brain functions gain access to the results, it seems as though we have somehow arrived at our solutions as if by magic – or by res cogitans.
Nobody knows how brains work. We do know, however, that we never see FSCI come from anything that does not have a functioning brain (i.e. a complex FSCI-rich mechanism).
Since you haven’t managed to define “intelligence” in any testable way apart from “able to produce FSCI”, then these statements are vacuous. Why is a human intelligent? Because we produce lots of FSCI. Why isn’t a worm intelligent? Because it doesn’t produce much FSCI. How do humans produce FSCI? By using their intelligence! etc.
You must EITHER define “intelligence” in terms of what it does (operationally) or in terms of how it does it (functionally). If you define it operationally, then your definition is “that which produces FSCI”… but you can’t then attempt to prove FSCI comes from intelligence, because you have simply defined it so.
Otherwise, you define it functionally (it uses this or that mechanism, neural networks or microtubules or immaterial mindstuff or whatever). If you do this, then you can attempt to demonstrate that these functions do indeed account for FSCI. Unfortunately, nobody has succeeded in this project. 30 years ago I thought we in AI would make progress toward this, and we have made some very interesting computer systems, but it is very clear that we still do not understand how FSCI is produced by human beings. And saying that we do it by being “purposeful” or “goal-oriented” says nothing at all – how do I know that a river isn’t “purposeful” when it finds a path to the sea?
Well no, I think other animals do produce FSCI-rich products. Termites and bees and wasps and spiders all produce artifacts which are astronomically unlikely to occur by any other means, even if they obviously lack the sophistication of human artifacts. Other animals that we often refer to as “intelligent” because of their learning and problem-solving abilities, like dolphins and chimps, don’t produce much FSCI at all!
??? Intelligence is defined as the ability to produce FSCI, and you are trying to explain why we are able to produce FSCI by saying it’s because we are intelligent? What am I missing here?
These explanations are always tautological as long as you define “intelligence” in terms of what it creates (FSCI, the exact thing you are trying to explain) instead of how it works. And since we don’t know how we think, you can’t define intelligence in terms of how it works.
Tautologically true, again.
Let’s pick one, single, clear definition of the term “intelligence” in the context of ID and then stick with it. Otherwise we’ll all be going around in circles forever. You can’t equivocate between the functional and operational definitions… unless your goal is to keep the debate as confused as possible 🙂
*****************
CJY,
Fair enough; let’s just stick with “FSCI”. If FSCI is an objective concept (and I do accept that, at least arguendo) then that will suffice.
The one thing we know that can produce FSCI is an animal with neurons, cells, sense organs, etc. We do not have any way of explaining how animals produce FSCI, so “foresight” is nothing but another way of saying “however we manage to produce FSCI”. But logically, this can’t be the cause of FSCI in biology. So the answer is that nobody knows how FSCI originally came to exist.
**********************
StephenB,
AIG: “Fair enough: “Nature” here means “anything that is NOT the product of HUMAN activity.”
SB: OK. That is a good, precise definition. That would mean, however, that both the human mind, which many believe to be non-material, and the human brain, which is obviously material, are both natural.
Yes.
Uh, no. Not all natural causes do the same thing. Gravity does not do the same thing as the strong nuclear force. Termites do not do the same things as crows. I really don’t know where you are going with this.
This started with my explanation of why SETI was different from ID:
SETI looks for things not found in nature (i.e. they look for things that are not known to occur outside of human activity), and if they find it, they will infer life (i.e. an FSCI-rich biological organism).
ID looks for things found in nature (i.e. things that human beings did not produce) and finding those they attempt to infer non-life (i.e. something that is not itself an FSCI-rich biological organism).
Ciao GP, KF, SB, WJM, and others…
Aig, I hope you’ll take a minute over the weekend and answer the three questions I asked.
– – – – – – –
– – – – – – –
– – – – – – –
I’d also like your clarification of something you stated earlier. You said “We in AI don’t have any need to define” intelligence. I find that fascinating. Is that similar to Biology, where no universal, uncontested definition for Life can be found, but everyone gets on with it anyway? Tell me, is this lack of a definition the result of no one trying to establish any characteristics associated with intelligence, or is it the result of a discipline-wide acknowledgement that no such characteristics of intelligence are necessary in order to artificially replicate them?
Aig,
I didn’t ask you if we found some words written in a volcano, I asked if we found some markings, how would you decide the cause of those markings (human or natural) based upon your definition:
The question is – based upon your definition, how would you decide? And if your definition needs additional fleshing out in order to begin to answer the question, then by all means do it.
How would someone disagree if they did not know what you meant to begin with? My question, again, is how would they know what you meant?
UB,
I’m trying my best to understand what you’re asking here. Whenever we see something that is familiar to us from some animal we know of, we believe that that animal was responsible. If we see a spider web, we figure a spider was responsible. Termite mound? Termites. Bee hive? Bees.
If we see some sort of “markings” in a volcanic formation, it depends what sort of markings they were. If it was recognizable as human writing, we would figure some human did it. If the markings looked like a lava flow, we would figure it was lava. If the markings didn’t look like anything we’d ever seen, then we wouldn’t know what caused them. This all seems pretty obvious.
I think I’ve gone over this. If you read SETI papers, they discuss the various things they look for.
First, they look at everything we see that is not caused by life forms and eliminate those from what they’re looking for. So they don’t interpret the cosmic background radiation as a sign from a life form, for example.
Next, they look for things that a life form would send if they wanted (like us) to communicate over long distances. These would be narrow-band signals that don’t arise from any known astrophysical events. SETI doesn’t look for codes or languages or anything that looks complex – they look for simple, narrow-band E-M transmissions.
Finally, if they do pick up some signal like that, they would attempt to locate its origin to see if it comes from a place hospitable to life as we know it. If it did, they might claim that they really do have evidence for astrobiological signals. Otherwise, they wouldn’t be able to say what it was caused by. Of course SETI hasn’t found anything like that so far, so I guess we’ll just have to wait and see.
Again, my answer is there may very well be disagreement about what some people think should be called “intelligent”. It is a very loose, informal, subjective label. I may say something is “beautiful”, for example, and somebody might agree with me while somebody else might disagree.
We could provide some operational definition of “beauty” of course. We might say a face is beautiful if the features are symmetric to within some tolerance, or if the skin is smooth, etc… but these metrics would be arbitrary (not theory-driven) and also they would only apply to faces and not to landscapes for example.
Same with “athleticism”. We all might agree Michael Jordan is athletic, but we might disagree about Tiger Woods. And would we call a gorilla athletic? How about a flea? A worm? There are no right or wrong answers here – they are just subjective, descriptive labels we use informally.
Same with “dextrous” or “interesting” or… “intelligent”. People use these words all the time and we have a general, subjective, intuitive sense of what they refer to. But none of these concepts have rigorous meanings, so none of them can ever be used in a scientific context as an explanation of some phenomenon. In order to provide a scientific explanation of something, we need to provide a precise description of it that will enable independent researchers to reliably agree on when they are observing it and when they are not.
Exactly so, yes.
It simply isn’t of any utility to even attempt it. Who cares what label somebody applies to my program? AI programs do what they do – they design circuits, or they recognize faces, or they learn to categorize documents, or they play chess, or they play Jeopardy. If you’d like to call these systems “interesting” or “cool” or “awesome” or “intelligent”, that’s fine, but it doesn’t tell us anything new about these systems!
“Intelligence” is not a thing, not a cause, not a force. It is a property of complex systems… but it isn’t a well-defined property like mass or charge or Kolmogorov complexity. It is a property of complex systems that is a subjective, informal, descriptive label, like “interesting” or “athletic” or “dextrous” or “awesome”.
Have a good weekend, UB!
gpuccio:
“When you say that degrading penicillin and digesting nylon are different functions, you are right, but you are talking of a higher level function: not the direct function of the molecule, but the function of an existing system in which the molecule is integrated.”
Yes, I am indeed talking about what you call the “higher level function’ – because that is the function that in fact confers the survival value to the individuals that carry the trait. The molecule’s activity in the cell is the mechanism behind the actual functional, survival-relevant trait. If that kind of function is not the one addressed by CSI or FCSI or dFSCI or whatever the current flavor is, then I don’t know what these concepts could possibly mean at all. If the fact that “we see in this case that two similar molecules (two esterases) are integrated into two different pre-existing systems (defense, nutrition)” does not, according to you, constitute a novel function, then I don’t know what in the world would.
“Even if you calculate the combinatorics for words, and not for letters (which is not the case in protein domains, where the single aminoacid is the unit), a long enough phrase or discourse is certainly beyond the threshold I have given, even starting from existing sentences.”
Well here is your inconsistency: you want me to start assembling meaningful sentences from random words, but you know very well that, just like in the Nylonase example, where “this adaptive change happens in the context of already existent, highly structured systems, and requires a very limited random search”, sentences are not developed from searching random words, but using rules of grammar and word sequences in the context of already existent, highly structured systems. No matter how “intelligent” or “complex” or “foreseeing” you are, you would be completely unable to write a single meaningful English sentence WITHOUT the previous knowledge and practice of all these already existent, highly structured systems. So, you make pre-existing words fit into pre-existing grammatical frameworks by pre-existing rules every day, and you claim that the resulting function of the sentence, its meaning, has CSI.
“Just to make an extreme (but not too extreme) example, could you please show me how you can get the text of Hamlet from a pre-existing text with changes simple enough not to be considered CSI?”
Your example in this context is, in fact, too extreme, because I have never claimed that anything as complex (referring to its length) as Hamlet has ever come about in one single evolutionary step. However, a page out of Hamlet could easily be obtained by a reasonably small number of simple re-combinations, “point-mutations”, and frame-shifts (all within the existing complex, highly structured set of rules) from other existing English sentences. Do you doubt that?
molch, the primary reason to presuppose that nylonase is not the random generation of functional information is because it rapidly adapts to detoxify the nylon from the environment, strongly suggesting a designed mechanism, but more importantly:
Nylon Degradation – Analysis of Genetic Entropy
Excerpt: At the phenotypic level, the appearance of nylon degrading bacteria would seem to involve “evolution” of new enzymes and transport systems. However, further molecular analysis of the bacterial transformation reveals mutations resulting in degeneration of pre-existing systems.
This was a very interesting discussion to read and it makes me really glad to be part of this community.
I just want to briefly comment on a few points that I disagree with.
-“rejected the ‘agent-causal’ view of libertarianism because the idea that a substance, rather than a property of the substance, can cause anything is unintelligible.”
I’m sorry but there is absolutely nothing unintelligible about a substance causing something. This seems like more of a semantic issue than anything else… a property of a substance is a good explanation while a substance is not? Come on.
I think gpuccio is absolutely right in referring to it as the transcendent self (or ‘I’). Whether we call it a substance, a pumpkin or the big bad wolf (given the unwarranted and militant stance some philosophers hold against dualism) is irrelevant. In short, what we call it is of no importance.
I also think that O’Connor is perfectly within his right to insist on the irreducibility of the agent. It is not the case that for an explanation to be acceptable one must provide a reductionist or a mechanomorphic analysis of the object in question. Furthermore, I believe this approach is very misguided as the agent is in fact a subject and not an object and ought to be addressed as such.
-“Agent-causal libertarianism seriously undermines human rationality because it leads to the conclusion that humans make choices for no reason whatsoever.”
No. O’Connor defends the position that according to agent causality a framework is provided in which an agent may choose to utilize a specific reason, although the reason itself is not sufficient in determining behavior. Also, just because some actions of the agent might be spontaneous it does not necessarily follow that all the actions will be. So agent causality does not undermine rationality whatsoever.
I also noticed that you say that the ability to act without reason is irrational. If by that you mean that the action itself is irrational then we’re fine. But if by that you mean the notion that one can act spontaneously without a specific reason to do so, then no. There is nothing irrational about the ability to act irrationally. In fact I believe that our ability to act knowingly and willfully in an irrational way (for example jump on the bed, repeat the words ‘good morning’ in 3 different languages and then kiss the palm of your right hand – something that I assume you’ve never done before nor have any specific motive in doing so) is perfectly coherent and if anything supportive of the view of agent-causation.
Finally, I believe much of the literature on free will/determinism is ridden with definitional and semantic issues and gimmicks as well as false dichotomies which exacerbate the problem instead of alleviating it. It’s precisely for that reason that I find it more fruitful to use simple language in addressing the matter.
-“Nobody knows if human brains do anything that is not by “law and chance”. If we do, that would mean dualism is true, and that is a metaphysical speculation that is not supportable scientifically.”
The term chance is a very loosely defined term, and as I have explained in a previous thread it is often a mere substitute for human ignorance.
Also, it would be more accurate to state that currently the discipline of science is incapable of addressing the matter and, furthermore, it is unknown whether it ever will be. The reason I say this is not to sound rude but rather to remind all, myself included, of the limits of science and how it is not the ultimate decider of warranted knowledge. It is very far from being that actually – in fact, it will never be that – and the reason I explicate it is not so much to undermine the scientific enterprise but rather to keep it honest and grounded so it doesn’t turn into self-refuting scientism.
This is for everyone!
I think people that support the free will thesis such as myself will find this video of John Conway (mathematician from Princeton) very interesting. The Free Will Theorem as he calls it.
Video:
Article:
Enjoy!
i.e. mulch, if neo-Darwinian evolution can’t even pass the first step of the fitness test (increasing complexity/information above what was already present in the parent strain), why in blue blazes should we presuppose that it could produce countless volumes of encyclopedias of functional information that vastly surpass man’s ability to code? The amount of information in human DNA is roughly equivalent to 12 sets of The Encyclopaedia Britannica—an incredible 384 volumes worth of detailed information that would fill 48 feet of library shelves! If you were to try to ‘quantum teleport’ just one human body (change a physical human body into “pure information” and then ‘teleport’ it to another physical location), it would take at least 10^32 bits just to decode the teleportation event, or a cube of CD-ROM disks 1000 kilometers on 1 side, and it would take over one hundred million centuries to transmit all that information for just one human body even with the best optical fibers conceivable!
(A fun talk on teleportation – Professor Samuel Braunstein –)
On top of that the entire digital output of the entire world is only 10^21 bytes or 10^22 bits and Werner Gitt observes that the storage capacity of just “1 cubic cm of DNA is 10^21 bits. (DNA – deoxyribonucleaic acid.)”
bornagain:
You seem to completely miss the point of the fitness test that you yourself propose. A fitness test is only meaningful in a particular environment, most importantly the one the organism in question currently lives in. In the environment that contains nylon, the nylon-eating strain is obviously a lot more fit than the strain that doesn’t eat nylon. And that the vice versa scenario is also true is not just unsurprising, it would be surprising if it were otherwise. IF there were any organism on this earth that was equally superbly fit in any environment, then that organism would have already out-competed every other organism in any environment there was and is, and we would have a single species of organism left. Humans have gotten pretty successful in out-competing many other species in a variety of environments. But they are obviously not equally fit in all existing environments, because if a human ends up in the middle of the Pacific without a boat, he/she is obviously awesomely unfit in that environment, and will most likely die very quickly. Plankton, on the other hand, are doing just fine out there.
actually mulch, bacteria have dramatically terra-formed the environment of this planet to make it fit for higher life forms. Higher life forms that the bacteria could care less about:
Michael Denton – Stromatolites Are Extremely Ancient – Privileged Planet – video
Ancient Microorganisms Helped Build 3.4-billion-year-old Stromatolite Rock Structures
Both the oldest Stromatolite fossils, and the oldest bacterium fossils, found on earth demonstrate an extreme conservation of morphology which, very contrary to evolutionary thought, simply means they have not changed and look very similar to Stromatolites and bacteria of today.
Shark’s Bay – Modern Stromatolites – Pictures
Contrary to what materialism would expect, these very first photosynthetic bacteria, found in the fossil record and by chemical analysis of the geological record, are shown to have been preparing the earth for more advanced life to appear from the very start of their existence.
cont. mulch; ‘magical’
The following articles explore some of the other complex geochemical processes that are also involved in the forming of the red banded iron and other precious ore deposits:
Rich Ore Deposits Linked to Ancient Atmosphere – Nov. 2009
Excerpt: Much of our planet’s mineral wealth was deposited billions of years ago when Earth’s chemical cycles were different from today’s. … Joye has shown that oxygen levels in parts of the Gulf contaminated with oil have dropped. Since microbes need oxygen to eat the petroleum, that’s evidence that the microbes are hard at work. (Thank God)
Here are a couple of links showing the crucial link of a minimal level of metals to biological life:
Transitional Metals And Cytochrome C oxidase – Michael Denton – Nature’s Destiny
cont. mulch;. | https://uncommondescent.com/intelligent-design/intelligent-design-and-the-demarcation-problem/ | CC-MAIN-2021-25 | refinedweb | 69,806 | 51.68 |
THE SQL Server Blog Spot on the Web
What is the simplest way to drive a CPU to 100% using T-SQL? If it’s a language like C++ or C#, the simplest way is to create a tight loop and perform some work that uses CPU cycles inside the loop. For instance, the following tight loop in C# can peg a CPU to 100%:
while (true)
Math.Sin(2.5);
Note that on a multi-processor machine, this code may end up being executed on more than one processor, and you may not see any particular processor being pegged at 100%. But the cumulative effect is equivalent to keeping a single processor busy.
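If you want the load to show up on one specific processor rather than wandering across cores, one option is to restrict the process's affinity before entering the loop. This is my addition, not part of the original post, and it uses the Windows-only `ProcessorAffinity` property:

```csharp
using System;
using System.Diagnostics;

class Peg
{
    static void Main()
    {
        // Bind the whole process to CPU 0 so the busy loop shows up
        // as a single processor pegged at 100% (Windows-only API).
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;

        while (true)
            Math.Sin(2.5);
    }
}
```

The affinity value is a bit mask: `(IntPtr)1` means CPU 0, `(IntPtr)2` means CPU 1, and so on. This mirrors, on the client side, what the affinity mask option does on the SQL Server side.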
What about T-SQL? Does the following T-SQL code keep a CPU 100% busy?
set nocount on
go
declare @a float
while(1=1)
begin
select @a = cos(2.5);
end
If you try the above SQL batch as is in SQL Server Management Studio from a remote client, it’s unlikely that it would peg a CPU on the server side at 100%. When I ran the above code, it kept a single CPU at about 20~30%. So apparently, the loop in this little piece of T-SQL code is not tight enough to keep a CPU fully occupied.
Note: To help see the CPU usage clearly, for all the T-SQL tests in this post I explicitly set the affinity mask option on the SQL Server instance to use only a single processor.
Is there a way to execute a tight loop in T-SQL? Yes, all you need to do is to put the above code inside a stored procedure:
create proc p_test
as
declare @a float
while(1=1)
begin
select @a = cos(2.5);
end
If you execute proc p_test, you would see a CPU being pegged at 100%.
What is the difference between executing a loop in a SQL batch and inside a stored procedure for them to exhibit such dramatically different CPU usage?
The difference is that while executing the loop in a batch, SQL Server has to yield numerous times to chat with the client, whereas when it’s executing the loop inside a stored procedure, it doesn’t. [Updated. Note that as Andy Kelly pointed out, this was when SET NOCOUNT ON was in effect. All bets are off if you don't do SET NOCOUNT ON.]
An example should make this abundantly clear. Let’s modify the loop a bit to iterate a finite number of times (say 10,000,000 times):
The batch:
declare @a float,
        @i int = 1
while(@i<=10000000)
begin
select @a = cos(2.5);
select @i += 1;
end
The stored procedure:

alter proc p_test
as
declare @a float,
        @i int = 1
while(@i<=10000000)
begin
select @a = cos(2.5);
select @i += 1;
end
In my test environment, the batch finished in about 33 seconds, whereas the stored procedure finished in about 4 seconds. The stored procedure was about eight times faster!
It was most revealing to look at the client statistics. With the batch, the client sent one TDS packet to the server, but received 151,623 TDS packets from the server. In contrast, with the stored procedure, the client sent one TDS packet and received one TDS packet.
It’s dramatically chattier to execute the batch than the stored procedure. In fact, it’s so chatty that it is unable to keep the CPU busy doing useful work.

If you are an application developer, you should find a new job if you can't do multi-threaded programming. What if you are a DBA? Probably not, not to the same extent anyway.
Some in our community clearly realize the importance of being able to do things in parallel. Adam Machanic, for instance, has put a lot of effort into this area and is trying hard to spread the message on parallelism.
Unfortunately, the community in general does not seem to be as convinced. Perhaps when you've become accustomed to finding workarounds to drive nails with a wrench, you may not realize that hammers are a much better tool for that task.
Although doing parallelism does not necessarily mean that every DBA should become conversant in multi-threaded programming, I’d argue that it’s a good skill to have, and once you are comfortable with it, you’ll find plenty of opportunity to fruitfully apply it.
Here is a little anecdotal evidence from my recent experience.
So I needed to automate the removal and creation of a lot of replication setups, and for that I fully automated the generation of the replication delete and create scripts. I also automated the execution of these scripts. However, one potential issue with executing these scripts is that they are essentially DDL scripts, and as you may know, any DDL change can easily get blocked. Thus, I had to automate the monitoring of blocking. Furthermore, I needed to automatically remove any blocking if the execution of a replication script got blocked.
Now, this handling of blocking needed to be granular and precise in that I needed to clear a spid only if it was blocking the execution of a replication delete or create script. The solution I ended up with is to have the code that controlled the replication script execution spawn a monitoring thread for each of the servers (i.e. the publishers and subscribers), and as the main control code cycles through these servers to apply the replication scripts, it goes through the following logic:
The actual logic is slightly more complex because when we execute the replication create/delete script on a publisher, the publisher will make a connection to the subscriber and/or the distributor, we don't want those connections be blocked either. The monitoring thread must take care of that.
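The post describes this logic only in outline, so here is one possible shape of it as a sketch. Everything in it is my assumption, not the author's code: the server list, the `ReplScriptRunner` program-name convention used to recognize the replication-script connection, the DMV query, and the polling interval are all hypothetical.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class BlockingMonitor
{
    static volatile bool stop = false;   // set by the control code when all scripts are done

    // One of these runs per server (publisher, subscriber, distributor).
    static void MonitorServer(object serverName)
    {
        string connStr = "server=" + serverName + ";database=master;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            while (!stop)
            {
                // Find any session blocking the replication-script connection.
                // The program_name filter is a hypothetical convention: the control
                // code would set Application Name=ReplScriptRunner on its connection.
                SqlCommand cmd = new SqlCommand(
                    "select r.blocking_session_id " +
                    "from sys.dm_exec_requests r " +
                    "join sys.dm_exec_sessions s on r.session_id = s.session_id " +
                    "where s.program_name = 'ReplScriptRunner' " +
                    "  and r.blocking_session_id <> 0", conn);
                object spid = cmd.ExecuteScalar();
                if (spid != null && spid != DBNull.Value)
                {
                    // Clear the blocker so the DDL-like replication script can proceed.
                    SqlCommand kill = new SqlCommand("kill " + (short)spid, conn);
                    kill.ExecuteNonQuery();
                }
                Thread.Sleep(1000);   // poll once a second
            }
        }
    }

    static void Main()
    {
        string[] servers = { "PUB1", "SUB1" };   // hypothetical server list
        foreach (string s in servers)
        {
            Thread t = new Thread(MonitorServer);
            t.IsBackground = true;
            t.Start(s);
        }
        // ... main control code applies the replication scripts here ...
    }
}
```

A real implementation would also need the extra handling the author mentions for the connections the publisher itself makes to the subscriber and distributor.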
Although most DBA work appears to be conducive to single-threaded automation, tasks like this are not. If you think using multi-threading for this task is overkill, or that there is a better way, I’d appreciate it if you leave a comment.
My previous post showed a simple test that appears to suggest that you may experience significant performance degradation if multiple users are calling the same SQLCLR function at the same time and they are all catching a lot of exceptions.
However, it’s not clear whether that behavior is limited to SQLCLR or applies to .NET in general. To see if I would run into similar behavior, I wrote a simple C# program for a quick test. To simulate the concurrent exception-handling load, the test program spawns 10 background threads, each calling the following method nonstop (the complete program is listed at the end of this post):
static void TryCatch()
{
int c = 0;
for(int i = 0; i < 200000; i++)
{
try { c= i/c; } catch (Exception) { c = 1;}
}
}
Note that variable c is assigned value 0, thus forcing a divide-by-zero exception, and this exception is handled 200,000 times in a loop.
The elapsed time of the method is measured both when there are no additional background threads and when 10 additional threads are running.
On my old two-core 2GB PC workstation, the following output from the program is typical among many runs:
C:\junk>test2.exe
TryCatch() without any background thread = 15617968 ticks
TryCatch() without any background thread = 15559616 ticks
TryCatch() without any background thread = 15566064 ticks
TryCatch() without any background thread = 17472496 ticks
TryCatch() without any background thread = 15782952 ticks
Thread 0 Called TryCatch() 1000 times.
Thread 1 Called TryCatch() 1000 times.
Thread 2 Called TryCatch() 1000 times.
Thread 3 Called TryCatch() 1000 times.
Thread 4 Called TryCatch() 1000 times.
Thread 5 Called TryCatch() 1000 times.
Thread 6 Called TryCatch() 1000 times.
Thread 7 Called TryCatch() 1000 times.
Thread 8 Called TryCatch() 1000 times.
Thread 9 Called TryCatch() 1000 times.
TryCatch() with 10 background threads = 17498336 ticks
TryCatch() with 10 background threads = 17535984 ticks
TryCatch() with 10 background threads = 17664424 ticks
TryCatch() with 10 background threads = 17515200 ticks
TryCatch() with 10 background threads = 17465312 ticks
TryCatch() with 10 background threads = 17498432 ticks
TryCatch() with 10 background threads = 17508656 ticks
TryCatch() with 10 background threads = 17710856 ticks
^C
At least for this test, the adverse concurrency impact that we saw with SQLCLR--and reported in the previous post--is not observed.
Although it’s not strictly an apple-to-apple comparison between this test without SQLCLR and that described in the previous post with SQLCLR, the end user experience is so different that it calls into question why SQLCLR does not seem to handle many concurrent exceptions as gracefully. I don’t have an answer.
I have absolutely no knowledge of how SQLCLR works internally, and can’t explain the concurrency behavior observed in the previous post.
By the way, when I set variable c to 1 in the TryCatch() method, thus avoiding the exception, its concurrency impact (or the lack of) did not change much, if at all.
Anyway, here is the test program. For the output presented above, the program was compiled with .NET Framework 3.5.
using System;
using System.Diagnostics;
using System.Threading;
public partial class Test
{
    public static void Main()
    {
        Stopwatch stop_watch = new Stopwatch();

        // warming up a bit
        for (int i = 0; i < 5; i++)
            TryCatch();

        // measure the elapsed time without any additional background threads
        for (int i = 0; i < 5; i++)
        {
            stop_watch.Reset();
            stop_watch.Start();
            TryCatch();
            stop_watch.Stop();
            Console.WriteLine("TryCatch() without any background thread = {0} ticks",
                stop_watch.ElapsedTicks);
        }
        Thread.Sleep(2000);

        Thread[] user_threads = new Thread[10];
        for (int i = 0; i < 10; i++)
        {
            user_threads[i] = new Thread(new ThreadStart(StartTryCatch));
            user_threads[i].Name = i.ToString();
            user_threads[i].IsBackground = true;
            user_threads[i].Start();
            Thread.Sleep(10);
        }
        Thread.Sleep(5000);

        // now measure the elapsed time again with 10 additional threads running
        for (int i = 0; i < 20; i++)
        {
            stop_watch.Reset();
            stop_watch.Start();
            TryCatch();
            stop_watch.Stop();
            Console.WriteLine("TryCatch() with 10 background threads = {0} ticks",
                stop_watch.ElapsedTicks);
        }

        // this will never be reached. You have to Ctrl-C to stop the program
        for (int i = 0; i < 10; i++)
        {
            if (user_threads[i] != null)
            {
                user_threads[i].Join();
            }
        }
    }

    static void StartTryCatch()
    {
        int i = 0;
        while (true)
        {
            TryCatch();
            if (i % 1000 == 0)
                Console.WriteLine("Thread {0} Called TryCatch() 1000 times.", Thread.CurrentThread.Name);
            i++;
        }
    }

    static void TryCatch()
    {
        int c = 0;
        for (int i = 0; i < 200000; i++)
        {
            try { c = i / c; } catch (Exception) { c = 1; }
        }
    }
}
After reading Adam Machanic’s comment to my previous post, I started to wonder if what I saw was all due to the fact that SQL Server blindly chooses to allocate at least one extent in response to each call to WriteToServer(), perhaps in its zeal to achieve minimal logging.
Such an algorithm sounds a bit crazy, or even dumb. But anything is possible.
To check if this is indeed the case, I wrote a simple test program below with most of the parameters hard coded. The only exception is the payload, i.e. the number of rows that WriteToServer() copies to the server. BTW, the target database needs to be in either bulk_logged or simple recovery mode.
using System;
using System.Collections;
using System.IO;
using System.Data;
using System.Data.SqlClient;

class Loader
{
    static void Main(string[] args)
    {
        int payload = 0;
        try
        {
            payload = int.Parse(args[0]);
        }
        catch
        {
            Console.WriteLine("***Err: The argument must be an integer.");
            Environment.Exit(-1);
        }

        int row_counter = 0;
        int write_2_server_counter = 0;
        int extent_counter = 0;

        using (SqlConnection bcp_conn = new SqlConnection("server=myServer;database=myDB;Integrated Security=SSPI;"),
                             conn = new SqlConnection("server=myServer;database=myDB;Integrated Security=SSPI;"))
        {
            try
            {
                bcp_conn.Open();
                conn.Open();

                try
                {
                    SqlCommand cmd = new SqlCommand();
                    cmd.Connection = conn;
                    cmd.CommandType = CommandType.Text;
                    cmd.CommandText = "if object_id('dbo.junk') is not NULL " +
                                      "   drop table junk";
                    cmd.ExecuteNonQuery();

                    cmd.CommandText = "CREATE TABLE dbo.junk( " +
                                      "   c_id int NOT NULL, " +
                                      "   c_data char(950) NOT NULL)";
                    cmd.ExecuteNonQuery();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.ToString());
                    Environment.Exit(-1);
                }

                DataTable dt = new DataTable();
                dt.MinimumCapacity = 20000;

                DataColumn c_id = new DataColumn();
                c_id.DataType = System.Type.GetType("System.Int32");
                c_id.ColumnName = "c_id";
                dt.Columns.Add(c_id);

                DataColumn c_data = new DataColumn();
                c_data.DataType = System.Type.GetType("System.String");
                c_data.ColumnName = "c_data";
                dt.Columns.Add(c_data);

                using (SqlBulkCopy bcp = new SqlBulkCopy(bcp_conn, SqlBulkCopyOptions.TableLock, null))
                {
                    bcp.DestinationTableName = "dbo.junk";
                    bcp.BatchSize = 20000;
                    bcp.BulkCopyTimeout = 30000000;

                    for (int i = 1; i <= 1000; i++)
                    {
                        row_counter++;
                        DataRow row = dt.NewRow();
                        row["c_id"] = i;
                        row["c_data"] = "12345667890abcdefgh";
                        dt.Rows.Add(row);

                        if (row_counter % payload == 0)
                        {
                            bcp.WriteToServer(dt);
                            write_2_server_counter++;
                            extent_counter = GetExtentCount(conn);
                            Console.WriteLine("{0},{1}", write_2_server_counter, extent_counter);
                            dt.Rows.Clear();
                        }
                    }

                    if (dt.Rows.Count > 0)
                    {
                        bcp.WriteToServer(dt);
                        dt.Rows.Clear();
                    }
                }
            } // using
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
                Environment.Exit(-1);
            }
        }
    } // Main()

    static int GetExtentCount(SqlConnection conn)
    {
        int extent_count = 0;
        try
        {
            SqlCommand cmd = new SqlCommand();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "select cast(u.total_pages/8 as int) " +
                              "  from sys.partitions p join sys.allocation_units u " +
                              "    on p.partition_id = u.container_id " +
                              " where p.object_id = object_id('junk')";
            extent_count = (int)cmd.ExecuteScalar();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }
        return extent_count;
    }
} // class Loader
If you run this program (I ran it against a SQL Server 2008 instance), you'll find that as long as the payload parameter value you supply on the command line is 64 or less, SQL Server will allocate a new extent for every call to WriteToServer(), no matter how little data is being written to the server and no matter how much free space is already allocated to the table. Since an extent is eight 8KB pages, a WriteToServer() call that copies a single ~950-byte row still costs a full 64KB of newly allocated space.
This behavior is utterly striking!!!
If the payload parameter value is 65, the amount of data no longer fits on one extent, and understandably SQL Server needs to allocate (and does allocate) two extents for each WriteToServer() call.
I'm all for reducing the amount of transaction logging whenever appropriate. Furthermore, if this space cost were the result of my explicitly configuring something to favor minimal logging, I'd be willing to live with the consequence. But nowhere in this test did I say that I wanted minimal logging at any cost.
One may argue that this is an edge case because people are not expected to (or should not) call WriteToServer() with a small payload. But regardless, the potential space implication of doing so should at least have been clearly documented so that people can explicitly protect their code from falling into it. And in the software business, it's often the edge cases that trip people up:
[Table: load duration (seconds) by payload, i.e. the number of rows per WriteToServer() call. The recovered values include payloads such as 200, 300, 20,000, and 3,000,000 rows, and durations such as >1440, ~1440, 397, 278, 270, and 238 seconds, but the original row pairing was lost in extraction.]
Lookup tables are widely used in database applications for good reasons. Usually, a lookup table has a small number of rows and looking it up with a join is fast, especially when the table is already cached.
Recently, I needed to update every row in many relatively large tables, each of which was identically structured, had ~25 million rows, and was ~30GB in size. The tables were denormalized to include both a lookup index column (i.e. CategoryID, which was an integer) and the corresponding lookup value column (i.e. CategoryName, which was a char(50)). The batch update I was performing was to ensure that the CategoryName column of these tables had the correct value matching the CategoryID. The CategoryID-to-CategoryName mapping was defined in a small lookup table, CategoryLookup, with 10 rows.
Question
What would be the most efficient method to perform this batch update?
Three lookup methods
For the batch update scenario described above, you have three alternatives to look up the CategoryName values (assume that the table to be updated is called Transactions). Method 1 is an inline CASE expression (the branches below mirror the 10 rows of the CategoryLookup table defined in the test setup):

update Transactions
   set CategoryName = case CategoryID
          when 1 then 'abc1'  when 2 then 'abc2'  when 3 then 'abc3'  when 4 then 'abc4'
          when 5 then 'abc5'  when 6 then 'abc6'  when 7 then 'abc7'  when 8 then 'abc8'
          when 9 then 'abc9'  when 10 then 'abc10'
       end

Method 2 is a straight join (the JOIN method):

update t1
   set t1.CategoryName = t2.CategoryName
  from Transactions t1 join CategoryLookup t2
    on t1.CategoryID = t2.CategoryID

Method 3 is a correlated subquery (the Subquery method):

update Transactions
   set CategoryName =
       (select CategoryLookup.CategoryName
          from CategoryLookup
         where CategoryLookup.CategoryID = Transactions.CategoryID)
You can also do the lookup with a scalar function. But it’s so horrifically inefficient that you should not seriously consider it. It’s not interesting to include in this discussion. In addition, you could do the lookup with an inline table valued function, which has a similar performance profile as that of the inline CASE method.
It should be highlighted that method 2 (the JOIN method) and method 3 (the Subquery method) are not semantically identical. For instance, if the Transactions table has a CategoryID value that is not present in the CategoryLookup table, the Subquery method will, if permitted, set the CategoryName column to NULL (or the update will fail if NULL is not permitted), whereas the JOIN method will leave the CategoryName value unchanged. For the scenario we are interested in, the results of these two methods are identical: all the CategoryID values in the Transactions table are also in the CategoryLookup table, and the mapping from CategoryID to CategoryName in the CategoryLookup table is one to one.
I ran a series of controlled tests that mimicked the update scenario described previously. To keep the tests more manageable, I used a smaller and artificially created Transactions table that had 5,000,000 rows and was ~5GB in size. You can find the DDLs and the test script at the bottom of this post.
Test results and practical implications
I made sure that the results shown below were stable in that (1) they were taken from 50 repeated tests with a small number of outliers thrown out, and (2) the remaining results were inspected and made sure that the variances were relatively small among them and the values exhibited a consistent pattern.
Clearly, if you do a massive number of lookups (like what I did in this test), the cumulative cost can be quite visible. In fact, in this test using an inline CASE expression was more than twice as fast as lookups using either a subquery or a straight join. As the number of rows increases, you can expect to see this difference (or the cost of doing lookups) grow more prominent. So, if you are doing a very large batch update, it’s definitely worth replacing the table lookups with an inline CASE expression for better performance.
The difference between the CASE method and the table lookups (either the Subquery method or the JOIN method) remained stable across different test environments. But the difference between the Subquery method and the JOIN method was more subtle. In fact, if you run the same test in a different environment, you may see different relative performance between them. In some environments, the Subquery method can perform significantly better than the JOIN method.
Although there was a significant performance penalty when using Subquery or JOIN lookups in a massive update, this does not mean you should jettison lookups in your individual transactions. Because the marginal cost of an individual lookup is vanishingly small compared to many other performance-related factors, you'd lose much more in terms of code reuse, flexibility, and so on if you started to embed lookups inline. To emphasize: the difference between the CASE method and the Subquery method in the test was ~34 seconds. Dividing 34 seconds by the 5,000,000 lookups the update performed gives 6.8 microseconds as the marginal cost of an individual lookup.
There is no surprise that avoiding a massive number of table lookups could give you better performance. But it’s still good to be able to appreciate it with some concrete numbers. My update of all those 25-million-row tables mentioned at the beginning of this post took more than 10 hours to complete and I used the subquery method. Had I had the results reported here, I could have finished the same update process in five hours. That would have been a very nice saving!
Test setup
The lookup DDL and data:
drop table CategoryLookup
create table CategoryLookup(CategoryID int, CategoryName char(20))
;with tmp(a, b) as (
   select 1, 'abc' + cast(1 as varchar(5))
   union all
   select a+1, 'abc' + cast(a+1 as varchar(5))
     from tmp
    where a < 10
)
insert CategoryLookup
select * from tmp
create clustered index cix_CategoryLookup on CategoryLookup(CategoryID)
The Transactions test table DDL and data:
drop table Transactions
create table Transactions(CategoryID int,
CategoryName char(50),
filler char(1000))
declare @i int
set @i = 1
begin tran
while @i <= 5000000
begin
   insert Transactions
   select @i % 10 + 1, 'abc', 'filler'
   if @i % 100000 = 0
   begin
      commit tran
      begin tran
   end
   set @i = @i + 1
end
if @@trancount > 0
   commit tran
sp_spaceused Transactions
create clustered index cix_Transactions on Transactions(CategoryID)
drop table test_log -- this table is used to log the test times
create table test_log (
Name varchar(50),
Num int,
StartTime datetime,
EndTime datetime NULL
)
The test script:
declare @dt datetime,
        @i int
set @i = 1
while @i < 20 -- run the test 20 times
begin
   set @dt = getdate()
   insert test_log select 'CASE method', 10, @dt, NULL
   update Transactions
      set CategoryName = case CategoryID
             when 1 then 'abc1'  when 2 then 'abc2'  when 3 then 'abc3'  when 4 then 'abc4'
             when 5 then 'abc5'  when 6 then 'abc6'  when 7 then 'abc7'  when 8 then 'abc8'
             when 9 then 'abc9'  when 10 then 'abc10' end
   update test_log set EndTime = getdate() where StartTime = @dt
   set @dt = getdate()
   insert test_log select 'Subquery method', 10, @dt, NULL
   update Transactions
      set CategoryName =
          (select CategoryLookup.CategoryName
             from CategoryLookup
            where CategoryLookup.CategoryID = Transactions.CategoryID)
   update test_log set EndTime = getdate() where StartTime = @dt
   set @dt = getdate()
   insert test_log select 'JOIN method', 10, @dt, NULL
   update t1
      set t1.CategoryName = t2.CategoryName
     from Transactions t1
     join CategoryLookup t2 on t1.CategoryID = t2.CategoryID
   update test_log set EndTime = getdate() where StartTime = @dt
   set @i = @i + 1
end
The reported results were obtained on a DL585 G1 with 64GB of RAM and eight 2.6GHz cores, running Windows Server 2003 Enterprise and SQL Server 2008 SP2 Enterprise x64 Edition. 50GB was allocated to the SQL Server instance.
I was looking at some of my old notes on linked servers and found a tidbit on how the linked server connections are managed by SQL Server. I'm posting it here because I don’t think the information is widely known.
When you make a linked server call from a SQL Server instance (say ServerA) to another SQL Server instance (say ServerB) over Microsoft Native Client OLEDB Provider, SQL Server on ServerA acts as a client to the instance on ServerB and will open or reuse a connection to ServerB. That connection will be managed by the SQL Server instance on ServerA.
If you check on ServerB with the following query, you should see that connection from ServerA (if it's still there):
select * from sysprocesses
where hostname = 'ServerA'
and program_name = 'Microsoft SQL Server'
The question is, “how long will ServerA keep the connection alive if no call is using it? And can you configure it?”
I can’t find any official documentation to answer these two questions. But my own tests appear to yield a consistent answer to the first question. That is, a dormant SQL Server linked server connection will stay for about 4~5 minutes, and will be closed after that. All my attempts to see if this number is configurable suggest that the answer is negative. If anyone knows a more authoritative answer, please post it.
Here is a simple test to determine how long a dormant connection stays alive.
On ServerB, use the previous query to ensure that there is no linked server connection from ServerA. If there is, kill the connection and ensure ServerA does not open a new one.
On ServerB, run this script:
declare @dt datetime

while not exists (select * from sysprocesses
                   where hostname = 'ServerA'
                     and program_name = 'Microsoft SQL Server')
   waitfor delay '00:00:01'

set @dt = getdate()

while exists (select * from sysprocesses
               where hostname = 'ServerA'
                 and program_name = 'Microsoft SQL Server')
   waitfor delay '00:00:01'

select 'Duration' = datediff(second, @dt, getdate())
Go to ServerA and run the next script:
select * from openquery(ServerB, 'select @@servername')
I sat in a Sybase ASE class last week for five days. Although it didn't cover the more advanced features introduced in the more recent versions of Sybase ASE, the class did touch all the basics of administering Sybase ASE. While I was successful in suppressing any urge to openly compare Sybase ASE with Microsoft SQL Server in the class, I could not help making mental notes on the differences between the two database platforms.
It's always interesting to look at how two DBMS platforms that share the same root went their own different ways in handling the same/similar tasks. And here are some random notes I jotted down while sitting in the class.
I had some limited experience with Sybase about 15 years ago, and had expected to see some major improvements in the basic tasks of managing a Sybase ASE instance. So I was surprised that very little had changed; it felt like a throwback to SQL Server 4.21a or 6.5. For those of you who never used SQL Server versions earlier than 7.0, you'd probably have a hard time understanding why having a good database backup file alone is not enough to get the database restored. Instead, you must first create the database in exactly the same layout as when the backup was taken. If you don't know that layout, there is a good chance that you won't get your database back even if you have the backup. As a result, religiously backing up your master database and religiously keeping your database creation scripts up to date become paramount, much more so than with SQL Server. Having seen a better alternative, this all seemed rather archaic to me.
SQL Server 2005 and later implemented a feature that allows a database to be online to the user as soon as the redo step is finished during the instance startup. A database on a Sybase ASE instance still needs to wait until both the redo phase and the undo phase are completed before it can accept user connections. I guess to help ease the pain, Sybase ASE allows the user to specify the order in which the databases on an instance are recovered. So if database XYZ is the most important, you could configure it to be recovered first before all the other user databases. This is nice and would be good to have on SQL Server. But because the SQL Server databases can come online faster, the feature is not as useful on SQL Server.
Like SQL Server, Sybase ASE also supports autogrowing the database space allocation, and just as on SQL Server, relying on the feature is discouraged. But because knowing exactly how space is allocated for a database is so much more critical on Sybase ASE than on SQL Server for recovery purposes, using autogrowth is much more dangerous on Sybase ASE.
One nice thing about Sybase ASE is that all its sp_configure options are exposed in a configuration file which is a plain text file. You can still use sp_configure just like you do on SQL Server, but all the configuration changes are saved to the configuration file. The main motivation for externalizing the sp_configure options to a configuration file is that on Sybase ASE you could easily mis-configure an option causing the Sybase ASE to fail to start up. With the sp_configure options in a file, you can easily correct such a mistake and restart the instance without having to start the instance in some special mode just to correct the mistake. On SQL Server, this feature is less useful because you can hardly mis-configure a sp_configure option and cause the SQL Server instance to fail to start.
drop table item
create table item(i int, j int, c char(200))

;with tmp(i, j, c) as (
   select 1, 1, replicate('a', 200)
   union all
   select i + 1, j + 1, replicate('a', 200)
     from tmp
    where i < 100000
)
insert item
select * from tmp
option (maxrecursion 0)
So, basically, I’m trying to get the below code to work properly for
“piglatin”. I have gotten it to work with multiple words, if the word
starts with a vowel, if the word starts with a consonant, and if the
word starts with two consonants. This was a hell of a lot of trial and
error getting to this point.
However, I am stuck and need a little help getting unstuck. One thing
about “piglatin” is that I need to turn words like “quale” into
“alequay”. Also I need to have words like “square” to turn into
“aresquay”. AKA, if qu shows up somewhere, I need those letters (and
any that precede it) to shift to the back and add an +ay to the end.
The current code turns “quale” into “ualeqay” instead.
I have thought about this and really am unclear how to proceed about it.
The below code is the code I was talking about and made myself. I
realize some things could probably be “cleaner” about it, but I’m just
starting to learn so it will do for now.
Anyhow, I would love to somehow get unstuck with this.
def translate(words)
  z = ""
  vowels = %w(a e i o u)
  consonants = %w(b c d f g h j k l m n p q r s t v w x y z)
  words.gsub(/\w+/) do |word|
    # the word as an array of letters
    z = word.scan(/\w/)
    # check whether the first letter is a vowel
    contains_vowels = vowels & z.first.split(",")
    two_consonants = z[0..1] & consonants
    three_consonants = z[0..2] & consonants
    if contains_vowels.size >= 1
      z.join("") + "ay"
    elsif three_consonants.size == 3
      x = z.shift(3)
      z.join("") << x.join("") + "ay"
    elsif two_consonants.size == 2
      x = z.shift(2)
      z.join("") << x.join("") + "ay"
    else
      x = z.shift
      z << x + "ay"
      z.join("")
    end
  end
end
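For the “qu” case specifically, here is one possible direction (a sketch for a single word rather than a drop-in replacement for the code above; the pig_latin name and the regex are illustrative, not from the original code): treat a leading consonant run that ends in “qu” as one unit, and move the whole thing to the back before adding “ay”.

```ruby
# Sketch: move leading consonants plus "qu" (or plain leading
# consonants) to the end of the word, so "quale" -> "alequay"
# and "square" -> "aresquay".
def pig_latin(word)
  if word =~ /\A[aeiou]/
    word + "ay"                 # starts with a vowel
  else
    # match either consonants followed by "qu", or plain leading consonants
    head, rest = word.match(/\A([^aeiou]*qu|[^aeiou]+)(.*)/m).captures
    rest + head + "ay"
  end
end

puts pig_latin("quale")   # => alequay
puts pig_latin("square")  # => aresquay
puts pig_latin("strong")  # => ongstray
```

The same idea could be folded into the gsub block above by testing for a “qu” prefix before counting leading consonants.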
Things You'll Need:
- Disability statement
- Tax forms
- Receipts
- Step 1
Ask your doctor for a statement verifying your disability.
- Step 2
Gather all of your receipts for any impairment related expenses you intend to deduct on your tax return. Keep them in a safe place. You may need them later.
- Step 3
Call several different IRS customer service representatives and ask for advice on taking the deduction before you file your tax return. Write down their names and identification numbers.
- Step 4
Consider asking for professional advice. Don't assume, however, that all tax advisors are familiar with IRWEs. Shop around and find one who is.
- Step 5
Deduct your IRWEs on Schedule C, C-EZ (Profit or Loss from Business) or Schedule F (Profit or Loss from Farming) if you are a self-employed individual.
- Step 6
Complete Form 2106 (Employee Business Expenses) or Form 2106-EZ (Unreimbursed Employee Business Expenses) if you are a disabled employee. Your IRWEs are not subject to the 2 percent adjusted gross income limit that applies to other employee business expenses.
- Step 7
Enter the portion of the amount on Form 2106, line 10, or Form 2106-EZ, line 6, that's related to your impairment on Schedule A, line 27. Enter the amount that's unrelated to your impairment on Schedule A, line 20.
- Step 8
File an amended return (IRS form 1040X) to deduct IRWEs you did not deduct in previous years. You are allowed to file an amended return for the previous 3 tax years. | http://www.ehow.com/how_2076699_deduct-impairment-related-expenses.html | crawl-002 | refinedweb | 256 | 64.2 |
When we set out to build React Router v6, from the perspective of @reach/router users, we had these goals:

- Keep the bundle size low (lower than @reach/router)
- Keep the best parts of @reach/router (nested routes, and a simplified API via ranked path matching and navigate)
If we were to make a @reach/router v2, it would look pretty much exactly like React Router v6. So, the next version of @reach/router is React Router v6. In other words, there will be no @reach/router v2, because it would be the same as React Router v6.

A lot of the API is actually identical between @reach/router 1.3 and React Router v6:

- navigate has the same signature
- Link has the same signature
Most of the changes are just some renames. If you happen to write a codemod, please share it with us and we'll add it to this guide!
In this guide we'll show you how to upgrade each piece of your routing code. We'll do it incrementally so you can make some changes, ship, and then get back to migrating again when it's convenient. We'll also discuss a little bit about "why" the changes were made, what might look like a simple rename actually has bigger reasons behind it.
We highly encourage you to do the following updates to your code before migrating to React Router v6. These changes don't have to be done all at once across your app, you can simply update one line, commit, and ship. Doing this will greatly reduce the effort when you get to the breaking changes in React Router v6.
- Upgrade to React 16.8 or greater
- Upgrade to @reach/router v1.3
- Update route components to access data from hooks
- Add a <LocationProvider/> to the top of the app
The following changes need to be done all at once across your app.
- Update all <Router> elements to <Routes>
- Change <RouteElement default/> to <RouteElement path="*" />
- Remove <Redirect /> elements, moving redirects into your server configuration
- Replace <Link getProps /> with hooks
- Update useMatch: params are on match.params
- Rename ServerLocation to StaticRouter
React Router v6 makes heavy use of React hooks, so you'll need to be on React 16.8 or greater before attempting the upgrade to React Router v6.
Once you've upgraded to React 16.8, you should deploy your app. Then you can come back later and pick up where you left off.
Upgrade to @reach/router v1.3.3
You should be able to simply install v1.3.3 and then deploy your app.
npm install @reach/router@latest
You can do this step one route component at a time, commit, and deploy. You don't need to update the entire app at once.
In @reach/router v1.3 we added hooks to access route data in preparation for React Router v6. If you do this first you'll have a lot less to do when you upgrade to React Router v6.
// @reach/router v1.2
<Router>
  <User path="users/:userId/grades/:assignmentId" />
</Router>;

function User(props) {
  let {
    // route params were accessed from props
    userId,
    assignmentId,
    // as well as location and navigate
    location,
    navigate,
  } = props;
  // ...
}

// @reach/router v1.3 and React Router v6
import {
  useParams,
  useLocation,
  useNavigate,
} from "@reach/router";

function User() {
  // everything comes from a specific hook now
  let { userId, assignmentId } = useParams();
  let location = useLocation();
  let navigate = useNavigate();
  // ...
}
All of this data lives on context already, but accessing it from there was awkward for application code so we dumped it into your props. Hooks made accessing data from context simple so we no longer need to pollute your props with route information.
Not polluting props also helps with TypeScript a bit and also prevents you from wondering where a prop came from when looking at a component. If you're using data from the router, it's completely clear now.
Also, as a page grows, you naturally break it into multiple components and end up "prop drilling" that data all the way down the tree. Now you can access the route data anywhere in the tree. Not only is it more convenient, but it makes creating router-centric composable abstractions possible. If a custom hook needs the location, it can now simply ask for it with useLocation(), etc.
While @reach/router doesn't require a location provider at the top of the application tree, React Router v6 does, so you might as well get ready for that now.
// before
ReactDOM.render(<App />, el);

// after
import { LocationProvider } from "@reach/router";

ReactDOM.render(
  <LocationProvider>
    <App />
  </LocationProvider>,
  el
);
@reach/router uses a global, default history instance that has side effects in the module, which prevents the ability to tree-shake the module whether you use the global or not. Additionally, React Router provides other history types (like hash history) that @reach/router doesn't, so it always requires a top-level location provider (in React Router these are <BrowserRouter/> and friends).
Also, various modules like Router, Link and useLocation rendered outside a <LocationProvider/> set up their own URL listener. It's generally not a problem, but every little bit counts. Putting a <LocationProvider /> at the top allows the app to have a single URL listener.
This next group of updates need to be done all at once. Fortunately most of it is just a simple rename.
You can pull a trick though and use both routers at the same time as you migrate, but you should absolutely not ship your app in this state because they are not interoperable. Your links from one won't work for the other. However, it is nice to be able to make a change and refresh the page to see that you did that one step correctly.
npm install react-router@6 react-router-dom@6
Rename LocationProvider to BrowserRouter
// @reach/router
import { LocationProvider } from "@reach/router";

ReactDOM.render(
  <LocationProvider>
    <App />
  </LocationProvider>,
  el
);

// React Router v6
import { BrowserRouter } from "react-router-dom";

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  el
);
Rename Router to Routes
You may have more than one, but usually there's just one somewhere near the top of your app. If you have multiple, go ahead and do this for each one.
// @reach/router
import { Router } from "@reach/router";

<Router>
  <Home path="/" />
  {/* ... */}
</Router>;

// React Router v6
import { Routes, Route } from "react-router-dom";

<Routes>
  <Route path="/" element={<Home />} />
  {/* ... */}
</Routes>;
Remove the default route prop
The default prop told @reach/router to use that route if no other routes matched. In React Router v6 you can explain this behavior with a wildcard path.
// @reach/router
<Router>
  <Home path="/" />
  <NotFound default />
</Router>

// React Router v6
<Routes>
  <Route path="/" element={<Home />} />
  <Route path="*" element={<NotFound />} />
</Routes>
Remove <Redirect/>, redirectTo, and isRedirect
Whew ... buckle up for this one. And please save your tomatoes for a homemade margherita pizza instead of throwing them at us.
We have removed the ability to redirect from React Router. So this means there is no <Redirect/>, redirectTo, or isRedirect, and no replacement APIs either. Please keep reading 😅
Don't confuse redirects with navigating while the user interacts with your app. Navigating in response to user interactions is still supported. When we talk about redirects, we're talking about redirecting while matching:
<Router>
  <Home path="/" />
  <Users path="/events" />
  <Redirect from="/dashboard" to="/events" />
</Router>
The way redirects work in @reach/router was a bit of an experiment. It "throws" redirects and catches them with componentDidCatch. This was cool because it caused the entire render tree to stop, and then start over with the new location. Discussions with the React team years ago when we first shipped this project led us to give it a shot.
After bumping into issues (like app level componentDidCatch's needing to rethrow the redirect), we've decided not to do that anymore in React Router v6.
But we've gone a step farther and concluded that redirects are not even the job of React Router. Your dynamic web server or static file server should be handling this and sending an appropriate response status code like 301 or 302.
Having the ability to redirect while matching in React Router at best requires you to configure the redirects in two places (your server and your routes) and at worst encouraged people to only do it in React Router--which doesn't send a status code at all.
We use firebase hosting a lot, so as an example here's how we'd update one of our apps:
// @reach/router
<Router>
  <Home path="/" />
  <Users path="/events" />
  <Redirect from="/dashboard" to="/events" />
</Router>
// React Router v6
// firebase.json config file
{
  // ...
  "hosting": {
    "redirects": [
      {
        "source": "/dashboard",
        "destination": "/events",
        "type": 301
      }
    ]
  }
}
This works whether we're server rendering with a serverless function, or if we're using it as a static file server only. All web hosting services provide a way to configure this.
If your app has a <Link to="/events" /> still hanging around and the user clicks it, the server isn't involved since you're using a client-side router. You'll need to be more diligent about updating your links 😬.
Alternatively, if you want to allow for outdated links, and you realize you need to configure your redirects on both the client and the server, go ahead and copy and paste the Redirect component we were about to ship but then deleted.
import { useEffect } from "react";
import { useNavigate } from "react-router-dom";

function Redirect({ to }) {
  let navigate = useNavigate();
  useEffect(() => {
    navigate(to);
  });
  return null;
}

// usage
<Routes>
  <Route path="/" element={<Home />} />
  <Route path="/events" element={<Users />} />
  <Route path="/dashboard" element={<Redirect to="/events" />} />
</Routes>;
We figured by not providing any redirect API at all, people will be more likely to configure them correctly. We've been accidentally encouraging bad practice for years now and would like to stop 🙈.
Remove <Link getProps />
This prop getter was useful for styling links as "active". Deciding if a link is active is kind of subjective. Sometimes you want it to be active if the URL matches exactly, sometimes you want it active if it matches partially, and there are even more edge cases involving search params and location state.
// @reach/router
function SomeCustomLink() {
  return (
    <Link
      to="/some/where/cool"
      getProps={(obj) => {
        let {
          isCurrent,
          isPartiallyCurrent,
          href,
          location,
        } = obj;
        // do what you will
      }}
    />
  );
}

// React Router
import { useLocation, useMatch } from "react-router-dom";

function SomeCustomLink() {
  let to = "/some/where/cool";
  let match = useMatch(to);
  let { isExact } = useMatch(to);
  let location = useLocation();
  return <Link to={to} />;
}
Let's look at some less general examples.
// A custom nav link that is active when the URL matches the link's href exactly

// @reach/router
function ExactNavLink(props) {
  const isActive = ({ isCurrent }) => {
    return isCurrent ? { className: "active" } : {};
  };
  return <Link getProps={isActive} {...props} />;
}

// React Router v6
function ExactNavLink(props) {
  return (
    <Link
      // If you only need the active state for styling without
      // overriding the default isActive state, we provide it as
      // a named argument in a function that can be passed to
      // either `className` or `style` props
      className={({ isActive }) => (isActive ? "active" : "")}
      {...props}
    />
  );
}

// A link that is active when itself or deeper routes are current

// @reach/router
function PartialNavLink(props) {
  const isPartiallyActive = ({ isPartiallyCurrent }) => {
    return isPartiallyCurrent ? { className: "active" } : {};
  };
  return <Link getProps={isPartiallyActive} {...props} />;
}

// React Router v6
function PartialNavLink(props) {
  // add the wild card to match deeper URLs
  let match = useMatch(props.to + "/*");
  return (
    <Link className={match ? "active" : ""} {...props} />
  );
}
"Prop getters" are clunky and can almost always be replaced with a hook. This also allows you to use the other hooks, like
useLocation, and do even more custom things, like making a link active with a search string:
function RecentPostsLink(props) {
  let match = useMatch("/posts");
  let location = useLocation();
  let isActive = match && location.search === "?view=recent";
  return (
    <Link className={isActive ? "active" : ""}>Recent</Link>
  );
}
useMatch
The signature of useMatch is slightly different in React Router v6.
```jsx
// @reach/router
let {
  uri,
  path,
  // params are merged into the object with uri and path
  eventId,
} = useMatch("/events/:eventId");

// React Router v6
let {
  url,
  path,
  // params get their own key on the match
  params: { eventId },
} = useMatch("/events/:eventId");
```
Also note the change from uri -> url.
Just feels cleaner to have the params be separate from URL and path.
Also, nobody knows the difference between URL and URI, so we didn't want to start a bunch of pedantic arguments about it. React Router always called it URL, and it's got more production apps, so we used URL instead of URI.
<Match />
There is no <Match/> component in React Router v6. It used render props to compose behavior, but we've got hooks now.
If you like it, or just don't want to update your code, it's easy to backport:
```jsx
function Match({ path, children }) {
  let match = useMatch(path);
  let location = useLocation();
  let navigate = useNavigate();
  return children({ match, location, navigate });
}
```
Render props are kinda gross (ew!) now that we have hooks.
<ServerLocation />
Really simple rename here:
```jsx
// @reach/router
import { ServerLocation } from "@reach/router";

createServer((req, res) => {
  let markup = ReactDOMServer.renderToString(
    <ServerLocation url={req.url}>
      <App />
    </ServerLocation>
  );
  req.send(markup);
});

// React Router v6
// note the import path from react-router-dom/server!
import { StaticRouter } from "react-router-dom/server";

createServer((req, res) => {
  let markup = ReactDOMServer.renderToString(
    <StaticRouter location={req.url}>
      <App />
    </StaticRouter>
  );
  req.send(markup);
});
```
Please let us know if this guide helped:
Open a Pull Request: Please add any migration we missed that you needed.
General Feedback: @remix_run on Twitter, or email hello@remix.run.
Thanks! | https://beta.reactrouter.com/en/dev/upgrading/reach | CC-MAIN-2022-40 | refinedweb | 2,165 | 61.97 |
27 January 2009 16:24 [Source: ICIS news]
TORONTO (ICIS news)--DuPont continues to look for growth, especially in its agricultural and safety and protection divisions, but will be cautious in making any acquisitions even though many assets are being offered at very attractive terms in the current environment, the company's chief financial officer said on Tuesday.
“But we are going to be cautious [on acquisitions] … until we get a bit more visibility,” said Jeffrey Keefer during the company’s fourth-quarter analysts’ briefing.
“Asset values are down. I think those opportunities are going to be out there, but we are going to be cautious about jumping a little too early,” he said.
Keefer was responding to an analyst who asked if DuPont’s balance sheet was strong enough for the company to act quickly should an attractive $400-500m (€304-380m) acquisition opportunity arise.
As for the economic outlook, DuPont was not expecting a recovery but was assuming that markets would remain down in 2009, said Keefer.
However, DuPont was acting from a position of strength and expected to see an improvement in its performance in the second half of the year as its restructuring, costs savings and productivity measures were taking hold, he said.
“We are not waiting for a recovery. What we are focused on are our actions and the things we can control,” Keefer said.
CEO Ellen Kullman said DuPont would emphasise cost control, productivity and cash generation, but it would continue to invest in high-growth projects such as seeds and biofuels, even as it was “in the midst of a continued challenge and dislocation on a global basis”.
“We will emerge, in partnership with our customers, stronger, leaner and more agile and better poised for growth,” Kullman said.
DuPont earlier on Tuesday reported a $629m loss for the fourth quarter and warned of a weak first quarter.
The company said it was making about $730m in fixed cost and $1bn in working capital reductions.
Analysts at London-based international bank HSBC said in a research note that DuPont’s fourth quarter was weak across the board.
“The company reported year-on-year volume declines of 20% across its portfolio but what was particularly worrying was a 20% year-on-year volume decline in Asia Pacific,” the bank said.
HSBC rates DuPont’s shares “buy” with a target price of $33.
The shares were down 3.19% to $22.44 in Tuesday morning trading.
($1 = €0.76)
Letters
Correction
Regarding a quotation in “They Said It” in the January 2010 issue of
LJ:
“We act as though comfort and luxury were the chief requirements of life,
when all that we need to make us happy is something to be enthusiastic
about” is not by Einstein, but rather by Charles Kingsley.
I'd give you a link, but it's easy enough to Google it yourself.
—
Jason Gade
He's correct, that should have been Charles Kingsley.—Ed.
Marconi?
I noticed on page 4 of the December 2009 issue that next month's issue will
focus on Amateur Radio. The accompanying text reads “...Marconi's ushered
in the era of radio communications...”. I hope a reputable publication
such as yours is not going to perpetuate the myth that Marconi invented
radio. There's no doubt he was brilliant at self-promotion, but he did not
invent radio. Many people contributed to the development of radio
technology, and Marconi was one of them. But if you insist on giving
recognition to only one person, it should be Nikola Tesla. The US
Supreme Court ruled in 1943 against Marconi and in favor of Tesla, deciding
that Tesla's patent (645,576) had priority. Please do some fact-checking
before perpetuating a ridiculous myth. Here's a few links to start with:
en.wikipedia.org/wiki/Invention_of_radio,
en.wikipedia.org/wiki/History_of_radio,
en.wikipedia.org/wiki/Nikola_Tesla and
en.wikipedia.org/wiki/Guglielmo_Marconi.
—
Jeff Harp
Bose?
In “When All Else Fails—Amateur Radio, the Original Open-Source
Project” in the January
2010 issue, guest editor David A. Lane, KG4GIY,
wrongly mentioned that Marconi
invented radio. In fact, Sir J. C. Bose, the Indian scientist, was the true
inventor of radio. He pioneered the iron filling coherer and lead galena
crystal semiconductor receiver. Sir Bose invented the horn antenna and
studied properties
of refraction, diffraction and dispersion of millimeter and sub-millimeter
waves. The above facts were published in the Proceedings of the
IEEE a few
years back. I am an Advanced Grade licensed Indian Radio Amateur and life
member of the Amateur Radio Society of India for nearly three decades.
—
Ananda Bose, VU2AMB
David A. Lane replies: Ask 100 people who invented the radio, and of those who bother to answer, you will likely get one answer, and neither Tesla nor Bose will be it. Perhaps the paragraph should have read “...almost since Marconi popularized the thing...”. The truth is that history misremembers all the time, and the true geniuses are forgotten by those who come up with the reference implementations. Clearly, both Tesla and Bose contributed to the science that has led us to where we are today, just as much as Marconi and Motorola.
Webinars
I continually receive invitations to various Webinars on many topics and
issues. Some don't suit me but many do. Here's the problem: my work
schedule often prevents me from tuning in or participating at the time the
Webinar is presented. I'd like to find a way to “record” these Webinars
with all the video and audio, so that in the evening when back at home or on
the weekend, I can sort through and watch the most pertinent of these
events. Is this possible using Linux? Please advise!
—
Scott S. Jones
I feel your pain. There are a couple of problems working against us with Webinars. One is that the format is far from standard. The other is that for some reason, many are specifically designed to be done only in real time. (I suspect those are specifically when demonstrations are happening in real time.) I don't think there's anything as Linux users we can do differently, apart from using a screen capture package at the onset of a Webinar. Ideally, they would be archived afterward, so we could watch at our leisure. We learned that ourselves here at Linux Journal. We did a live show (Linux Journal Live) that got many more views after the fact. Our schedules are just too full!—Ed.
Distance
Regarding Dave Taylor's column on calculating distance in the December 2009 issue of LJ, not knowing the number of bits of precision that your system offers, the solution still is quite easy.
The manner in which you converted from degrees to radians used M_PI. M_PI is a single-precision floating-point number “hard-coded” to the value of 3.141593. In contrast, a more precise representation of PI can be obtained by using 4.*atan(1.) (atan(1.) is the angle pi/4, so 4.*atan(1.) is equivalent to PI).
The modification has been made to Dave's code below, and it gives the correct result. As he mentioned in his article, this gives an answer about 10% greater for the distance between the latitudes and longitudes, for which Dave's code gave 917.984 miles:
```c
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

#define EARTH_RADIUS (6371.0072 * 0.6214)
/* degrees * pi/180, with pi expressed as 4*atan(1):
   deg * 4*atan(1)/180 == deg * atan(1)/45 */
#define TORADS(degrees) (degrees * (atan(1.) / 45.))
```
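The identity behind the fix is easy to sanity-check. A quick Python sketch (my own, not part of the printed program) confirming that 4·atan(1) recovers pi and that atan(1)/45 converts degrees to radians:

```python
import math

# atan(1) is pi/4, so multiplying by 4 recovers pi
pi_from_atan = 4.0 * math.atan(1.0)
print(abs(pi_from_atan - math.pi) < 1e-15)  # True

def to_rads(degrees):
    # the same conversion the TORADS macro performs:
    # deg * pi/180 == deg * atan(1)/45
    return degrees * (math.atan(1.0) / 45.0)

print(abs(to_rads(180.0) - math.pi) < 1e-12)  # True
```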
When I teach Bash, I regularly have students get information from
the Web and process it via awk, sed and so on. Fun projects that I have used
before are getting the current temperature for any city in the world,
getting the headline news and getting current stock quotes.
Thanks to Dave Taylor and Linux Journal, students are
able to see that Linux/UNIX
can do far more than Windows ever could dream of doing.
—
Paul J. Wilkinson
Dave Taylor replies: Great! Thanks for helping fine-tune the code.
Dark Days?
Every month, I read the Letters section, and more often than not, there is
an end user who knocks Linux as being for “computer
specialists” and not
ready for the mainstream. I find this attitude unfortunate. With every release,
distributions get better and better, and don't suffer from the constant
barrage of malware, viruses and the like we see on Windows. In the December
2009 issue, a
reader commented that “It's back to the dark days of MS-DOS all over
again.” Dark days? I seem to remember back in the mid-1980s, DOS
actually worked quite well. What gets me is that Microsoft continues to produce
OSes and products that have numerous bugs and security flaws. Users are
forced to license virus and malware scanners that are resource hogs and are
doing a job that should have been prevented in the first place. Maybe we
should all send a bill for our lost time in re-installing, scanning,
cleaning and so on to Microsoft—they seem to have no issues collecting
license fees. What about the hundreds of hours my staff has lost over the
years?
—
George
Heck, I'd settle for just the hours spent trying to find drivers for hardware under Windows! I agree, it is sad people still see Linux as a difficult, cryptic, command-line-only operating system. It's still largely a matter of unfamiliarity. Many folks new to Linux are still overwhelmed by its nuances. The same can be said for people switching from Windows to OS X though, so I don't think it's really a Linux problem. I think we just need to keep pushing Linux in places it makes sense. As usual, my suggestion would be to start in schools!—Ed.
Waiting for Godot
Regarding Mitch Frazier's “Non-Linux FOSS” (December 2009): Explore2fs has
been around for more than a decade. It works as designed, but development is
slow. The write function has remained on the to-do list for most of the
decade. I simply got tired of waiting. I found an excellent driver at.
—
Peter Bratton
Slow or stalled development is often a fact of life with small open-source software projects, which is why it's important to help support projects you find useful. Note that the driver provided at is freeware, but it is not open source.—Ed.
Peace Prize?
From the Portland Linux/UNIX Group e-mail list, by Keith Lofstrom: a suggestion that Linus Torvalds be nominated for the Nobel Peace Prize, whose committee is chaired by Thorbjørn Jagland, with Kaci Kullmann Five as deputy chair and Sissel Rønbeck among its members.
Sounds like a great idea. What do you guys think?
—
Michael Rasmussen
A Peace Prize? For Linus? I dunno, I've read some of his posts to the kernel mailing list. Hehehe, all joking aside, I think the community Linus represents certainly deserves recognition. Since the prize goes to an individual, it does take some of the focus away from some other amazing contributors. That said, I could think of many worse recipients. He's got my vote!—Ed.
Security Tip
Mick, hope you enjoyed DEFCON, and excellent article in October 2009 issue of Linux
Journal.
It's moot now, but I thought I'd mention that when I travel, I edit my
/etc/hosts file with entries for important DNS names (my bank, my Webmail
and so on) to reduce the chance someone is spoofing them on an untrusted LAN. I
comment out the entries when I get back home.
I don't know if this really adds to my security, but I pretend it does.
Thanks for the great work.
—
Paul
Mick Bauer replies: As a matter of fact, your /etc/hosts idea is an excellent one. DNS spoofing is an important part of many man-in-the-middle attacks, so being able to skip DNS lookups altogether when using untrusted networks definitely would be useful.
It also may be possible to run a local DNS-caching dæmon like nscd (which is commonly included in many distros by default), tuned in such a way that if you visited a given important site before you travel, your system will use its local cached lookup results instead of doing new DNS lookups while you're on the road. Thanks for passing on your idea and your kind words!
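Paul's travel habit is easy to script. A stdlib-only Python sketch (the marker comments and addresses are my own invention, not from the letter) that activates or comments out a marked block of hosts entries:

```python
TRAVEL_BEGIN = "# travel-begin"
TRAVEL_END = "# travel-end"

def toggle_travel_entries(text, enable):
    """Activate (enable=True) or comment out (enable=False)
    the lines between the travel markers in hosts-file text."""
    out, in_block = [], False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == TRAVEL_BEGIN:
            in_block = True
        elif stripped == TRAVEL_END:
            in_block = False
        elif in_block:
            bare = line.lstrip("# ")          # strip any existing comment
            line = bare if enable else "# " + bare
        out.append(line)
    return "\n".join(out)

hosts = """127.0.0.1 localhost
# travel-begin
# 203.0.113.10 bank.example.com
# travel-end"""

# Before a trip: pin the addresses; back home: comment them out again.
print(toggle_travel_entries(hosts, True))
```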
Decaf, Amazon EC2 on Android
As a longtime Linux Journal reader (I started when I was still studying more than ten years ago), I would like to draw your attention to decaf. decaf is an Android application for managing and monitoring your Amazon EC2 infrastructure. We were finalists in the Android Developer Challenge 2, resulting in a sixth place in the Misc. Category. (We were a bit disappointed but very proud to have come that far in the competition.)
We developed decaf primarily for ourselves, but we are trying to grow a community to make decaf development sustainable. I see that you covered Amazon EC2 multiple times, therefore, I think decaf might be of interest to your community.
You can read about decaf at decaf.9apps.net. I hope you
find this interesting. If you have any questions, please ask.
—
Jurg van Vliet
Cool! Thanks Jurg. I just bought a Droid, so I'll have to check it out.—Ed.
Re: Ruby Articles
In response to the letter in the January 2010 issue regarding Ruby articles: I'd suggest looking into Clojure. It's a fairly new, Lisp-based language (it originally appeared in 2007 according to Wikipedia) that I first heard about from a professional Ruby programmer who now swears by it. He's written quite a few Clojure articles on his blog at briancarper.net/tag/clojure. Most of it is over my head as I've been out of the programming game for several years, but it has live code examples and might be an easier starting point than some dry manual or FAQ on clojure.org. Fun fact: that blog itself was written by Brian in Clojure.
Brian's a pretty easy guy to talk to and most likely would have some good recommendations on where to go to learn more about it or dig up material for an article.
As a side note, I'd personally be interested in seeing a roundup of all of the
different languages that other LJ readers are no doubt
sending e-mail messages about
similar to this one even as we speak.
—
Marcus Huculak
Our next issue will have an interview with the creator of Clojure. And, perhaps we can get our Webmistress to put up a language poll on LinuxJournal.com. It would be great information to have!—Ed.
More on Ruby Articles
I had literally been right on the edge of writing a letter complaining
about all the Ruby articles you've been printing for what feels like years
now, when I saw MK's letter in the January 2010 issue. Naturally, I agree
completely, and I'm afraid I'm going to stray into language-war territory
here, but where's the Perl in LJ? If you're a
sysadmin, it's invaluable,
but for the Web programmers, what about all the progress around
“modern”
Perl with Moose, Catalyst, DBIC and so on? What about a regular look at the
marvels of CPAN? I know it's kind of mythical, but how about some things
on Perl6? I realize lots of languages exist, and frankly any
break from Ruby would be good, but how about introducing new folks to the
depth, breadth and future of Perl? Surely there aren't many languages more
Linux than Perl?
Still, I appreciate the magazine!
—
Steve Rippl
You make a good point, Steve. We'll see what we can do. (No, that's not a blow off, I promise!)—Ed.
Request
I would like to see a PHP program that searches a POP or IMAP e-mail account, then
automatically reports scams to abuse@live.com, abuse@gmail.com and so on. This
program would find e-mail addresses in the body as well as the header. The
report would inform the appropriate e-mail service where to find the e-mail
address in the forwarded message. The program would delete
the message from the inbox automatically. I have more than 150 domains that scam me all the
time along with the free mail services to boot.
—
Stefan Ronnkvist
That sounds interesting, Stefan. I'd urge you either to start such a project or search around SourceForge to see if someone already has done the same. It sure beats forwarding them all by hand!—Ed.
Linux Mini vs. Mac Mini
I'm surprised I don't see Linux alternatives to a Mac Mini. The alternative hardware needs to be small and quiet (like Mac Mini). Hardware suppliers like Logic Supply and Polywell have a dizzying selection of hardware, but the cost is more than $1,000. Why isn't there a selection of “Linux Mini” alternatives equal or better than a Mac Mini at competitive prices?
For reference, a $600 Mac Mini () features
2.26GHz Intel Core 2 Duo, 2GB DDR3 SDRAM, 160GB hard drive, gigabit
Ethernet, 8x double-layer SuperDrive, NVIDIA GeForce 9400M graphics with
dual video ports, USB and Firewire ports, Mac OS X Snow Leopard
and 14 W power at idle.
—
greg bollendonk
I think one of the problems is that the demand is so low. For hardware companies, I think creating a Linux-based alternative to the Mac Mini would be possible, but they'd most likely sell more if they just made Windows terminals out of them.
One way to build a device like you're describing would be to soup up a thin client. There are a bunch of places that sell Linux thin clients that easily could have a hard drive added to them. Polywell, for example, has several thin-client options that are full-blown computers (at least one less than $200) just waiting for a Linux install.—Ed. | http://www.linuxjournal.com/magazine/letters-23?quicktabs_1=1 | CC-MAIN-2015-35 | refinedweb | 2,582 | 64.1 |
By John Hanley, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
In Part 1, we discussed the concepts related to SSL Certificates and Let's Encrypt in detail. In this part, we will explain how to create your Account Key, Certificate Key and Certificate Signing Request (CSR).
The following source code examples do not have error checking. These code snippets are designed to demonstrate how to interface with ACME. For more complete examples, review the source code in the examples package that you can download: ACME Examples in Python (Zip - 20 KB).
I could not find documentation on the size of the private key. I have been testing with a key size of 4096 bits and this works just fine.
There are numerous methods to create the Account Key. Let's look at two methods: writing a Python program and using OpenSSL from the command line. Included are examples showing how to work with private keys.
This example does not use the OpenSSL Python libraries. It uses the Crypto library, which makes creating a private key very simple. Following this example is one using OpenSSL, which is more complicated but has more options.
make_account_key.py
```python
""" Let's Encrypt ACME Version 2 Examples - Create Account Key """
from Crypto.PublicKey import RSA

filename = 'account.key'

key = RSA.generate(4096)

with open(filename, 'w') as f:
    f.write(key.exportKey().decode('utf-8'))
```
make_account_key2.py
```python
import OpenSSL
from OpenSSL import crypto

filename = 'account.key'

key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 4096)

key_material = crypto.dump_privatekey(crypto.FILETYPE_PEM, key)

with open(filename, "wt") as f:
    f.write(key_material.decode('utf-8'))
```
OpenSSL Command Line Example:
openssl genrsa -out account.key 4096
OpenSSL command line options:
View details and verify the new account key:
openssl rsa -in account.key -text -check -noout
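If you prefer to stay in Python, pyOpenSSL can run a similar consistency check. A sketch (generating a throwaway 2048-bit key in memory so it stands alone, rather than reading account.key):

```python
import OpenSSL

# Generate a small throwaway key for the demonstration
key = OpenSSL.crypto.PKey()
key.generate_key(OpenSSL.crypto.TYPE_RSA, 2048)

# bits() reports the modulus size; check() runs the same RSA
# consistency tests as `openssl rsa -check`
print(key.bits())         # 2048
print(bool(key.check()))  # True
```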
Extract the public key from the private key:
openssl rsa -pubout -in account.key -out account.pub
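A PEM file is just base64 between BEGIN/END header lines, so a quick stdlib-only sanity check of what kind of block a file holds is easy to sketch (the sample block below is a shortened, illustrative fragment, not a real key):

```python
import base64
import re

PEM_RE = re.compile(
    r"-----BEGIN (?P<label>[A-Z ]+)-----\n"
    r"(?P<body>.+?)"
    r"-----END (?P=label)-----",
    re.S,
)

def pem_label(pem_text):
    """Return the label of the first PEM block, checking that
    its body decodes as base64 (raises ValueError otherwise)."""
    m = PEM_RE.search(pem_text)
    if m is None:
        raise ValueError("no PEM block found")
    base64.b64decode(m.group("body"))  # raises binascii.Error if corrupted
    return m.group("label")

sample = (
    "-----BEGIN PUBLIC KEY-----\n"
    "MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAK8Q\n"
    "-----END PUBLIC KEY-----"
)
print(pem_label(sample))  # PUBLIC KEY
```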
We will repeat the above examples to create the certificate key. The difference is that the filename will be the name of our domain name that we will be issuing the certificate for. Change "domain.com" to your domain name.
make_certificate_key.py
```python
""" Let's Encrypt ACME Version 2 Examples - Create Certificate Key """
from Crypto.PublicKey import RSA

domainname = "example.com"
filename = domainname + '.key'

key = RSA.generate(4096)

with open(filename, 'w') as f:
    f.write(key.exportKey().decode('utf-8'))
```
OpenSSL Command Line Example:
openssl genrsa -out example.com.key 4096
OpenSSL command line options:
Generating a CSR is easy with OpenSSL. All that is required is the domain name and optionally an email address. In the following example, replace domainName with your domain name and emailAddress with your email address.
This example also removes all the subject fields that Let's Encrypt does not process such as C, ST, L, O and OU and does add the subjectAltName extension that Chrome requires.
make_csr.py
```python
""" Let's Encrypt ACME Version 2 Examples - Create CSR (Certificate Signing Request) """
import OpenSSL

KEY_FILE = "certificate.key"
CSR_FILE = "certificate.csr"

domainName = 'api.neoprime.xyz'
emailAddress = 'support@neoprime.xyz'

def create_csr(pkey, domain_name, email_address):
    """ Generate a certificate signing request """
    # create certificate request
    cert = OpenSSL.crypto.X509Req()

    # Add the email address
    cert.get_subject().emailAddress = email_address

    # Add the domain name
    cert.get_subject().CN = domain_name

    san_list = ["DNS:" + domain_name]
    cert.add_extensions([
        OpenSSL.crypto.X509Extension(
            b"subjectAltName",
            False,
            ", ".join(san_list).encode("utf-8"))
    ])

    cert.set_pubkey(pkey)
    cert.sign(pkey, 'sha256')

    return cert

# Load the certificate key
data = open(KEY_FILE, 'rt').read()

# Load the private key from the certificate.key file
pkey = OpenSSL.crypto.load_privatekey(OpenSSL.crypto.FILETYPE_PEM, data)

# Create the CSR
cert = create_csr(pkey, domainName, emailAddress)

# Write the CSR to a file in PEM format
with open(CSR_FILE, 'wt') as f:
    data = OpenSSL.crypto.dump_certificate_request(OpenSSL.crypto.FILETYPE_PEM, cert)
    f.write(data.decode('utf-8'))
```
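Before sending a CSR anywhere, you can verify it locally with the same pyOpenSSL API. A sketch built entirely in memory (so it does not depend on the certificate.key or certificate.csr files above):

```python
import OpenSSL

# Build a throwaway key and CSR in memory, then verify the CSR
# the way a CA would: check its self-signature and read back
# the subject it carries.
key = OpenSSL.crypto.PKey()
key.generate_key(OpenSSL.crypto.TYPE_RSA, 2048)

req = OpenSSL.crypto.X509Req()
req.get_subject().CN = "example.com"
req.set_pubkey(key)
req.sign(key, "sha256")

# verify() is truthy when the CSR's signature matches the
# embedded public key; it raises on a bad signature.
print(bool(req.verify(req.get_pubkey())))  # True
print(req.get_subject().CN)                # example.com
```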
In Part 3 we will begin going through each Let's Encrypt ACME API using the account.key, certificate.key and certificate.csr files to generate and install SSL certificates for Alibaba Cloud API Gateway and CDN.
Let's Encrypt ACME on Alibaba Cloud – Part 1
As most of you might already have some experience with ASP.NET MVC, let's have a look at some of the new features introduced with ASP.NET MVC 5. We could have implemented the functionality provided by some of these new features in earlier versions of MVC as well, but now that these features are part of the MVC framework, using them is easier than ever, as no configuration changes or external components are required. As we discuss the new features, we will also refresh our knowledge of the functionality provided in the prior versions before the introduction of these new features.
The first feature that would be especially useful for us is attribute-based routing. To understand attribute-based routing, we will first see how routing is implemented in the previous versions of MVC.
As MVC is all about creating loosely coupled applications, most of the features in MVC complement creating a loosely coupled architecture. Routing is one such MVC feature, which decouples the URL schema of the application from the rest of the application. To understand the use of routing, let's see how request handling was done one or two decades earlier.
In ASP.NET Web Forms and in previous web technologies, whenever we make a browser request for a resource such as a web page, the web server expects that it exists physically on the server, which it returns to the browser — either executing it and returning the HTML (in the case of dynamic pages such as aspx pages) or as-is if it's an HTML page. In other words, there has to be a one-to-one mapping between the URL and the requested resource, such as the web form or the HTML page.
Request handling in a conventional Web Forms application
This was the normal scenario prior to the ASP.NET MVC. In ASP.NET MVC since the request is handled by action methods there has to be some way to map the URL to the appropriate action method as there is no physical file to handle the requested URL.This mapping is provided by the routing system.
Request handling in MVC application
All the routes used in the application are added to the RouteCollection .So routing system acts as an interface between the request and the request handler. This means we can structure our URL according to our requirement like we can create user friendly and Search Engine Optimized URL's. Also the URL's can be changed anytime without requiring the change to the application logic since the application logic is decoupled from the URL schema used in the application.
Following is the basic structure of a requested URL as understood by the routing system.
Notice that the segments are parts of the URL separated by the slash character ("/"), excluding the host name. We can define a route to handle the above URL in the RegisterRoutes method in the RouteConfig.cs file as:
```csharp
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        name: "Ecommerce",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "Home", action = "About", id = UrlParameter.Optional }
    );
}
```
Here url is the URL pattern that is used to determine whether an incoming URL should be handled by the route, as an MVC application can have multiple routes. The segments of the URL pattern in braces {} are called segment variables, like {controller}, {action} and {id}. The values for these segment variables are extracted from the URL. Though we have used variables here, a segment can also be a literal, like a string.

The above route is added to the RouteCollection. This has the following disadvantages:
Attribute-based routing provides an elegant way to solve this issue.

This is where attribute-based routing comes in. Using attribute-based routing, we can define the route in the same place where the action method is defined. Following is an example of a route defined using the Route attribute. As you can see, the route is directly attached to the action method.
```csharp
[Route("Products/Electronics/{id}")]
public ActionResult GetElectronicItems(string id)
{
    ViewBag.Id = id;
    return View();
}
```
To enable attribute-based routing, we need to call routes.MapMvcAttributeRoutes() in the RegisterRoutes method.
We can also specify if there is any optional parameter in the URL pattern defined by the Route attribute with the “?” character.
If the above action method is called and the value for the "id" parameter is not provided, we will get an exception, since id is a required parameter. We can make it an optional parameter by making the following changes.
```csharp
[Route("Products/Electronics/{id?}")]
public ActionResult GetElectronicItems(int? id)
{
    ViewBag.Id = id;
    return View();
}
```
Note that we have made id an optional parameter by using "?". Also, since id is a value type, we have to make it a nullable type: value type parameters always need a value, as they cannot hold null.
We can also specify parameter constraints by placing the constraint name after the parameter name, separated by a colon. For example, we can specify that the parameter type should be an integer by using the following:

```csharp
[Route("Products/Electronics/{id:int}")]
```
Now if we do not pass an integer parameter, the route will not match even if the other segments match. Some of the other useful constraints that we can use in the Route attribute are:
| Route Constraint | Used To |
| --- | --- |
| x:bool | Match a bool parameter |
| x:maxlength(n) | Match a string parameter with a maximum length of n characters |
| x:minlength(n) | Match a string parameter with a minimum length of n characters |
| x:max | Match an integer parameter with a maximum value of n |
| x:min | Match an integer parameter with a minimum value of n |
| x:range | Match an integer parameter within a range of values |
| x:float | Match a floating-point parameter |
| x:alpha | Match uppercase or lowercase alphabet characters |
| x:regex | Match a regular expression |
| x:datetime | Match a DateTime parameter |
If we have multiple action methods in a controller all using the same prefix, we can use the RoutePrefix attribute on the controller instead of putting that prefix on every action method.
For example, we can attach the following attribute to the controller:

```csharp
[RoutePrefix("Products")]
```
So now the Route attribute on our action method does not need to specify the common prefix:

```csharp
[Route("Electronics/{id}")]
```
Filters in MVC provide us with an elegant way to implement cross-cutting concerns. Cross-cutting concerns are functionality that is used across our application in different layers. Common examples of such functionality include caching, exception handling and logging.

Cross-cutting concerns should be centralized in one location instead of being scattered and duplicated across the entire application, which makes updating such functionality very easy. Filters provide this advantage, as they implement cross-cutting concerns as common logic that can be applied to different action methods and controllers. Filters are implemented as classes that contain code that is executed either before or after the action method executes. We can create global or controller filters in MVC 4. Global filters are applied to all the action methods in the application, while controller filters apply to all the action methods in the controller.
We can create a global filter by creating a filter class and registering it as a global filter:
```csharp
public class TestGlobalFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        base.OnActionExecuting(context);
        context.RequestContext.HttpContext.Response.Write("Global filters in MVC are cool...");
    }
}

// Register our global filter
GlobalFilters.Filters.Add(new TestGlobalFilterAttribute());
```
As we have registered our filter as a global action filter, it will automatically execute for any action method. Below we can see our global filter in action.
Now suppose we want one of our action methods not to execute the filter logic. In MVC 4 there is no direct way to implement this.
MVC 5 provides a filter overrides feature, which we can apply to an action method or controller to selectively exclude a global filter or controller filter for that action method or controller.
Now if we want to override our global action filter in one of our action methods, we just need to apply the OverrideActionFilters attribute. Below we have applied the attribute to the default About method in the HomeController.
[OverrideActionFilters]
public ActionResult About()
{
    ViewBag.Message = "Your application description page.";
    return View();
}
This is a very simple example to illustrate filter overrides, but they can be useful in many different scenarios. One scenario where they are useful is:
[Authorize(Users="ashish")]
[RoutePrefix("Products")]
public class ProductsController : Controller
{
    [Route("Electronics/{categoryId:maxlength(3)}")]
    [OverrideAuthorization]
    public ActionResult GetElectronics(string categoryId)
    {
        ViewBag.categoryId = categoryId;
        return View();
    }
}
Here we have applied the Authorize filter to the controller and then have selectively overridden it for the GetElectronics action method.
Filters in MVC are of four types: authorization filters, action filters, result filters, and exception filters. Correspondingly, we have four filter override attributes: OverrideAuthorization, OverrideActionFilters, OverrideResultFilters, and OverrideExceptionFilters.
All of the above attributes implement the IOverrideFilter interface. In fact, this is how the MVC framework determines whether to exclude a filter type for an action method or controller.
I feel this is one feature that will be useful in many different scenarios.
Prior to ASP.NET MVC 5, ASP.NET Membership was the common way to handle authentication. This meant that the task of authenticating and managing user credentials was handled by ASP.NET automatically. Using ASP.NET Membership, the user credentials were stored in a database, typically a relational database such as SQL Server.
But with advances in technology, users today are connected with each other through social networks and portable devices. So the ASP.NET membership system, which expected user credentials to be stored in a local database, needed to be overhauled. This much needed change is provided by the new membership system in ASP.NET, which is called ASP.NET Identity.
Some of the main features of ASP.NET Identity are
Using third-party authentication providers, the task of authenticating users can be delegated to a third-party site such as Google. There are numerous sites that support the OpenID protocol. With MVC 5 we can use this as part of a newly created project.
Let's see how we can create an MVC project that uses ASP.NET Identity to authenticate a user via a social site such as Google.
When we select the option to create a new project in Visual Studio, the first thing we notice is that there is only a single template for ASP.NET instead of multiple templates for ASP.NET Web Forms, MVC, and the other ASP.NET frameworks. This common window is part of the newly introduced One ASP.NET, which means that services like scaffolding are available to all the ASP.NET frameworks, such as Web Forms and MVC.
On selecting the ASP.NET Web Application template, a new window opens. It has options to select and customize different frameworks such as MVC and Web Forms and to choose the authentication system.
Run the application; on the home page, when you click the login link in the right corner, you navigate to the login page. Notice that there is an option here to use another service to log in.
We can enable another service by going to the ConfigureAuth method in the Startup.Auth file and un-commenting the last line.
Now if we run the application and go to the login page we can see the below page.
On clicking the Google button, we are redirected to the following screen, where a user name and password can be entered.
Google authentication is enabled because we used the UseGoogleAuthentication() method. Similarly, we can use the authentication methods for Microsoft and other social sites. So with ASP.NET Identity it's just a matter of making a method call to use third-party authentication.
Scaffolding means generating code automatically from the model. Suppose we have a model for which we want to generate the skeleton code for the basic create, read, update, and delete operations; then, instead of writing this common code by hand, we can use the scaffolding feature of MVC 5.
There is a code generator for MVC applications that can automatically generate code from the model. We have an ElectronicItems class that we can use to automatically generate the controller code and views. The advantage of using scaffolding is that we can very easily and quickly generate code from a model.
public class ElectronicItems
{
public string Name { get; set; }
public string Description { get; set; }
public int Id { get; set; }
}
To use scaffolding, right-click an item in the project and select “New Scaffolded Item”.
Select the “MVC 5 Controller..” option from the dialog box that opens.
Enter the controller name and select the model for which to generate the controller. Click Add, and we will have the controller and views generated for us.
As you can see, the following controller is generated, containing the basic CRUD operations that you had to write manually in previous versions. This shows how scaffolding can reduce the amount of code that you had to write by hand in earlier versions of MVC.
So these are some of the new features introduced in MVC 5. I am sure they will make your life easier as a developer.
Testing is the process of ensuring that the code you write performs in the way it was intended. Testing encompasses many different aspects that are relevant to a Pylons application. For example, it is important that the libraries do what they are supposed to, that the controllers return the correct HTML pages for the actions they are performing, and that the application behaves in a way that users find intuitive.
Here are some of the many reasons why writing effective tests is a very good idea when developing a Pylons application:
At its heart, testing is about having confidence in the code you have written. If you have written good tests, you can have a good degree of confidence that your code works as it should. Having confidence is especially important if you are using beta or prerelease code in your application. As long as you write tests to ensure the features of the product you are using behave as you require, then you can have a degree of confidence that it is OK to use those particular features in your application.
Although writing effective tests does take time, it is often better to spend time writing tests early in the project than debugging problems later. If you have written tests, every time you make a change to the code, then you can run the tests to see whether any of them fail. This gives you instant feedback about any unforeseen consequences your code change has had. Without the tests, bugs that are introduced might not be picked up for a long time, allowing for the possibility that new code you write might depend on the code working in the incorrect way. If this happens, fixing the bug would then break the new code you had written. This is why later in a project fixing minor bugs can sometimes create major problems. By failing to write effective tests, you can sometimes end up with a system that is difficult to maintain.
It is worth noting that Pylons and all the components that make up Pylons have their own automated test suites. The Pylons tests are run every night on the latest development source using a tool called Buildbot, and the results are published online. Without these extensive test suites, the Pylons developers would not be able to have the confidence in Pylons that they do.
In this chapter, I’ll describe three types of testing you can use to help avoid introducing bugs into your Pylons project during the course of development:
- Unit testing
- Functional testing
- User testing
If you are interested in reading more about other types of software testing, the Wikipedia page on the subject is a good place to start.
The most common form of testing is unit testing. A unit test is a procedure used to validate that individual units of your source code produce the expected output given a known input. In Python, the smallest testable parts of a library or application are typically functions or methods. Unit tests are written from a programmer’s perspective to ensure that a particular set of methods or functions successfully perform a set of tasks. In the context of a Pylons application, you would usually write unit tests for any helpers or other libraries you have written. You might also use a unit test to ensure your model classes and methods work correctly. Your Pylons project has a test_models.py file in the tests directory for precisely this purpose.
While unit tests are designed to ensure individual units of code work properly, functional tests ensure that the higher-level code you have written functions in the way that users of your Pylons application would expect. For example, a functional test might ensure that the correct form was displayed when a user visited a particular URL or that when a user clicked a link, a particular entry was added to the database. Functional tests are usually used in the context of a Pylons application to test controller actions.
Some people would argue that the best time to write unit and functional tests is before you have written the code that they would test, and this might be an approach you could take when developing your Pylons application. The advantages of this approach are the following:
- You can be confident the code you have written fulfils your requirements.
- It is likely the code you have written is not overengineered, because you have concentrated on getting the test suite to pass rather than future-proofing your code against possible later changes.
- You know when the code is finished once the test suite passes.
Another approach that helps you write code that meets your requirements without being overengineered is to write the documentation for your Pylons project first and include code samples. Python comes with a library called doctest that can analyze the documentation and run the code samples to check that they work in the way you have documented. You’ll learn more about doctest in Chapter 13.
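As a small taste of how doctest works (a generic example, not taken from the SimpleSite project), the examples embedded in a docstring are executed and their output compared with what the documentation claims:

```python
def add(a, b):
    """Return the sum of a and b.

    The example below is executed by doctest and compared
    against the expected output:

    >>> add(1, 1)
    2
    """
    return a + b

if __name__ == '__main__':
    # Run every example found in this module's docstrings
    import doctest
    doctest.testmod()
```

Running the module directly produces no output unless an example fails, in which case doctest reports the expected and actual values.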
The final type of testing you should strongly consider integrating into your development process is user testing. User testing doesn’t involve writing any automated tests at all but instead typically involves getting together a willing and representative group of the intended users of your product and giving them tasks in order to watch how they interact with the system. You then make notes of any tasks they struggle with or any occasions where the software breaks because of their actions and update your Pylons application accordingly, attributing any problems to deficiencies in your software rather than the incompetence of your users.
The end users of your product are often very good people to test your application on because they will have a similar (or greater) knowledge of the business rules for the tasks they are trying to use the system for, but they might not have the technical knowledge you do. This means they are much more likely to do unusual things during the course of their interaction with your application—things you have learned from experience not to do. For example, they might use the Back button after a POST request or copy unusual characters from software such as Microsoft Word that might be in an unexpected encoding. This behavior helps highlight deficiencies in your software that you may not have noticed yourself.
If you are developing a commercial product for a specific set of users, then there is a secondary reason for involving them in the testing of your prototype. It can help familiarize them with the system and give them a chance to highlight any gaps in the software as you are developing it; this in turn vastly reduces the chance of the software not being accepted at the end of the development process because the users have been involved all along the way. Ultimately, if the users of your application are happy with the way your Pylons application works, then it fulfils its goal.
Note
Of course, user testing is a topic in its own right, and I won’t go into it further here. User testing is also an important part of many software development methodologies that can be used with Pylons.
If you are interested in development methodologies, the Wikipedia articles on agile and iterative development are good places to start and are often useful methodologies to choose for a Pylons project. You might also be interested to read about the Waterfall method, which is a methodology frequently used in larger IT projects but that many people argue often doesn’t work in practice.
Writing unit tests without a testing framework can be a difficult process. The Python community is fortunate to have access to many good quality unit testing libraries including the unittest module from the Python standard library, py.test, and nose. Pylons uses nose to run the test suite for your Pylons application because it currently has a slightly more advanced feature set than the others. nose is installed automatically when you install Pylons, so it is ready to go.
Before you learn about how to write unit tests specifically for Pylons, let’s start by writing some simple tests and using nose to test them.
Here’s what a very simple unit test written using nose might look like:
def test_add():
    assert 1+1 == 2

def test_subtract():
    a = 1
    b = a + 1
    c = b-1
    assert b-a == c
The example uses the Python keyword assert to test whether a particular condition is True or False. Using the assert statement in this way is equivalent to raising an AssertionError but is just a little easier to write. You could replace the test_add() function with this if you prefer:
def test_add():
    if not 1+1 == 2:
        raise AssertionError()
nose differentiates between assertion errors and other exceptions. Exceptions as a result of a failed assertion are called failures, whereas any other type of exception is treated as an error.
To test the example code, save it as test_maths.py, and run the following command:
$ nosetests test_maths.py
You’ll see the following output:
..
----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
For every test that passes, a . character is displayed. In this case, since there are only two tests, there are two . characters before the summary. Now if you change the line a = 1 to a = 3 in the test_subtract() function and run the test again, the assertion will fail, so nose tells you about the failure and displays F instead of the . for the second test:
.F
======================================================================
FAIL: test_maths.test_subtract
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/james/lib/python2.5/site-packages/nose-0.10.3-py2.4.egg/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/home/james/Desktop/test_maths.py", line 8, in test_subtract
    assert b-a == c
AssertionError

----------------------------------------------------------------------
Ran 2 tests in 0.002s

FAILED (failures=1)
This isn’t too helpful because you can’t see the values of a, b, or c from the error message, but you can augment this result by adding a message after the assert statement to clarify what you are testing like this:
assert b-a == c, "The value of b-a does not equal c"
which results in the following:
.F
======================================================================
FAIL: test_maths.test_subtract
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/james/lib/python2.4/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/home/james/Desktop/test_maths.py", line 8, in test_subtract
    assert b-a == c
AssertionError: The value of b-a does not equal c

----------------------------------------------------------------------
Ran 2 tests in 0.002s

FAILED (failures=1)
This is better but still not particularly helpful because you still can’t tell the values of a, b, or c from the output. nose has three different ways to help you solve this problem, covered in the following three sections.
Any print statements you add to your tests are displayed only if the test fails or results in an error. Try modifying the test_subtract() function so that it looks like this:
def test_subtract():
    a = 3
    print "a is %s"%a
    b = a + 1
    print "b is %s"%b
    c = b-1
    print "c is %s"%c
    assert b-a == c
If you run the test again, you’ll see the following extra information displayed after the AssertionError:
-------------------- >> begin captured stdout << ---------------------
a is 3
b is 4
c is 3
--------------------- >> end captured stdout << ----------------------
From this you can easily see that 4-3 != 3, but the test output wouldn’t be cluttered with these debug messages unless the test failed. If you would prefer nose to always print debug messages like these, you can use the -s option so that it doesn’t capture the standard output stream.
If you run nose with the -d flag, it will try to display the values of the variables in the assert statement:
$ nosetests -d test_maths.py
In this case, the output also contains the following line, so you can immediately see the mistake:
>> assert 4-3 == 3
If you want even more flexibility to debug the output from your tests, you can start nose with the --pdb and --pdb-failures options, which drop nose into debugging mode if it encounters any errors or failures, respectively. As the option names suggest, nose invokes pdb (the Python debugger), so you can use the full range of commands supported by the pdb module.
Let’s give it a try—start by running the test again with the new flags set:
$ nosetests --pdb --pdb-failures test_maths.py
Now when the failure occurs, you’ll see the pdb prompt:
.> /home/james/Desktop/test_maths.py(11)test_subtract()
-> assert b-a == c
(Pdb)
You can display a list of commands with the h (help) command.
Of these, some of the most important are l, which lists the code nearby, and q, which exits the debugger so that the tests can continue. The prompt also works a bit like a Python shell, allowing you to enter commands. Here’s an example session where you print some variables, obtain help on the l command, and then exit the debugger with q:
(Pdb) print b-a
1
(Pdb) print c
3
(Pdb) h l
l(ist)
(Pdb) l
  6         print "a is %s"%a
  7         b = a + 1
  8         print "b is %s"%b
  9         c = b-1
 10         print "c is %s"%c
 11  ->     assert b-a == c
[EOF]
(Pdb) q
The pdb module and all its options are documented at.
nose uses a set of rules to determine which tests it should run. Its behavior is described in detail in the nose documentation.
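The name matching can be sketched with the regular expression nose uses by default. The exact pattern below is taken from nose's documented default testMatch setting, so treat it as an assumption for your installed version:

```python
import re

# nose's default testMatch pattern: a name counts as test-like if
# "test" or "Test" appears at the start of the name or directly
# after one of the characters _ . -
TEST_MATCH = re.compile(r'(?:^|[\b_\.-])[Tt]est')

def looks_like_a_test(name):
    # Mimics how nose decides whether a module, class or function
    # name should be collected as a test
    return bool(TEST_MATCH.search(name))
```

For example, test_maths, my_test_module, and TestPageController all match, while a plain module name such as maths does not.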
To specify which tests you want to run, you can pass test names on the command line. Here’s an example that will search dir1 and dir2 for test cases and will also run the test_b() function in the module test_a.py in the tests directory. All these tests will be looked for in the some_place directory instead of the current working directory because the code uses the -w flag:
$ nosetests -w some_place dir1 dir2 tests/test_a.py:test_b
When you are developing a Pylons application, you would normally run nosetests from the Pylons project directory (the directory containing the setup.py file) so that nose can automatically find your tests.
Note
For more information about nose, see the wiki at.
Pylons provides powerful unit testing capabilities for your web application utilizing paste.fixture (documented at) to emulate requests to your web application. Pylons integrates paste.fixture with nose so that you can test Pylons applications using the same techniques you learned for nose in the previous section.
Note
It is likely that at some point Pylons will switch to using the newer WebTest package, but since WebTest is simply an upgrade of paste.fixture with some better support for Pylons’ newer request and response objects, the upgrade shouldn’t introduce huge changes, and therefore the contents of this section should still largely apply.
To demonstrate functional testing with paste.fixture, let’s write some tests for the SimpleSite application. If you look at the SimpleSite project, you’ll notice the tests directory. Within it is a functional directory for functional tests that should contain one file for each controller in your application. These are generated automatically when you use the paster controller command to add a controller to a Pylons project. To get started, update the tests/functional/test_page.py file that was generated when you created the page controller so that it looks like this:
from simplesite.tests import *

class TestPageController(TestController):

    def test_view(self):
        response = self.app.get(url_for(controller='page', action='view', id=1))
        assert 'Home' in response
The page controller doesn’t have an index() action because you replaced it as part of the tutorial in Chapter 8, so the previous example tests the view() action instead.
The self.app object is a Web Server Gateway Interface application representing the whole Pylons application, but it is wrapped in a paste.fixture.TestApp object (documented at). This means the self.app object has the methods get(), post(), put(), delete(), do_request(), encode_multipart(), and reset(). Unless you are doing something particularly clever, you would usually just use get() and post(), which simulate GET and POST requests, respectively.
get(): This requests the URL path specified by url using a GET request and returns a response object.
post(): This is very similar to the get() method, but it performs a POST request, so params are put in the body of the request rather than in the query string. It takes similar arguments and returns a response object.
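Conceptually, TestApp works by calling your application as a plain WSGI callable and capturing what it returns. The toy classes below are not paste.fixture itself, just an illustration of the mechanism:

```python
def simple_app(environ, start_response):
    # A minimal WSGI application that always returns a short page
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Home']

class MiniTestApp:
    """A toy stand-in for paste.fixture.TestApp, for illustration only."""

    def __init__(self, app):
        self.app = app

    def get(self, url):
        # Build a minimal WSGI environ for a GET request and capture
        # the status and body the application produces
        captured = {}

        def start_response(status, headers):
            captured['status'] = status
            captured['headers'] = headers

        environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': url}
        body = ''.join(self.app(environ, start_response))
        return captured['status'], body
```

Calling MiniTestApp(simple_app).get('/page/view/1') returns the status '200 OK' together with a body containing 'Home', which is exactly the kind of check the real self.app.get() makes possible.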
The example you’ve just added to tests/functional/test_page.py uses the get() method to simulate a GET request to the URL /page/view/1. Because a fully configured Pylons environment is set up in the simplesite.tests module, you are able to use url_for() to generate the URL in the same way you would in a Pylons controller. The get() method returns a paste.fixture response object. You can then use this to check the response returned was the one you expected. In this case, you check that the home page contains the text 'Home' somewhere in the response.
In addition to the methods on the self.app() object, Pylons also gives you access to some of the Pylons globals that have been created during the request. They are assigned as attributes of the paste.fixture response object:
To use them, just access the attributes of the response after you’ve used a get() or post() method:
def test_view(self):
    response = self.app.get(url_for(controller='page', action='view', id=1))
    assert 'Home' in response
    assert 'REQUEST_METHOD' in response.req.environ
Note
The paste.fixture response object already has its own request object assigned to it as the .request attribute, which is why the Pylons request global is assigned to the .req attribute instead.
For simple cases, it is fine to work with the paste.fixture response object, but for more complicated cases, you will probably prefer to work with the more familiar Pylons response global available as response.response.
Before you test the previous method, you really need to understand exactly how the test setup works because, as it stands, it could damage your development setup. Let’s see why in the next section.
To run the test, you would execute the nosetests command, but before you do, let’s consider what this command actually does.
When the nosetests command is executed, it reads some of its configuration from the project’s setup.cfg file. This file contains a section that looks like this:
[nosetests]
with-pylons=test.ini
This tells nose to use the test.ini file to create a Pylons application rather than the development.ini file you’ve been using so far.
Note
You can also choose the config file to use for the tests by specifying the --with-pylons option on the command line. Likewise, you can also put other nosetests command-line options in the setup.cfg file if that is more convenient. Here are some examples of options you could set:
[nosetests]
verbose=True
verbosity=2
with-pylons=test.ini
detailed-errors=1
The test.ini file is specifically for your testing configuration and (if properly configured) allows you to keep your testing and development setups completely separate. It looks like this:
#
# SimpleSite - Pylons testing environment configuration
#
[server:main]
use = egg:Paste#http
host = 0.0.0.0
port = 5000

[app:main]
use = config:development.ini
# Add additional test specific configuration options as necessary.
This should seem very familiar, but notice the line use = config:development.ini in the [app:main] section. This causes the test.ini [app:main] section to use exactly the same configuration as the development.ini file’s [app:main] section. Although this can save you some effort in some circumstances, it can also cause your tests to interfere with your development setup if you are not careful.
Once nosetests has correctly parsed the test.ini file, it will look for available tests. In doing so, it imports the tests/__init__.py module, which executes this line:
SetupCommand('setup-app').run([config['__file__']])
Although it looks fairly innocuous, this results in the equivalent of this command being executed:
$ paster setup-app test.ini
You’ll recall from Chapter 8 that this will run the project websetup.py file’s setup_app() function with the configuration from test.ini, but because the test.ini file currently uses the configuration from the [app:main] section of the development.ini file, it will be called with the same sqlalchemy.url as your development setup. If your websetup.py was badly written, this could damage the data in your development database.
To make matters worse, the test.ini file doesn’t come with the development.ini file’s logging configuration, so you can’t even see what is happening behind the scenes. Copy all the logging lines from the development.ini file to the end of test.ini starting with # Logging configuration and ending at the end of the file. If you run nosetests again, you will see what has been happening behind the scenes:
$ nosetests
20:14:02,807 INFO  [simplesite.websetup] Adding home page...
20:14:02,918 INFO  [simplesite.websetup] Successfully set up.
.
----------------------------------------------------------------------
Ran 1 test in 0.347s

OK
As you can see, every time the test suite was run, a new home page was accidentally added to the database. You can confirm this by starting the Paste HTTP server and visiting the home page to verify that an extra home page entry has been added.
Now that you understand what is happening, update the test.ini file with its own configuration so the [app:main] section looks like this:
[app:main]
use = egg:SimpleSite
full_stack = true
cache_dir = %(here)s/data
beaker.session.key = simplesite
beaker.session.secret = somesecret
# SQLAlchemy database URL
sqlalchemy.url = sqlite:///%(here)s/test.db
Notice that sqlalchemy.url has been changed to use test.db. This still doesn’t prevent a new home page from being added each time the tests run, so you should update websetup.py too. Ideally, you want a completely fresh database each time the tests are run so that they are consistent each time. To achieve this, you need to know which config file is being used to run the tests. The setup_app() function takes a conf object as its second argument. This object has a .filename attribute that contains the name of the file used to invoke the setup_app() function.
Update the setup_app() function in websetup.py to look like this. Notice the import of os.path on line 3 as well as the code to drop existing tables if using the test.ini file in lines 16-20.
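The essence of that change is a check on conf.filename. Here is a minimal, self-contained sketch of the check; the DummyConf class is purely illustrative, since the real configuration object is passed in by Paste:

```python
import os.path

class DummyConf:
    """Stands in for the configuration object Paste passes to setup_app()."""
    def __init__(self, filename):
        self.filename = filename

def should_drop_tables(conf):
    # Drop and re-create the tables only when setup_app() is invoked
    # with the test configuration, so the development database is safe
    filename = os.path.split(conf.filename)[-1]
    return filename == 'test.ini'
```

In websetup.py the same test would guard a call that drops the model's tables before they are re-created.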
With these changes in place, let’s run the test again. Notice that the Dropping existing tables... log message appears in the output:
$ nosetests
14:02:58,646 INFO  [simplesite.websetup] Dropping existing tables...
... some lines of output omitted ...
14:02:59,603 INFO  [simplesite.websetup] Adding homepage...
14:02:59,617 INFO  [sqlalchemy.engine.base.Engine.0x...26ec] BEGIN
14:02:59,622 INFO  [sqlalchemy.engine.base.Engine.0x...26ec] INSERT INTO page (content, posted, title, heading) VALUES (?, ?, ?, ?)
14:02:59,623 INFO  [sqlalchemy.engine.base.Engine.0x...26ec] [u'Welcome to the SimpleSite home page.', '2008-11-04 14:02:59.622472', u'Home Page', None]
14:02:59,629 INFO  [sqlalchemy.engine.base.Engine.0x...26ec] COMMIT
14:02:59,775 INFO  [simplesite.websetup] Successfully set up
.
----------------------------------------------------------------------
Ran 1 test in 0.814s

OK
This time the setup_app() function can determine that it is being called with the test.ini config setup, so it drops all the tables before performing the normal setup. Because you updated the test.ini file with a new SQLite database, you’ll notice the test database test.db has been created in the same directory as test.ini.
The page controller’s save() action currently looks something like the following sketch (reconstructed from the behavior exercised by the tests in this section, so treat the details as approximate):

@restrict('POST')
@validate(schema=NewPageForm(), form='edit')
def save(self, id=None):
    page_q = meta.Session.query(model.Page)
    page = page_q.filter_by(id=id).first()
    if page is None:
        abort(404)
    # Update the page with the submitted form values
    for k, v in self.form_result.items():
        setattr(page, k, v)
    meta.Session.commit()
    session['flash'] = 'Page successfully updated.'
    session.save()
    # Issue an HTTP redirect to the view action
    redirect_to(controller='page', action='view', id=page.id)
Let’s write a test to check that this action behaves in the correct way:
- GET requests are disallowed by the @restrict decorator.
- The action returns a 404 Not Found response if no ID is specified.
- The action returns a 404 Not Found response for IDs that don’t exist.
- Invalid data should result in the form being displayed.
- The action saves the updated data in the database.
- The action sets a flash message in the session.
- The action returns a redirect response to redirect to the create action.
You could write the tests for each of these in a single method of the TestPageController class, but if one of the tests failed for any reason, nose would not continue with the rest of the method. On the other hand, some of the tests are dependent on other tests passing, so you cannot write them all as separate methods either. For example, you can’t test whether a flash message was set if saving the page failed. To set up the tests correctly, let’s create the following methods:
def test_save_prohibit_get(self):
    """Tests to ensure that GET requests are prohibited"""

def test_save_404_invalid_id(self):
    """Tests that a 404 response is returned if no ID is
    specified or if the ID doesn't exist"""

def test_save_invalid_form_data(self):
    """Tests that invalid data results in the form being
    returned with error messages"""

def test_save(self):
    """Tests that valid data is saved to the database, that the
    response redirects to the view() action and that a flash
    message is set in the session"""
These tests will require some imports, so add the following lines to the top of tests/functional/test_page.py:
from routes import url_for
from simplesite.model import meta
from urlparse import urlparse
Now let’s implement the test methods starting with test_save_prohibit_get(), which looks like this:
class TestPageController(TestController):

    def test_save_prohibit_get(self):
        """Tests to ensure that GET requests are prohibited"""
        response = self.app.get(
            url=url_for(controller='page', action='save', id='1'),
            params={
                'heading': u'Updated Heading',
                'title': u'Updated Title',
                'content': u'Updated Content',
            },
            status=405
        )
As you can see, the example uses the get() method of self.app to simulate a GET request to the save() action with some sample params, which will be sent as part of the query string. By default, the get() and post() methods expect either a 200 response or a response in the 300s and will consider anything else an error. In this case, you expect the request to be denied with a 405 Method Not Allowed response, so to prevent paste.fixture from raising an exception, you have to specify the status parameter explicitly. Because paste.fixture checks that the status will be 405, you don’t have to add another check on the response object.
Now let’s look at the test_save_404_invalid_id() method:
def test_save_404_invalid_id(self):
    """Tests that a 404 response is returned if no ID is
    specified or if the ID doesn't exist"""
    response = self.app.post(
        url=url_for(controller='page', action='save', id=''),
        params={
            'heading': u'Updated Heading',
            'title': u'Updated Title',
            'content': u'Updated Content',
        },
        status=404
    )
    response = self.app.post(
        url=url_for(controller='page', action='save', id='2'),
        params={
            'heading': u'Updated Heading',
            'title': u'Updated Title',
            'content': u'Updated Content',
        },
        status=404
    )
As you can see, this code is similar but uses the post() method and performs two tests rather than one. In the first, no ID is specified, and in the second the ID specified doesn’t exist. In both cases, you expect a 404 HTTP response, so the status parameter is set to 404.
The test_save_invalid_form_data() method is more interesting. Once again a POST request is triggered, but this time the title is empty, so the @validate decorator should cause the page to be redisplayed with the error message Please enter a value:
def test_save_invalid_form_data(self):
    """Tests that invalid data results in the form being returned
    with error messages"""
    response = self.app.post(
        url=url_for(controller='page', action='save', id='1'),
        params={
            'heading': u'Updated Heading',
            # title is required so this next entry is invalid
            'title': u'',
            'content': u'Updated Content',
        }
    )
    assert 'Please enter a value' in response
As you can see from the last line, the presence of the error message in the response is tested. Because you expect a 200 HTTP response, there is no need to specify the status argument, but you can if you like.
Finally, let’s look at the test_save() method:
def test_save(self):
    """Tests that valid data is saved to the database, that the
    response redirects to the view() action and that a flash
    message is set in the session"""
    response = self.app.post(
        url=url_for(controller='page', action='save', id='1'),
        params={
            'heading': u'Updated Heading',
            'title': u'Updated Title',
            'content': u'Updated Content',
        }
    )
    # Test the data is saved in the database (we use the engine API to
    # ensure that all the data really has been saved and isn't being
    # returned from the session)
    connection = meta.engine.connect()
    result = connection.execute(
        """
        SELECT heading, title, content
        FROM page
        WHERE id=?
        """,
        (1,)
    )
    connection.close()
    row = result.fetchone()
    assert row.heading == u'Updated Heading'
    assert row.title == u'Updated Title'
    assert row.content == u'Updated Content'
    # Test the flash message is set in the session
    assert response.session['flash'] == 'Page successfully updated.'
    # Check the response will redirect to the view action
    assert urlparse(response.response.location).path == url_for(
        controller='page', action='view', id=1)
    assert response.status == 302
The first part of this test generates a paste.fixture response after posting some valid data to the save() action. A SQLAlchemy connection object is then created to perform a SQL SELECT directly on the database to check that the data really has been updated. Next you check that the session contains the flash message. You'll remember from earlier in the chapter that certain Pylons globals, including session, are available as attributes of the response object; here, response.session is tested to ensure the flash message is present. Finally, you want to check that the HTTP response headers contain a Location header with the correct URL to redirect the browser to the view() action. For this we use the Pylons response object (response.response) rather than the paste.fixture response object, because it has a .location attribute holding the Location header. The Location header contains the whole URL, so you use urlparse() to compare just the path component against the path of the view() action.
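As a standalone illustration of that last check, here is how urlparse() splits a redirect URL (shown with Python 3's urllib.parse; the book's Python 2 code imports the same function from the urlparse module; the URL is a made-up example):

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

# A hypothetical Location header value returned after a redirect.
location = "http://localhost/page/view/1?from=edit"
parts = urlparse(location)

print(parts.path)   # -> /page/view/1
print(parts.query)  # -> from=edit

# Comparing only the path component ignores the scheme, host, and query
# string, which is exactly what the test_save() assertion relies on.
assert parts.path == "/page/view/1"
```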
Once you’ve implemented the tests, you can check they pass by running nosetests in your main project directory:
$ nosetests simplesite/tests/functional/test_page.py
... log output omitted ...
....
----------------------------------------------------------------------
Ran 4 tests in 0.391s

OK
The tests all pass successfully, so you can be confident the save() action functions as it is supposed to function.
Tip
The TestPageController is derived from the TestController class, which itself subclasses the standard Python unittest.TestCase class. This means you can also use its helper methods in your tests. The unittest.TestCase class is documented in the Python standard library reference and is well worth a read if you plan to write anything more than simple tests.
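For instance, here is a minimal, hypothetical sketch (not from the book) of those inherited helper methods in action; assertEqual and assertTrue give clearer failure messages than bare assert statements:

```python
import unittest

# Because TestController ultimately subclasses unittest.TestCase, all of
# its assertion helpers are available inside your functional tests. The
# values below are stand-ins for a real paste.fixture response.
class TestHelpers(unittest.TestCase):

    def test_with_helpers(self):
        status = 404
        body = 'Please enter a value'
        # Clearer failure output than a bare "assert status == 404":
        self.assertEqual(status, 404)
        self.assertTrue('enter a value' in body)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestHelpers)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```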
As you saw earlier in the chapter, Pylons adds certain objects to the response object returned by paste.fixture when you call the self.app object with one of the HTTP methods such as get() or post(). You can also set up your own objects to be added to the response object. If a test is being run, Pylons makes available a paste.testing_variables dictionary in the request.environ dictionary. Any objects you add to this dictionary are automatically added as attributes to the paste.fixture response object. For example, if you had a custom Cache object that you wanted to make available in the tests, you might modify the __call__() method in the BaseController in your project’s lib/base.py file to look like this:
class BaseController(WSGIController):

    def __call__(self, environ, start_response):
        # Add the custom cache object
        if 'paste.testing_variables' in environ:
            environ['paste.testing_variables']['cache'] = CustomCacheObj()
        try:
            return WSGIController.__call__(self, environ, start_response)
        finally:
            meta.Session.remove()
In the TestPageController you would now find the response object has a .cache attribute:
def test_cache(self):
    response = self.app.get(url(controller='page', action='view', id='1'))
    assert hasattr(response, 'cache') is True
For more details on running tests using paste.fixture, see the Paste project's documentation.
Sometimes it is useful to be able to test your application from the command line. As you saw earlier in the chapter, one method for doing this is to use the --pdb and --pdb-failures options with nose to debug a failing test, but what if you want to quickly see how a particular part of your Pylons application behaves to help you work out how you should write your test? In that case, you might find the Pylons interactive shell useful.
The Pylons interactive shell enables you to use all the tools you would usually use in your tests but in an interactive way. This is also particularly useful if you can’t understand why a particular test is giving you the result it is. The following command starts the interactive shell with the test setup, but you could equally well specify development.ini if you wanted to test your development setup:
$ paster shell test.ini
Here’s the output you receive:
Pylons Interactive Shell
Python 2.5.1 (r251:54863, Apr 15 2008, 22:57:26)
[GCC 4.0.1 (Apple Inc. build 5465)]

All objects from simplesite.lib.base are available
Additional Objects:
  mapper   -  Routes mapper object
  wsgiapp  -  This project's WSGI App instance
  app      -  paste.fixture wrapped around wsgiapp

>>>
As you can see, the shell provides access to the same objects as you have access to in the actions of your functional test classes.
You can use the Pylons interactive shell in the same way you would usually use a Python shell. Here are some examples of its use:
>>> response = app.get('/page/view/1')
13:24:31,824 INFO  [sqlalchemy.engine.base.Engine.0x..90] BEGIN
13:24:31,828 INFO  [sqlalchemy.engine.base.Engine.0x..90] SELECT page.id AS page_id, page.content AS page_content, page.posted AS page_posted, page.title AS page_title, page.heading AS page_heading
FROM page
WHERE page.id = ? LIMIT 1 OFFSET 0
13:24:31,828 INFO  [sqlalchemy.engine.base.Engine.0x..90] [1]
>>> assert 'Updated Content' in response
>>> print response.req.environ.has_key('REMOTE_USER')
False
>>>
Notice that you receive the same logging output because of the logging configuration you added to test.ini earlier in the chapter. Also notice that because you've already run the nosetests command, the database currently has the text Updated Content for the content rather than the message Welcome to the SimpleSite home page., which was the original value.
In this chapter, you saw how nose, paste.fixture, and the Pylons interactive shell work together to allow you to test Pylons applications. You’ve seen how to use some of the more common options available to nose to customize the output from the tests and how to debug failures and errors with Python’s pdb module. You also learned the difference between unit testing, functional testing, and user testing and saw why all of them are important. You also now know exactly how Pylons sets up your tests so that you can customize their behavior by changing the websetup.py file or adding new objects to the paste.fixture response object.
In the next chapter, you'll look at some of the recommended ways to document a Pylons project, and you'll learn about one more type of testing known as a doctest, which allows examples from within the documentation to be tested directly.
Project Euler 5: Find the smallest number divisible by each of the numbers 1 to 20
Project Euler 5 Problem Description
Project Euler 5: 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest number that is evenly divisible by all of the numbers from 1 to 20?
Project Euler 5 Analysis
Euclid’s algorithm
The smallest positive number that is evenly divided (divided without remainder) by a set of numbers is called the Least Common Multiple (LCM). All we have to do is find the LCM for the integers {1, 2, 3, 4, …, 20} using Euclid’s algorithm.
After some reflection you might realize, correctly, that every integer is divisible by 1, so 1 can be removed from the list and we need only calculate 2 through 20.

By furthering this reasoning we can eliminate other factors. We leave 20 in the calculation but then remove its factors {2, 4, 5, 10}: any number evenly divisible by 20 is also evenly divisible by these factors. 19 is prime and has no factors, so it stays. 18 has factors {2, 3, 6, 9}; we already removed 2, but we can also remove 3, 6, and 9. 17 is prime, so it stays too.
We continue this numeric carnage until our original list of {1..20} becomes the much smaller {11, 12, 13, 14, 15, 16, 17, 18, 19, 20}. Following this reasoning, the lower half of the set is completely irrelevant and can be eliminated from consideration without changing the result.
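This claim is easy to check with a quick sketch (Python 3 shown here, while the article's own code is Python 2): the LCM of just the upper half {11, …, 20} equals the LCM of the full range {1, …, 20}.

```python
from math import gcd          # Python 3; Python 2 used fractions.gcd
from functools import reduce  # reduce is a builtin in Python 2

def lcm(a, b):
    return a // gcd(a, b) * b

full  = reduce(lcm, range(1, 21))    # LCM of 1..20
upper = reduce(lcm, range(11, 21))   # LCM of 11..20 only
print(full, upper, full == upper)    # -> 232792560 232792560 True
```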
from fractions import gcd

def lcm(a, b):
    return a // gcd(a, b) * b

L = 20
print 'LCM of all numbers from 1 through', L, 'is', \
    reduce(lcm, range(L//2+1, L+1))
A better method for this problem
For larger ranges, say greater than 10^5, this method becomes very slow. The method outlined below can adeptly handle larger sets such as {2, 3, 4, …, 10^6}.
Taking 20 as our example, we know the LCM must be divisible by every prime less than or equal to 20. In this case those primes are {2, 3, 5, 7, 11, 13, 17, 19}. For the primes less than square root of 20 there may be an exponent involved, and for those greater than the square root of 20 the exponent will always be 1. Let’s look at the breakdown:
2^4 × 3^2 × 5 × 7 × 11 × 13 × 17 × 19 = 232,792,560.
We use the log function to determine the exponent of each prime p as floor(log(20) / log(p)), looping through the primes.

For example, the floor of (log 20 / log 2) = 4 is the largest exponent of 2 in the prime factorization (2^4 = 16 ≤ 20). Again, the floor of (log 20 / log 3) = 2 is the largest exponent of 3 in the prime factorization (3^2 = 9, and 2 × 3^2 = 18 ≤ 20).
The HackerRank version ups the limit for 1 ≤ n ≤ 40 and runs up to 10 consecutive test cases. This algorithm handles those test cases in less than a hundredth of a second.
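Putting the pieces together, here is a self-contained Python 3 sketch of the prime/exponent method (a simple trial-division prime generator stands in for the prime_sieve helper from the author's Euler.py module):

```python
from math import log

def primes_up_to(n):
    # Trial-division stand-in for a proper sieve; fine for small limits.
    ps = []
    for c in range(2, n + 1):
        if all(c % p for p in ps):
            ps.append(c)
    return ps

def lcm_range(limit):
    # LCM(1..limit) = product over primes p <= limit of
    # p ** floor(log(limit) / log(p)).
    a = 1
    for p in primes_up_to(limit):
        a *= p ** int(log(limit) / log(p))
    return a

print(lcm_range(10))  # -> 2520
print(lcm_range(20))  # -> 232792560
```

One caveat: int(log(limit) / log(p)) can be off by one at exact powers because of floating-point rounding; a loop that multiplies by p while p**(e+1) <= limit is the robust alternative for large limits.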
Project Euler
- Notes on Python usage:
The statement

from Euler import prime_sieve

includes a function for calculating prime numbers from our Euler.py library module into the program. Many useful functions can be imported this way without copying their definitions into each program. When an improvement is made or an error corrected, it only has to be done once and is reflected in all the scripts that import it. This keeps scripts short and concise. The function prime_sieve is listed in Common Functions and Routines for Project Euler.
In the GCD version we changed return a * b / gcd(a,b) to return a / gcd(a,b) * b so that the division happens before the multiplication. This keeps the intermediate result smaller, reducing the chance of overflow in languages with fixed-width integers (a point noted in the Wikipedia article on least common multiples).
Python’s
reduce(function, sequence, initial value) returns a single value constructed by calling the binary function
mulon the first two items of the sequence, then on the result and the next item and so on.
- The code shown below removes the use of reduce to achieve the same results.
a = 1
for p in primes:
    a *= p**int(log(L)/log(p))
Project Euler 5
Hi Mike,
I once calculated the smallest number divisible by every number between 1 and 100. I posted this on my facebook profile back in 2009.
69,720,375,229,712,477,164,533,808,935,312,303,556,800 is the smallest number divisible by every number from 1 to 100. I calculated this on 4-3-2003.
Scott
I ran this program to 100 and confirmed your result. For 1 to 1000 it is:
7128865274665093053166384155714272920668358861885893040452001991154324087
5811114994764441519138715869117178170195752565129802640676210092514658710
0430513107268626814320019660997486274593718834370501543445252373974529896
3145674982128236956232823794011068809262317708861979540791247754558049326
4757378299233527517967352480424636380511370343312147817468508784534856780
21888075373249921995672056932029099390891687487672697950931603520000
I won’t bother putting in the commas. | https://blog.dreamshire.com/project-euler-5-solution/ | CC-MAIN-2017-51 | refinedweb | 771 | 70.02 |
Created on 2003-11-30 06:27 by virtualspirit, last changed 2005-01-05 15:56 by jafo.
/usr/bin/idle2.3 wrapper (from the 2.3.2-1pydotorg rpm) ignores
command-line arguments when calling idle.py. Fixed only by adding
"$*" to the script.
Logged In: YES
user_id=149084
Appears to be an issue with the Tools section of
..../Misc/RPM/python-2.4.spec
Backport to release23-maint
Assigning to Sean Reifschneider
Logged In: YES
user_id=81797
I went ahead and turned this into Python code and called
"execvp", passing the command arguments onto the idle call.
This way there are no worries about how the shell handles
$* expansion. The current CVS version results in something
looking like:
#!/usr/bin/env python2.4
import os, sys
os.execvp("/usr/bin/python2.4", ["/usr/bin/python2.4",
"/usr/lib/python2.4/idlelib/idle.py"] + sys.argv[1:])
print "Failed to exec Idle"
sys.exit(1)
(assuming default build settings)
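The benefit over a shell "$*" expansion can be sketched with the subprocess module (a hypothetical illustration, not part of the bug report): passing an argument vector, as execvp does, keeps arguments containing spaces intact, whereas an unquoted $* would re-split them.

```python
import subprocess
import sys

# Run a tiny child script, handing it an argument vector directly,
# the same way os.execvp hands arguments to the new process image.
child = 'import sys; print(sys.argv[1:])'
out = subprocess.check_output(
    [sys.executable, '-c', child, 'file with spaces.py', '-n'],
)
print(out.decode().strip())  # -> ['file with spaces.py', '-n']
# With an unquoted shell $*, 'file with spaces.py' would have been
# split into three separate arguments.
```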
Logged In: YES
user_id=149084
This really ought to get into release24-maint and probably
release23-maint if you think anyone will ever build an rpm
from 2.3.5 (which is due out soon).
Logged In: YES
user_id=81797
I've tried checking out python23-maint, but the resulting
tree I'm getting has the latest .spec file. I can make the
changes to 2.3, but I need some help on how to get a tree I
can check in against, or if I should submit patches to the
tracker.
Logged In: YES
user_id=81797
mwh helped me get the right check-out of the maintenance
releases. I've committed these changes to the 2.3 and 2.4
maintenance branches.
Created on 2012-10-11 11:11 by eudoxos, last changed 2012-11-29 17:54 by asvetlov. This issue is now closed.
I have several compiled modules linked into one .so file and import them using imp.load_dynamic.
Only the first module imported with load_dynamic is imported properly, all subsequent calls of load_dynamic on the same file ignore the first argument (name) and return the first module again. The init function is also called only for the first module imported by load_dynamic.
The bug is reproducible for python 2.7.3 and 3.2.2. Test case is attached.
Here is the inline, simplified source for 2.7:
foo.c:
#include <stdio.h>
#include <Python.h>

PyMODINIT_FUNC initfoo() {
    (void) Py_InitModule("foo", NULL);
    printf("initfoo()\n");
}

PyMODINIT_FUNC initbar(void) {
    (void) Py_InitModule("bar", NULL);
    printf("initbar()\n");
}

PyMODINIT_FUNC initbaz(void) {
    (void) Py_InitModule("baz", NULL);
    printf("initbaz()\n");
}
test.py:
import sys,imp
# import foo using the normal machinery
sys.path.append('.')
import foo
# this is OK
print imp.load_dynamic('bar','foo.so')
# this imports *bar* again, but should import baz
print imp.load_dynamic('baz','foo.so')
# this imports *bar* again, although the module is not defined at all
print imp.load_dynamic('nonsense','foo.so')
Compiled with
gcc -shared -fPIC foo.c -o foo.so `pkg-config python --cflags --libs`
Running "python test.py" I get this output:
initfoo()
initbar()
<module 'bar' from 'foo.so'>
<module 'bar' from 'foo.so'>
<module 'bar' from 'foo.so'>
The module 'bar' is imported 3 times, although the 2nd import should import *baz* and the third import should fail ("nonsense" module does not exist).
Did this actually work in a previous version of Python, and if so what version?
I tried with python 2.4.5 and 2.5.2 in chroot (using ubuntu hardy, which packaged both of them) and the result is exactly the same for both. I doubt I am able to install anything older in a sensible manner.
This is an enhancement request, then.
No, it is an old bug, since the behavior does something other than what is documented and reasonably expected -- imp.load_dynamic("baz","foo.so") imports the "foo" module under some circumstances.
It's actually a documentation bug.
While I understand that this behavior went unnoticed for ages and can be seen therefore as unimportant, designating this as documentation bug is quite absurd; perhaps the following wording would be appropriate:
.. note::
If this function is called multiple times on the same file (in terms of inode; symlink pointing to same file is fine), it will return the module which was first imported via `load_dynamic` instead of the requested module, without reporting any error. The previous call to `load_dynamic` may not be in the same part of the code, but it must happen within the same interpreter instance..
I found the cause of the behavior (perhaps it is common knowledge, but I am new to python source); imp.load_dynamic calls the following functions
Python/import.c: imp_load_dynamic ()
Python/importdl.c: _PyImport_LoadDynamicModule ()
Python/import.c: _PyImport_FindExtensionObject ()
where the last one uses the *extensions* object, which is explained in a comment in the CPython source:

    …immediately after the module initialization function succeeds. A
    copy can be retrieved from there by calling
    _PyImport_FindExtensionObject().
The fact that extensions are keyed by file name explains why opening the .so through symlink does not return the old extension object:
# foo.so
# bar.so -> foo.so (works for both symlinks and hardlinks)
imp.load_dynamic("foo","foo.so")
imp.load_dynamic("bar","bar.so") # will return the bar module
I will investigate whether marking the module as capable of multiple initialization could be a workaround for the issue -- since the quoted comment further says:
Modules which do support multiple initialization set their m_size
field to a non-negative number (indicating the size of the
module-specific state). They are still recorded in the extensions
dictionary, to avoid loading shared libraries twice.
To fix the issue, I suggest that the *extensions* dict is keyed by (filename,modulename) tuple for dynamically loaded modules. This would avoid any ambiguity. Grepping through the code shows that the *extensions* object is only accessed from Python/import.c, therefore regressions should be unlikely. What do you think?
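A toy pure-Python model (hypothetical; CPython's real cache lives in C) shows why keying by (filename, modulename) removes the ambiguity:

```python
# Toy model of the interpreter's cache of loaded extension modules.
extensions_old = {}
extensions_new = {}

def load_dynamic_old(name, path):
    # Current behavior: cache keyed by filename only, so a second
    # module requested from the same .so is never initialized.
    if path not in extensions_old:
        extensions_old[path] = {'__name__': name, '__file__': path}
    return extensions_old[path]

def load_dynamic_fixed(name, path):
    # Proposed fix: key the cache by (filename, modulename).
    key = (path, name)
    if key not in extensions_new:
        extensions_new[key] = {'__name__': name, '__file__': path}
    return extensions_new[key]

load_dynamic_old('foo', 'foo.so')
print(load_dynamic_old('baz', 'foo.so')['__name__'])    # -> foo (the bug)
load_dynamic_fixed('foo', 'foo.so')
print(load_dynamic_fixed('baz', 'foo.so')['__name__'])  # -> baz
```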
I did not notice it was not documented in python 3.3 anymore -- my fault, sorry.
In case there is no functional replacement for it, I will try to raise it on the ML. I am currently writing some code in 2.7 which relies on it (I don't see another way of packing multiple compiled modules into one file without using symlinks, which won't work under windows; it saves me lots of trouble with cross-module symbol dependencies and such, avoids RTLD_GLOBAL, rpath and such nasty stuff), and don't want to throw it away with future migration to 3k.
The new functional equivalent is importlib.machinery.ExtensionFileLoader, but underneath the hood it uses the same code as imp.load_dynamic() did.
"""To prevent initializing an extension module more than
once, we keep a static dictionary 'extensions' keyed by module name
(for built-in modules) or by filename (for dynamically loaded
modules), containing these modules.
"""
So there can be only one module per filename.
But what if this dictionary was keyed by tuple(name, filename) instead?
Yes, that's what I suggested at the end of msg172656 - including modulename in the key.
Brett, would it be OK if I make patch against 3.3 (or head, if you prefer) to key by (modulename,filename) for compiled modules?
I had a look at importlib.machinery.ExtensionFileLoader and it will suffer from the same issue.
Should I open a separate bug for the post-3.3 patch, and keep this as documentation bug for 2.7?
Yes, I think keeping this bug as the doc bug and opening a new one for the enhancement is the best way to go.
issue16421 was opened for py3k. Just for the sport of writing, I fixed that in python 2.7 (tip) as well, though others seemed to defend the view it was not a bug, hence not fixable in 2.7.
I think it should not be fixed in 2.7, so I guess to close the issue as wontfix.
The behaviour won't change in 2.7, but the docs still need to be clarified.
e.g. add a note like:
Note:.
(probably flagged as a CPython implementation detail, since it's really an accident of the implementation rather than a deliberately considered language feature)
Pushed doc patch.
Nick, is it good for you?
The doc patch LGTM.
New changeset 94de77bd0b4b by Andrew Svetlov in branch '2.7':
Issue #16194: document imp.load_dynamic problems
Documentation is fixed, behavior cannot be changed.
Close the issue
Introduction:
- create a mobile service and add tables to it
- update the app to use the mobile service
- test the mobile service hosted on Azure Mobile Services
To follow along with me, you need a Windows Azure account. You can sign up for a Windows Azure trial if you don't have an account yet.
1. Create a Mobile Service

Follow the steps below to create a new mobile service in the Azure Management Portal.
Step 1: Add a Mobile Service
Log in to the Azure Management Portal and click the NEW button in the navigation bar. Expand Compute > Mobile Service and click Create.
Step 2: Select Database Type, Region, and Runtime
In the New Mobile Service wizard, select a free 20 MB SQL database or use one of your existing databases. Select JavaScript from the Backend menu and enter a subdomain for the new mobile service in the URL text box.
Note that the name of the mobile service needs to be unique. An error is displayed next to URL when the name/subdomain you entered isn't available.
Step 3: Specify Database Settings
When you create a new mobile service, it is automatically associated with a SQL database. The Azure Mobile Services backend then provides built-in support for enabling remote apps to securely store and retrieve data from it, without you having to write or deploy any custom server code.
To configure the database, enter the name of the database in the Name field. Next, enter Server Login Name and Server Login Password to access the SQL database server.
Click the checkmark in the bottom right to complete the process. You have now created a new mobile service that can be used by your mobile apps. Before you can start storing data, you first need to create a table that can store your application's data.
Note that the use of a database in a different region is not recommended because of additional bandwidth costs and higher latencies.
2. Add a Table to the Mobile Service
In this step, we will add a table named ToDoItem to the mobile service, which will be used by the client app to save the user's to-do items.
Step 1: Create a New Table
From the Data tab in the Azure Management Portal, click Create to add a new table to the mobile service. Name the table ToDoItem and set a permission level against each operation. For the ToDoItem table, I have used the default permission settings.
Click the checkmark in the bottom right to complete the table setup process. In just a few seconds, you have added the ToDoItem table to the mobile service.
Step 2: Add Columns to the Table
The ToDoItem table needs two more columns to store the to-do data: text and completed.
To add the additional columns, click Add Column from the Columns tab of the ToDoItem table. The text column is of type String and the completed column is of type Boolean.
These are the columns of the ToDoItem table.
3. Configure the App to Use the Mobile Service
The app needs to be configured correctly to use the mobile service. You need to add code to connect your app to your mobile service and save data to the cloud.
Right-click the project name in the Solution Explorer and choose Add > Connected Service from the menu. In the Add Connected Service dialog box that appears, choose Azure Mobile Services and click Configure.
Next, choose the mobile service that you created earlier from the list of existing services in your account. You will need to provide your credentials to connect and list the mobile services in your Windows Azure account.
Select the mobile service we created and click Add to complete the process. The wizard will then add all the required references to your project. The references can also be added manually by installing the required package using NuGet. Right-click your client project, select Manage NuGet Packages, search for the WindowsAzure.MobileServices package, and add a reference for the package.
The wizard installs the required NuGet packages, adds a reference for the mobile service's client library to the project, and updates the project source code. The wizard also adds a new static MobileServiceClient field (named tutsplusdemoClient for this project's service) to the App class, which makes the mobile service available to all the pages in your app. You need to add this code manually to App.xaml.cs if you are not using the wizard.
4. Update the App to Use the Mobile Service
You need to update your Windows Phone app to use the mobile service as a backend service. You only need to make changes to the MainPage.cs project file.
Step 1: Add the ToDoItem Class Definition

Add a new model class, ToDoItem, to your project. The model class contains properties corresponding to the columns in the ToDoItem table we created earlier.
The JsonPropertyAttribute attribute is used to define the mapping between property names in the client app and column names in the corresponding table. A reference to the Newtonsoft.Json package must be added to the project to make this work.
Step 2: Add Code to Insert and Fetch Items
Add the following
using statement to MainPage.xaml.cs:
using Microsoft.WindowsAzure.MobileServices;
Add the following lines at the top of MainPage.xaml.cs to create a mobile services-aware binding collection and proxy class for the database table.
private MobileServiceCollection<ToDoItem, ToDoItem> items;
private IMobileServiceTable<ToDoItem> todoTable =
    App.tutsplusdemoClient.GetTable<ToDoItem>();
Next, create an
InsertToDoItem method to insert a new item into the table. Add the
async modifier to the method and add the following code to insert an item.
public async Task InsertToDoItem(ToDoItem toDoItem)
{
    await todoTable.InsertAsync(toDoItem);
    items.Add(toDoItem);
}
This code works if your table has permissions set to Anybody with an Application Key. If you change the permissions to secure your mobile service, you'll need to add user authentication support. See Adding Authentication Using Azure Mobile Services.
Create a RefreshTodoItems method that sets the binding to the collection of items in the ToDoItem table, which contains all of the ToDoItem objects returned from the mobile service. We display a message box if a problem occurs while executing the query.
Step 3: Add Controls to MainPage.xaml
We now have to update MainPage.xaml to display to-do items and add the ability to add to-do items. Below is what the XAML code could look like for a simple user interface that contains a TextBox to insert items and a ListView to view to-do items.
The
InsertToDoItem method is called when the Save button is tapped, which inserts the to-do item into the table.
private async void ButtonSave_Click(object sender, RoutedEventArgs e)
{
    var todoItem = new ToDoItem { Text = TextInput.Text };
    await InsertToDoItem(todoItem);
}
The
RefreshToDoItems method is invoked when the Refresh button is tapped. In this method, we fetch all the items in the table.
private async void ButtonRefresh_Click(object sender, RoutedEventArgs e)
{
    ButtonRefresh.IsEnabled = false;
    //await SyncAsync(); // offline sync
    await RefreshTodoItems();
    ButtonRefresh.IsEnabled = true;
}
5. Test the Mobile Service
The final step of this tutorial is to review the data stored in the mobile service. In the Windows Azure classic portal, click the ToDoItem table under the Data tab of your mobile service. Under the Browse tab, you can view all the items in the table.
Conclusion
This tutorial demonstrates the basics of using Azure Mobile Services as a backend for a Windows Phone app. Creating a mobile service and using it in the app to store data in the cloud is easy to implement.
More complex scenarios involve supporting offline data sync. You can also add offline data sync support to the app following this tutorial. You can restrict table permissions to allow only authenticated users to update the table following this Envato Tuts+ article.
Feel free to download the tutorial's source files for reference. Remember to configure the app to use Azure Mobile Services before deploying it.
iMovie HD import problem
Hi, I want to import a HD 1080i .M4V file into iMovie but when I click on "File>Import iMovie HD Project..." and go to access the file it does not allow me to select it.
I am using iMovie version 7.1.4 (585)
The HD video is from a Sanyo Xacti HD camera. The .MP4 version of the file is also inaccessible to import.
What gives?
iMovie HD is very old, and only supports MP4 simple profile. HD video is not "simple profile" and thus is not supported.
Solution: convert the file to DV or HDV format, or use a more recent version of iMovie or some other video-editing tool. MPEG Streamclip is a free tool (not the only one however) that can convert the files for you.
.NET Applets
Introduction
This is going to be my first tutorial. It's not final, end-all, documentation on this subject. But it is what I have gotten from Channel9 discussions and online documentation. It will walk you through setting up a .NET Applet.
First of all, what is a .NET Applet? An Applet is basically a small application hosted in a larger application, in this case, the browser. Most common are Java Applets, but they are not the only ones. Items in the Windows Control Panel are called Control Panel Applets. .NET Applets are called that because they resemble Java Applets, but that is not their official name. Generally, the procedure I am about to describe is called ' Hosting Windows Forms controls in Internet Explorer '.
How do you create a .NET Applet? Very basically, you create a Windows Forms user control. That's right, you write Windows Forms code and your browser (Internet Explorer only) handles it. Then you put an
<object> tag into your html and specify some information about the Applet (the dll file, namespace, and class).
How does this work? Well, Internet Explorer is able to run .NET controls probably by using COM to host mscoree.dll (If you know more, let me know). If the user/client has the .NET Framework installed, and Internet Explorer 6.0 (I am not sure of 5) your user control will simply be gobbled up and used as a local control.
Security
'Whoa! What is Microsoft doing to us? Think of the hole this could open up!!!', you say. Calm down. It runs in a ‘sandbox'. A special zone that limits what the control can do. It won't let you access local files with the exception of the
OpenFileDialog box, and then can only use the dialog box's
OpenFile() method. You cannot use the
SaveFileDialog. So you don't have to worry about some one messing with your hard drive. (To see which permissions an assembly has for all the different zones, you can go to Control Panel/Administrative Tools/.NET Framework 1.1 Configuration, and then under Runtime Security Policy/Machine/Code Groups/All Code.) saw that developing the control in .net 1.1 framework is it possible... but using 2.0 i got some truble...
do you have more news?!?
Very nice and concise. Good work M!
This thread is for discussions of .NET Applets. | http://www.developerfusion.com/article/4683/net-applets/ | crawl-002 | refinedweb | 410 | 70.19 |
Permalink to this page:
Preface
Please see the Other Resources Link for other pages describing how they were able to link Tomcat with a connector. With luck, someone documented their experience in an environment which is similar to yours.
Here is a link to the Apache Tomcat Connectors (aka JK Connectors) project page. It contains more configuration and installation information.
...
- What is JK (or AJP)?
- Which connector: mod_jk or mod_proxy?
- What about mod_jserv, mod_jk2, mod_webapp (aka warp)?
- Why should I integrate Apache HTTP Server with Apache Tomcat? (or not)
- At boot, is order of start up (Apache HTTP Server vs Apache Tomcat) important?
- Is there any way to control the content of automatically generated mod_jk.conf-auto?
- How do I bind to a specific ip IP address?
- Where can I download a binary distribution of my connector?
- I'm having strange UTF-8 issues with my request parameters.
- How do I configure apache tomcat connectors for a heavy load site?
AnswersWhat/1.1.
- mod_proxy_http2 uses HTTP/2 as communication protocol. I has support for both secure (h2) and cleartext (h2c) variants of HTTP/2. Both are understood by Tomcat 8.5 and later.). (See AJP with stunnel.)
mod_proxy also supports
ProxyPassMatch directive._
(
...
Source:)What about mod_jserv, mod_jk2, mod_webapp (aka warp)?
All of these connectors have been abandoned long ago. Do not use any of them.
...
-.
- jk2 is a refactoring of mod_jk and uses the Apache Portable Runtime (apr). But due to lack of developer interest, it is unsupported. Do not use mod_jk2.
There are many reasons to integrate Tomcat with Apache. And there are reasons why it should not be done too. Needless to say, everyone will disagree with the opinions here. With the performance of Tomcat 5 and 6, performance reasons become harder to justify. So here are the issues to discuss in integrating vs not.
- Clustering. By using Apache as a front end you can let Apache act as a front door to your content to multiple Tomcat instances. If one of your Tomcats fails, Apache ignores it and your Sysadmin can sleep through the night. This point could be ignored if you use a hardware loadbalancer and Tomcat's clustering capabilities.
- Clustering/Security. You can also use Apache as a front door to different Tomcats for different URL namespaces (/app1/, /app2/, /app3/, or virtual hosts). The Tomcats can then be each in a protected area and from a security point of view, you only need to worry about the Apache server. Essentially, Apache becomes a smart proxy server.
- Security. This topic can sway one either way. Java has the security manager while Apache has a larger mindshare and more tricks with respect to security. I won't go into this in more detail, but let Google be your friend. Depending on your scenario, one might be better than the other. But also keep in mind, if you run Apache with Tomcat - you have two systems to defend, not one.
- Add-ons. Adding on CGI, perl, PHP is very natural to Apache. Its slower and more of a kludge for Tomcat. Apache also has hundreds of modules that can be plugged in at will. Tomcat can have this ability, but the code hasn't been written yet.
- Decorators! With Apache in front of Tomcat, you can perform any number of decorators that Tomcat doesn't support or doesn't have the immediate code support. For example, mod_headers, mod_rewrite, and mod_alias could be written for Tomcat, but why reinvent the wheel when Apache has done it so well?
-)
No. This way - either apache Apache HTTPd or tomcat Apache Tomcat can be restarted at any time independent independently of one another.How do I bind to a specific
...
IP address?
Each Connector element allows an
address property. See the HTTP Connector docs or the AJP Connector docs.
You cannot: you need to download the source and compile it for your platform. The source distributions are available from the standard location. Note that JPackage.org has RPM distributions for the connectors as well as tomcat itself: JPackage.orgI'm having strange UTF-8 issues with my request parameters.How do I configure
...
Apache Tomcat connectors for a heavy load site?
See Performance and Monitoring | https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=103098826&selectedPageVersions=20&selectedPageVersions=21 | CC-MAIN-2022-33 | refinedweb | 704 | 68.47 |
Xtreme Visual Basic Talk
>
Legacy Visual Basic (VB 4/5/6)
>
VBA / Office Integration
>
Word, PowerPoint, Outlook, and Other Office Products
> Help with VB.NET and MS Access 2002
PDA
View Full Version :
Help with VB.NET and MS Access 2002
Masta120
08-23-2004, 09:36 AM
I needed to be able to convert MS Access 2002 *.mdb files into *.mde (self executing) files from a console based app. I needed this to be a console application, basically a separate executable, so I cannot use VBA. I know the code to do this in VB6, but do not know how to in VB.NET.
Problem is that VB6 will only generate Access 97 files, not 2002. The code I used under VB6 is:
Dim x As Object
Dim strFileIn As String
Dim strFileOut As String
' I gave the strings some absolute file paths
Set x = CreateObject("Access.Application")
x.SysCmd 603, strFileIn, strFileOut
I tried this code under VB.NET but I could not get it to work correctly. Does anyone know what I am doing wrong? Should the code be the same for VB.NET? Is there another (or better) way to convert these mdb files to mde files using .NET (does not have to be VB)? I am going to need to do this often, for a lot of files.
MKoslof
08-23-2004, 05:13 PM
Hmmm...not sure how you would do this in .Net. You can still instaniate COM objects via CreateObject, though it is not the preferred method in the .Net namespace.
What is the error you receive? Let me try to do some testing on my end.
Masta120
08-24-2004, 05:54 AM
Well, in VB.NET, I was having a lot of problems. I couldn't get even simple stuff to work (yeah, I am a newbie to .NET).
Even still, is .NET the best way to go? Is it possible to use a VB script? I tried my code in a VB script and it still didn't work on the machine with Access 2002 installed, but worked perfectly on my Access 97 machine. How about C++? I tried to implement automation in VC++ (version 6, not .NET) but was having problems with that also.
Well, any help that can be provided is MUCH appreciated....
MKoslof
08-24-2004, 07:55 AM
Let me see what I can come up with For .Net. I don't think the version of Access has anything to do with whether or not it is physically possible. Access 97 uses the old Jet 3.51 provider, while Access 2000 and higher uses the Jet 4.0 Provider so the engine itself changed. Meaning, I am confident you can convert an .mdb file to a .mde file if using VB (*at least I think you can) if you are working with a newer version of Access.
The same applies with VB Script, you would be able to envoke the same methods and COM objects. .Net will also allow you to reference COM objects (it is in the Com object references tab) or use late binding via CreateObject(). Now, I don't think there will be any native .Net method that will do this kind of conversion. So, the API or COM object route *might* be your best alternative in the .Net namespace. I have not tried anything like this, so give me some time to see what I can come up with. I have Access XP to test with and 2003.
Masta120
08-25-2004, 07:28 AM
Well, like I said, I used that code sample that I posted and built a console based executable with VB6. It worked perfectly on my machine with Access 97.
I then brought over the same executable (code sample I posted) to my machine with Access 2002 (the same machine also still has Access 97 installed), along with my mdb file that was created in Access 97. The executable did not do anything on the Access 2002 pc. I then tried to convert the mdb into a 2000 mdb and a 2002 mdb. Still, the executable did not do anything. It ran, with no errors, no output messages, just nothing happened.
I tried the same with the VB script I used on my Access 97 machine. I tried using it on my Access 2002 machine (with the "cscript" command), with each mdb version (97, 2000, and 2002). Again, the script ran, with no errors, and just did not do anything.
Thats why I thought VB.NET would be my solution. I would prefer to keep this in VB6 if I could, but I would use .NET if I had to.
MKoslof
08-25-2004, 07:46 AM
So this is your current code that works with only Access 97:
Dim x As Object
Dim strFileIn As String
Dim strFileOut As String
' I gave the strings some absolute file paths
Set x = CreateObject("Access.Application")
x.SysCmd 603, strFileIn, strFileOut
Let me see what I can come up with..again, COM Interop is not exactly the best method within .Net. But at the same time, I don't think you are going to find any .Net native methods to do this either. All replication and conversion classes are not tailored to Access...MDB to MDE conversion is not something .Net will understand automatically.
MKoslof
08-25-2004, 09:00 AM
OK, there has been some confusion as to where to put this thread. Masta can you clarify if you want to use VB or .Net for this? From your last comments it appears you want to use VB as your top choice..is that correct? If so, this post would be best suited for the Legacy Database forum. If you want to use .Net this is best suited for one of the .Net forums. Until we get clarification I will leave it in this forum (VBA Integration) so some of the other VBA/Access guys can take a look. Please let us know, best case scenerio what language or platform you want to use for this. Thanks
MKoslof
08-25-2004, 10:12 AM
Grrr...I see your frusteration, either using the SysCmd or the DoCmd.RunCommand acCmdMakeMDEFile method will WORK for Access 97 and 2000.
However Access XP does not support either method. You might need a service pack upgrade..not sure. It appears XP has changed its security structure and this is not documented anywhere that I can find. You might be able to get really complicated and work around this..I haven't had success yet with Access 2002. That is the root of your problem..it has nothing to do with VB, .Net or VBA..it is something within XP. And this is quite strange, considering 2000 and XP basically have the same Jet Provider functionality...it is must be something within the internal security settings not exposed....
MKoslof
08-25-2004, 10:28 AM
OK, you gotta love Access.. :) this will work.
First, make sure you have done this:;en-us;278376
Note, I had a NEW database that I created in XP...but I STILL had to convert this to 2002 format...not sure why, but what can do is automate the whole procedure, do a DoCmd.RunCommand method in order to convert the Database...or do it by hand if you can afford to.
Then, you MUST make sure all code in modules is fully qualified, meaning you have explicit calls to ADODB.connection, DAO.Database, etc. Also, make sure you have Option Explicit in ALL modules..if not, the conversion will fail as well.
Then, you can use this from VB to automate the MDE conversion (.NEt as well via COM Interop)
Function GenerateMDEFile(MyPath As String)
On Error Resume Next
Dim Acc As Access.Application
Set Acc = CreateObject("Access.Application")
SendKeys MyPath & "{Enter}{Enter}"
SendKeys "{Enter}"
Acc.DoCmd.RunCommand acCmdMakeMDEFile
Set Acc = Nothing
MsgBox "done"
End Function
Wamphyri
08-25-2004, 10:44 AM
Just to add. I don't have any success with access 2000 when using the CreateObject method. However explicitly adding a reference to access 2000 and using
The Syscmd 603 doesn't work when using
Dim x As Object
Set x = CreateObject("Access.Application")
It works using (perhaps you could try that on 2002)
Dim x as Access.Application
set x = New Access.Application
MKoslof
08-25-2004, 10:50 AM
I am using 2002 and that worked for me fine...I have a reference to the Access Object library in my project file...it actually worked using late binding and early binding. For CreateObject to work, you might need to add the qualified version.
accApp = CreateObject("Access.Application.9")
Masta120
08-25-2004, 01:28 PM
Hey guys,
Thanks again for your prompt responses. I really appreciate it.
But, I am still having problems. What I was trying to do was write a console app, that I can run and pass a file name to. Then this console executable will then create this mde file. I was going to set up a batch file, that would call this executable, since I need to do this often and for a lot of files.
MKoslof, I tried your code, and I think I am still doing something wrong. But, the function call to "DoCmd.RunCommand acCmdMakeMDEFile", I thought this was mainly for VBA, running VB inside of an Access database. I did try this in my VB6 console app, and there were problems.
One of the problems was VB6 didn't like "Dim Acc As Access.Application". It could not recognize the type for Acc.
You had said you got this working? From a standalone VB app? Was it with VB.NET? I don't have VB.NET here so I will try it when I get home today.
MKoslof
08-25-2004, 01:31 PM
I ran this from Legacy VB and .NEt.
In your VB project, go to references...select the Microsoft Access Object Library v10.0 (if Access XP) now you should be able to qualify the Access.Application object.
If you want to have a batch file call the VB executable I don't see anything wrong with that..all object references would be fully compiled within the exe.
Masta120
08-25-2004, 03:12 PM
I tried this in VB6, and selected MS Access Object Library. I copied the code exactly as you posted it, called "GenerateMDEFile" in my sub main, and it compiled fine. But when I ran it, it launched a file dialog window prompting me for the file name to save my MDE to. I tried using sendkeys to send another {ENTER} but it didn't have any effect.
Is there a way to get around that last window through code? I didn't want to have any manual intervention when running the executable...
MKoslof
08-25-2004, 03:28 PM
Yes, I noticed that too. This ONLY occurs when the mde file already exists..so it is prompting you to rename it (it appears it will not overwrite). Basically this code names the mdb in the same directory..so if you pass C:\db1.mdb as the database it creates a new file called C:\db1.mde. So you probably already have this mde file on your CPU..that is why it is prompting you.
To get around this, you can use this function and check if the mde file already exists, if so, back out and tell the user to try again with a different name. See below....
Private Sub Command2_Click()
'call the procedure passing in the database name
GenerateMDEFile ("C:\db2.mdb")
'here switch the mdb extension with .mde
'test if it has already been created
MDEName = Left$(MyPath, Len(MyPath) - 4) & ".mde"
If Not FileExists(MDEName) Then
SendKeys MyPath & "{Enter}{Enter}"
SendKeys "{Enter}"
NAcc.DoCmd.RunCommand acCmdMakeMDEFile
Else
MsgBox "this mde file already exists, specify a different name."
Exit Function
End If
Set NAcc = Nothing
End Function
Masta120
08-26-2004, 04:34 PM
Ok MKoslof, I FINALLY got it to work :) ! Well not completely. I got your code to work. Mine is still not perfect. Like I said before, I am trying to do all of this from a console app. When I run the app from the command line (I open up a command window, change directories to my VB workspace, and run my executable), it goes to the next line, and then it echos the input path I specified at the command line as a parameter. Then it just gets hung up there and freezes. I have to kill 2 processes in the task manager, "msaccess.exe", and "mde.exe" (my executable name). Furthermore, before killing "msaccess.exe" while it was frozen, I tried to double click on the database I was using as an input to open it. When I did, it prompted me with the file dialog window, asking me what name do I want to save my MDE as. After killing the 2 processes, I get the "Done" msgbox (you'll see in my code below) and the command prompt comes back to life.
I am assuming something is not right with the "SendKeys" in my console app. Seeing the echo on the next line makes me think the "SendKeys" is going to the DOS window instead of the Access window.
Here is the code I am using. Any ideas?
Option Explicit
Private Declare Function GetStdOutHandle Lib "kernel32" Alias "GetStdHandle" _
(Optional ByVal HandleType As Long = -11) As Long
Private Declare Function WriteFile Lib "kernel32" (ByVal hFile As Long, _
ByVal lpBuffer As String, ByVal cToWrite As Long, ByRef cWritten As Long, _
Optional ByVal lpOverlapped As Long) As Long
Private Function ConsoleWrite(sText As String) As Long
WriteFile GetStdOutHandle, ByVal sText, Len(sText), ConsoleWrite
End Function
Public Sub Main()
Dim arg As String
Dim x As Object
Dim strFileIn As String
Dim strFileOut As String
On Error Resume Next
arg = Command()
strFileIn = arg
'strFileIn = "d:\temp\db1_02.mdb"
' get the output file to kill if it exists
strFileOut = Left(strFileIn, Len(strFileIn) - 1)
strFileOut = strFileOut + "e"
Kill strFileOut
GenerateMDEFile (strFileIn)
MsgBox "Done"
MDEName = Left$(MyPath, Len(MyPath) - 4) & ".mde"
If Not FileExists(MDEName) Then
SendKeys MyPath & "{Enter}{Enter}"
SendKeys "{Enter}"
NAcc.DoCmd.RunCommand acCmdMakeMDEFile
Else
Exit Function
End If
Set NAcc = Nothing
End Function
MKoslof
08-26-2004, 05:08 PM
UG..well instead of using SendKeys you can try using API calls such as SendMessage to simulate key presses. From your console app you might get better results that way.
Basically use FindWindow() to actually track down the Access Window. Then once you have the required window, you can use sendmessage to send key strokes to the activated applicaition. | http://www.xtremevbtalk.com/archive/index.php/t-184910.html | crawl-002 | refinedweb | 2,463 | 75.71 |
Provided by: explain_1.4.D001-8_amd64
NAME
explain_lca2010 - No medium found: when it's time to stop trying to read strerror(3)'s mind.
MOTIVATION
The idea for libexplain occurred to me back in the early 1980s. Whenever a system call returns an error, the kernel knows exactly what went wrong... and compresses this into less than 8 bits of errno. User space has access to the same data as the kernel; it should be possible for user space to figure out exactly what happened to provoke the error return, and use this to write good error messages. Could it be that simple?

   Error messages as finesse
Good error messages are often those “one percent” tasks that get dropped when schedule pressure squeezes your project. However, a good error message can make a huge, disproportionate improvement to the user experience, when the user wanders into scary unknown territory not usually encountered. This is no easy task.

As a larval programmer, the author didn't see the problem with (completely accurate) error messages like this one:

    floating exception (core dumped)

until the alternative non‐programmer interpretation was pointed out. But that isn't the only thing wrong with Unix error messages. How often do you see error messages like:

    $ ./stupid
    can't open file
    $

There are two options for a developer at this point:

1.  you can run a debugger, such as gdb(1), or

2.  you can use strace(1) or truss(1) to look inside.

Remember that your users may not even have access to these tools, let alone the ability to use them. (It's a very long time since Unix beginner meant “has only written one device driver”.)

In this example, however, using strace(1) reveals

    $ strace -e trace=open ./stupid
    open("some/thing", O_RDONLY) = -1 ENOENT (No such file or directory)
    can't open file
    $

This is considerably more information than the error message provides. Typically, the stupid source code looks like this

    int fd = open("some/thing", O_RDONLY);
    if (fd < 0)
    {
        fprintf(stderr, "can't open file\n");
        exit(1);
    }

The user isn't told which file, nor which error. Was the file even there? Was there a permissions problem?
It does tell you it was trying to open a file, but that was probably by accident. Grab your clue stick and go beat the larval programmer with it. Tell him about perror(3). The next time you use the program you see a different error message:

    $ ./stupid
    open: No such file or directory
    $

Progress, but not what we expected. How can the user fix the problem if the error message doesn't tell him what the problem was? Looking at the source, we see

    int fd = open("some/thing", O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        exit(1);
    }

Time for another run with the clue stick. This time, the error message takes one step forward and one step back:

    $ ./stupid
    some/thing: No such file or directory
    $

Now we know the file it was trying to open, but we are no longer informed that it was open(2) that failed. In this case it is probably not significant, but it can be significant for other system calls. It could have been creat(2) instead, an operation implying that different permissions are necessary.

    const char *filename = "some/thing";
    int fd = open(filename, O_RDONLY);
    if (fd < 0)
    {
        perror(filename);
        exit(1);
    }

The above example code is unfortunately typical of non‐larval programmers as well. Time to tell our padawan learner about the strerror(3) library function.

    $ ./stupid
    open some/thing: No such file or directory
    $

This maximizes the information that can be presented to the user. The code looks like this:

    const char *filename = "some/thing";
    int fd = open(filename, O_RDONLY);
    if (fd < 0)
    {
        fprintf(stderr, "open %s: %s\n", filename, strerror(errno));
        exit(1);
    }

Now we have the system call, the filename, and the error string. This contains all the information that strace(1) printed. That's as good as it gets. Or is it?

   Limitations of perror and strerror
The problem the author saw, back in the 1980s, was that the error message is incomplete. Does “no such file or directory” refer to the “some” directory, or to the “thing” file in the “some” directory?
A quick look at the man page for strerror(3) is telling:

    strerror - return string describing error number

Note well: it is describing the error number, not the error. On the other hand, the kernel knows what the error was. There was a specific point in the kernel code, caused by a specific condition, where the kernel code branched and said “no”. Could a user‐space program figure out the specific condition and write a better error message?

However, the problem goes deeper. What if the problem occurs during the read(2) system call, rather than the open(2) call? It is simple for the error message associated with open(2) to include the file name; it's right there. But to be able to include a file name in the error associated with the read(2) system call, you have to pass the file name all the way down the call stack, as well as the file descriptor.

And here is the bit that grates: the kernel already knows which file name the file descriptor is associated with. Why should a programmer have to pass redundant data all the way down the call stack just to improve an error message that may never be issued? In reality, many programmers don't bother, and the resulting error messages are the worse for it.

But that was the 1980s, on a PDP11, with limited resources and no shared libraries. Back then, no flavor of Unix included /proc even in rudimentary form, and the lsof(1) program was over a decade away. So the idea was shelved as impractical.

   Level Infinity Support
Imagine that you are level infinity support. Your job description says that you never ever have to talk to users. Why, then, is there still a constant stream of people wanting you, the local Unix guru, to decipher yet another error message?

Strangely, 25 years later, despite a simple permissions system, implemented with complete consistency, most Unix users still have no idea how to decode “No such file or directory”, or any of the other cryptic error messages they see every day. Or, at least, cryptic to them.
Wouldn't it be nice if first level tech support didn't need error messages deciphered? Wouldn't it be nice to have error messages that users could understand without calling tech support?

These days /proc on Linux is more than able to provide the information necessary to decode the vast majority of error messages, and point the user to the proximate cause of their problem. On systems with a limited /proc implementation, the lsof(1) command can fill in many of the gaps.

In 2008, the stream of translation requests happened to the author way too often. It was time to re‐examine that 25 year old idea, and libexplain is the result.
USING THE LIBRARY
The interface to the library tries to be consistent, where possible. Let's start with an example using strerror(3):

    if (rename(old_path, new_path) < 0)
    {
        fprintf(stderr, "rename %s %s: %s\n", old_path, new_path,
            strerror(errno));
        exit(1);
    }

The idea behind libexplain is to provide a strerror(3) equivalent for each system call, tailored specifically to that system call, so that it can provide a more detailed error message, containing much of the information you see under the “ERRORS” heading of section 2 and 3 man pages, supplemented with information about actual conditions, actual argument values, and system limits.

   The Simple Case
The strerror(3) replacement:

    if (rename(old_path, new_path) < 0)
    {
        fprintf(stderr, "%s\n", explain_rename(old_path, new_path));
        exit(1);
    }

   The Errno Case
It is also possible to pass an explicit errno(3) value, if you must first do some processing that would disturb errno, such as error recovery:

    if (rename(old_path, new_path) < 0)
    {
        int old_errno = errno;
        ...code that disturbs errno...
        fprintf(stderr, "%s\n", explain_errno_rename(old_errno,
            old_path, new_path));
        exit(1);
    }

   The Multi‐thread Cases
Some applications are multi‐threaded, and thus are unable to share libexplain's internal buffer. You can supply your own buffer using

    if (unlink(pathname))
    {
        char message[3000];
        explain_message_unlink(message, sizeof(message), pathname);
        error_dialog(message);
        return -1;
    }

And for completeness, both errno(3)-aware and thread‐safe:

    ssize_t nbytes = read(fd, data, sizeof(data));
    if (nbytes < 0)
    {
        char message[3000];
        int old_errno = errno;
        ...error recovery...
        explain_message_errno_read(message, sizeof(message),
            old_errno, fd, data, sizeof(data));
        error_dialog(message);
        return -1;
    }

These are replacements for strerror_r(3), on systems that have it.
   Interface Sugar
A set of functions added as convenience functions, to woo programmers to use the libexplain library, turn out to be the author's most commonly used libexplain functions in command line programs:

    int fd = explain_creat_or_die(filename, 0666);

This function attempts to create a new file. If it can't, it prints an error message and exits with EXIT_FAILURE. If there is no error, it returns the new file descriptor.

A related function:

    int fd = explain_creat_on_error(filename, 0666);

will print the error message on failure, but also returns the original error result, and errno(3) is unmolested, as well.

   All the other system calls
In general, every system call has its own include file

    #include <libexplain/name.h>

that defines function prototypes for six functions:

·   explain_name,
·   explain_errno_name,
·   explain_message_name,
·   explain_message_errno_name,
·   explain_name_or_die and
·   explain_name_on_error.

Every function prototype has Doxygen documentation, and this documentation is not stripped when the include files are installed.

The wait(2) system call (and friends) have some extra variants that also interpret failure to be an exit status that isn't EXIT_SUCCESS. This applies to system(3) and pclose(3) as well.

Coverage includes 221 system calls and 547 ioctl requests. There are many more system calls yet to implement. System calls that never return, such as exit(2), are not present in the library, and will never be. The exec family of system calls are supported, because they return when there is an error.

   Cat
This is what a hypothetical “cat” program could look like, with full error reporting, using libexplain.

    #include <libexplain/libexplain.h>
    #include <stdlib.h>
    #include <unistd.h>

There is one include for libexplain, plus the usual suspects. (If you wish to reduce the preprocessor load, you can use the specific <libexplain/name.h> includes.)
    static void process(FILE *fp)
    {
        for (;;)
        {
            char buffer[4096];
            size_t n = explain_fread_or_die(buffer, 1, sizeof(buffer), fp);
            if (!n)
                break;
            explain_fwrite_or_die(buffer, 1, n, stdout);
        }
    }

The process function copies a file stream to the standard output. Should an error occur for either reading or writing, it is reported (and the pathname will be included in the error) and the command exits with EXIT_FAILURE. We don't even worry about tracking the pathnames, or passing them down the call stack.

    int main(int argc, char **argv)
    {
        for (;;)
        {
            int c = getopt(argc, argv, "o:");
            if (c == EOF)
                break;
            switch (c)
            {
            case 'o':
                explain_freopen_or_die(optarg, "w", stdout);
                break;

The fun part of this code is that libexplain can report errors including the pathname even if you don't explicitly re‐open stdout as is done here. We don't even worry about tracking the file name.

            default:
                fprintf(stderr,
                    "Usage: %s [ -o <filename> ] <filename>...\n",
                    argv[0]);
                return EXIT_FAILURE;
            }
        }
        if (optind == argc)
            process(stdin);
        else
        {
            while (optind < argc)
            {
                FILE *fp = explain_fopen_or_die(argv[optind++], "r");
                process(fp);
                explain_fclose_or_die(fp);
            }
        }

The standard output will be closed implicitly, but too late for an error report to be issued, so we do that here, just in case the buffered I/O hasn't written anything yet, and there is an ENOSPC error or something.

        explain_fflush_or_die(stdout);
        return EXIT_SUCCESS;
    }

That's all. Full error reporting, clear code.

   Rusty's Scale of Interface Goodness
For those of you not familiar with it, Rusty Russell's “How Do I Make This Hard to Misuse?” page is a must‐read for API designers.

    http://ozlabs.org/~rusty/index.cgi/tech/2008‐03‐30.html

10. It's impossible to get wrong.
    Goals need to be set high, ambitiously high, lest you accomplish them and think you are finished when you are not.

    The libexplain library detects bogus pointers and many other bogus system call parameters, and generally tries to avoid segfaults in even the most trying circumstances.
    The libexplain library is designed to be thread safe. More real‐world use will likely reveal places this can be improved.

    The biggest problem is with the actual function names themselves. Because C does not have name‐spaces, the libexplain library always uses an explain_ name prefix. This is the traditional way of creating a pseudo‐name‐space in order to avoid symbol conflicts. However, it results in some unnatural‐sounding names.

9.  The compiler or linker won't let you get it wrong.
    A common mistake is to use explain_open where explain_open_or_die was intended. Fortunately, the compiler will often issue a type error at this point (e.g. can't assign const char * rvalue to an int lvalue).

8.  The compiler will warn if you get it wrong.
    If explain_rename is used when explain_rename_or_die was intended, this can cause other problems. GCC has a useful warn_unused_result function attribute, and the libexplain library attaches it to all the explain_name function calls to produce a warning when you make this mistake. Combine this with gcc -Werror to promote this to level 9 goodness.

7.  The obvious use is (probably) the correct one.
    The function names have been chosen to convey their meaning, but this is not always successful. While explain_name_or_die and explain_name_on_error are fairly descriptive, the less‐used thread safe variants are harder to decode. The function prototypes help the compiler towards understanding, and the Doxygen comments in the header files help the user towards understanding.

6.  The name tells you how to use it.
    It is particularly important to read explain_name_or_die as “explain (name or die)”. Using a consistent explain_ name‐space prefix has some unfortunate side‐effects in the obviousness department, as well.

    The order of words in the names also indicates the order of the arguments. The argument lists always end with the same arguments as passed to the system call; all of them.
If _errno_ appears in the name, its argument always precedes the system call arguments. If _message_ appears in the name, its two arguments always come first. 5. Do it right or it will break at runtime. The libexplain library detects bogus pointers and many other bogus system call parameters, and generally tries to avoid segfaults in even the most trying circumstances. It should never break at runtime, but more real‐world use will no doubt improve this. Some error messages are aimed at developers and maintainers rather than end users, as this can assist with bug resolution. Not so much “break at runtime” as “be informative at runtime” (after the system call barfs). 4. Follow common convention and you'll get it right. Because C does not have name‐spaces, the libexplain library always uses an explain_ name prefix. This is the traditional way of creating a pseudo‐name‐space in order to avoid symbol conflicts. The trailing arguments of all the libexplain call are identical to the system call they are describing. This is intended to provide a consistent convention in common with the system calls themselves. 3. Read the documentation and you'll get it right. The libexplain library aims to have complete Doxygen documentation for each and every public API call (and internally as well).
MESSAGE CONTENT
Working on libexplain is a bit like looking at the underside of your car when it is up on the hoist at the mechanic's. There's some ugly stuff under there, plus mud and crud, and users rarely see it. A good error message needs to be informative, even for a user who has been fortunate enough not to have to look at the under-side very often, and also informative for the mechanic listening to the user's description over the phone. This is no easy task.

Revisiting our first example, the code would look like this if it uses libexplain:

    int fd = explain_open_or_die("some/file", O_RDONLY, 0);

It will fail with an error message like this:

    open(pathname = "some/file", flags = O_RDONLY) failed, No such file
    or directory (2, ENOENT) because there is no "some" directory in the
    current directory

This breaks down into three pieces:

    system-call failed, system-error because explanation

Before Because

It is possible to see the part of the message before "because" as overly technical to non-technical users, mostly as a result of accurately printing the system call itself at the beginning of the error message. And it looks like strace(1) output, for bonus geek points.

    open(pathname = "some/file", flags = O_RDONLY) failed, No such file
    or directory (2, ENOENT)

This part of the error message is essential to the developer when he is writing the code, and equally important to the maintainer who has to read bug reports and fix bugs in the code. It says exactly what failed. If this text is not presented to the user then the user cannot copy-and-paste it into a bug report, and if it isn't in the bug report the maintainer can't know what actually went wrong. Frequently tech staff will use strace(1) or truss(1) to get this exact information, but this avenue is not open when reading bug reports. The bug reporter's system is far far away, and, by now, in a far different state. Thus, this information needs to be in the bug report, which means it must be in the error message.
The system call representation also gives context to the rest of the message. If need arises, the offending system call argument may be referred to by name in the explanation after “because”. In addition, all strings are fully quoted and escaped C strings, so embedded newlines and non‐printing characters will not cause the user's terminal to go haywire. The system‐error is what comes out of strerror(2), plus the error symbol. Impatient and expert sysadmins could stop reading at this point, but the author's experience to date is that reading further is rewarding. (If it isn't rewarding, it's probably an area of libexplain that can be improved. Code contributions are welcome, of course.) After Because This is the portion of the error message aimed at non‐technical users. It looks beyond the simple system call arguments, and looks for something more specific. there is no "some" directory in the current directory This portion attempts to explain the proximal cause of the error in plain language, and it is here that internationalization is essential. In general, the policy is to include as much information as possible, so that the user doesn't need to go looking for it (and doesn't leave it out of the bug report). Internationalization Most of the error messages in the libexplain library have been internationalized. There are no localizations as yet, so if you want the explanations in your native language, please contribute. The “most of” qualifier, above, relates to the fact that the proof‐of‐concept implementation did not include internationalization support. The code base is being revised progressively, usually as a result of refactoring messages so that each error message string appears in the code exactly once. Provision has been made for languages that need to assemble the portions of system‐call failed, system‐error because explanation in different orders for correct grammar in localized error messages. 
Postmortem There are times when a program has yet to use libexplain, and you can't use strace(1) either. There is an explain(1) command included with libexplain that can be used to decipher error messages, if the state of the underlying system hasn't changed too much. $ explain rename foo /tmp/bar/baz -e ENOENT rename(oldpath = "foo", newpath = "/tmp/bar/baz") failed, No such file or directory (2, ENOENT) because there is no "bar" directory in the newpath "/tmp" directory $ Note how the path ambiguity is resolved by using the system call argument name. Of course, you have to know the error and the system call for explain(1) to be useful. As an aside, this is one of the ways used by the libexplain automatic test suite to verify that libexplain is working. Philosophy “Tell me everything, including stuff I didn't know to look for.” The library is implemented in such a way that when statically linked, only the code you actually use will be linked. This is achieved by having one function per source file, whenever feasible. When it is possible to supply more information, libexplain will do so. The less the user has to track down for themselves, the better. This means that UIDs are accompanied by the user name, GIDs are accompanied by the group name, PIDs are accompanied by the process name, file descriptors and streams are accompanied by the pathname, etc. When resolving paths, if a path component does not exist, libexplain will look for similar names, in order to suggest alternatives for typographical errors. The libexplain library tries to use as little heap as possible, and usually none. This is to avoid perturbing the process state, as far as possible, although sometimes it is unavoidable. The libexplain library attempts to be thread safe, by avoiding global variables, keeping state on the stack as much as possible. There is a single common message buffer, and the functions that use it are documented as not being thread safe. 
The libexplain library does not disturb a process's signal handlers. This makes determining whether a pointer would segfault a challenge, but not impossible.

When information is available via a system call as well as through a /proc entry, the system call is preferred. This is to avoid disturbing the process's state. There are also times when no file descriptors are available.

The libexplain library is compiled with large file support. There is no large/small schizophrenia. Where this affects the argument types in the API, an error will be issued if the necessary large file defines are absent.

FIXME: Work is needed to make sure that file system quotas are handled in the code. This applies to some getrlimit(2) boundaries, as well.

There are cases when relative paths are uninformative: for example, system daemons, servers and background processes. In these cases, absolute paths are used in the error explanations.
PATH RESOLUTION
Short version: see path_resolution(7).

Long version: Most users have never heard of path_resolution(7), and many advanced users have never read it. Here is an annotated version:

Step 1: Start of the resolution process

If the pathname starts with the slash ("/") character, the starting lookup directory is the root directory of the calling process. If the pathname does not start with the slash ("/") character, the starting lookup directory of the resolution process is the current working directory of the process.

Step 2: Walk along the path

Set the current lookup directory to the starting lookup directory. Now, for each non-final component of the pathname, where a component is a substring delimited by slash ("/") characters, this component is looked up in the current lookup directory.

If the process does not have search permission on the current lookup directory, an EACCES error is returned ("Permission denied").

    open(pathname = "/home/archives/.ssh/private_key", flags = O_RDONLY)
    failed, Permission denied (13, EACCES) because the process does not
    have search permission to the pathname "/home/archives/.ssh"
    directory, the process effective GID 1000 "pmiller" does not match
    the directory owner 1001 "archives" so the owner permission mode
    "rwx" is ignored, the others permission mode is "---", and the
    process is not privileged (does not have the DAC_READ_SEARCH
    capability)

If the component is not found, an ENOENT error is returned ("No such file or directory").

    unlink(pathname = "/home/microsoft/rubbish") failed, No such file or
    directory (2, ENOENT) because there is no "microsoft" directory in
    the pathname "/home" directory

There is also some support for users when they mis-type pathnames, making suggestions when ENOENT is returned:

    open(pathname = "/user/include/fcntl.h", flags = O_RDONLY) failed,
    No such file or directory (2, ENOENT) because there is no "user"
    directory in the pathname "/" directory, did you mean the "usr"
    directory instead?
If the component is found, but is neither a directory nor a symbolic link, an ENOTDIR error is returned ("Not a directory").

    open(pathname = "/home/pmiller/.netrc/lca", flags = O_RDONLY) failed,
    Not a directory (20, ENOTDIR) because the ".netrc" regular file in
    the pathname "/home/pmiller" directory is being used as a directory
    when it is not

If the component is found and is a symbolic link, the symbolic link is resolved first. If it is a dangling symlink, an ENOENT error is returned ("No such file or directory").

    unlink(pathname = "/tmp/dangling/rubbish") failed, No such file or
    directory (2, ENOENT) because the "dangling" symbolic link in the
    pathname "/tmp" directory refers to "nowhere" that does not exist

If the resolution of the symlink is successful and returns a directory, we set the current lookup directory to that directory, and go to the next component. Note that the resolution process here involves recursion; if a symbolic link loop is detected, an ELOOP error is returned ("Too many levels of symbolic links").

    open(pathname = "/tmp/dangling", flags = O_RDONLY) failed, Too many
    levels of symbolic links (40, ELOOP) because a symbolic link loop
    was encountered in pathname, starting at "/tmp/dangling"

It is also possible to get an ELOOP or EMLINK error if there are too many symlinks, but no loop was detected.

    open(pathname = "/tmp/rabbit-hole", flags = O_RDONLY) failed, Too
    many levels of symbolic links (40, ELOOP) because too many symbolic
    links were encountered in pathname (8)

Notice how the actual limit is also printed.

Step 3: Find the final entry

The lookup of the final component goes much like that of every other component, with some differences: (i) the final component may be a non-directory (indeed, it may be required to be a directory, or a non-directory, because of the requirements of the specific system call); (ii) it is not necessarily an error if the final component is not found; maybe we are just creating it. The details on the treatment of the final entry are described in the manual pages of the specific system calls. (iii) It is also possible to have a problem with the last component if it is a symbolic link and it should not be followed. For example, using the open(2) O_NOFOLLOW flag:

    open(pathname = "a-symlink", flags = O_RDONLY | O_NOFOLLOW) failed,
    Too many levels of symbolic links (ELOOP) because O_NOFOLLOW was
    specified but pathname refers to a symbolic link

(iv) It is common for users to make mistakes when typing pathnames.
The libexplain library attempts to make suggestions when ENOENT is returned, for example:

    open(pathname = "/usr/include/filecontrl.h", flags = O_RDONLY)
    failed, No such file or directory (2, ENOENT) because there is no
    "filecontrl.h" regular file in the pathname "/usr/include"
    directory, did you mean the "fcntl.h" regular file instead?

(v) It is also possible that the final component is required to be something other than a regular file:

    readlink(pathname = "just-a-file", data = 0x7F930A50,
    data_size = 4097) failed, Invalid argument (22, EINVAL) because
    pathname is a regular file, not a symbolic link

(vi) FIXME: handling of the "t" bit.

Limits

There are a number of limits with regards to pathnames and filenames.

Pathname length limit

There is a maximum length for pathnames. If the pathname (or some intermediate pathname obtained while resolving symbolic links) is too long, an ENAMETOOLONG error is returned ("File name too long"). Notice how the system limit is included in the error message.

    open(pathname = "very...long", flags = O_RDONLY) failed, File name
    too long (36, ENAMETOOLONG) because pathname exceeds the system
    maximum path length (4096)

Filename length limit

Some Unix variants have a limit on the number of bytes in each path component. Some of them deal with this silently, and some give ENAMETOOLONG; the libexplain library uses pathconf(3) _PC_NO_TRUNC to tell which. If this error happens, the libexplain library will state the limit in the error message; the limit is obtained from pathconf(3) _PC_NAME_MAX. Notice how the system limit is included in the error message.

    open(pathname = "system7/only-had-14-characters", flags = O_RDONLY)
    failed, File name too long (36, ENAMETOOLONG) because
    "only-had-14-characters" component is longer than the system limit
    (14)

Empty pathname

In the original Unix, the empty pathname referred to the current directory. Nowadays POSIX decrees that an empty pathname must not be resolved successfully.
    open(pathname = "", flags = O_RDONLY) failed, No such file or
    directory (2, ENOENT) because POSIX decrees that an empty pathname
    must not be resolved successfully

Permissions

The permission bits of a file consist of three groups of three bits. The first group is used when the effective user ID of the process matches the owner of the file; the second group is used when the group of the file matches the effective group ID (or one of the supplementary group IDs) of the process. When neither holds, the third group is used.

    open(pathname = "/etc/passwd", flags = O_WRONLY) failed, Permission
    denied (13, EACCES) because the process does not have write
    permission to the "passwd" regular file in the pathname "/etc"
    directory, the process effective UID 1000 "pmiller" does not match
    the regular file owner 0 "root" so the owner permission mode "rw-"
    is ignored, the others permission mode is "r--", and the process is
    not privileged (does not have the DAC_OVERRIDE capability)

Some considerable space is given to this explanation, as most users do not know that this is how the permissions system works. In particular: the owner, group and other permissions are exclusive, they are not "OR"ed together.
STRANGE AND INTERESTING SYSTEM CALLS
The process of writing a specific error handler for each system call often reveals interesting quirks and boundary conditions, or obscure errno(3) values. ENOMEDIUM, No medium found The act of copying a CD was the source of the title for this paper. $ dd if=/dev/cdrom of=fubar.iso dd: opening “/dev/cdrom”: No medium found $ The author wondered why his computer was telling him there is no such thing as a psychic medium. Quite apart from the fact that huge numbers of native English speakers are not even aware that “media” is a plural, let alone that “medium” is its singular, the string returned by strerror(3) for ENOMEDIUM is so terse as to be almost completely free of content. When open(2) returns ENOMEDIUM it would be nice if the libexplain library could expand a little on this, based on the type of drive it is. For example: ... because there is no disk in the floppy drive ... because there is no disc in the CD‐ROM drive ... because there is no tape in the tape drive ... because there is no memory stick in the card reader And so it came to pass... open(pathname = "/dev/cdrom", flags = O_RDONLY) failed, No medium found (123, ENOMEDIUM) because there does not appear to be a disc in the CD‐ROM drive The trick, that the author was previously unaware of, was to open the device using the O_NONBLOCK flag, which will allow you to open a drive with no medium in it. You then issue device specific ioctl(2) requests until you figure out what the heck it is. (Not sure if this is POSIX, but it also seems to work that way in BSD and Solaris, according to the wodim(1) sources.) Note also the differing uses of “disk” and “disc” in context. The CD standard originated in France, but everything else has a “k”. EFAULT, Bad address Any system call that takes a pointer argument can return EFAULT. The libexplain library can figure out which argument is at fault, and it does it without disturbing the process (or thread) signal handling. 
When available, the mincore(2) system call is used to ask if the memory region is valid. It can return three results: mapped but not in physical memory, mapped and in physical memory, and not mapped. When testing the validity of a pointer, the first two are "yes" and the last one is "no". Checking C strings is more difficult, because instead of a pointer and a size, we only have a pointer. To determine the size we would have to find the NUL, and that could segfault: catch-22. To work around this, the libexplain library uses the lstat(2) system call (with a known good second argument) to test C strings for validity. A failure return with errno == EFAULT is a "no", and anything else is a "yes". This, of course, limits strings to PATH_MAX characters, but that usually isn't a problem for the libexplain library, because paths are almost always the longest strings it cares about.

EMFILE, Too many open files

This error occurs when a process already has the maximum number of file descriptors open. If the actual limit is to be printed (and the libexplain library tries to print it), you can't simply open a file in /proc to read what it is, because there are no file descriptors left to do so.

    open_max = sysconf(_SC_OPEN_MAX);

This one wasn't so difficult: there is a sysconf(3) way of obtaining the limit.

ENFILE, Too many open files in system

This error occurs when the system limit on the total number of open files has been reached. In this case there is no handy sysconf(3) way of obtaining the limit. Digging deeper, one may discover that on Linux there is a /proc entry we could read to obtain this value. Catch-22: we are out of file descriptors, so we can't open a file to read the limit.
On Linux there is a system call to obtain it, but it has no [e]glibc wrapper function, so you have to call it very carefully:

    long explain_maxfile(void)
    {
    #ifdef __linux__
        struct __sysctl_args args;
        int32_t maxfile;
        size_t maxfile_size = sizeof(maxfile);
        int name[] = { CTL_FS, FS_MAXFILE };

        memset(&args, 0, sizeof(struct __sysctl_args));
        args.name = name;
        args.nlen = 2;
        args.oldval = &maxfile;
        args.oldlenp = &maxfile_size;
        if (syscall(SYS__sysctl, &args) >= 0)
            return maxfile;
    #endif
        return -1;
    }

This permits the limit to be included in the error message, when available.

EINVAL "Invalid argument" vs ENOSYS "Function not implemented"

Unsupported actions (such as symlink(2) on a FAT file system) are not reported consistently from one system call to the next. It is possible to have either EINVAL or ENOSYS returned. As a result, attention must be paid to these error cases to get them right, particularly as EINVAL could also be referring to problems with one or more system call arguments.

Note that errno(3) is not always set

There are times when it is necessary to read the [e]glibc sources to determine how and when errors are returned for some system calls.

feof(3), fileno(3)
    It is often assumed that these functions cannot return an error. This is only true if the stream argument is valid; however, they are capable of detecting an invalid pointer.

fpathconf(3), pathconf(3)
    The return value of fpathconf(3) and pathconf(3) could legitimately be -1, so it is necessary to see if errno(3) has been explicitly set.

ioctl(2)
    The return value of ioctl(2) could legitimately be -1, so it is necessary to see if errno(3) has been explicitly set.

readdir(3)
    The return value of readdir(3) is NULL for both errors and end-of-file. It is necessary to see if errno(3) has been explicitly set.

setbuf(3), setbuffer(3), setlinebuf(3), setvbuf(3)
    All but the last of these functions return void. And setvbuf(3) is only documented as returning "non-zero" on error.
It is necessary to see if errno(3) has been explicitly set. strtod(3), strtol(3), strtold(3), strtoll(3), strtoul(3), strtoull(3) These functions return 0 on error, but that is also a legitimate return value. It is necessary to see if errno(3) has been explicitly set. ungetc(3) While only a single character of backup is mandated by the ANSI C standard, it turns out that [e]glibc permits more... but that means it can fail with ENOMEM. It can also fail with EBADF if fp is bogus. Most difficult of all, if you pass EOF an error return occurs, but errno is not set. The libexplain library detects all of these errors correctly, even in cases where the error values are poorly documented, if at all. ENOSPC, No space left on device When this error refers to a file on a file system, the libexplain library prints the mount point of the file system with the problem. This can make the source of the error much clearer. write(fildes = 1 "example", data = 0xbfff2340, data_size = 5) failed, No space left on device (28, ENOSPC) because the file system containing fildes ("/home") has no more space for data As more special device support is added, error messages are expected to include the device name and actual size of the device. EROFS, Read‐only file system When this error refers to a file on a file system, the libexplain library prints the mount point of the file system with the problem. This can make the source of the error much clearer. As more special device support is added, error messages are expected to include the device name and type. open(pathname = "/dev/fd0", O_RDWR, 0666) failed, Read‐only file system (30, EROFS) because the floppy disk has the write protect tab set ...because a CD‐ROM is not writable ...because the memory card has the write protect tab set ...because the ½ inch magnetic tape does not have a write ring rename The rename(2) system call is used to change the location or name of a file, moving it between directories if required. 
If the destination pathname already exists it will be atomically replaced, so that there is no point at which another process attempting to access it will find it missing. There are limitations, however: you can only rename a directory on top of another directory if the destination directory is not empty. rename(oldpath = "foo", newpath = "bar") failed, Directory not empty (39, ENOTEMPTY) because newpath is not an empty directory; that is, it contains entries other than "." and ".." You can't rename a directory on top of a non‐directory, either. rename(oldpath = "foo", newpath = "bar") failed, Not a directory (20, ENOTDIR) because oldpath is a directory, but newpath is a regular file, not a directory Nor is the reverse allowed rename(oldpath = "foo", newpath = "bar") failed, Is a directory (21, EISDIR) because newpath is a directory, but oldpath is a regular file, not a directory This, of course, makes the libexplain library's job more complicated, because the unlink(2) or rmdir(2) system call is called implicitly by rename(2), and so all of the unlink(2) or rmdir(2) errors must be detected and handled, as well. dup2 The dup2(2) system call is used to create a second file descriptor that references the same object as the first file descriptor. Typically this is used to implement shell input and output redirection. The fun thing is that, just as rename(2) can atomically rename a file on top of an existing file and remove the old file, dup2(2) can do this onto an already‐open file descriptor. Once again, this makes the libexplain library's job more complicated, because the close(2) system call is called implicitly by dup2(2), and so all of close(2)'s errors must be detected and handled, as well.
ADVENTURES IN IOCTL SUPPORT
The ioctl(2) system call provides device driver authors with a way to communicate with user‐space that doesn't fit within the existing kernel API. See ioctl_list(2). Decoding Request Numbers From a cursory look at the ioctl(2) interface, there would appear to be a large but finite number of possible ioctl(2) requests. Each different ioctl(2) request is effectively another system call, but without any type‐safety at all - the compiler can't help a programmer get these right. This was probably the motivation behind tcflush(3) and friends. The initial impression is that you could decode ioctl(2) requests using a huge switch statement. This turns out to be infeasible because one very rapidly discovers that it is impossible to include all of the necessary system headers defining the various ioctl(2) requests, because they have a hard time playing nicely with each other. A deeper look reveals that there is a range of “private” request numbers, and device driver authors are encouraged to use them. This means that there is a far larger possible set of requests, with ambiguous request numbers, than are immediately apparent. Also, there are some historical ambiguities as well. We already knew that the switch was impractical, but now we know that to select the appropriate request name and explanation we must consider not only the request number but also the file descriptor. The implementation of ioctl(2) support within the libexplain library is to have a table of pointers to ioctl(2) request descriptors. Each of these descriptors includes an optional pointer to a disambiguation function. Each request is actually implemented in a separate source file, so that the necessary include files are relieved of the obligation to play nicely with others. Representation The philosophy behind the libexplain library is to provide as much information as possible, including an accurate representation of the system call. 
In the case of ioctl(2) this means printing the correct request number (by name) and also a correct (or at least useful) representation of the third argument. The ioctl(2) prototype looks like this:

    int ioctl(int fildes, int request, ...);

which should have your type-safety alarms going off. Internal to [e]glibc, this is turned into a variety of forms:

    int __ioctl(int fildes, int request, long arg);
    int __ioctl(int fildes, int request, void *arg);

and the Linux kernel syscall interface expects

    asmlinkage long sys_ioctl(unsigned int fildes, unsigned int request,
        unsigned long arg);

The extreme variability of the third argument is a challenge when the libexplain library tries to print a representation of that third argument. However, once the request number has been disambiguated, each entry in the libexplain library's ioctl table has a custom print_data function (OO done manually).

Explanations

There are fewer problems determining the explanation to be used. Once the request number has been disambiguated, each entry in the libexplain library's ioctl table has a custom print_explanation function (again, OO done manually). Unlike section 2 and section 3 system calls, most ioctl(2) requests have no errors documented. This means, to give good error descriptions, it is necessary to read kernel sources to discover

· what errno(3) values may be returned, and
· the cause of each error.

Because of the OO nature of function call dispatching within the kernel, you need to read all sources implementing that ioctl(2) request, not just the generic implementation. It is to be expected that different kernels will have different error numbers and subtly different error causes.

EINVAL vs ENOTTY

The situation is even worse for ioctl(2) requests than for system calls, with EINVAL and ENOTTY both being used to indicate that an ioctl(2) request is inappropriate in that context, and occasionally ENOSYS, ENOTSUP and EOPNOTSUPP (meant to be used for sockets) as well.
There are comments in the Linux kernel sources that seem to indicate a progressive cleanup is in progress. For extra chaos, BSD adds ENOIOCTL to the confusion. As a result, attention must be paid to these error cases to get them right, particularly as EINVAL could also be referring to problems with one or more system call arguments.

intptr_t

The C99 standard defines an integer type that is guaranteed to be able to hold any pointer without representation loss. The sys_ioctl prototype above would be better written as

    long sys_ioctl(unsigned int fildes, unsigned int request, intptr_t arg);

The problem is the cognitive dissonance induced by device-specific or file-system-specific ioctl(2) implementations, such as:

    long vfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);

The majority of ioctl(2) requests actually have an int *arg third argument. But having it declared long leads to code treating this as long *arg. This is harmless on 32 bits (sizeof(long) == sizeof(int)) but nasty on 64 bits (sizeof(long) != sizeof(int)). Depending on the endian-ness, you do or don't get the value you expect, but you always get a memory scribble or stack scribble as well. Writing all of these as

    int ioctl(int fildes, int request, ...);
    int __ioctl(int fildes, int request, intptr_t arg);
    long sys_ioctl(unsigned int fildes, unsigned int request, intptr_t arg);
    long vfs_ioctl(struct file *filp, unsigned int cmd, intptr_t arg);

emphasizes that the integer argument is merely a carrier for a quantity that is almost always an unrelated pointer type.
CONCLUSION
Use libexplain, your users will like it.
libexplain version 1.4 Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013, 2014 Peter Miller
AUTHOR
Written by Peter Miller <pmiller@opensource.org.au>

explain_lca2010(1)
http://manpages.ubuntu.com/manpages/disco/man1/explain_lca2010.1.html
I am trying to retrieve original connection information requested within a callback registered with libevent:
#include <evhttp.h>
#include <iostream>
//Process a request
void process_request(struct evhttp_request *req, void *arg){
//Get the request type (functions correctly)
std::cout << req->type << std::endl;
//Get the address and port requested that triggered the callback
//When the two lines below are included, the code no longer compiles;
//the compiler emits the error shown below
struct evhttp_connection *con = req->evcon;
std::cout << con->address << con->port << std::endl;
}
int main () {
//Set up the server
struct event_base *base = NULL;
struct evhttp *httpd = NULL;
base = event_init();
if (base == NULL) return -1;
httpd = evhttp_new(base);
if (httpd == NULL) return -1;
//Bind the callback
if (evhttp_bind_socket(httpd, "0.0.0.0", 12345) != 0) return -1;
evhttp_set_gencb(httpd, process_request, NULL);
//Start listening
event_base_dispatch(base);
return 0;
}
$ g++ -o basic_requests_server basic_requests_server.cpp -lpthread -levent -std=c++11
basic_requests_server.cpp:45:18: error: invalid use of incomplete type ‘struct evhttp_connection’
std::cout << con->address << con->port << std::endl;
^
In file included from /usr/include/evhttp.h:41:0,
from basic_requests_server.cpp:1:
/usr/include/event2/http.h:427:8: error: forward declaration of ‘struct evhttp_connection’
struct evhttp_connection *evhttp_connection_base_new(
Why can't I access the elements of this struct?
As far as I understand, connections (that is,
struct evhttp_connection) are meant for internal purposes only.
You should not use their fields directly, but you can get a pointer to a connection and pass that pointer around.
This was done on purpose, to prevent clients from binding to the internal representation of a connection (which is therefore free to change silently).
That's why the type is not actually exposed: you can think of it as an opaque pointer you are not allowed to dereference.
Instead of dereferencing it, use the public accessors: in libevent 2.x, evhttp_request_get_connection() returns the connection behind a request, and evhttp_connection_get_peer() fills in the peer's address and port.
See here for an in-depth explanation.
To me Elixir and Clojure seem like sister programming languages. Both are modern, dynamic and functional languages that embrace concurrency and immutability. Both Elixir and Clojure provide rich metaprogramming capabilities through macros. To understand macros let us look at how typically code execution works in any language —
Code → Lexical Analysis & Parsing → AST → Execution
When we write normal functions, we usually do not care about how they are evaluated / executed internally. However, with macros we can work with the AST (Abstract Syntax Tree) level directly and manipulate the code there which provides us with unprecedented flexibility to create new language constructs and DSLs.
To see the striking similarities between Clojure and Elixir macros we will take a simple example. We will write a macro in both languages that takes a piece of code and reports the time of execution around it.
To see measurable results we will take an algorithm that is time consuming, nothing better than calculating Collatz sequences. The problem statement (Project Euler problem 14): which starting number, under one million, produces the longest Collatz chain?
The code in both Elixir and Clojure is simple enough -
defmodule Euler_14 do
  require Timer

  defp calc(num, acc) do
    num = trunc(num)
    dec = num / 2
    inc = (num * 3) + 1
    r = rem(num, 2)
    case { num, r } do
      {1, _} -> length(acc)
      {_, 0} -> calc(dec, [dec | acc])
      {_, 1} -> calc(inc, [inc | acc])
    end
  end

  defp pmap(collection) do
    collection
    |> Enum.map(&(Task.async(fn -> calc(&1, [&1]) end)))
    |> Enum.map(&Task.await/1)
  end

  def collatz do
    1..1_000_000
    |> Enum.chunk_every(100)
    |> Enum.map(fn(arr) -> pmap(arr) end)
    |> List.flatten
    |> Enum.with_index(1)
    |> Enum.max_by(fn(t) -> elem(t, 0) end)
    |> IO.inspect
  end
end
And in Clojure -
(ns clj-fun.collatz)

(defn calc [num]
  (loop [n num acc [num]]
    (let [inc (+ 1 (* n 3))
          dec (/ n 2)]
      (if (== n 1)
        {num (count acc)}
        (if (== 0 (rem n 2))
          (recur dec (conj acc dec))
          (recur inc (conj acc inc)))))))

(defn solve []
  (apply max-key val (into {} (pmap calc (range 1 1000000)))))
Now lets build the timing macro in Elixir -
defmodule Timer do
  defmacro time_it(name, do: block) do
    quote do
      start_time = Time.utc_now
      result = unquote(block)
      IO.puts "Elapsed time for #{unquote(name)}: #{Time.diff(Time.utc_now, start_time, :milliseconds)} milliseconds"
      result
    end
  end
end
As mentioned earlier, macros work at the AST level, so a macro receives the AST version of the code and needs to return the manipulated AST version of the code. In Elixir we can produce the AST easily with quote; in the example above we inject the code that we need to execute/time and merge it into our custom quoted AST using unquote. So to time our collatz function we can write it as (among other ways) -
def collatz do
  Timer.time_it "collatz" do
    1..1_000_000
    |> Enum.chunk_every(100)
    |> Enum.map(fn(arr) -> pmap(arr) end)
    |> List.flatten
    |> Enum.with_index(1)
    |> Enum.max_by(fn(t) -> elem(t, 0) end)
    |> IO.inspect
  end
end
or from the REPL -
Timer.time_it "collatz", do: Euler_14.collatz
which gives us -
Elapsed time for collatz: 10184 milliseconds
{525, 837799}
Clojure essentially uses the same concepts but with a slightly different syntax -
(defmacro time-it [name expr]
  `(let [start# (. System (nanoTime))
         ret# ~expr]
     (prn (str "Elapsed time for " ~name ": "
               (int (/ (- (. System (nanoTime)) start#) 1000000))
               " milliseconds"))
     ret#))
This is essentially the same idea as Elixir: ` is the same as quote and ~ is the same as unquote. Plus Clojure provides the # suffix (auto-gensym) to bind values safely within the macro.
Just for fun, the output of the Timer from Clojure is -
Elapsed time for collatz: 13102 milliseconds
[837799 525]
That's it. We looked at macros and built a simple one in Elixir and Clojure and studied their similarities. As usual, with great power comes great responsibility, macros are powerful but should only be used when a normal function cannot do the job. | https://rockyj.in/2018/03/03/macros_elixir_clojure.html | CC-MAIN-2018-30 | refinedweb | 632 | 62.38 |
Caml Weekly News, weeks 10 September to 8 October, 2002.
(Sorry for not publishing the cwn during the last few weeks ... I've
just moved with my wife and baby boy from Paris to Philadelphia and was
too busy ;-)
1) Significant Ocaml commercial applications
2) building web services using oCaml
3) OcamlSpread 0.0.1 released
4) Berkeley DB wrapper
5) OCamake Release
6) Visual ML Release
7) Objective CAML oreilly book
8) OCaml-SOAP library
9) debian woody rebuilt packages
10) Toolpage 0.9
11) Pocket PC?
12) HTMLC
13) ICFP 2002 programming contest
14) Fourth shared patch
======================================================================
1) Significant Ocaml commercial applications
----------------------------------------------------------------------
Matt Boyd described:
Now that it's no longer viable (we're out of business)
:-(, I figure it's safe to list ALVE Recorder among
the significant OCaml commercial applications. It's a
product I developed which digitally records telephone
calls. We sold a few to a few local Austin
businesses, but that's about it. It's now to be sold
at a bankruptcy auction. Anyway, if you guys are
still keeping a list, feel free to list my ALVE
Recorder application if you think it significant. We
may decide to open source it, but it isn't my
decision. Also, anybody looking for a decent Ocaml
programmer with commercial application experience :-)?
ALVE Recorder specifications:
- Attaches to any ODBC database (using my CamlDB
interface).
- Plug-in Agent status information.
- SOAP-based message passing (using my streams-based
parser).
- Plug-in digital sound file conversion.
- Deployed on multiple PBX's (including Siemens and
NorTel).
- Enhanced capabilities through linkage with NorTel
Symposium.
- Interacts with various PBX systems using a variety of
telephony API's including:
- Brooktrout Vantage VPS digitizing cards.
- Intel VoiceBridge-PC phone emulator card.
- Dialogic D/82JCT-U phone emulator card.
- A large portion of the Dialogic system release
library.
- Dialogic CT-Connect.
- Siemens Open-Real-Time-Link (ORTL).
- NorTel's Symposium system.
- NorTel's Meridian MAX interface.
======================================================================
2) building web services using oCaml
----------------------------------------------------------------------
Arnaud Sahuguet asked:
I am looking for ways to build web services using oCaml. (* this effort
is part of the GALAX project at Bell-Labs. See for more info. *)
First I would like to point out that this includes two different aspects:
1- building the web services themselves (e.g. putting a SOAP interface
on top of a database and spitting XML)
This is the server side, if you will.
2- glueing together web services
This is more the client side.
For 1-, it is not clear to me that oCaml has a competitive edge compared
to other approaches, mainly because 1- requires a lot of "legacy"
libraries not necessarily available for oCaml.
For 2-, however, the main components needed are an HTTP stack (HTTP,
TCP, SSL, etc.) and an XML stack (XML parser, etc.). And this is where a
functional language can really show its value.
I was looking at:
- ocamlNet
- cgi
and they support some aspects but not all that is needed like SSL,
cookies, etc. Are there other libraries that would do that for me?
As a more general question, shouldn't we (meaning of "we" to be defined
:-) implement these stacks in oCaml?
Is there any value in doing it (except for the experience and fun of
doing it)? Is there any advantage in having the stack (and whatever is
underneath) available as oCaml constructs?
These protocols are complex and keep evolving. Taking a reference
implementation (like libwww which is complete, maintained, supported and
updated) and adding oCaml wrapper on top would make more sense to me.
Our value added would be in the design of a nice API on top.
I am not saying that this should be done for everything, but when there
is no (or little) value in having the low-level implementation details
available as oCaml constructs, this is -- from my point of view -- the
way to go.
I would like to raise the same question for XML libraries where
namespaces, entity resolution, XML schemas (and God knows what they are
going to invent) need to be supported. Should everything be done in
oCaml? What is the value of having the low-level implementation details
of XML trees available as oCaml objects?
As mentioned on various previous postings, the oCaml community is
smaller than the Perl or Python ones. We need to be smarter and nimbler
in our efforts.
and Jerome Simeon added:
Since Arnaud bite the bullet already, let me give a few infos about
Galax.
Galax is an implementation of the XPath 2.0 / XQuery 1.0 family of
working drafts. (See). It is a complete
implementation of those two languages. People on this list will probably
be interested to know it also comes with (alpha) support for XML Schema
and static type inference.
Galax is open-source and implemented in Caml. The development is
(mostly) done by Mary Fernandez from AT&T and myself.
We are planning for an official release by the end of the month (which
is the reason we did not advertise it yet), but people interested can
find a very early prototype and more details on the Galax Web site.
().
Voila,
Stay tuned for more in a couple of weeks :)
======================================================================
3) OcamlSpread 0.0.1 released
----------------------------------------------------------------------
Yurii Rashkovskii announced:
OcamlSpread, a wrapper for a Spread () group
communication toolkit has a first release today.
It is quite inmature (it doesn't implement all of the kinds of
functions provided by Spread now) and probably has a couple of bugs.
At this moment OcamlSpread is distributed under the terms of GNU GPL
but will be GNU LGPL later (with notice of Spread license, too)
WARNING: This release should not be used in production.
BTW, I don't spend a lot of time to code it now (last change was about
a month ago) so contributors and/or new maintainer are welcome.
Homepage URL:
======================================================================
4) Berkeley DB wrapper
----------------------------------------------------------------------
Yaron Minsky announced:
It took me a while, but I just posted a version of my Berkeley DB
wrapper. Here's the URL:
There are currently problems with the transactional support (it
segfaults and I don't know why), but other than that seems to work
pretty well. There is currently good-but-not-complete support for
dbenvs, dbs, cursors and transactions.
then added:
Now that I realize that custom blocks and finalized blocks don't
actually have the same format, I was able to resolve the problems with
my berkeley DB wrapper. If anyone is interested, the new version is now
available here. The fixed version is now available in the same place as
before:
Still no promises that it's bugless, but it seems to work for me.
======================================================================
5) OCamake Release
----------------------------------------------------------------------
Nicolas Cannasse announced:
I'm pleased to announce the first release of OCamake.
OCamake is an automatic compiler for the OCaml language.
It can be used as :
- a standalone application which compiles and links
- a Makefile generator from a given set of files
One of its best uses should be in education, where teaching
how to write a Makefile is sometimes painful (for both students and
teacher). Note that OCamake also has special features for integration under
MS Visual Studio.
Examples of usages:
ocamake *.ml *.mli -o myapp.exe
- will compile and link all the source files in the current directory.
ocamake -clean *.ml *.mli -o myapp.exe
- will delete all the intermediate files produced by the compilation
process.
ocamake -mak *.ml *.mli -o myapp.exe
- will create a Makefile which can be used either by MAKE or NMAKE to
compile the project.
More information is available in the HTML documentation of the
distribution, which can be found at :
OCamake is distributed under the GPL.
I would like to thank Lexifi for its support in this project.
======================================================================
6) Visual ML Release
----------------------------------------------------------------------
Nicolas Cannasse announced:
I'm pleased to announce the release of Visual ML.
VisualML is an OCaml project wizard for Microsoft Visual Studio.
It enables easy creation, compilation and errors corrections of OCaml
projects
under Visual Studio.
Visual ML require OCamake. Both can be downloaded at :
======================================================================
7) Objective CAML oreilly book
----------------------------------------------------------------------
Scott asked, Anders Selander answered (and it's a good occasion to
remind everyone of the translation of the oreilly book):
> I was just wondering if you could tell me whether or not the solutions for
> the exercises have been translated yet? I was curious to see the lexical
> tree exercise in chapter 2.
I do not know if all solutions in the book are translated yet, but the
ones in chapter 2 are. However, they are not yet included in the pdf
version, but you can find them in the web version on the following page:
The solutions should appear in pop-up windows when you move the pointer
over the orange words. Though in some browsers, like the one I ordinary
use, all pop-ups appear on top of each others, covering most of the
exercises. If that happens, try another browser.
======================================================================
8) OCaml-SOAP library
----------------------------------------------------------------------
Michel Mauny announced:
We are pleased to announce the first release of the OCaml-SOAP
library, a prototype implementation of a small subset of SOAP. The
author is Gaurav Chanda.
The distribution and online (rather terse) documentation are available
from:
======================================================================
9) debian woody rebuilt packages
----------------------------------------------------------------------
Stefano Zacchiroli announced:
I've set up an apt-gettable repository of Objective Caml related
packages rebuilt for debian woody, because almost all sid packages (or at
least all of them that contain dynamic loading stuff) aren't
installable on woody due to dependency unsatisfiability.
The repository contains only binary packages (sources are the same of
the sid packages) and is accessible with this apt-get line:
deb unstable main contrib non-free
Currently the repository contains these binary packages rebuilt for
woody:
fort, libconfigwin-ocaml-dev, libgdome2-cpp-smart-dev,
libgdome2-cpp-smart0,netclient-ocaml-dev, libocamlnet-ocaml-dev, libpcre-ocaml,
libpcre-ocaml-dev, libpgsql-ocaml-dev, libpxp-ocaml-dev,
libshell-ocaml, libshell-ocaml-dev, libxstr-ocaml-dev, ocaml,
ocaml-base, ocaml-findlib, ocaml-native-compilers, ocaml-source,
zoggy
More packages will be added, hopefully all ocaml-related ones, and I will
try to keep the repository up to date with new versions of packages;
please let me know if something that you need is missing and/or not up
to date.
The old repository (located at ...)
will be no longer containing packages rebuilt for woody, but only sid
ones, so please remove it from your sources.list if you are running a
woody machine.
======================================================================
10) Toolpage 0.9
----------------------------------------------------------------------
Maxence Guesdon announced:
I'm glad to announce the release 0.9 of my new clic-o-matic tool, Toolpage.
This release contains:
- toolpage, which lets you describe the software you want to distribute and
generates the HTML distribution pages. It comes with a gimp script to
create fancy images for titles.
- faquin, which lets you define FAQs, sub-FAQs, sub-sub-FAQs, ... and
generates the HTML pages. For now, two simple generators are included
but you can use your own custom generator, à la ocamldoc.
- Toolhtml, a library to easily create HTML pages and some HTML elements
like lists in tables, double lists, frames, ... This library will be
extended in the future.
You will find Toolpage here :
These pages have been generated by Toolpage.
A faquin output example can be found here :
======================================================================
11) Pocket PC?
----------------------------------------------------------------------
David McClain asked:
I just got an IPAQ Pocket PC. Unfortunately these are not intended for
people who need to program. There are some anemic offerings in VBScript, and
M$'s embedded VBasic and VC++. What this thing sorely needs is a good little
FP.
Is anyone out there working on such a thing? (...and I don't mean
Linux/Python...)
and Stefano Zacchiroli answered:
Damn! I was going to tell you Linux/Python indeed :-)))
Seriously, you can also try Linux/OCaml on the iPAQ, just grab OCaml
executables from the debian distribution built for arm architecture and
install it on the iPAQ running some linux distribution like "familiar".
======================================================================
12) HTMLC
----------------------------------------------------------------------
Pierre Weis announced:
New software: Htmlc, an HTML pages ``compiler''
I am pleased to announce the 1.0 version of Htmlc, a convenient little
tool to manage a set of WEB pages, to maintain the common look of
those pages and to factorize the repetitive parts of their HTML code.
Htmlc encourages the usage of simple HTML templates that lighten the
burden of writing your HTML pages.
Htmlc is still evolving from its initial status of SSI static resolver
to the plain HTML page compiler we are all dreaming of. So, please,
don't hesitate to send your constructive remarks and contributions !
Htmlc home page is
Htmlc source files can be found at
======================================================================
13) ICFP 2002 programming contest
----------------------------------------------------------------------
Xavier Leroy announced:
The results of the ICFP 2002 programming contest have just been
announced, see. To my great delight,
the first prize went to a program written in (guess what?) OCaml by
Yutaka Oiwa, Eijiro Sumii, and Tatsurou Sekiguchi from Tokyo
University. Three cheers to them!
- Xavier Leroy, reporting live from ICFP in Pittsburgh, PA
======================================================================
14) Fourth shared patch
----------------------------------------------------------------------
malc announced:
Fourth shared patch is released. Highlights:
* Patch is now against OCaml 3.06
* Win32 support
* Many bugs where fixed
* Some sort of backtracing support for native code
* Added examples
Oh yeah, it can be found at
====================================================================== | http://lwn.net/Articles/12079/ | crawl-003 | refinedweb | 2,206 | 63.7 |
import France
Since 11/05/2020 (10 years of age) Jimbei is not available for new mating demands. Previous mating demands will of course go through as planned.
Jimbei is a very kind, gentle and playful dog.
He loves to cuddle, has a lot of temperament and passion, and really ADORES to work.
He was and still is Stefaan's perfect "working" friend. Since he obtained the title of Champion of Work Field Trial à l'anglaise in 2017, Jimbei enjoys his yearly picking-up season.
We hope that Jimbei will be a member of our family for many years to come.
Thank you Gillian for having entrusted Jimbei to us
A mobile geodatabase is an implementation of the geodatabase using an SQLite database and is stored as a single file in a folder. You can create a mobile geodatabase directly in a folder in the Catalog pane or by running a geoprocessing tool or script.
For information about feature class and table name lengths and other size limits, see Mobile geodatabase size and name limits.
Learn how to create a mobile geodatabase using one of the methods described below.
Use the Catalog pane in ArcGIS Pro
Follow these steps to create a mobile geodatabase in the Catalog pane in ArcGIS Pro:
- Start ArcGIS Pro and open the Catalog pane, if necessary.
See Use the catalog pane, catalog view, and browse dialog box if you need instructions on how to open the Catalog pane.
- Right-click Databases or a folder under Folders in the Catalog pane and click New Mobile Geodatabase.
- On the New Mobile Geodatabase dialog box, browse to the location where you want to create a mobile geodatabase, type a name, and click Save.
Run the Create Mobile Geodatabase tool
To run the Create Mobile Geodatabase tool, complete the following steps:
- Open the Create Mobile Geodatabase tool in ArcGIS Pro.
You can use search to find the tool or open it directly from the Workspace toolset of the Data Management toolbox.
- Specify the folder location where you want the mobile geodatabase to be created.
- Type a name for the geodatabase.
- Click Run.
Run a Python script
To create a mobile geodatabase from a machine where ArcGIS Server or ArcGIS Pro is installed, you can run a Python script that calls the CreateMobileGDB_management function. This is useful if you need to create a mobile geodatabase from an ArcGIS client on a Linux machine or if you want to have a reusable, stand-alone script that you can alter and use to create other mobile geodatabases from Python.
Tip:
Because Python scripts run in Wine on Linux machines, use the Microsoft Windows path separator (\) for directory paths. In the examples provided, Z: is the root directory.
The following steps provide examples of how to use Python to create a mobile geodatabase:
- Open a Python command prompt.
- Either run a stand-alone script or type commands directly into the interactive interpreter.
In the first example, the createmgdb.py script contains the following information:
# Import system modules
import os
import sys
import arcpy

# Set workspace (raw string so the backslashes are not treated as escapes)
arcpy.env.workspace = r"Z:\home\user\mydata"

# Set local variables
out_folder_path = r"Z:\home\user\mydata"
out_name = "mymgdb.geodatabase"

# Execute CreateMobileGDB
arcpy.CreateMobileGDB_management(out_folder_path, out_name)
After you alter the script to run at your site, you can call it from a command prompt or Python window.
In this example, the Python commands are typed at the command prompt to create a mobile geodatabase (mymgdb.geodatabase) in the gdbs directory in the user's home directory on a Linux machine:
import arcpy
arcpy.CreateMobileGDB_management(r"Z:\home\user\gdbs", "mymgdb.geodatabase")
See Datasets for a list of dataset types and geodatabase behaviors that are supported in mobile geodatabases in ArcGIS Pro. | https://pro.arcgis.com/en/pro-app/latest/help/data/geodatabases/manage-mobile-gdb/create-a-mobile-geodatabase.htm | CC-MAIN-2021-31 | refinedweb | 509 | 52.49 |
It is very unfinished, but I'm not sure how to continue because I'm still not familiar with the concept of object oriented programming (getters, setters, constructors, etc).
Could anyone please explain more object oriented programming to me? Also what do getters and setters do? And can you give me an example of using a constructor in the employee class? (I think that's what I'm supposed to do)
The assignment instructions are in the comments.
Sorry but the code is VERY incomplete right now. It won't even compile. I am so lost that I have no clue what I am doing anymore. Maybe I should have taken an actual class with a real teacher that could explain it...
Thank you so much. I hope somebody can help me, as I feel extremely hopeless after having spent the past 2 weeks wracking my brain on this simple assignment.
import java.util.Scanner;

/*
   . Use an Employee class, a Name class, an Address class, and a Date class
   in your solution. Provide appropriate class constructors, getter methods,
   setter methods, and any other methods you think are necessary. Your program
   should prompt the user to enter data for several employees and then display
   that data. The number of employees to store data for shall be entered from
   the command line. (This is the most difficult assignment of the class. I
   have had only one student correctly implement it on their first try. Do not
   feel bad if you take multiple attempts.)

   Length: No overall restriction. There should be a class and constructor for
   each of Employee, Name, Address, and Date. Each of those should have a
   constructor that initializes every instance variable within the class.

   ." This translates, for the Employee class, into

   Employee
   {
      private int number;
      private Name name;
      private Address address;
      private Date hireDate;
   }

   You will need to have appropriate get/set methods in the Employee class for
   the data. You must model the Employee as it is an integral part of the
   assignment and Object Oriented design.
*/

public class project10
{
   public static void main( String[] args )
   {
      Scanner input = new Scanner( System.in );
      int numEmployees;

      System.out.println( "How many employees do you wish to enter?" );
      numEmployees = input.nextInt();

      Employee[] employeeArray = new Employee[numEmployees];

      for ( int i = 0; i < numEmployees; i++ )
      {
         Employee e1 = new Employee(); //???
         Name first = new Name();
         Name last = new Name();

         System.out.println( "Enter the first name of the employee" );
         e1.setFirstName( input.nextLine() );
         System.out.println( "Enter the last name of the employee" );
         e1.setLastName( input.nextLine() );
      }
   }
}

class Employee
{
   private int number;
   private Name FirstName;
   private Name LastName;
   private Address address;
   private Date hireDate;
}

class Name
{
   private String FirstName;
   private String LastName;

   public Name()
   {
      FirstName = "";
      LastName = "";
   }

   public void setFirstName(String firstName)
   {
      FirstName = firstName;
   }

   public void setLastName(String lastName)
   {
      LastName = lastName;
   }

   public String getFirstName()
   {
      return FirstName;
   }

   public String getLastName()
   {
      return LastName;
   }
}

class Address
{
}

class Date
{
   private int month;
   private int day;
   private int year;

   public Date()
   {
      month = 0;
      year = 0;
      day = 0;
   }

   public void setDay( int dayOfMonth )
   {
      day = dayOfMonth;
   }

   public void setMonth( int monthOfYear )
   {
      month = monthOfYear;
   }

   public void setYear( int whichYear )
   {
      year = whichYear;
   }

   public int getDay()
   {
      return day;
   }

   public int getMonth()
   {
      return month;
   }

   public int getYear()
   {
      return year;
   }
}
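Not a full solution (working it out is the point of the assignment), but here is a minimal, hypothetical sketch of how a constructor and getters/setters fit together — the Person class and its field are invented examples, not part of the assignment:

```java
// Minimal illustration of constructor + getter/setter.
// The class name and field here are hypothetical examples.
class Person {
    private String firstName;   // private: only reachable through methods

    // Constructor: runs once when "new Person(...)" executes,
    // initializing every instance variable.
    public Person(String firstName) {
        this.firstName = firstName;
    }

    // Getter: read access to the private field.
    public String getFirstName() {
        return firstName;
    }

    // Setter: controlled write access to the private field.
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
}

public class Demo {
    public static void main(String[] args) {
        Person p = new Person("Ada");         // constructor call
        System.out.println(p.getFirstName()); // prints "Ada"
        p.setFirstName("Grace");              // setter changes the field
        System.out.println(p.getFirstName()); // prints "Grace"
    }
}
```

Applied to the assignment, Employee would likewise take its number, Name, Address and Date through a constructor, and the Name class would hold both first and last name — so instead of two separate Name objects you would call something like e1.getName().getFirstName().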
There is the NetXMS project, a software product designed to monitor computer systems and networks. It can be used to monitor the whole IT infrastructure, from SNMP-compatible devices to server software. And I am naturally going to monitor the code of this project with the PVS-Studio analyzer.
Links:
The NetXMS project is an open-source project distributed under the GNU General Public License v2. The code is written in the languages C, C++ and Java.
The project depends on a number of third-party libraries. To be honest, I felt too lazy to download some of them to get the project built. That's why it was checked not in full. Nevertheless, it doesn't prevent me from writing this post: my analysis is superficial anyway. It will be much better if the project's authors check it themselves. They are welcome to write to our support service: I will generate a temporary registration key for the PVS-Studio analyzer so that they could analyze it more thoroughly.
In the articles describing checks of open-source projects, I let myself be carried away with citing general errors. But 64-bit errors have not disappeared; they can be found everywhere. They are just not that interesting to discuss. When you show null pointer dereferencing, the bug is obvious. When you tell that a 32-bit variable can overflow in a 64-bit application, it's not that interesting. A coincidence of some certain circumstances must happen for such an error to occur; so you have to speak of it as a "potential error".
Moreover, it's much more difficult to detect 64-bit bugs. The rule set designed for 64-bit error detection produces a whole lot of false positives. The analyzer doesn't know the permissible range of input values and attacks everything it finds at least a bit suspicious. To find really dangerous fragments, you have to review a lot of messages; this is the only way to make sure that the program has been correctly ported to the 64-bit platform. It is especially true for applications that use more than 4 Gbytes of memory.
So, to be brief, writing articles about catching common bugs is much easier than writing about catching 64-bit ones. But this time I overcame my laziness and found several dangerous fragments of that kind. Let's start with them.
BOOL SortItems(...., _In_ DWORD_PTR dwData);

void CLastValuesView::OnListViewColumnClick(....)
{
  ....
  m_wndListCtrl.SortItems(CompareItems, (DWORD)this);
  ....
}
V220 Suspicious sequence of types castings: memsize -> 32-bit integer -> memsize. The value being casted: 'this'. lastvaluesview.cpp 716
Earlier, in 32-bit systems, the pointer's size was 4 bytes. When you needed to save or pass a pointer as an integer type, you used the types DWORD, UINT and so on. In 64-bit systems the pointer's size has grown to 8 bytes. To store them in integer variables the types DWORD_PTR, UINT_PTR and some others were created. Function interfaces have changed accordingly. Note the way the SortItems() function is declared in the first line of the sample.
Unfortunately, the program still contains a conversion of a pointer to the 32-bit DWORD type. The program is compiled successfully. The pointer is explicitly cast to the 32-bit DWORD type and then inexplicitly extended to DWORD_PTR. The worst thing is that the program works well in most cases.
It will work until the CLastValuesView class's instances are created within the 4 low-order Gbytes of memory - that is, almost always. But it might happen that the program needs more memory. Or, memory fragmentation happens after a long run. The object will then be created outside the 4 Gbytes, and the error will reveal itself. The pointer will lose the 32 high-order bits, and the program's behavior will become undefined.
The bug is very easy to fix:
m_wndListCtrl.SortItems(CompareItems, (DWORD_PTR)this);
There are some other fragments with similar type conversions:
Each of these is a sliest bug; they are often very hard to reproduce. As a result, you get VERY RARE crashes after a long run.
The next error seems to be not that critical. A poorly calculated hash code, however, can cause search algorithms to slow down.
static int hash_void_ptr(void *ptr)
{
  int hash;
  int i;

  /* I took this hash function just off the top of my head,
     I have no idea whether it is bad or very bad. */
  hash = 0;
  for (i = 0; i < (int)sizeof(ptr)*8 / TABLE_BITS; i++)
  {
    hash ^= (unsigned long)ptr >> i*8;
    hash += i * 17;
    hash &= TABLE_MASK;
  }
  return hash;
}
V205 Explicit conversion of pointer type to 32-bit integer type: (unsigned long) ptr xmalloc.c 85
The author writes in the comment that he is not sure if the function works well. And he's right. At the least, here is a bug when casting the pointer to the 'unsigned long' type.
The data models used in Windows and Linux systems are different. In Linux, the LP64 data model is accepted. In this model the 'long' type's size is 64 bits. Thus, this code will work as intended under Linux systems.
In Win64, the 'unsigned long' type's size is 32 bits. As a result, the high-order part of the pointer gets lost, and the hash is calculated not that well.
It is not solely because of explicit type conversions that 64-bit errors occur. But errors of this type are much easier to detect - for me as well. That's why let's have a look at one more poor type conversion.
static int ipfix_print_newmsg(....) { .... strftime(timebuf, 40, "%Y-%m-%d %T %Z", localtime( (const time_t *) &(hdr->u.nf9.unixtime) )); .... }
V114 Dangerous explicit type pointer conversion: (const time_t *) & (hdr->u.nf9.unixtime) ipfix_print.c 68
This is how the 'unixtime' class's member is declared:
uint32_t unixtime; /* seconds since 1970 */
And this is how the type 'time_t' is declared:
#ifdef _USE_32BIT_TIME_T typedef __time32_t time_t; #else typedef __time64_t time_t; #endif
As far as I can tell, the _USE_32BIT_TIME_T macro is not declared anywhere in the project. I didn't manage to find it, at least. It means that the localtime() function must handle time values represented by 64-bit variables, while it is an address of a 32-bit variable that is passed into the function in our sample. It's no good. The function localtime() will be handling trash.
I suppose the readers can see now why I'm not fond of writing about 64-bit errors. They are too plain and unconvincing. I don't feel like going on to search for other samples to show you at all. Let's instead study some general bugs. They look much more impressive and dangerous.
Nevertheless, 64-bit errors still exist, and if you care about the quality of your 64-bit code, I advise you to keep the viva64 diagnostic rule set at hand. These errors will stay hidden for a longer time than common bugs. For you to get scared, I recommend the following reading for the night:
In Linux, the SOCKET type is declared as a signed variable. In Windows, this type is unsigned:
typedef UINT_PTR SOCKET;
This difference often causes bugs in Windows programs.
static int DoRadiusAuth(....) { SOCKET sockfd; .... // Open a socket. sockfd = socket(AF_INET, SOCK_DGRAM, 0); if (sockfd < 0) { DbgPrintf(3, _T("RADIUS: Cannot create socket")); pairfree(req); return 5; } .... }
V547 Expression 'sockfd < 0' is always false. Unsigned type value is never < 0. radius.cpp 682
The 'sockfd' variable is of the UINT_PTR type. It results in that the 'sockfd < 0' condition never holds when the program runs under Windows. The program will try in vain to handle the socket which has not been opened.
You should fight your laziness and use special constants. This is what the code should look like:
if (sockfd == SOCKET_ERROR)
Similar incorrect checks can be found in the following fragments:
int ipfix_snprint_string(....) { size_t i; uint8_t *in = (uint8_t*) data; for( i=len-1; i>=0; i-- ) { if ( in[i] == '\0' ) { return snprintf( str, size, "%s", in ); } } .... }
V547 Expression 'i >= 0' is always true. Unsigned type value is always >= 0. ipfix.c 488
The 'i' variable has the size_t type. It means that the check "i>=0" is pointless. If zero is not found on the stack, the function will start reading memory far outside the array's boundaries. Consequences of this may be very diverse.
bool CatalystDriver::isDeviceSupported(....) { DWORD value = 0; if (SnmpGet(snmp->getSnmpVersion(), snmp, _T(".1.3.6.1.4.1.9.5.1.2.14.0"), NULL, 0, &value, sizeof(DWORD), 0) != SNMP_ERR_SUCCESS) return false; // Catalyst 3550 can return 0 as number of slots return value >= 0; }
V547 Expression 'value >= 0' is always true. Unsigned type value is always >= 0. catalyst.cpp 71
One of the most common error patterns is confusion of WCHAR strings' sizes. You can find quite a number of examples in our bug database.
typedef WCHAR TCHAR, *PTCHAR; static BOOL MatchProcess(....) { .... TCHAR commandLine[MAX_PATH]; .... memset(commandLine, 0, MAX_PATH); .... }
V512 A call of the 'memset' function will lead to underflow of the buffer 'commandLine'. procinfo.cpp 278
The TCHAR type is expanded into the WCHAR type. The number of characters in the array 'commandLine' equals the value MAX_PATH. The size of this array is 'MAX_PATH * sizeof(TCHAR). The 'memset' function handles bytes. It means that the mechanism needed to correctly clear the buffer should look like this:
memset(commandLine, 0, MAX_PATH * sizeof(TCHAR));
An even better way is to make it like this:
memset(commandLine, 0, sizeof(commandLine));
The CToolBox class is sick in the same way:
typedef WCHAR TCHAR, *PTCHAR; #define MAX_TOOLBOX_TITLE 64 TCHAR m_szTitle[MAX_TOOLBOX_TITLE]; CToolBox::CToolBox() { memset(m_szTitle, 0, MAX_TOOLBOX_TITLE); }
V512 A call of the 'memset' function will lead to underflow of the buffer 'm_szTitle'. toolbox.cpp 28
In the findIpAddress() function, a null pointer may get dereferenced. The reason is a copied-and-pasted line.
void ClientSession::findIpAddress(CSCPMessage *request) { .... if (subnet != NULL) { debugPrintf(5, _T("findIpAddress(%s): found subnet %s"), ipAddrText, subnet->Name()); found = subnet->findMacAddress(ipAddr, macAddr); } else { debugPrintf(5, _T("findIpAddress(%s): subnet not found"), ipAddrText, subnet->Name()); } .... }
V522 Dereferencing of the null pointer 'subnet' might take place. session.cpp 10823
The call of the debugPrintf() function was obviously copied. But the call in the 'else' branch is incorrect. The pointer 'subnet' equals NULL. It means that you cannot write "subnet->Name()".
#define CF_AUTO_UNBIND 0x00000002 bool isAutoUnbindEnabled() { return ((m_flags & (CF_AUTO_UNBIND | CF_AUTO_UNBIND)) == (CF_AUTO_UNBIND | CF_AUTO_UNBIND)) ? true : false; }
V578 An odd bitwise operation detected: m_flags & (0x00000002 | 0x00000002). Consider verifying it. nms_objects.h 1410
The expression (CF_AUTO_UNBIND | CF_AUTO_UNBIND) is very strange. It seems that two different constants should be used here.
void I_SHA1Final(....) { unsigned char finalcount[8]; .... memset(finalcount, 0, 8); SHA1Transform(context->state, context->buffer); }
V597 The compiler could delete the 'memset' function call, which is used to flush 'finalcount' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha1.cpp 233
In functions related to cryptography, it is an accepted practice to clear temporary buffers. If you don't do that, consequences may be interesting: for instance, a fragment of classified information may be unintentionally sent to the network. Read the article "Overwriting memory - why?" to find out the details.
The function memset() is often used to clear memory. It is incorrect. If the array is not being used after the clearing, the compiler may delete the function memset() for the purpose of optimization. To prevent this you should use the function RtlSecureZeroMemory().
Many programmers are convinced that use of uninitialized variables is the most annoying and frequent bug. Judging by my experience of checking various projects, I don't believe it's true. This bug is very much discussed in books and articles. Thanks to that, everybody knows what uninitialized variables are, what is dangerous about them, how to avoid and how to find them. But personally I feel that much more errors are caused, say, through using Copy-Paste. But, of course, it doesn't mean that uninitialized variables are defeated. Here they are.
int OdbcDisconnect(void* pvSqlCtx) { .... SQLRETURN nSqlRet; .... if (nRet == SUCCESS) { .... nSqlRet = SQLDisconnect(pSqlCtx->hDbc); .... } if (SQLRET_FAIL(nSqlRet)) .... }
V614 Potentially uninitialized variable 'nSqlRet' used. odbcsapi.cpp 220
The nSqlRet variable becomes initialized only if we get into the 'if' operator's body. But it is checked after that all the time. It results in this variable's sometimes storing a random value.
Here are some other places where variables may be initialized not all the time:
It is a very common situation that due to refactoring a pointer check is put after a pointer dereferencing operation in the program text. A lot of examples can be found here.
To detect this error pattern the V595 diagnostic is used. The number of such defects found in code often reaches many dozens. To NetXMS's credit, however, I noticed only one code fragment of that kind:
DWORD SNMP_PDU::encodeV3SecurityParameters(...., SNMP_SecurityContext *securityContext) { .... DWORD engineBoots = securityContext->getAuthoritativeEngine().getBoots(); DWORD engineTime = securityContext->getAuthoritativeEngine().getTime(); if ((securityContext != NULL) && (securityContext->getSecurityModel() == SNMP_SECURITY_MODEL_USM)) { .... }
V595 The 'securityContext' pointer was utilized before it was verified against nullptr. Check lines: 1159, 1162. pdu.cpp 1159
There were some other V595 warnings, but I found them too unconvincing to mention in the article. Those must be just unnecessary checks.
Errors occurring when using the printf() and other similar functions are classic ones. The reason is that variadic functions don't control the types of the arguments being passed.
#define _ftprintf fwprintf static __inline char * __CRTDECL ctime(const time_t * _Time); BOOL LIBNETXMS_EXPORTABLE SEHServiceExceptionHandler(....) { .... _ftprintf(m_pExInfoFile, _T("%s CRASH DUMP\n%s\n"), szProcNameUppercase, ctime(&t)); .... }
V576 Incorrect format. Consider checking the fourth actual argument of the 'fwprintf' function. The pointer to string of wchar_t type symbols is expected. seh.cpp 292
The _ftprintf() macro is expanded into the function fwprintf(). The format string specifies that strings of the 'wchar_t *' type must be passed into the function. But the ctime() function returns a string consisting of 'char' characters. This bug must be left unnoticed, as it is situated inside the error handler.
Here are two more errors of that kind:
The 'new' operator earlier used to return 'NULL' when it failed to allocate memory. Now it throws an exception. Many programs don't take this change into account. It doesn't matter sometimes, but in some cases it may cause failures. Take a look at the following code fragment from the NetXMS project:
PRectangle CallTip::CallTipStart(....) { .... val = new char[strlen(defn) + 1]; if (!val) return PRectangle(); .... }
V668 There is no sense in testing the 'val' pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error. calltip.cpp 260
The empty object 'PRectangle' was returned earlier if memory couldn't be allocated. Now an exception is generated when there is memory shortage. I don't know whether or not this behavior change is critical. Anyway, checking the pointer for being a null pointer doesn't seem reasonable anymore.
We should either remove the checks or use the 'new' operator that doesn't throw exceptions and returns zero:
val = new (std::nothrow) char[strlen(defn) + 1];
The PVS-Studio analyzer generates too many V668 warnings on the NetXMS project. Therefore I won't overload the article with examples. Let's leave it up to the authors to check the project.
static bool MatchStringEngine(....) { .... // Handle "*?" case while(*MPtr == _T('?')) { if (*SPtr != 0) SPtr++; else return false; MPtr++; break; } .... }
V612 An unconditional 'break' within a loop. tools.cpp 280
The loop body is executed not more than once. The keyword 'break' inside it must be unnecessary.
I haven't drawn any new conclusions from the check of the NetXMS project. Errors are everywhere; some of them can be found with static analysis - the earlier, the better.
I'll just give you some interesting and useful links instead of the conclusion: ... | https://www.viva64.com/en/b/0201/ | CC-MAIN-2018-17 | refinedweb | 2,655 | 58.79 |
Defines one screen of the user interface content. More...
The screen area of a mobile device is small, so an application's user interface is often composed of a set of separate screens of content or "pages". You can use pages in conjunction with the PageStack component to provide a navigation system for your application. Another navigation alternative is the TabGroup component that provides parallel sets of content.
See the Page Based Application Navigation overview for a higher level discussion about pages and page stacks.
Normally you base the pages (screens of content) of your application on the Page component but you can use other components or elements if you want. However, the benefit of the Page component is that it defines a contract how the page and the page stack interact. A Page component based page is notified when it becomes active or inactive and this allows you to perform various page-specific operations while the page is animated into the view or out of the view.
You can implement an application page either as a QML item or a QML component. You can regard a QML item as a particular page object and a QML component as a class definition of page objects. If your page is an item, you use the page directly as you have defined it. If you want to use a component page, you have to create an instance of that component page and use that instance. PageStack works transparently with either type of the page. The main thing you need to consider is how long you want the page to stay in the memory.
The code snippet below defines a page as an item. It is a page with a text that can be accessed externally.
// a page item Page { id: itemPage property alias message: pageText.text Text { id: pageText anchors.centerIn: parent text: "item page" font.pointSize: 25 color: "white" } }
The page described above can also be declared in its own file. This is probably the type of a page you will use most often because it encapsulates the page in its own file making its maintenance easy. The following code snippet is from FilePage.qml file.
import QtQuick 1.0 import Qt.labs.components.native 1.0 Page { id: filePage property alias message: pageText.text Text { id: pageText anchors.centerIn: parent text: "page from file" font.pointSize: 25 color: "white" } }
The page described earlier is an example of a simple page declared as a QML item. Declaring this same page as a component looks like this:
// a page component Component { id: componentPage Page { id: myComponentPage property alias message: pageText.text Text { id: pageText anchors.centerIn: parent text: "component page" font.pointSize: 25 color: "White" } } }
The default value of tools is null resulting in the toolbar belonging to the PageStack to be invisible.
If each Page in your application requires a different ToolBarLayout, then the PageStack can manage the ToolBar on your behalf. All that is required are the following steps:
When the current Page on the top of the PageStack changes, its tools will be set to be the ToolBarLayout contained within the ToolBar
See the ToolBar documentation for more details and example code.
The page's life cycle phases are instantiation, activation, deactivation, and destruction. The following rules apply:
The status property indicates the current state of the page. Combined with the normal Component.onCompleted() and Component.onDestruction() signals you can follow the entire life cycle of the page as follows:
See also PageStack, ToolButton, and ToolBarLayout.
Defines the page's orientation. It can have the following values:
The page stack that this page is owned by.
The current status of the page. It can be one of the following values:. | http://doc.qt.digia.com/qt-components-symbian-1.0/qml-page.html | CC-MAIN-2014-42 | refinedweb | 621 | 57.27 |
system man page
system — execute a shell command
Synopsis
#include <stdlib.h> int system(const char *command);
Description
Return Value.
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
POSIX.1-2001, POSIX.1-2008, C89, C99.
Notes.
See Also
sh(1), execve(2), fork(2), sigaction(2), sigprocmask(2), wait(2), exec(3), signal(7)
Colophon
This page is part of release 4.11 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
confstr(3), ctgsy2.f(3), ctgsyl.f(3), ctl_backups(8), curs_scr_dump.3x(3), dtgsy2.f(3), dtgsyl.f(3), exec(3), execve(2), explain(1), explain(3), explain_lca2010(1), explain_system(3), explain_system_or_die(3), fio(1), guestfish(1), guestfs-hacking(1), gvpr(1), ibv_fork_init(3), libpipeline(3), lout(1), mailcap(4), mgarepo(8), mksh(1), modulefile-c(4), pbs_mom(8), popen(3), pseudo(1), pth(3), pthsem(3), spamprobe(1), stgsy2.f(3), stgsyl.f(3), tin(5), unbound.conf(5), wmclock(1), x11vnc(1), xbiff(1), ztgsy2.f(3), ztgsyl.f(3). | https://www.mankier.com/3/system | CC-MAIN-2017-30 | refinedweb | 192 | 53.37 |
The STREAMS subsystem of UNIX provides a framework on which communications services can be built. A detailed discussion of Message Types is in Chapter 8, Messages - Kernel Level.
Some message types are defined as high-priority types. The others can have a normal priority of 0, or a priority (also called a band) from 1 to 255.

Table 7-1 Ordinary Messages, Showing Direction of Communication Flow
Table 7-2 High-Priority Messages, Showing Direction of Communication Flow

A message block's pointers must always satisfy db_base <= b_rptr <= b_wptr <= db_lim. For ordinary messages, a priority band can be indicated, and this band is used if the message is queued.
Figure 7-1 shows the linkages between msgb, datab, and data buffer in a simple message.
The message block is defined by the msgb(9S) structure; its associated data block is described by datab(9S).
SunOS has.
STREAMS provides utility routines and macros (identified in Appendix B, "STREAMS Utilities") that let modules create messages and pass them to neighboring modules. read(2) and write(2) are not enough to let a user process and the Stream pass data and control information between one another while maintaining distinct message boundaries, as shown in Figure 7-6. Also see QUEUE(9S).
By convention, queue pairs are depicted graphically as side-by-side blocks, with the write queue on the left and the read queue on the right (see Figure 7-7). See "qband(9S) Structure" for more details. qsize(9F) can be used to determine the total number of messages on the queue. q_flag indicates the state of the queue. See Table 7-3 for the definition of these flags.
q_minpsz contains the minimum packet size accepted by the queue, and q_maxpsz contains the maximum packet size accepted by the queue. These are suggested limits, and some implementations of STREAMS may not enforce them. The SunOS Stream head enforces these values, but honoring them at the module level is voluntary.
Be aware of the following queue flags; see queue(9S).

The open(9E) routine of a device is called once for the initial open of the device, then is called again on subsequent reopens of the Stream. Module open routines are called once for the initial push onto the Stream and again on subsequent reopens of the Stream. See open(9E).

The syntax of the open entry point is:

int prefix_open(queue_t *q, dev_t *devp, int oflag, int sflag, cred_t *cred_p);
q -- Pointer to the read queue of this module.
devp -- Pointer to a device number that is always associated with the device at the end of the Stream. Modules cannot modify this value, but drivers can, as described in Chapter 9, STREAMS Drivers.
oflag -- For devices, oflag can contain the following bit mask values: FEXCL, FNDELAY, FREAD, and FWRITE. See Chapter 9, STREAMS Drivers for more information on drivers.
sflag -- When the open is associated with a driver, sflag is set to 0 or CLONEOPEN, see Chapter 9, STREAMS Drivers, "Cloning" for more details. If the open is associated with a module, sflag contains the value MODOPEN.
cred_p -- Pointer to the user credentials structure.
The open routines to devices are serialized: if more than one process attempts to open the device, only one proceeds and the others wait until the first finishes. Interrupts are not blocked during an open, so the driver's interrupt and open routines must allow for this (see Example 7-1). Messages flow around the module until qprocson(9F) is called; Figure 7-9 illustrates this process.
The module or driver instance is guaranteed to be single-threaded before qprocson(9F) is called, except for interrupts or callbacks that must be handled separately. qprocson(9F) must be called before calling qbufcall(9F), qtimeout(9F), qwait(9F), or qwait_sig(9F).
The put procedure passes messages from the queue of a module to the queue of the next module. The queue's put procedure is invoked by the preceding module to process a message immediately (see put(9F) and putnext(9F)). Almost all modules and drivers must have a put routine. The exception is that the read-side driver does not need a put routine because there can be no downstream component to call the put.
A driver's put procedure must do one of:
Process and free the message
Process and route the message back upstream
Queue the message to be processed by the driver's service procedure
A module's put procedure must do one of the following as shown in Example 7-3:
Process and free the message
Process the message and pass it to the next module or driver
Queue the message to be processed later by the module's service procedure.

When possible, it is better to process messages immediately in the put procedure than to queue them (see Chapter 12, MultiThreaded STREAMS, for information). Perimeters are a facility for specifying that the framework provide exclusive access to the entire module, queue pair, or an individual queue. Perimeters make it easier to deal with multithreaded issues, such as message ordering and recursive locking.
Mutex locks must not be held across a call to put(9F), putnext(9F), or qreply(9F).
Because of the asynchronous nature of STREAMS, a module must not directly access the queue structure of another module; the structure is not visible to other modules. For accessible information, see strqget(9F) and strqset(9F). Each priority band on a queue is described by a qband(9S) structure. One flag defined for qb_flag is QB_FULL, which indicates that the band is flow controlled. The remaining fields of the structure are reserved and are not documented.
Figure 7-11 shows a queue with two extra bands of flow.
Several routines are provided to aid you in controlling each priority band of data flow. These routines are flushband(9F), bcanputnext(9F), strqget(9F), and strqset(9F).
flushband(9F) is discussed in "Flush Handling". bcanputnext(9F) is discussed in "Flow Control in Service Procedures", and the other two routines are described in the following section. Appendix B, STREAMS Utilities also has a description of these routines.
Typically, put procedures are required in pushable modules, but service procedures are optional. If the put routine queues messages, there must exist a corresponding service routine that handles the queued messages. If the put routine does not queue messages, the service routine need not exist.
Example 7-3 shows typical processing flow for a put procedure which works as follows:
A message is received by the put procedure associated with the queue (see Figure 7-10). If modules and drivers don't implement flow control, it is possible to overflow queues and hang the system.
The STREAMS flow control mechanism is voluntary and operates between the two nearest queues in a Stream containing service procedures (see Figure 7-12). When a queue exceeds its high-water mark, upstream senders that honor flow control stop sending; when the congestion clears, the blocked queue is back-enabled, which causes its service procedure to be rescheduled.
insq(9F) lets a module or driver insert a message at a specific position in a queue; see qband(9S) for the per-band structure.
This chapter describes the structure and use of each STREAMS message type.
"General ioctl(2) Processing"
"Transparent ioctl(2) Messages"
"Transparent ioctl(2) Examples"
STREAMS.
The allocb(9F) utility routine allocates a message and the space to hold the data for the message. allocb(9F) returns a pointer to a message block containing a data buffer of at least the size requested, providing there is enough memory available. The routine returns NULL on failure. allocb(9F) always returns a message of type M_DATA. The type can then be changed if required. b_rptr and b_wptr are set to db_base (see msgb(9S) and datab(9S)), which is the start of the memory location for the data.
allocb(9F) can return a buffer larger than the size requested. If allocb(9F) indicates buffers are not available (allocb(9F) fails), the put or service procedure cannot block to wait for a buffer to become available. Instead, bufcall(9F) defers processing in the module or the driver until a buffer becomes available.
If message space allocation is done by the put procedure and allocb(9F) fails, the message is usually discarded. If the allocation fails in the service routine, the message is returned to the queue. bufcall(9F) is called to set a call to the service routine when a message buffer becomes available, and the service routine returns.
freeb(9F) releases the message block descriptor and the corresponding data block, if the reference count (see datab(9S)) is equal to 1. If the reference count exceeds 1, the data block is not released.
freemsg(9F) releases all message blocks in a message. It uses freeb(9F) to free all message blocks and corresponding data blocks.
In Example 8-1, allocb(9F) is used by the bappend subroutine that appends a character to a message block:
/*
 * Append a character to a message block.
 * If (*bpp) is null, it will allocate a new block.
 * Returns 0 when the message block is full, 1 otherwise.
 */
#define MODBLKSZ 128	/* size of message blocks */

static int
bappend(mblk_t **bpp, int ch)
{
	mblk_t *bp;

	if ((bp = *bpp) != NULL) {
		if (bp->b_wptr >= bp->b_datap->db_lim)
			return (0);
	} else {
		if ((*bpp = bp = allocb(MODBLKSZ, BPRI_MED)) == NULL)
			return (1);
	}
	*bp->b_wptr++ = ch;
	return (1);
}
bappend receives a pointer to a message block and a character as arguments. If a message block is supplied (*bpp != NULL), bappend checks if there is room for more data in the block. If not, it fails. If there is no message block, a block of at least MODBLKSZ is allocated through allocb(9F).
If allocb(9F) fails, bappend returns success and discards the character. If the original message block is not full or the allocb(9F) is successful, bappend stores the character in the block.
Example 8-2 shows the processing of all the message blocks in any downstream data (type M_DATA) messages. freemsg(9F) frees messages.
/* Write side put procedure */
static int
modwput(queue_t *q, mblk_t *mp)
{
	switch (mp->b_datap->db_type) {
	default:
		putnext(q, mp);	/* Don't do these, pass along */
		break;

	case M_DATA: {
		mblk_t *bp;
		mblk_t *nmp = NULL, *nbp = NULL;

		for (bp = mp; bp != NULL; bp = bp->b_cont) {
			while (bp->b_rptr < bp->b_wptr) {
				if (*bp->b_rptr == '\n')
					if (!bappend(&nbp, '\r'))
						goto newblk;
				if (!bappend(&nbp, *bp->b_rptr))
					goto newblk;
				bp->b_rptr++;
				continue;
			newblk:
				if (nmp == NULL)
					nmp = nbp;
				else {
					/* link msg blk to tail of nmp */
					linkb(nmp, nbp);
					nbp = NULL;
				}
			}
		}
		if (nmp == NULL)
			nmp = nbp;
		else
			linkb(nmp, nbp);
		freemsg(mp);	/* de-allocate message */
		if (nmp)
			putnext(q, nmp);
		break;
	}
	}
}
Data messages are scanned and filtered. modwput copies the original message into new blocks, modifying as it copies. nbp points to the current new message block. nmp points to the new message being formed as multiple M_DATA message blocks. The outer for loop goes through each message block of the original message. The inner while loop goes through each byte. bappend is used to add characters to the current or new block. If bappend fails, the current new block is full. If nmp is NULL, nmp is pointed at the new block. If nmp is not NULL, the new block is linked to the end of nmp by use of linkb(9F).
At the end of the loops, the final new block is linked to nmp. The original message (all message blocks) is returned to the pool by freemsg(9F). If a new message exists, it is sent downstream.
bufcall(9F) can be used to schedule a callback for when a buffer of a specified size becomes available.
When allocb(9F) fails and bufcall(9F) is called, a callback is pending until a buffer is actually returned. Since this callback is asynchronous, it must be released before all processing is complete. To release this queued event, use unbufcall(9F).
Pass the id returned by bufcall(9F) to unbufcall(9F). Then close the driver in the normal way. If this sequence of unbufcall(9F) and xxclose is not followed, a situation exists where the callback can occur and the driver is closed. This is one of the most difficult types of bugs to find during the debugging stage.
All bufcall(9F) and timeouts must be canceled in the close routine.
Some hardware using the STREAMS mechanism supports memory-mapped I/O (see mmap(2)) that allows the sharing of buffers between users, kernel, and the I/O card.
If the hardware supports memory-mapped I/O, data received from the hardware is placed in the DARAM (dual access RAM) section of the I/O card. Since DARAM is memory shared with the kernel, the data need not be copied into a separately allocated buffer; esballoc(9F) (see Appendix B, STREAMS Utilities) can wrap a message around it. Be aware that when buffers are shared this way, the possibility for lock recursion and deadlock exists.
Close routine must wait for all esballoc(9F) memory to be freed.
Please see the ioctl() section in Writing Device Drivers. The ioc_cmd field contains the value of the cmd argument in the call to ioctl(2). The ioc_cr field points to the credentials of the process that issued the ioctl(2). For a transparent ioctl(2), the value of the argument passed to ioctl(2) accompanies the message; this can be a user address or a numeric value (see "Transparent ioctl(2) Processing").
An M_IOCTL message is processed by the first module or driver that recognizes it. If a module does not recognize the command, it should pass it down. If a driver does not recognize the command, it should send a negative acknowledgment or M_IOCNAK message upstream. In all circumstances, if a module or driver processes an M_IOCTL message it must acknowledge it.
Modules must always pass unrecognized messages on. Drivers should nak unrecognized ioctl(2) messages and free any other unrecognized message.
If a module or driver finds an error in an M_IOCTL message for any reason, it must produce the negative acknowledgment message. To do this, set the message type to M_IOCNAK and send the message upstream. No data or return value can be sent. If ioc_error is set to 0, the Stream head causes the ioctl(2) to fail with EINVAL. The module can set ioc_error to an alternate error number optionally.
ioc_error can be set to a nonzero value in both M_IOCACK and M_IOCNAK. This causes the value to be returned as an error number to the process that sent the ioctl(2).
If a module checks what ioctl(2) calls of the modules below it are doing, it should not just search for a specific M_IOCTL on the write side, but also look for M_IOCACK or M_IOCNAK on the read side. For example, if the module checks the TCSETA/TCGETA group of ioctl(2) calls as they pass up or down a Stream, it must never assume that because TCSETA comes down it actually has a data buffer attached to it. The user can form TCSETA as an I_STR call and accidentally supply no data buffer. It is therefore necessary to have some indication of the calling context where data is used.
The notion of data models as well as new macros for handling data structure access are discussed in Writing Device Drivers. A STREAMS driver or module writer should use these flags and macros when dealing with structures that change size between data models.
Note also that the cq_filler fields cannot be relied upon, since the structure alignment has changed between data models.
Neither the transparent nor the nontransparent method implements ioctl(2) in the Stream head; processing occurs in the STREAMS driver or module itself. Example 8-6 illustrates processing associated with an I_STR ioctl(2). lpdoioctl is called by the lp write-side put or service procedure to process M_IOCTL messages:
lpdoioctl illustrates driver M_IOCTL processing, which also applies to modules. In this example, only one command is recognized, SET_OPTIONS. ioc_count contains the number of user-supplied data bytes. For this example, ioc_count must equal the size of a short.
Once the command has been verified [lines 20-24], lpsetopt (not shown here) is called to process the request [lines 26-27]. lpsetopt returns 0 if the request is satisfied, otherwise an error number is returned.
If ioc_error is nonzero, on receipt of the acknowledgment the Stream head returns -1 to the application's ioctl(2) request and sets errno to the value of ioc_error. The ioctl(2) is acknowledged [lines 30-33].
This example is for a driver. In the default case, for unrecognized commands, or for malformed requests, a nak is generated [lines 34-38]. This is done by changing the message type to an M_IOCNAK and sending it back upstream. A module does not acknowledge (nak) an unrecognized command, but passes the message on. A module does not acknowledge (nak) a malformed request.
Transparent. The first illustrates M_COPYIN to copy data from user space. The second illustrates M_COPYOUT to copy data to user space. The third is a more complex example showing state transitions that combine M_COPYIN and M_COPYOUT.
In these examples the message blocks are reused to avoid the overhead of allocating, copying, and releasing messages. This is standard practice.
The Stream head guarantees that the size of the message block containing an iocblk(9S) structure is large enough to also hold the copyreq(9S) and copyresp(9S) structures.
Please see the copyin() section in Writing Device Drivers for information on the 64-bit data structure macros.
Example 8-7 illustrates only the processing of a transparent ioctl(2) request. The first copyin(9F) copies the structure (address) and the second copyin(9F) copies the buffer (address.ad_addr). Two states are maintained and processed in this example: GETSTRUCT is for copying in the address structure, and GETADDR for copying in the ad_addr of the structure.
xxwput verifies that the SET_ADDR is TRANSPARENT to avoid confusion with an I_STR ioctl(2), which uses a value of ioc_cmd equivalent to the command argument of a transparent ioctl(2). This is done by checking whether the size count is equal to TRANSPARENT [line 28]. If it is equal to TRANSPARENT, then the message was generated from a transparent ioctl(2), not from an I_STR ioctl(2) [lines 29-32]. The fragment below, from the example listing, reuses the M_IOCTL block as an M_COPYIN copy request:

    cqp = (struct copyreq *)mp->b_rptr;
    /* Get user space structure address from linked M_DATA block */
    cqp->cq_addr = *(caddr_t *) mp->b_cont->b_rptr;
    cqp->cq_size = sizeof(struct address);
    /* MUST free linked blks */
    freemsg(mp->b_cont);
    mp->b_cont = NULL;
    /* identify response */
    cqp->cq_private = (mblk_t *)GETSTRUCT;
    /* Finish describing M_COPYIN message */
    ...
    /* all M_IOCDATA processing done here */
    xxioc(q, mp);
    break;
    }
    return (0);
    }
The transparent part of the SET_ADDR M_IOCTL message processing requires the address structure to be copied from user address space. To accomplish this, it issues an M_COPYIN request to the Stream head [lines 37-64].
The mblk is reused and mapped into a copyreq(9S) structure [line 42]. The user space address of bufadd is contained in the b_cont of the M_IOCTL mblk. This address and its size are copied into the copyreq(9S) message [lines 47-49]. The b_cont of the copy request mblk is not needed, so it is freed and then NULLed [lines 51-52].
The layout of the iocblk, copyreq, and copyresp structures is different between 32-bit and 64-bit kernels. Be cautious of any data structure overloading in the cp_private or the cq_filler fields since alignment has changed.
On receipt of the M_IOCDATA message for the SET_ADDR command, the xxioc routine checks cp_rval. If an error occurred during the copyin operation, cp_rval is set. The mblk is freed [lines 93-96] and, if necessary, xxioc cleans up from previous M_IOCTL requests, freeing memory, resetting state variables, and so on. The Stream head returns the appropriate error to the user.
The cp_private field is set to GETSTRUCT [lines 97-99]. This indicates that the linked b_cont mblk contains a copy of the user's address structure. The example then copies the actual address specified in address.ad_addr.
The program issues another M_COPYIN request to the Stream head [lines 100-116], [line 118]. The ad_addr data is contained in the b_cont link of the mblk. If the address is successfully processed by xx_set_addr (not shown here), the message is acknowledged with an M_IOCACK message [lines 124-128]. If xx_set_addr fails, the message is rejected with an M_IOCNAK message [lines 121-122]. xx_set_addr is a hypothetical routine that processes the address; on success ioc_error is zeroed, otherwise an error code could be passed to the user application. ioc_rval and ioc_count are also zeroed to reflect that a return value of 0 and no data are to be passed up [lines 124-128].
If the request cannot be processed, either an M_IOCNAK or an M_IOCACK can be sent upstream with an appropriate error number. When sending an M_IOCNAK or M_IOCACK, freeing the linked M_DATA block is not mandatory, but it is more efficient, as the Stream head frees it. The state machine from the example listing:

        /* fail */
        freemsg(mp);
        return;
    }
    switch ((int)csp->cp_private) {     /* determine state */
    case GETSTRUCT:     /* user structure has arrived */
        /* reuse M_IOCDATA block */
        mp->b_datap->db_type = M_COPYIN;
        mp->b_wptr = mp->b_rptr + sizeof (struct copyreq);
        cqp = (struct copyreq *)mp->b_rptr;
        /* user structure */
        ap = (struct address *)mp->b_cont->b_rptr;
        /* buffer length */
        cqp->cq_size = ap->ad_len;
        /* user space buffer address */
        cqp->cq_addr = ap->ad_addr;
        freemsg(mp->b_cont);
        mp->b_cont = NULL;
        cqp->cq_flag = 0;
        cqp->cq_private = (mblk_t *)GETADDR;    /* next state */
        qreply(q, mp);
        break;
    case GETADDR:       /* user address is here */
        /* hypothetical routine */
        if (xx_set_addr(mp->b_cont) == FAILURE) {
            mp->b_datap->db_type = M_IOCNAK;
            iocbp->ioc_error = EIO;
        } else {
            mp->b_datap->db_type = M_IOCACK;    /* success */
            /* may have been overwritten */
            iocbp->ioc_error = 0;
            iocbp->ioc_count = 0;
            iocbp->ioc_rval = 0;
        }
        mp->b_wptr = mp->b_rptr + sizeof (struct iocblk);
        freemsg(mp->b_cont);
        mp->b_cont = NULL;
        qreply(q, mp);
        break;
Please see the copyout() section in Writing Device Drivers for information on the 64-bit data structure macros.
Example 8-8 returns option values for this Stream device by placing them in the user's options structure. This is done by a transparent ioctl(2) call of the form
or by an I_STR ioctl(2). In the I_STR case, opts_strioctl.ic_dp points to the options structure, optadd.
Example 8-8 illustrates support of both the I_STR and transparent forms of ioctl(2). The transparent form requires a single M_COPYOUT message following receipt of the M_IOCTL to copyout the contents of the structure. xxwput is the write-side put procedure of module or driver xx.
This example first checks if the ioctl(2) command is transparent [line 22]. If it is, the message is reused as an M_COPYOUT copy request message [lines 24-32]. The pointer to the receiving buffer is in the linked message and is copied into cq_addr [lines 26-27]. Since only a single copy out is being done, no state information needs to be stored in cq_private. The original linked message is freed, in case it isn't big enough to hold the request [lines 32-33]. As an optimization, the following code checks the size of the message for reuse:
A new linked message is allocated to hold the option request [lines 32-40]. When using the transparent form of ioctl(2), the copyout is acknowledged on receipt of the M_IOCDATA with an M_IOCACK message [lines 59-73]. ioc_error, ioc_count, and ioc_rval are cleared to prevent any stale data from being passed back to the Stream head [lines 69-71].
If the message is not transparent (that is, it was issued through an I_STR ioctl(2)), the data is sent with the M_IOCACK acknowledgment message and copied into the buffer specified by the strioctl data structure [lines 50-51]. The relevant fragment of the listing:

    if (iocbp->ioc_count == TRANSPARENT) {
        transparent = 1;
        cqp = (struct copyreq *)mp->b_rptr;
        cqp->cq_size = sizeof(struct options);
        /* Get struct address from linked M_DATA block */
        cqp->cq_addr = (caddr_t) *(caddr_t *)mp->b_cont->b_rptr;
        cqp->cq_flag = 0;
        /* No state necessary - we will only ever get one
         * M_IOCDATA from the Stream head indicating success or
         * failure for the copyout */
    }
    if (mp->b_cont)
        freemsg(mp->b_cont);
    if ((mp->b_cont = allocb(sizeof(struct options), BPRI_MED)) == NULL) {
        mp->b_datap->db_type = M_IOCNAK;
        iocbp->ioc_error = EAGAIN;
        qreply(q, mp);
        break;
    }
    /* hypothetical routine */
    xx_get_options(mp->b_cont);
    if (transparent) {
        mp->b_datap->db_type = M_COPYOUT;
        mp->b_wptr = mp->b_rptr + sizeof(struct copyreq);
    } else {
        mp->b_datap->db_type = M_IOCACK;
        iocbp->ioc_count = sizeof(struct options);
    }
    ...
    /* reuse M_IOCDATA for ack */
    mp->b_datap->db_type = M_IOCACK;
    mp->b_wptr = mp->b_rptr + sizeof(struct iocblk);
    /* may have been overwritten */
    iocbp->ioc_error = 0;
    iocbp->ioc_count = 0;
    iocbp->ioc_rval = 0;
    qreply(q, mp);
    break;
    ...
    }   /* switch (mp->b_datap->db_type) */
    return (0);
Example 8-9
Three pairs of messages are required following the M_IOCTL message: the first to copy in the structure; the second to copy in one user buffer; and the third to copy out the second user buffer. xxwput is the write-side put procedure for module or driver xx.
xxwput allocates a message block to contain the state structure and reuses the M_IOCTL to create an M_COPYIN message to read in the xxdata structure.
M_IOCDATA processing is done in xxioc().
(Available as I-LIST2 file)
An M_FLUSH message carries a flag that can have the values described in Table 8-1, indicating what is to be flushed. In the figures, dotted boxes indicate flushed queues.
An alternative is to treat the queue as having only two bands of flow, normal and high priority. However, the latter alternative flushes the entire queue whenever an M_FLUSH message is received.
No system-defined macros that manipulate global kernel data or introduce structure-size dependencies are permitted in the STREAMS utilities. Thus, some utilities that were implemented as macros in prior Solaris releases are implemented as functions in the SunOS 5 system. This does not preclude the existence of both macro and function versions of these utilities. It is intended that driver source code include a header file that picks up the function declarations, while the core operating system source includes the macros. Because drivers are not permitted to access global kernel data structures directly, changes in the contents or offsets of information within these structures will not break objects.
STREAMS provides the means to implement a service interface between any two components in a Stream, and between a user process and the topmost module in the Stream. A service interface is defined at the boundary between a service user and a service provider (see Figure 8-4). A service interface is a set of primitives, together with the rules that define a service and the allowable state transitions that result as these primitives are passed between the user and the provider. Primitives are carried in M_PROTO and M_PCPROTO messages, with the second through last blocks of the message being of type M_DATA. The first block in a PROTO message contains the control part of the primitive in a form agreed upon by the user and provider; this block is not intended to carry protocol headers. (Although its use is not recommended, upstream PROTO messages can have multiple PROTO blocks at the start of the message.) M_PCPROTO is normally used to acknowledge primitives composed of other messages; it ensures that the acknowledgment reaches the service user before any other message. As an example, an M_PROTO message could be used to define a connect request primitive: the control part would identify the primitive as a connect request and would include the protocol address and options, and the data part would contain the associated user data.
The service interface library example presented here defines a small set of primitives. These are:
This request asks the provider to bind a specified protocol address. It requires an acknowledgment from the provider to verify that the contents of the request were syntactically correct.
This request asks the provider to send data to the specified destination address. It does not require an acknowledgment from the provider.
The three other primitives represent acknowledgments of requests, or indications of incoming events, and are passed from the service provider to the service user.
This primitive informs the user that a previous bind request was received successfully by the service provider.
This primitive informs the user that a nonfatal error was found in the previous bind request. It indicates that no action was taken with the primitive that caused the error.
This primitive indicates that data destined for the user has arrived.
The defined structures describe the contents of the control part of each service interface message passed between the service user and the service provider. The first field of each control part defines the type of primitive being passed; for example:

    struct ok_ack {                 /* positive acknowledgment */
        t_scalar_t PRIM_type;       /* always OK_ACK */
    };
    struct error_ack {              /* error acknowledgment */
        ...
    };

In the driver's put procedure, the code switches on the message type. The only types accepted are M_FLUSH and M_PROTO. For M_FLUSH messages, the driver performs the canonical flush handling (not shown). For M_PROTO messages, the driver assumes the message contains a service interface primitive and processes it before any reply is sent upstream.
You can only change message types within the M_IOCTL family to other M_IOCTL message types.
M_DATA, M_PROTO, and M_PCPROTO are dependent on the interfaces defined by the modules, drivers, and service providers.
A message type should not be changed if the reference count > 1.
The data of a message should not be modified if the reference count > 1.
All other message types are interchangeable as long as sufficient space has been allocated in the data buffer of the message.
The I_NREAD ioctl (streamio(7I)) is an informational ioctl that counts the data bytes as well as the number of messages in the stream head read queue. It places the number of bytes queued (in all data messages queued) in the location pointed to by the arg parameter. The ioctl returns a 32-bit quantity for both 32-bit and 64-bit applications; therefore, code that passes the address of a long variable needs to be changed to pass an int variable for 64-bit applications.
STREAMS modules and drivers send signals to application processes through a special signal message. If the signal specified by the module or driver is not SIGPOLL (see signal(5)), the signal is sent to the process group associated with the Stream.
Also see Writing Device Drivers.
The Solaris 7 Ethernet drivers le(7D) and eepro(7D) both support the Data Link Provider Interface (DLPI).
When an ifconfig device0 plumb is issued, the driver immediately receives a DL_INFO_REQ. The information to be returned in the DL_INFO_ACK is shown in the dl_info_ack_t struct in /usr/include/sys/dlpi.h.
A driver can be a CLONE driver and also a DLPI Style 2 provider. Mapping minor numbers selected in the open routine to an instance (using the instance in the getinfo routine) is not valid prior to the DL_ATTACH_REQ. The DL_ATTACH_REQ request assigns a physical point of attachment (PPA) to a Stream, and can be issued any time after a file or Stream is opened. The DL_ATTACH_REQ request is not involved in assigning, retrieving, or mapping minor or instance numbers; you can issue a DL_ATTACH_REQ request for a file or Stream with any desired major/minor number. Mapping minor number to instance reflects, in most cases, that the minor number (getminor(dev)) is the instance number.
Each time a driver's attach routine is called, a minor node is created. If a non-CLONE driver needs to attach to multiple boards, that is, to have multiple instances and still create only one minor node, it is possible to use the bits of information in a particular minor number; for example `FF' to map to all other minor nodes.
As shown in a figure later in this chapter, the Stream referenced by fd_ip is the controlling Stream for the IP multiplexer.
The order in which the Streams in the multiplexing configuration are opened is unimportant. If it is necessary to have intermediate modules in the Stream, restrictions on their use are described in the SunOS Reference Manual (see Intro(7)).
in reply to
Re^2: optical encoder
in thread optical encoder
The misspelling is right in the code I've posted, not in the module. In fact SVG.pm doesn't handle style specially in any way (though I've pondered it should).
I've now actually tested the code, and this seems to work (the resulting SVG displays fine in firefox and inkscape).
use v6;
use SVG;
my $r = 150 / 2;
my @parts = :circle[
cx => $r,
cy => $r,
r => $r,
style => 'fill:none; stroke: black; stroke-width: 1',
];
for 0, 2 ...^ 360 -> $degrees {
my $rad = $degrees / 360 * 2 * pi;
@parts.push: 'line' => [
x1 => $r,
y1 => $r,
x2 => $r + $r * cos($rad),
y2 => $r + $r * sin($rad),
style => 'stroke: black; stroke-width: 1',
];
}
say SVG.serialize:
'svg' => [
width => 2 * $r,
height => 2 * $r,
@parts,
];
As for the xmlns:svg declaration, that just means that the svg XML namespace is set up as being interpreted as SVG. This allows you (in theory) to do fun stuff like embedding <svg:rect ...> elements straight into XHTML. But even for browsers that support it, it is only supported if the document is delivered as application/xml+xhtml, which is generally no fun, because most IE versions don't render such documents, but offer them for download. </rant>
Monitor Kubernetes Cluster using New Relic
Hi Reader, you have landed here probably because you want to understand the power of New Relic's monitoring and alerting capabilities, or because you are interested in learning something new.
Excellent!! What better place to start than monitoring a Kubernetes cluster and getting alerted every time something goes wrong?
Let us get started. But what the heck is New Relic?
New Relic is an observability platform that helps you build better software. You can bring in data from any digital source so that you can fully understand your system and know how to improve it.
With New Relic, you can:
- Bring all your data together: Instrument everything and import data from across your technology stack using relic agents, integrations, and APIs, and access it from a single UI.
- Analyze your data: Get all your data at your fingertips to find the root causes of problems and optimize your systems. Build dashboards and charts or use the powerful query language NRQL.
- Respond to incidents quickly: New Relic’s machine learning solution proactively detects and explains anomalies and warns you before they become problems.
Cool right?
Let us get this implemented to monitor the Kubernetes cluster today.
Assuming you are new to New Relic, let us start by creating a New Relic account. If you already have one, feel free to skip this part :)
STEP 1: Follow the link and create a new account. It is free, forever!!
Enter your name, email and click on Start Now.
STEP 2: You will receive an email to verify your email, click on verify email and set your password.
Select one of the two regions where you want your data to be stored and click Save.
STEP 3: You will be now landed on the installation plan page, select Kubernetes.
now, click on Begin Installation.
STEP 4: Assuming you have a Kubernetes cluster available, enter your cluster name in the placeholder and click Continue. You may change the namespace if you wish to install the New Relic agents in a different namespace.
Check all the required data you want to gather from the Kubernetes cluster according to your use case and click continue.
New Relic offers different ways to install its agents on the k8s cluster, either by using Helm or directly via manifest files. I chose Helm; you may deploy the manifest files directly if you wish.
Copy the command and log in to your Kubernetes cluster.
$ helm repo add newrelic && helm repo update && \
kubectl create namespace newrelic ; helm upgrade --install newrelic-bundle newrelic/nri-bundle \
 --set global.licenseKey=<your license key> \
 --set global.cluster=my-cluster \
 --namespace=newrelic \
 --set newrelic-infrastructure.privileged=true \
 --set global.lowDataMode=true \
 --set ksm.enabled=true \
 --set kubeEvents.enabled=true
once this is successfully executed, run the below command to check if all the agents are installed.
$ kubectl get pods -n newrelic
Wait until all the pods are up and running.
Now go back to your New Relic UI and click continue. Wait for 2–3 minutes and then you must see “We are successfully receiving data from your cluster. 🎉”
If you see this, congratulations you have integrated your Kubernetes cluster with your New Relic account. Now click on Kubernetes cluster explorer.
All the information of your cluster is visible as below:
Click on the Control plane to see all the core components monitored, and click on Events to see everything happening in the cluster recorded as infrastructure events. It is difficult to monitor CronJobs/Jobs with a conventional monitoring approach; for example, Prometheus requires an additional Pushgateway setup to scrape those metrics. With the Kubernetes integration in New Relic, however, this works out of the box because job activity is recorded as events from the Kubernetes cluster.
NRQL
NRQL is New Relic's SQL-like query language. You can use NRQL to retrieve detailed New Relic data and get insight into your applications, hosts, and business-important activity.
Click on Explorer -> browse data -> Events
Then switch to Query builder to query the data using New Relic Query Language.
Let's understand by an example to get all the pods that are not in the Running state. Use the below query the get the desired results.
SELECT podName FROM K8sPodSample WHERE clusterName = 'my-cluster' AND status != 'Running'
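For comparison, the same "pods not Running" check can be done locally by filtering kubectl output. Sample data is inlined below so the filter itself can be demonstrated without a cluster; against a real cluster you would pipe `kubectl get pods --no-headers` in instead (pod names here are made up):

```shell
# Emulate: SELECT podName ... WHERE status != 'Running'
# Columns in `kubectl get pods` output: NAME READY STATUS RESTARTS AGE
sample='web-7d4b9 1/1 Running 0 3d
worker-x2k1 0/1 CrashLoopBackOff 12 3d
batch-9fz 0/1 Pending 0 5m'

# Print the name ($1) of every pod whose STATUS ($3) is not Running.
printf '%s\n' "$sample" | awk '$3 != "Running" {print $1}'
```

This prints the two unhealthy pods from the sample, mirroring what the NRQL query returns from the K8sPodSample events.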
You can now add this to a dashboard by clicking on the Add to Dashboard button at the bottom.
You can also create an alert and get notified to act upon the pod failure.
Alerts and Notification channel
We need a notification channel to send alerts.
STEP 1: Click on create alert option below the query. Enter condition name and scroll down, you will notice the error as below:
The reason behind this error is that the query is expected to return a numeric result; otherwise it is considered invalid. So we will modify our query to return an integer value.
SELECT count(*) FROM K8sPodSample WHERE clusterName = 'my-cluster' AND status != 'Running'
Notice that I have replaced podName with count(*)
The threshold states that a violation should open if the query returns a value above 1 for at least 5 minutes.
Note that a violation does not mean that an alert is triggered.
Now, scroll down to “Connect your condition to a policy” and choose the Kubernetes default alert policy from the existing policy. That's it, Click on Save condition.
You will see a popup message stating that your condition is saved.
STEP 2: Now, Create a New channel to receive alert. Click on Explorer -> Alerts & AI -> Alerts(classic) -> Channels
Click on create new notification channel from the top right and select any channel you want to receive notification on, I will choose Email.
Add details and click on Create channel.
Once you click on Create channel you will see an option of Send a test notification, Click on it and you will receive an email notification on the email ID.
STEP 3: Switch to Alert policies from the top and add the Kubernetes default alert policy to this notification channel. This links the policy with the notification channel, meaning all the alert conditions under this policy now have a channel to send notifications to.
If you've made it this far, thank you for reading, and congratulations: we have just configured New Relic to monitor the Kubernetes cluster and set up alerts and notifications so that we can acknowledge and act on the cause of a problem as soon as possible.
Hope you like it, Happy Reading!
Window resize very slow on Windows 10
I have just started to look at Qt Quick and I have a very basic program, essentially the same as when you start a Qt Quick Controls application project.
The problem is when I try to resize the window it takes a very long time to do so. This can be seen in the
.gifbelow.
The only information I could find on the web about people having a similar problem was that you could use the QML Profiler to find where the lag is being generated and sometimes it is due to the debugger. So below you can see the QML profiler and the gif was recorded in release mode.
As far as I can tell the animation is locking the GUI thread up which is causing the render or repainting to be slow but I am not sure what is causing it.
I would appreciate any help in solving the problem.
And there is not much code to it.
CD_Burner.pro

QT += qml quick
CONFIG += c++11
SOURCES += main.cpp
RESOURCES += qml.qrc
QML_IMPORT_PATH =
QML_DESIGNER_IMPORT_PATH =
DEFINES += QT_DEPRECATED_WARNINGS
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target

main.cpp

#include <QGuiApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QLatin1String("qrc:/main.qml")));
    if (engine.rootObjects().isEmpty())
        return -1;
    return app.exec();
}
main.qml
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.3

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")

    SwipeView {
        id: swipeView
        anchors.fill: parent
        currentIndex: tabBar.currentIndex

        Page1 {
            Label {
                text: qsTr("First page")
                anchors.centerIn: parent
            }
        }
        Page {
            Label {
                text: qsTr("Second page")
                anchors.centerIn: parent
            }
        }
        Page {
            Label {
                text: qsTr("Third page")
                anchors.centerIn: parent
            }
        }
    }

    footer: TabBar {
        id: tabBar
        currentIndex: swipeView.currentIndex
        TabButton { text: qsTr("First") }
        TabButton { text: qsTr("Second") }
        TabButton { text: qsTr("Third") }
    }
}
Page1.qml
import QtQuick 2.7

Page1Form {
    button1.onClicked: {
        console.log("Button Pressed. Entered text: " + textField1.text);
    }
}
Page1Form.ui.qml
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.3

Item {
    property alias textField1: textField1
    property alias button1: button1

    RowLayout {
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.topMargin: 20
        anchors.top: parent.top

        TextField {
            id: textField1
            placeholderText: qsTr("Text")
        }
        Button {
            id: button1
            text: qsTr("Press Me")
        }
    }
}
Specs: Windows 10, Qt 5.9, MSVC 2017
- SGaist Lifetime Qt Champion last edited by
Hi and welcome to devnet,
Please don't post the same question in multiple sub-forum. One is enough.
Closing this one.
compact
Non-GC'd, contiguous storage for immutable data structures
Module documentation for 0.1.0.1
compact
Non-GC’d, contiguous storage for immutable data structures.
This package provides user-facing APIs for working with “compact regions”, which hold a fully evaluated Haskell object graph. These regions maintain the invariant that no pointers live inside the struct that point outside it, which ensures efficient garbage collection without ever reading the structure contents (effectively, it works as a manually managed “oldest generation” which is never freed until the whole is released).
When would you want to use a compact region? The simplest use case is this: you have some extremely large, long-lived, pointer data structure which GHC has uselessly been tracing when you have a major collection. If you place this structure in a compact region, after the initial cost of copying the data into the region, you should see a speedup in your major GC runs.
This package is currently highly experimental, but we hope it may be useful to some people. It is GHC 8.2 only. The bare-bones library that ships with GHC is ghc-compact.
Quick start
- Import Data.Compact.
- Put some data in a compact region with compact :: a -> IO (Compact a), e.g., cr <- compact someBigDataStructure, fully evaluating it in the process.
- Use getCompact :: Compact a -> a to get a pointer inside the region, e.g., operateOnDataStructure (getCompact cr). The data pointed to by these pointers will not participate in GC.
- Import Data.Compact.Serialize to write and read compact regions from files.
Tutorial
Garbage collection savings. It’s a little difficult to construct a
compelling, small example showing the benefit, but here is a very simple case
from the
nofib test suite, the
spellcheck program.
spellcheck is a very
simple program which reads a dictionary into a set, and then tests an input
word-by-word to see if it is in the set or not (yes, it is a very simple
spell checker):
import System.Environment (getArgs) import qualified Data.Set as Set import System.IO main = do [file1,file2] <- getArgs dict <- readFileLatin1 file1 input <- readFileLatin1 file2 let set = Set.fromList (words dict) let tocheck = words input print (filter (`Set.notMember` set) tocheck) readFileLatin1 f = do h <- openFile f ReadMode hSetEncoding h latin1 hGetContents h
Converting this program to use a compact region on the dictionary is very
simple: add
import Data.Compact, and convert
let set = Set.fromList (words dict) to read
set <- fmap getCompact (compact (Set.fromList (words dict))):
import System.Environment (getArgs) import qualified Data.Set as Set import System.IO import Data.Compact -- ** main = do [file1,file2] <- getArgs dict <- readFileLatin1 file1 input <- readFileLatin1 file2 set <- fmap getCompact (compact (Set.fromList (words dict))) -- *** let tocheck = words input print (filter (`Set.notMember` set) tocheck) readFileLatin1 f = do h <- openFile f ReadMode hSetEncoding h latin1 hGetContents h
Breaking down the new line:
compact takes an argument
a which must be pure
and immutable and then copies it into a compact region. This function returns a
Compact a pointer, which is simultaneously a handle to the compact region as
well as the data you copied into it. You get back the actual
a data that
lives in the region using
getCompact.
Using the sample
nofib input
(words and
input), we can take
a look at our GC stats before and after the change. To make the effect more
pronounced, I’ve reduced the allocation area size to 256K, so that we do more
major collections. Here are the stats with the original:
1,606,462,200 bytes allocated in the heap 727,499,032 bytes copied during GC 24,050,160 bytes maximum residency (21 sample(s)) 107,144 bytes maximum slop 71 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 6119 colls, 0 par 0.743s 0.754s 0.0001s 0.0023s Gen 1 21 colls, 0 par 0.608s 0.611s 0.0291s 0.0582s INIT time 0.000s ( 0.000s elapsed) MUT time 2.012s ( 2.024s elapsed) GC time 1.350s ( 1.365s elapsed) EXIT time 0.000s ( 0.000s elapsed) Total time 3.363s ( 3.389s elapsed) %GC time 40.2% (40.3% elapsed) Alloc rate 798,416,807 bytes per MUT second Productivity 59.8% of total user, 59.7% of total elapsed
Here are the stats with compact regions:
1,630,448,408 bytes allocated in the heap 488,392,976 bytes copied during GC 24,104,152 bytes maximum residency (21 sample(s)) 76,144 bytes maximum slop 55 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 6119 colls, 0 par 0.755s 0.770s 0.0001s 0.0017s Gen 1 21 colls, 0 par 0.147s 0.147s 0.0070s 0.0462s INIT time 0.000s ( 0.000s elapsed) MUT time 1.999s ( 2.054s elapsed) GC time 0.902s ( 0.918s elapsed) EXIT time 0.000s ( 0.000s elapsed) Total time 2.901s ( 2.972s elapsed) %GC time 31.1% (30.9% elapsed) Alloc rate 815,689,434 bytes per MUT second Productivity 68.9% of total user, 69.1% of total elapsed
You can see that while the version of the program with compact regions allocates slightly more (since it performs a copy on the set), it copies nearly half as much data during GC, reducing the time spent in major GCs by a factor of three. On this particular example, you don’t actually save that much time overall (since the bulk of execution is spent in the mutator)–a reminder that one should always measure before one optimizes.
Serializing to disk.
You can take the data in a compact region and save it to disk, so that you can
load it up at a later point in time. This functionality is provided by
Data.Compact.Serialize: writeCompact and unsafeReadCompact let you write a compact to a file, and read it back again:
{-# LANGUAGE TypeApplications #-}
import Data.Compact
import Data.Compact.Serialize

main = do
  orig_c <- compact ("I want to serialize this", True)
  writeCompact @(String, Bool) "somefile" orig_c
  res <- unsafeReadCompact @(String, Bool) "somefile"
  case res of
    Left err -> fail err
    Right c  -> print (getCompact c)
Compact regions written to handles this way are subject to some restrictions:
- This API does NOT do any safety checking and will probably segfault if you get it wrong. DO NOT run unsafeReadCompact on untrusted input.
- You must read out the value at the correct type. We will check this for you and raise an error if the types do not match. To tell unsafeReadCompact what type it should read out with, the TypeApplications extension may come in handy (this extension is guaranteed to be available, since compact only supports GHC 8.2 or later!)
Changes
Revision history for compact
0.1.0.0 – 2017-02-27
- First version.
ctags [options] [file(s)]
etags [options] [file(s)]
The ctags and etags programs (hereinafter collectively referred to as
ctags, except where distinguished) generate an index (or "tag") file
for a variety of language objects found in file(s).
Tag index files are supported by numerous editors, which allow the user
to locate the object associated with a name appearing in a source file
and jump to the file and line which defines the name. Those known about
at the time of this release are:
Vi(1) and its derivatives (e.g. Elvis, Vim, Vile, Lemmy), CRiSP,
Emacs, FTE (Folding Text Editor), JED, jEdit, Mined, NEdit (Nirvana
Edit), TSE (The SemWare Editor), UltraEdit, WorkSpace, X2, Zeus
Ctags is capable of generating different kinds of tags for each of many
different languages. For a complete list of supported languages, the
names by which they are recognized, and the kinds of tags which are
generated for each, see the --list-languages and --list-kinds options.
Unless the --language-force option is specified, the language of each
source file is automatically selected based upon a mapping of file
names to languages. The mappings in effect for each language may be
displayed using the --list-maps option and may be changed using the
--langmap option. On platforms which support it, if the name of a file
is not mapped to a language and the file is executable, the first line
of the file is checked to see if the file is a "#!" script for a
recognized language.
By default, all other files names are ignored. This permits running
ctags on all files in either a single directory (e.g. "ctags *"), or on
all files in an entire source directory tree (e.g. "ctags -R"), since
only those files whose names are mapped to languages will be scanned.
Note that spaces separating the single-letter options from their
parameters are optional.
Note also that the boolean parameters to the long form options (those
beginning with "--" and that take a "[=yes|no]" parameter) may be
omitted, in which case "=yes" is implied. (e.g. --sort is equivalent to
--sort=yes). Note further that "=1" and "=on" are considered synonyms
for "=yes", and that "=0" and "=off" are considered synonyms for "=no".
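As a quick illustration of the parameter synonyms described above, a small sketch (in Python, not from ctags itself; the function name is made up) that normalizes a long-form boolean option:

```python
# Normalize "--name[=yes|no]" style parameters: an omitted value means yes,
# and "1"/"on" ("0"/"off") are treated as synonyms for "yes" ("no").
_TRUE = {None, "yes", "1", "on"}
_FALSE = {"no", "0", "off"}

def parse_bool_option(arg):
    """Split '--name' or '--name=value' into (name, bool)."""
    name, sep, value = arg.partition("=")
    value = value if sep else None   # no "=" at all implies "=yes"
    if value in _TRUE:
        return name, True
    if value in _FALSE:
        return name, False
    raise ValueError("bad boolean parameter: " + arg)

# parse_bool_option("--sort")     -> ("--sort", True)
# parse_bool_option("--sort=off") -> ("--sort", False)
```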
Some options are either ignored or useful only when used while running
in etags mode (see -e option). Such options will be noted.
Most options may appear anywhere on the command line, affecting only
those files which follow the option. A few options, however, must
appear before the first file name and will be noted as such.
Options taking language names will accept those names in either upper
or lower case. See the --list-languages option for a complete list of
the built-in language names.
-a Equivalent to --append.
-B Use backward searching patterns (e.g. ?pattern?). [Ignored in
etags mode]
-e Enable etags mode, which will create a tag file for use with the
Emacs editor. Alternatively, if ctags is invoked by a name containing
the string "etags" (either by renaming, or creating a link to, the
executable), etags mode will be enabled.
which is located in a non-include file and cannot be seen (e.g.
linked to) from another file is considered to have file-limited
(e.g. static) scope. No kind of tag appearing in an include file
will be considered to have file-limited scope.
-I identifier-list
Specifies a list of identifiers which are to be specially handled
while parsing C and C++ source files. This option is specifically
provided to handle special cases arising through the use of
preprocessor macros. When the identifiers listed are simple
identifiers, these identifiers will be ignored during parsing of the
source files. If two identifiers are separated with the '=' character,
the first identifier is replaced by the second identifier for parsing
purposes. The list of identifiers may be supplied directly on the
command line or read in from a separate file. If the first character
of identifier-list is '@', '.' or a pathname separator ('/' or '\'),
the parameter is interpreted as a file name from which to read a list
of identifiers, one per input line. Multiple -I options may be
supplied. To clear the list of ignore identifiers, supply a single
dash ("-") for identifier-list.
This feature is useful when preprocessor macros are used in such a
way that they cause syntactic confusion due to their presence.
Indeed, this is the best way of working around a number of problems
caused by the presence of syntax-busting macros in source files (see
CAVEATS, below). For example, a macro "CLASS" which expands to the
keyword "class" plus platform-specific decoration would normally
cause C++ source to be incorrectly parsed. Correct behavior can be
restored by specifying -I CLASS=class.
-L file
Read from file a list of file names for which tags should be
generated. If file is specified as "-", then file names are read
from standard input. File names read using this option are processed
following file names appearing on the command line.
--exclude=[pattern]
Add pattern to a list of excluded files and directories. The pattern
may contain shell-style wildcards if ctags was compiled with wildcard
support; you can determine whether this capability is available by
examining the output of the --version option, which will include
"+wildcards" in the compiled feature list; otherwise, pattern is
matched against file names using a simple textual comparison.
If pattern begins with the character '@', then the rest of the
string is interpreted as a file name from which to read exclusion
patterns, one per line. If pattern is empty, the list of excluded
patterns is cleared. Note that at program startup, the default
exclude list contains "EIFGEN", "SCCS", "RCS", and "CVS", which
are names of directories for which it is generally not desirable
to descend while processing the --recurse option.
However, this option has one significant drawback:
changes to the source files can cause the line numbers
recorded in the tag file to no longer correspond to the
lines in the source file, causing jumps to some tags to
miss the target definition by one or more lines. Basically,
this option is best used when the source code to which it
is applied is not subject to change.
If the first character of flags is '+' or '-', the effect of each
flag is added to, or removed from, those currently enabled;
otherwise the flags replace any current settings.
Each letter or group of letters may be preceded by either '+' to
add it to the default set, or '-' to exclude it. In the absence of
any preceding '+' or '-' sign, only those kinds explicitly listed
in flags will be included in the output (i.e. overriding the
default set). This option is ignored if the option --format=1 has
been specified. The default value of this [...] separated
from the last tag line for the file by its terminating newline.
This option is quite esoteric and is empty by default.
--langdef=name
Defines a new user-defined language, name, to be parsed with
regular expressions. Once defined, name may be used in other options
taking language names. The typical use of this option is to first
define the language, then map file names to it using --langmap,
then specify regular expressions using --regex-<LANG> to define
how its tags are found.
--langmap=map[,map[...]]
Controls how file names are mapped to languages (see the
--list-maps option). Each comma-separated map consists of the
language name (either a built-in or user-defined language), a
colon, and a list of file extensions and/or file name patterns. A
file extension is specified by preceding the extension with a
period (e.g. ".c"); a file name pattern is specified by enclosing
the pattern in parentheses. For example, to map files named
"Makefile" or "makefile" (or having the extension ".mak") to a
language called "make", specify "--langmap=make:([Mm]akefile).mak".
To map files having no extension, specify a period not followed
by a non-period character (e.g. ".", "..x", ".x."). To clear the
mapping for a particular language (thus inhibiting automatic
generation of tags for that language), specify an empty extension
list (e.g. "--langmap=fortran:"). To restore the default language
mappings for a particular language, supply the keyword "default"
for the mapping. To restore the default language mappings for all
languages, specify "--langmap=default". Note that file extensions
are tested before file name patterns when inferring the language
of a file.
--language-force=language
By default, ctags automatically selects the language of a source
file, ignoring those files whose language cannot be determined
(see SOURCE FILES, above). This option forces the specified
language to be used for every supplied file instead of
automatically selecting the language based upon its extension.
--languages=[+|-]list
Specifies the languages for which tag generation is enabled, with
list containing a comma-separated list of language names. If the
first language of list is not preceded by either a '+' or '-', the
current list will be cleared before adding or removing the languages
in list. Until a '-' is encountered, each language in the list will
be added to the current list. As either the '+' or '-' is
encountered in the list, the languages following it are added or
removed from the current list, respectively. Thus, it becomes simple
to replace the current list with a new one, or to add or remove
languages.
original source file(s), instead of their actual locations in the
preprocessor output. The actual file names placed into the tag
file will have the same leading path components as the preprocessor
output file, since it is assumed that the original source
files are located relative to the preprocessor output file
(unless, of course, the #line directive specifies an absolute
path). This option is off by default. Note: This option is generally
only useful when used together with the --excmd=number (-n)
option. Also, you may have to use either the --langmap or
--language-force option if the extension of the preprocessor output
file is not known to ctags.
--links[=yes|no]
Indicates whether symbolic links (if supported) should be
followed. [...] available if regex support is not compiled into
ctags (see the --regex-<LANG> option). Each kind listed is enabled
unless followed [...] automatic reading of any configuration
options from either a file or the environment (see FILES).
The /regexp/replacement/ pair define a regular expression
replacement pattern, similar in style to sed substitution
commands, with which to generate tags from source files mapped to
the named language. [...] (displayed using --list-kinds). Either
the kind name and/or the description may be omitted. If kind-spec
is omitted, it defaults to "r,regex". Finally, flags are one or
more single-letter characters having the following effect upon the
interpretation of regexp:
b The pattern is interpreted as a Posix basic regular expression.
e The pattern is interpreted as a Posix extended regular
expression (default).
i The regular expression is to be applied in a case-insensitive
manner.
Note that this option is available only if ctags was compiled with
support for regular expressions, which depends upon your platform.
You can determine if support for regular expressions is compiled
in by examining the output of the --version option, which will
include "+regex" in the compiled feature list.
appear before the first file name. [...] processing and a brief
message describing what action is being taken for each file
considered by ctags. Normally, ctags does not read command line
arguments until after options are read from the configuration
files.
As ctags considers each file name in turn, it tries to determine the
language of the file by applying the following three tests in order:
whether the file extension has been mapped to a language, whether the
file name matches a shell pattern mapped to a language, and finally
whether the file is executable and its first line specifies an
interpreter using the Unix-style "#!" specification (if supported on
the platform). If a language was identified, the file is opened and
then the appropriate language parser is called to operate on the
currently open file. The parser parses through the file and adds an
entry to the tag file for each language object it is written to
handle. See TAG FILE FORMAT, below, for details on these entries.
This implementation of ctags imposes no formatting requirements on C
code as do legacy implementations. Older implementations of ctags
tended to rely upon certain formatting assumptions in order to help
them resolve coding dilemmas caused by preprocessor conditionals, as
in the following example:
    #ifdef TWO_ALTERNATIVES
    struct {
    #else
    union {
    #endif
        short a;
        long b;
    }
Both branches cannot be followed, or braces become unbalanced and ctags
would be unable to make sense of the syntax.
If the application of this heuristic fails to properly parse a file,
generally due to complicated and inconsistent pairing within the
conditionals, ctags will retry the file using a different heuristic which
does not selectively follow conditional preprocessor branches, but
instead falls back to relying upon a closing brace ("}") in column 1 as
indicating the end of a block once any brace imbalance results from
following a #if conditional branch.
Ctags will also try to specially handle arguments lists enclosed in
double sets of parentheses in order to accept the following conditional
construct:
extern void foo __ARGS((int one, char two));
Any name immediately preceding the "((" will be automatically ignored
and the previous name will be used.
C++ operator definitions are specially handled. In order for
consistency with all types of operators (overloaded and conversion), the
operator name in the tag file will always be preceded by the string
"operator " (i.e. even if the actual operator definition was written as
"operator<<").
After creating or appending to the tag file, it is sorted by the tag
name, removing identical tag lines.
When not running in etags mode, each entry in the tag file consists of
a separate line, each looking like this in the most general case:
tag_name<TAB>file_name<TAB>ex_cmd;"<TAB>extension_fields
The fields and separators of these lines are specified as
follows: [...] If the file name
specified on the command line was relative to the current directory,
then it will be recorded in that same manner in the tag file. See,
however, the --tag-relative option for how this behavior can be modified.
Extension fields are tab-separated key-value pairs appended to the end
of the EX command as a comment, as described above. These key-value
pairs appear in the general form "key:value". Their presence in the
lines of the tag file is controlled by the --fields option. The
possible keys and the meaning of their values are as follows: [...]
construct name and its value the name declared for that construct in
the program. This scope entry indicates the scope in which the tag was
found. For example, a tag generated for a C structure member would have
a scope looking like "struct:myStruct".
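To make the line format above concrete, here is a small sketch (Python, illustrative only; not part of ctags) that splits one non-etags tag line into its fields:

```python
# Split a line of the form:
#   tag_name<TAB>file_name<TAB>ex_cmd;"<TAB>extension_fields
def parse_tag_line(line):
    tag, fname, rest = line.split("\t", 2)
    ex_cmd, _, ext = rest.partition(';"')        # ;" terminates the EX command
    fields = {}
    if ext.strip():
        for item in ext.strip("\t").split("\t"):
            key, _, value = item.partition(":")  # values may themselves contain ':'
            fields[key] = value
    return {"tag": tag, "file": fname, "ex_cmd": ex_cmd, "fields": fields}
```

Note that only the first ':' of each extension field separates the key, so a value like "struct:myStruct" survives intact.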
Ctrl-T Return to previous location before jump to tag (not widely
implemented).
Because ctags is neither a preprocessor nor a compiler, use of
preprocessor macros can fool ctags into either missing tags or
improperly generating inappropriate tags. Although ctags has been
designed to handle certain common cases, this is the single biggest
cause of reported problems. In particular, the use of preprocessor
constructs which alter the textual syntax of C can fool ctags. You
can work around many such problems by using the -I option.
Note that since ctags generates patterns for locating tags (see the
--excmd option), it is entirely possible that the wrong line may be
found by your editor if there exists another source line which is
identical to the line containing the tag. The following example
demonstrates this condition: [...] (the lines are
identical). This can be avoided by use of the --excmd=n option.
Ctags has more options than ls(1).
When parsing a C++ member function definition (e.g.
"className::function"), ctags cannot determine whether the scope
specifier is a class name or a namespace specifier and always lists it
as a class name in the scope portion of the extension fields. Also, if
a C++ function is defined outside of the class declaration (the usual
case), the access specification (i.e. public, protected, or private)
and implementation information (e.g. virtual, pure virtual) contained
in the function declaration are not known when the tag is generated
for the function definition. It will, however, be available for
prototypes (e.g. --c++-kinds=+p).
No qualified tags are generated for language objects inherited into a
class. [...] separator, making it impossible to pass an option
parameter containing an embedded space. If this is a problem, use a
configuration file instead. [...]
to contain a set of default options which are read in the order
listed when ctags starts, but before the CTAGS environment variable
is read or any command line options are read. This makes it
possible to set up site-wide, personal or project-level
defaults. It is possible to compile ctags to read an additional
configuration file before any of those shown above, which will
be indicated if the output produced by the --version option
lists the "custom-conf" feature. Options appearing in the CTAGS
environment variable or on the command line will override
options specified in these files. Only options will be read from
these files. Note that the option files are read in line-oriented
mode in which spaces are significant (since shell quoting
is not possible). Each line of the file is read as one command
line parameter (as if it were quoted with single quotes). Therefore,
use new lines to indicate separate command-line arguments.
tags The default tag file created by ctags.
TAGS The default tag file created by etags.
The official Exuberant Ctags web site at:
Also ex(1), vi(1), elvis, or, better yet, vim, the official editor of
ctags. For more information on vim, see the VIM Pages web site at:
Darren Hiebert <dhiebert at users.sourceforge.net>
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int main()
{
    srand(time(0));
    int count = 0;
    int random_num = (rand() % 100) + 1;
    int high_num = random_num;
    int low_num = random_num;
    float total = 0.0;

    while (count < 100000) {
        count++;
        random_num = (rand() % 100) + 1;
        if (high_num < random_num)
            high_num = random_num;
        if (low_num > random_num)
            low_num = random_num;
        total = total + random_num;
    }

    float average = total / 100000;
    cout << "The high number is: " << high_num << endl;
    cout << "The low number is: " << low_num << endl;
    cout << "The average is: " << average << endl;
    system("Pause");
    return 0;
}
The program compiles and runs. It generates random numbers, displays the highest and lowest, and the average. I got it to get the highest and lowest numbers by fiddling around with the "if" statement and using the ">" and "<" signs.
Eventually I got it to work after trying combinations of numbers and declared int statements, and was able to get it to define the high and low numbers correctly.
My question is, how does it work?
if (high_num < random_num)
    high_num = random_num;
if (low_num > random_num)
    low_num = random_num;
I set them equal to the random number, which I defined later in my int statements. In the "if" statements, it shows the high number being less than a random number, and if that's true, the high number is set equal to the random number.... I just don't get how that works. Wouldn't it be a greater-than sign? I've tried that, and all it does is make the lowest 100 and the highest 0.
Ohkay now I'm rambling. Could someone just tell me how that statement works?
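For reference, here is the comparison-and-replace pattern the question is about, pulled out into a standalone sketch (the Stats struct and the summarize name are made up for illustration, not from the thread):

```cpp
#include <vector>
#include <cassert>

// Keep a running high/low/total; start both extremes at the first element.
// A new value replaces high only when it is larger ("high < x" is true),
// and replaces low only when it is smaller ("low > x" is true).
struct Stats { int high; int low; double average; };

Stats summarize(const std::vector<int>& xs) {
    Stats s{xs.front(), xs.front(), 0.0};
    double total = 0.0;
    for (int x : xs) {
        if (s.high < x) s.high = x;  // x beats the current highest record
        if (s.low > x)  s.low = x;   // x is under the current lowest record
        total += x;
    }
    s.average = total / xs.size();
    return s;
}
```

The condition `high_num < random_num` asks "is the current record smaller than the new number?"; only then does the new number become the record. Flipping both comparisons makes the high variable track the minimum and the low variable track the maximum, which matches the inverted results described above.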
Red Hat Bugzilla – Bug 2805
timed -n and -i switches are not working
Last modified: 2015-01-07 18:37:15 EST
where I run
timed -n test-net
to force timed to only bind to a specific network I get the
following error
timed: no network usable
Since my /etc/networks file has the correct info in it, I
dug into the source code... The problem was nailed down to the
following lines of the timed source code:
timed.c line 393
nt->net = htonl(nt->net);
This seems to do nothing (nt->net doesn't change), which
surprised me since I'm on a little-endian architecture
(i686).
So I wrote the following test program :
#include <netdb.h>
#include <stdio.h>
#include <netinet/in.h>

int
main(int argc, char *argv[])
{
    struct netent *net;
    unsigned long haddr, naddr;

    if (argc != 2) {
        fprintf(stderr, "usage: test <net>\n");
        exit(2);
    }
    net = getnetbyname(argv[1]);
    if (!net) {
        fprintf(stderr, "no such network: %s\n", argv[1]);
        exit(1);
    }
    haddr = net->n_net;
    naddr = htonl(haddr);
    printf("Network in host byte order: %lu\n", haddr);
    printf("Network in network byte order: %lu\n", naddr);
    exit(0);
}
which, when compiled either as "gcc -O2 -o test test.c" or
"gcc -o test test.c", gives the same incomprehensible result.
(/etc/networks contains)
192.168.250.0 test-net
$ ./test test-net
Network in host byte order: 4294967295
Network in network byte order: 4294967295
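One observation worth adding (not part of the original report): 4294967295 is 0xFFFFFFFF, whose four bytes are all identical, so any byte swap leaves it unchanged; htonl only looks like a no-op on such symmetric values. A minimal check (the roundtrip helper is made up for illustration):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Byte-swapping is invisible on values whose bytes are all equal
   (e.g. 0xFFFFFFFF), and htonl/ntohl always undo each other. */
uint32_t roundtrip(uint32_t host)
{
    return ntohl(htonl(host));
}
```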
With the /etc/networks line
jbj-net 198.178.231
your test program produced (I like hex output)
porkchop:~ 1028 bash$ ./n jbj-net
Network in host byte order: c6b2e7
Network in network byte order: e7b2c600
correctly on a i686 little-endian machine.
Please reopen this bug with better info regarding timed. I'd
suggest that you look seriously at xntp3 rather than timed
if you are interested in distributing a reference time reliably.
90 [details]
The project.json file that is needed to run the program with dnx.
When using "stackalloc char[bufferSize]" and then passing bufferSize into another function, the Mono runtime will clear/zero out the bufferSize variable. This can be demonstrated with the following code:
using System;

namespace DNXTest
{
    public class Program
    {
        public unsafe static void Main(string[] args)
        {
            Program p = new Program();
            string typeName = typeof(Program).Name;
            int bufferSize = 45;
            fixed (char* value = typeName)
            {
                char* buffer = stackalloc char[bufferSize];
                p.EncodeIntoBuffer(value, typeName.Length, buffer, bufferSize);
            }
        }

        private unsafe void EncodeIntoBuffer(char* value, int valueLength, char* buffer, int bufferLength)
        {
            Console.WriteLine("bufferLength is " + bufferLength);
        }
    }
}
I expect to get output that says "bufferLength is 45" which is what I get when running on the coreclr, but instead on the Mono runtime I get "bufferLength is 0".
I am using Mono version 4.0.4.0 on a MacBook Pro OS X El Capitan.
I am using 'dnx run' to run the program, not mcs/mono. (Although, the code doesn't work on mcs/mono either - I am getting an unexpected NullReferenceException when running it through mcs/mono.)
To set up a repro:
1. Install Mono on the Mac.
2. Install the ASP.NET 5 RC from
3. In a terminal, make sure you are using the mono runtime by running: dnvm use 1.0.0-rc1-final -r mono
4. Copy the attached project.json file into a folder. Also add the above code into the same folder, giving the file the name Program.cs.
5. In a terminal, navigate to the folder with Program.cs and project.json.
6. dnu restore
7. dnx run
Note: This bug was found through investigating the following higher-level issues:
Created attachment 13891 [details]
The C# code file that demonstrates the bug.
Fixed in mono master 27432be3ec4c65ba618b18389561b57e2b2716cb. Thanks for the report.
Hello guys,
I am trying to make a code using a nested loop structure. How would I input a number so that the output is one line with that many asterisks?
I am trying to get my code to have an output like this:
enter int: 1
*
enter int: 2
**
enter int: 3
***
etc.
This is the code I have so far, but after you enter an int, it just prints the total number of lines I have specified, ignoring the input.
import java.util.Scanner;

public class Loop1 {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        String line;
        System.out.println("Please enter the total number of lines to output, as any integer greater than 0 and less than 71:");
        line = input.next();
        for (int x = 0; x < 71; x++) {
            for (int y = 0; y <= x; y++) {
                System.out.print("*");
            }
            System.out.println("");
        }
    }
}
Chapter summary: When you want a
std::string, use
vcl_string.
The job of vcl is to fix your compiler. C++ is not just a language; the standard also includes an extensive library of classes and functions, which make ISO C++ a powerful and useful tool for building computer programs. Unfortunately, few C++ compilers available in 2001 have a bug-free implementation of the standard, so we have to supply our own bug fixes.
To give an example of the type of problems that vcl fixes, here are a few interpretations from the standard which have been observed in some well known vendors' libraries. Many are entirely within the letter of the law, but remain prone to introduce confusion.
On one compiler,
<iostream> and
<iostream.h> refer to
entirely different stream libraries, which may not be linked together.
Therefore every file in the program must use the same header. For us, the
<iostream> version is appropriate, but of course, not all of the
unix compilers support its use. The solution is for every vxl program
to include
<vcl_iostream.h>. In this way, we can maintain
consistency between the many compilers, and if we ever do need to use
another stream library, we can make the switch in one place.
Thus rule one is
Wherever you wish to include standard header
<foo>, you should include
<vcl_foo.h>instead.
Some compilers place STL classes such as
vector<> and
string in
namespace
std::, some don't. Yet others place them there, but do not
implement namespaces properly. Therefore, it is very difficult to write
portable code because sometimes one must say
std::vector, sometimes
one must use
vector. Again, we need a way which works on all
systems. We could try to insert
using namespace std; or
using
std::vector commands throughout the program, but (a) this is not
considered good C++, and (b) it doesn't work anyway.
The low-tech solution is simply to prefix each standard identifier with
vcl_, so that
vcl_vector works everywhere. And this is
what vxl does, when you include
<vcl_vector.h>. Thus, safe
programmers prefix everything in the standard library with
vcl_.
Wherever you wish to use standard class or function
foo, you should write
vcl_fooinstead.
This may seem excessive, but one gets used to it very quickly, and it quickly indicates to novice C++ programmers which functions are from the standard library. You might think that the designers of vxl would have been clever enough to avoid the vcl_ prefix by using fancy compiler flags, and many #defines. However, that way lies madness--trying to confuse a C++ compiler always rebounds on one.
Also, when time comes when all compilers will implement ANSI STL classes
in a consistent way, it's very easy to `perl away' the
vcl_ prefixes,
or replace them with
std::; it's much more difficult, if not impossible,
to insert
std:: prefixes when there are no
vcl_ prefixes.
This program is exemplary. It shows how every identifier in the ISO
library has been prefixed by
vcl_. It may look like extreme
overkill, but it works, and can be made to work on all compilers we've
seen.
The alternative is somewhat scary. It begins
This document has little more to say about the contents of VCL--a book on
C++ should describe it better than we can. However, it is important to
note that nothing more can go in there. If it's not in the standard, it's
not in VCL. Remember, VCL is full, nothing else can go in there. It
cannot for example be "helpfully" modified, Microsoft-style, to send
standard error to a window (but see also
vul_redirector).
The C++ ISO standard library headers include the functionality of the C ISO standard
library headers. For example, the declarations found in
<stdlib.h> can be
found in
<cstdlib> but in namespace
std::. This means that functions
like
printf() should be called using
std::printf() instead; omitting
the
std:: is wrong and won't work if the compiler is truly conforming. The
exception to this (see
[C.2.3] in the standard) is those names from ISO C
which are actually macros. The following is an incomplete list:
For example, the following code is the correct way to use C streams in VXL:
Note that it uses
assert,
stderr and not
vcl_assert,
vcl_stderr
even though it uses
vcl_fprintf,
vcl_abort. This may seem complicated and
hard to remember, but it isn't the fault of VCL. If your compiler were strictly conformant
you would still have to use
std::fprintf and you couldn't use
std::stderr.
Eventually the answer to this will be "all parts" but until compilers catch up with the language standard, the answer is "all but the following":
Of course, if you are just using VXL for your own purposes you may use whatever C++ constructs you like, you just can't put them in the core VXL libraries.
The justification for banning certain things in core libraries is to encourage the adoption of the core by reducing the possibility of porting problems. The justification for allowing it for Level 2 and greater libraries is that they are really pretty useful and hard to do without in more complex libraries than those in the core (e.g. RTTI for doing things like strategy patterns, or managing polymorphic class trees).
In C++, template instantiation is done by the compiler. In real life, it doesn't work as the standard says. In brief here is how template instantiation is supposed to work:
To understand the implications of this (and the meaning of "exported") let's consider the following program, composed of two "translation units" (i.e. files):
The class template
matrix<> just declared is defined in
Finally we refer to the matrix class in a little program:
The program is ill-formed because the
matrix<double> must be
instantiated before its use in
program.cxx, but the definition
isn't in scope at that point.
One way to fix this is to explicitly instantiate the required template in
some source file and make sure to compile that source file first.
Another is to include the definition of the template in the header file.
A third solution is to put the keyword
export in front of the declaration
of
matrix<T>, which makes it possible to implicitly instantiate
matrix<double> even when the definition is not in scope.
Unfortunately, there are at the time of writing (April 2001) no compilers
which understand and implement
export so we are currently limited to
using two kinds of templates:
vcl_vector<vcl_pair<int, vcl_string> >.
Templates/directories in the source tree and include things like
vnl_svd<T>which only need to be instantiated for a handful of types anyway.
Now, it gets worse. For various reasons it is sometimes advantageous to turn
off automatic instantiation of the first kind of template. This is only really
the case for some architectures but if you are unlucky enough to be using one
of them, you also have to explicitly instantiate your STL container classes
and algorithms in the
Templates/ directories. [You should consider skipping
the rest of this section until you actually have a template problem. Don't read it
just for pleasure.] To make it easier to do
this, and to make sure it works on all platforms, explicit instantiation is
done using preprocessor macros. The macro used to instantiate a class or function
is obtained by capitalizing the name (of that class or function) and appending
_INSTANTIATE. For example, here is how to instantiate a
vcl_map<int, X> where
X is the name of some class:
and here is how to instantiate
vcl_vector<X *>:
The naming convention for such files is described **** where ? ****.
If you are using the build system that comes with VXL and you aren't using
implicit instantiation you should put such instantiations in the
Templates/ directory or you will be stuffed.
First of all, a definition: assertions include anything that acts like an assert(). They check for some error condition that should not occur if the code is working correctly. They are there to detect broken code. The fact that they abort rather than do something more graceful is irrelevant, because the program is already broken. Typical things to check for include array bounds violations, container size mismatches, and invalid function parameters. The following things should not be considered assertions: invalid user input and file input failure. Users are too good at messing these things up, and should be treated sympathetically.
When putting an assertion in one of the vxl libraries, you should make sure that it can be turned off using NDEBUG. This is the intention of the NDEBUG macro, and is very useful for time-critical code. The easiest way to do this is using the assert() macro. If you want to print out a more useful error message you could try
However you should bear in mind the extra compilation overhead compared to just #include <vcl_cassert.h>.
If you want finer control you can add extra control macros. Indeed, in the case of time-critical code, you are encouraged to provide this extra control. You can have the default (i.e. when the control macro is undefined) either include, or not include, the assertion. In any case, you should ensure that defining NDEBUG will override your specialist macros, and turn off all assertions.
For example,
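here is a sketch of such a guarded assertion (MYLIB_CHECK_BOUNDS and MYLIB_ASSERT_BOUNDS are assumed names for illustration, not real vxl macros):

```cpp
#include <cassert>

// The specialist control macro: checks are off by default here, and
// defining NDEBUG always wins and turns them off, as required.
#if defined(NDEBUG) || !defined(MYLIB_CHECK_BOUNDS)
# define MYLIB_ASSERT_BOUNDS(cond) ((void)0)
#else
# define MYLIB_ASSERT_BOUNDS(cond) assert(cond)
#endif

int element(const int* data, int size, int i) {
    MYLIB_ASSERT_BOUNDS(0 <= i && i < size);  // detects broken calling code
    return data[i];
}
```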
Of course, you should also document the effect of your macro in the function Doxygen markup (or class level if appropriate.)
Do not forward declare classes in vcl. For example,
In this case you should just include <vcl_string.h>. In the case of stream stuff, there is an include file of forward declarations that will work.
General rule: never forward declare vcl_something with "class vcl_something;" but either `#include <vcl_something_fwd.h>' or `#include <vcl_something.h>'
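A sketch of why the plain forward declaration is dangerous (the typedef below is one plausible implementation of vcl_string, not necessarily the real one):

```cpp
#include <cassert>
#include <string>  // stands in for <vcl_string.h>

// The library is free to implement the name as a typedef, in which case a
// class forward declaration of the same name is ill-formed:
//   class vcl_string;   // error if vcl_string is really a typedef
typedef std::basic_string<char> vcl_string;  // plausible implementation

vcl_string greet() { return vcl_string("hello"); }
```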
This document was generated by Ian Scott on January, 10 2008 using texi2html 1.76.
Martin Sebor updated STDCXX-335:
--------------------------------
Fix Version/s: 4.3
Not sure we'll be able to do anything about this w/o the blessing of the committee since changing
the signature of the function would be easily detectable even for fundamental types. See the
test case below. Scheduled for 4.3.
#include <cassert>
#include <algorithm>
int main ()
{
int x = 0;
int y = 2;
const int &z = std::min (x, y);
x = 1;
assert (&z == &x);
assert (z == 1);
}
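For context, the behaviour those assertions rely on can be sketched with a reference-returning min (my_min is illustrative, not the stdcxx implementation under discussion):

```cpp
#include <cassert>

// min must return a reference to one of its arguments rather than a copy,
// so a later write through x is visible via z in the test case above.
template <class T>
const T& my_min(const T& a, const T& b) {
    return b < a ? b : a;
}
```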
> std::min() suboptimal
> ---------------------
>
> Key: STDCXX-335
> URL:
> Project: C++ Standard Library
> Issue Type: Bug
> Components: 25. Algorithms
> Affects Versions: 4.1.3
> Environment: gcc 3.2.3, x86 Linux
> Reporter: Mark Brown
> Fix For: 4.3
>
>
💬 Sonoff relay using MySensors ESP8266 wifi or mqtt gateway
- openhardware.io last edited by openhardware.io
Great! Added to the main site now.
Sorry, but what's the use case here?
With Sonoff it's either hard or impossible to add a radio or rs485 transport, so I don't see it as a real gateway.
If one wants to use it as a pure client and control it with MQTT - there is a great working solution available already:
The use case is that you can use the sonoff with all controllers that support mysensors.
With all my respect to the project and people behind it I believe that MQTT is more widely used than MySensors. So for me it would be more logical to add another MQTT client to the network rather than introduce any additional gateway. Just my 2¢.
This is not really a gateway. You can use the esp8266 gateway sketch, disable the nrf24/rfm69 and directly attach sensors. So basically it's a relay actuator which is configured as a mysensors gateway.
- NeverDie Hero Member last edited by NeverDie
I think what's throwing everybody off is applying the word "gateway" to this sonoff device. Wouldn't it make more sense to have it remain a relay actuator (i.e. just a node) and to have the mysensors gateway be something else (of which there are many examples on this website)? i.e. it makes more sense to have it be a mysensors node (or MQTT node if that's your architecture) than to have it be a mysensors gateway. Wouldn't it?
This is not listed as a gateway but under sensors & actuators, but maybe a "s/and/relay using" would be a fix for the headline. I selected the mqtt gateway example just because I have a few other NodeMCUs working as just MYS sensors with no radio, but it's trivial to re-use code for any of the other gateways, as you all know. I chose the mqtt addition just for fun and know there are other mqtt examples, but nothing working out of the box with my HASS + MYS + mqtt setup. Regarding the gateway naming, at least I don't know any other way of using MYS and ESP8266 than using the gateway sketches with a disabled radio. What should they be called, just node? At the same time, add a radio to the Sonoff and it's a gateway.
- NeverDie Hero Member last edited by NeverDie
@danbi
Yup. It's either that or do the MQTT thing (which, personally, I favor, though right now it's a minority opinion) like AndrewZ said.
I don't see the problem here. An direct ip connection between the controller and the node is perfect for critical nodes as less components mean less things that can fail. I have flashed a Sonoff Touch switch with the esp8266 gateway sketch and it is running just fine. Or am I missing something?
@Jan-Gatzke What code modifications have you done to get the touch buttons to work or do they work like "normal" buttons? I really like the built-in form factor and was thinking about ordering a few.
All the touch stuff is handled by an IC. You can use it like a normal button, active low. It is connected to pin 0. I have made a two-way switch out of it. With a long press I cycle through multiple scenes which control the color of my TV's LED light. With an even longer press I can switch the TV light off. I can share the code when I find the time to clean it up.
I like the OTA update function of the ESP very much. It's much simpler than the rf24/Arduino solution. My sketch provides a browser-based update function with just 5 lines of code. This was important to me because I don't want to get my switches out of the wall for every update. For the EU version of the Sonoff this is mandatory, because the programming header does not fit when the device is mounted.
@Jan-Gatzke Excellent info, thanks! Yes, please share your code; the state doesn't matter for me, at least
Ok, I just wiped out (hopefully) all my passwords and IP information.
/**. */
#include <SPI.h>
//PSK"
//,1,200
// If using static ip you need to define Gateway and Subnet address as well
#define MY_IP_GATEWAY_ADDRESS 192,168,1,254
// Flash leds on rx/tx/err
// #define MY_LEDS_BLINKING_FEATURE
// Set blinking period
// #define MY_DEFAULT_LED_BLINK_PERIOD 300
//>
#else
#include <ESP8266WiFi.h>
#endif
#include <WiFiClient.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>
#include <ESP8266HTTPUpdateServer.h>
#include <MySensors.h>

#define RELAY_1_Pin 12
#define LED_1_Pin 13
#define TOUCH_1_Pin 0
#define RELAY_1 1
#define TOUCH_1 3
#define TOUCH_2 4

boolean relayon = false;
int activescene = 1;
volatile unsigned long LastChange = 0;
volatile boolean RelayToggleRequired = false;
volatile boolean SceneSwitchRequired = false;
volatile boolean SceneSwitchTriggeredByTimeout = false;
volatile boolean SceneSwitchTriggeredByTimeout2 = false;

MyMessage msg(RELAY_1, V_LIGHT);
MyMessage msg1(TOUCH_1, V_SCENE_ON);
MyMessage msg2(TOUCH_2, V_LIGHT);

ESP8266WebServer httpServer(80);
ESP8266HTTPUpdateServer httpUpdater;

const char* host = "sonofftouch1";
const char* location = "Couch";
char* update_username = "admin";      // Username for the OTA programming login
char* update_password = "MyPassword"; // Password
String webPage = "";
unsigned long lastheartbeat = 0;

void buttonChangeCallback() {
  if (digitalRead(TOUCH_1_Pin) == 1) {
    //LED off for feedback
    digitalWrite(LED_1_Pin, 1);
    //Button has been released, trigger one of the two possible options.
    if (millis() - LastChange > 500) {
      //Long Press
      if (!SceneSwitchTriggeredByTimeout) {
        SceneSwitchRequired = true;
      }
    } else if (millis() - LastChange > 50) {
      //Short press at least 50 ms for debounce
      RelayToggleRequired = true;
    } else {
      //Too short to register as a press
    }
  } else {
    //LED on for feedback
    digitalWrite(LED_1_Pin, 0);
    //Just been pressed - do nothing until released.
  }
  LastChange = millis();
  SceneSwitchTriggeredByTimeout = false;
  SceneSwitchTriggeredByTimeout2 = false;
}

void setup() {
  webPage += "<h1>ESP8266 Web Server</h1>";
  webPage += "<br><h1>";
  webPage += host;
  webPage += "</h1>";
  webPage += "<br><h1>";
  webPage += location;
  webPage += "</h1>";
  webPage += "<br><a href=/update>Click here to update</a>";
  pinMode(TOUCH_1_Pin, INPUT_PULLUP);
  // sets the digital pin as output
  pinMode(RELAY_1_Pin, OUTPUT);
  pinMode(LED_1_Pin, OUTPUT);
  //LED off
  digitalWrite(LED_1_Pin, 1);
  MDNS.begin(host);
  httpUpdater.setup(&httpServer, "/update", update_username, update_password);
  httpServer.begin();
  MDNS.addService("http", "tcp", 80);
  httpServer.on("/", []() {
    httpServer.send(200, "text/html", webPage);
  });
  Serial.println("Enabling touch switch interrupt");
  attachInterrupt(digitalPinToInterrupt(TOUCH_1_Pin), buttonChangeCallback, CHANGE);
}

void presentation() {
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Sonoff Touch Mysensors", "1.0");
  present(RELAY_1, S_LIGHT, "single click/onboard relay");
  present(TOUCH_1, S_SCENE_CONTROLLER, "double click/scene controller");
  present(TOUCH_2, S_LIGHT, "long press virtuel switch");
}

void loop() {
  httpServer.handleClient();
  if (millis() - lastheartbeat > 30000) {
    //sendHeartbeat();
    sendBatteryLevel(90);
    lastheartbeat = millis();
  }
  if (RelayToggleRequired) {
    ToggleRelay();
    RelayToggleRequired = false;
  }
  if (SceneSwitchRequired) {
    SwitchScene();
    SceneSwitchRequired = false;
  }
  if (millis() - LastChange > 600) {
    if (digitalRead(TOUCH_1_Pin) == 0) {
      if (SceneSwitchTriggeredByTimeout == false) {
        SceneSwitchTriggeredByTimeout = true;
        SwitchScene();
      }
    }
  }
  if (millis() - LastChange > 3000) {
    if (digitalRead(TOUCH_1_Pin) == 0) {
      if (SceneSwitchTriggeredByTimeout2 == false) {
        SceneSwitchTriggeredByTimeout2 = true;
        SwitchSceneOff();
      }
    }
  }
}

void receive(const MyMessage &message) {
  // We only expect one type of message from controller. But we better check anyway.
  if (message.type == V_LIGHT) {
    // Change relay state
    if (message.sensor == RELAY_1) {
      digitalWrite(RELAY_1_Pin, message.getBool() ? 1 : 0);
      relayon = message.getBool() ? 1 : 0;
    }
    // Write some debug info
    Serial.print("Incoming change for sensor:");
    Serial.print(message.sensor);
    Serial.print(", New status: ");
    Serial.println(message.getBool());
  }
}

void ToggleRelay() {
  Serial.println("Single Click");
  if (relayon) {
    digitalWrite(RELAY_1_Pin, 0);
    relayon = false;
  } else {
    digitalWrite(RELAY_1_Pin, 1);
    relayon = true;
  }
  send(msg.set(relayon ? 1 : 0));
}

void SwitchScene() {
  //Domoticz does not like 1 as scene. So we start with scene 2.
  Serial.println("Long press");
  activescene += 1;
  if (activescene > 6) {
    activescene = 2;
  }
  Serial.print("Activating Scene ");
  Serial.println(activescene);
  send(msg1.set(activescene));
}

void SwitchSceneOff() {
  Serial.println("3sec Long Press");
  send(msg1.set(7));
}
@danbi
Configuring it as a gateway allows you to directly add it to the controller even if it isn't routing anything other than its local I/O, making it very straightforward
@Jan-Gatzke Thanks! A Interesting and inspiring code example. Did you think about using the arduinoOTA option instead of the httpUpdater?
Yes, I thought about that. But during my tests the web updater was just the most reliable and easiest to use. Arduino OTA needs extensions to the IDE, and sometimes the node was not found. The web updater works out of the box and never gave me any problems.
@Jan-Gatzke Great to know!
This sketch is awesome, First MySensors sketch I have used that worked straight off the bat.
The details page does need a few extra bits of info....
3.3v FTDI tool needed
Hold down the button on the Sonoff when you power it up to enter flash mode.
Can I ask, please? I don't know if I understand the principle of running a Sonoff under MySensors correctly. I uploaded the sketch from here to the Sonoff, and the Sonoff is connected to my WiFi network. Do I now need to make an MQTT gateway connected through LAN or WiFi to the Raspberry Pi where Domoticz is running? I have a NodeMCU; will it work? Thanks much!

void presentation() {
  // Send the sketch version information
  sendSketchInfo("Sonoff ethernet DHT22", "1.0");
  // Register sensor
  present(CHILD_ID, S_BINARY);
  present(CHILD_ID_HUM, S_HUM);
  present(CHILD_ID_TEMP, S_TEMP);
  metric = getControllerConfig().isMetric;
  //;
  boolean needRefresh = (millis() - lastRefreshTime) > SLEEP_TIME;
  if (needRefresh) {
    lastRefreshTime = millis();
    //);
  }
2008-09-25 10:09:18 8 Comments
Consider the following code:
void Handler(object o, EventArgs e) { // I swear o is a string string s = (string)o; // 1 //-OR- string s = o as string; // 2 // -OR- string s = o.ToString(); // 3 }
What is the difference between the three types of casting (okay, the 3rd one is not a casting, but you get the intent). Which one should be preferred?
@Vadim S. 2018-10-17 12:11:02
I would like to attract attention to the following specifics of the as operator:
@user4931677 2018-05-27 13:17:48
The following two forms of type conversion (casting) are supported in C#:

(C) v
• Converts the static type of v to C in the given expression
• Only possible if the dynamic type of v is C, or a subtype of C
• If not, an InvalidCastException is thrown

v as C
• A non-fatal variant of (C) v
• Thus, converts the static type of v to C in the given expression
• Returns null if the dynamic type of v is not C, or a subtype of C
@Dmitry 2017-08-03 07:34:30
Use a direct cast, string s = (string)o;, if in the logical context of your app string is the only valid type. With this approach you will get an InvalidCastException, implementing the principle of fail-fast: your logic is protected from passing an invalid type further, or from the NullReferenceException you would get later if you had used the as operator.

If the logic expects several different types, cast with string s = o as string; and check it for null, or use the is operator.

A new cool feature appeared in C# 7.0 to simplify cast-and-check: pattern matching:
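For example (a sketch; the variable names are illustrative):

```csharp
if (o is string s)
{
    // s is already typed as string here; no separate cast or null check
    Console.WriteLine(s.Length);
}
```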
@Lucas Teixeira 2016-09-25 04:47:48
It seems the two of them are conceptually different.
Direct Casting
Types don't have to be strictly related. It comes in all types of flavors.
It feels like the object is going to be converted into something else.
AS operator
Types have a direct relationship. As in:
It feels like the you are going to handle the object in a different way.
Samples and IL
@Sander 2008-09-25 10:16:26
string s = (string)o; — Throws InvalidCastException if o is not a string. Otherwise, assigns o to s, even if o is null.

string s = o as string; — Assigns null to s if o is not a string or if o is null. For this reason, you cannot use it with value types (the operator could never return null in that case). Otherwise, assigns o to s.

string s = o.ToString(); — Causes a NullReferenceException if o is null. Otherwise, assigns whatever o.ToString() returns to s, no matter what type o is.
Use 1 for most conversions - it's simple and straightforward. I tend to almost never use 2 since if something is not the right type, I usually expect an exception to occur. I have only seen a need for this return-null type of functionality with badly designed libraries which use error codes (e.g. return null = error, instead of using exceptions).
3 is not a cast and is just a method invocation. Use it for when you need the string representation of a non-string object.
@Anheledir 2008-09-25 10:41:25
You can assign 'null' to value-types when explicitly defined, e.g.: int? i; string s = "5"; i = s as int; // i is now 5 s = null; i = s as int; // i is now null
@Guvante 2008-09-25 10:47:18
RE: Anheledir Actually i would be null after the first call. You have to use an explicit conversion function to get the value of a string.
@Guvante 2008-09-25 10:48:50
RE: Sander Actually there is another very good reason to use as, it simplifies your checking code (Check for null rather then check for null and correct type) This is helpful since a lot of the time you would rather throw a custom one exception. But it is very true that blind as calls are bad.
@Calum 2008-09-25 11:22:45
#2 is handy for things like Equals methods where you don't know the input type.Generally though, yes, 1 would be preferred. Although preferred over that would obviously be using the type system to restrict to one type when you only expect one :)
@AnthonyWJones 2008-09-25 11:45:15
#2 is also useful when you have code that might do something specific for a specialised type but otherwise would do nothing.
@nXqd 2011-10-08 13:35:04
Thanks so much for this . I thought #2 is the modern style of casting :) which is quite stupid ...
@Alex KeySmith 2011-10-25 11:08:48
Hi @Sander, on 2. Could it be used for value types like this int? test = i as int?;
@Dave Cousineau 2012-06-20 01:52:07
you can also do the following:
int val = something as int? ?? someDefaultInt;
@Learner 2013-12-01 11:01:03
Sander: Your 2nd point didn't make any sense to me. Atleast I didn't understand. Also, you say you always expect exception to occur, that is not a good practise either.
@Aidiakapi 2015-04-28 16:31:49
As a general rule of thumb. Usage of
asmust be followed by a test to see if it is
null. Or at least other code that accepts
nullas a valid input. That said, I use
ascasting quite frequently, an example where I'd use it is for a method that accepts
IEnumerable<T>but can preallocate a dynamically sized collection if it's also an
ICollection.
ICollection col = input as ICollection; if (col != null) something.Reserve(col.Count);
@Suamere 2015-11-04 15:59:54
Thus far, I have only used 2
asfor overriding Equals:
public override bool Equals(object obj) { if ((obj as MyObject) == null) return base.Equals(obj); /*return: Handle it myself*/ }because that override uses object as a parameter and you have zero control about what people will pass into there.
@Eniola 2016-06-20 07:08:35
Why do you say returning error codes rather than throwing exceptions is bad design? Isnt exception throwing and handling, especially on tight loops, more expensive? What if the given library chooses to work around that limitation with the primitive approach of using error codes? Your statement will imply that the Win32 API is badly designed?
@Bennett Yeo 2015-10-13 20:21:25
Since nobody mentioned it, the closest keyword to Java's instanceof is this:
@Chris S 2008-09-25 11:05:43
s = o as string; is preferred, as it avoids the performance penalty of double casting (an is check followed by a cast).
@Matt 2015-06-25 22:23:22
Hi Chris, the link that was in this answer is now a 404... I'm not sure if you've got a replacement you want to put in in it's place?
@BornToCode 2014-04-03 14:28:13
All given answers are good; if I might add something: To directly use string's methods and properties (e.g. ToLower) you can't write:

(string)o.ToLower()

you can only write:

((string)o).ToLower()

but you could write instead:

(o as string).ToLower()

The as option is more readable (at least in my opinion).
@james 2016-08-31 07:43:03
the (o as string).ToLower() construct defeats the purpose of the as operator. This will throw a null reference exception when o cannot be cast to string.
@BornToCode 2016-08-31 11:39:26
@james - But who said that the sole purpose of the as operator is to throw exception if cast fails? If you know that o is a string and just want to write cleaner code you could use
(o as string).ToLower()instead of the multiple confusing brackets.
@james 2016-09-01 20:40:07
the purpose of the as is quite the opposite - it should not throw the exception when the cast fails, it should return null. Let's say your o is a string with a value of null, what will happen then? Hint - your ToLower call will fail.
@BornToCode 2016-09-03 21:11:55
@james - You're right, but what about the cases where I know for certain that it won't be null and I just need to do the casting for the compiler to let me access that object's methods?
@james 2016-09-04 23:49:28
you can definitely do that but it's not exactly best practice because you don't want to rely on the caller or external systems to ensure your value isn't null. If you're using C#6 then you could do (o as string)?. ToLower().
@Quibblesome 2008-09-25 10:31:28
@kdbanman 2015-07-31 17:51:40
This is an excellent addition to the accepted answer.
@j riv 2017-07-02 09:20:49
I get a sense this answer sounds good, but it might not be accurate.
@Quibblesome 2017-07-04 13:11:29
@jriv its worked for me for the past 15 years and still works so.....
@Uxonith 2017-09-13 22:42:24
I like the first two, but I would add "and you are sure it isn't null" to the third option.
@Quibblesome 2017-09-14 11:39:40
you can use Elvis (?.) these days to avoid having to care about that: obj?.ToString()
@Griswald_911 2018-08-13 20:08:54
@Quibblesome -- great answer but I had to stop to think about your rebuttle! it literally blows my mind that the language has been around well over 15 years. It feels like yesterday when we were being all "edgy" trying to convince senior devs to make the switch to C#.
@Quibblesome 2018-08-14 08:59:04
rebuttal* for the record.
@xtrem 2012-06-23 01:31:48
When trying to get the string representation of anything (of any type) that could potentially be null, I prefer the below line of code. It's compact, it invokes ToString(), and it correctly handles nulls. If o is null, s will contain String.Empty.
@Rob 2008-09-25 10:17:50
"(string)o" will result in an InvalidCastException as there's no direct cast.
"o as string" will result in s being a null reference, rather than an exception being thrown.
"o.ToString()" isn't a cast of any sort per-se, it's a method that's implemented by object, and thus in one way or another, by every class in .net that "does something" with the instance of the class it's called on and returns a string.
Don't forget that for converting to string, there's also Convert.ToString(someType instanceOfThatType) where someType is one of a set of types, essentially the frameworks base types.
@Brady Moritz 2010-08-15 18:36:54
According to experiments run on this page:
(this page is having some "illegal referrer" errors show up sometimes, so just refresh if it does)
Conclusion is, the "as" operator is normally faster than a cast. Sometimes by many times faster, sometimes just barely faster.
I personally think "as" is also more readable.
So, since it is both faster and "safer" (wont throw exception), and possibly easier to read, I recommend using "as" all the time.
@Glenn Slaven 2008-09-25 11:17:27
The as keyword is good in asp.net when you use the FindControl method.
This means you can operate on the typed variable rather than having to cast it from object like you would with a direct cast:
It's not a huge thing, but it saves lines of code and variable assignment, plus it's more readable
@Mark Cidade 2008-09-25 10:41:43
If you already know what type it can cast to, use a C-style cast:
Note that only with a C-style cast can you perform explicit type coercion.
If you don't know whether it's the desired type and you're going to use it if it is, use as keyword:
Note that as will not call any type conversion operators. It will only be non-null if the object is not null and natively of the specified type.
Use ToString() to get a human-readable string representation of any object, even if it can't cast to string.
@AnthonyWJones 2008-09-25 11:48:12
That's an interesting little gotcha regarding the type conversion operators. I have a few types that I've created conversions for, must watch out for that then.
@Joel in Gö 2008-09-25 10:31:39
2 is useful for casting to a derived type.
Suppose a is an Animal:
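(A sketch of the code this answer is describing; the Dog type and its Feed method are illustrative assumptions:)

```csharp
Dog d = a as Dog;   // null if a is not actually a Dog
if (d != null)
{
    d.Feed();
}
```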
will get a fed with a minimum of casts.
@Chris Moutray 2012-05-09 09:38:19
But it would be better to use polymorphism for this type of problem...
@qreba47jhqb4e3lstrujvvdx 2012-11-24 03:53:57
@Chirs Moutray, that is not always possible, especially if it is a library.
@Blair Conrad 2008-09-25 10:16:50
It really depends on whether you know if
ois a string and what you want to do with it. If your comment means that
oreally really is a string, I'd prefer the straight
(string)ocast - it's unlikely to fail.
The biggest advantage of using the straight cast is that when it fails, you get an InvalidCastException, which tells you pretty much what went wrong.
With the
asoperator, if
oisn't a string,
sis set to
null, which is handy if you're unsure and want to test
s:
However, if you don't perform that test, you'll use
slater and have a NullReferenceException thrown. These tend to be more common and a lot harder to track down once they happens out in the wild, as nearly every line dereferences a variable and may throw one. On the other hand, if you're trying to cast to a value type (any primitive, or structs such as DateTime), you have to use the straight cast - the
aswon't work.
In the special case of converting to a string, every object has a
ToString, so your third method may be okay if
oisn't null and you think the
ToStringmethod might do what you want.
@John Gibb 2014-01-13 18:46:05
One note - you can use
aswith nullable value types. I.E.
o as DateTimewon't work, but
o as DateTime?will...
@BornToCode 2014-04-03 14:17:31
Why not using
if (s is string)instead?
@Blair Conrad 2014-04-03 14:29:08
@BornToCode, for me, largely personal preference. Depending on what you're doing, often after
ising, you'll have to cast again anyhow, so you have the is and then a hard cast. For some reason, the
asand null check felt better to me.
@Sergio Acosta 2008-09-25 10:15:32
'as' is based on 'is', which is a keyword that checks at runtime if the object is polymorphically compatible (basically, if a cast can be made) and returns null if the check fails.
These two are equivalent:
Using 'as':
Using 'is':
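The two equivalent forms can be sketched like this (variable names are illustrative):

```csharp
// Using 'as':
string s = o as string;

// Using 'is':
string t = (o is string) ? (string)o : null;
```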
On the contrary, the c-style cast is made also at runtime, but throws an exception if the cast cannot be made.
Just to add an important fact:
The 'as' keyword only works with reference types. You cannot do:
In those cases you have to use casting.
@leppie 2008-09-25 10:18:28
'is' works on any type
@Sergio Acosta 2008-09-25 10:22:03
Thanks for pointing my mistake, you are right. I edited the answer. oops, sorry.
EagleEye v0.2.0EagleEye v0.2.0
EagleEye is a library for recording metrics inside Twisted applications and other related frameworks (eg. Klein). It consists of a decorator-based metric reporting module and a Twisted and Protocol Buffers based Riemann reporting module.
Current StatusCurrent Status
EagleEye is currently so alpha that it hurts.
InstallationInstallation
sudo pip install eagleeye will get the latest release version.
UsageUsage
Riemann ReportingRiemann Reporting
EagleEye.Riemann currently uses UDP for communication.
The following example submits a report on behalf of the host
WebServer1, saying that
MyCoolWebApp's API took 72ms to respond. In this example, Riemann is hosted on localhost on port 5555 - change this to fit your configuration.
from eagleeye.riemann import Riemann riemann = Riemann('127.0.0.1', 5555) riemann.submit({'host':'WebServer1', 'service': 'MyCoolWebApp_APIResponse', 'state': 'critical', 'description': 'my_api_function() took 72ms', 'metric_f': '72'})
The following fields can be sent to Riemann:
host- the host you're sending it from
service- the service the metric is for
state- the state that the service is in. Ones that work in Riemann-Dash are
ok(green),
warning(yellow) and
critical(red).
description- the description of the metric, free-form text. Shows up when you hover over the metric in Riemann-Dash, for example.
time- the time of the event, in Unix time.
ttl- the time in seconds that this state is valid for.
metric_f- the metric, in floating point (converted automatically for you by EagleEye).
metric_sint64- the metric, in a long (converted automatically for you by EagleEye).
metric_d- does not work in EagleEye yet - please use metric_f
Metric RecordingMetric Recording
EagleEye.Metric is a little class that does the Riemann reporting for you, as invisibly as possible. It will worry about setting up the Riemann connection with what you pass it initially. You can use the Metric and Riemann bits without conflicting, it seems.
This example sets up a Metric object which you can then use to decorate your functions. For this example, the host is
WebServer1, reporting for the service
DBOperation, and all metrics are rounded to 2 places after the decimal point. It will also have a
critical threshold of 5ms. In this example, Riemann is on localhost on port 5555.
from eagleeye.metric import Metric ee = Metric(myhost='WebServer1', timeprecision=2, host='127.0.0.1', port=5555) @ee.record('DBOperation', ee_criticalthreshold='5') def db_operation(stuff, things): # code goes here
EagleEye.Metric is 'invisible' - it won't block a chain of decorators, even. It also handles Deferreds.
Klein ReportingKlein Reporting
EagleEye.Metric can also report times for your Klein-using app.
@route('/login/process', methods=['POST']) @ee.record('app_login', ee_criticalthreshold='5') def pg_login_process(request): if request.args.get('username')[0] == "myuser": return "hi, myuser!" else: return "get out of here!"
DevelopingDeveloping
Think something could be done better? Let me know by email (hawkowl@outlook.com) or twitter (@hawkieowl) - if you think you can do it better, Patches Accepted(TM)! :)
EagleEye.riemannEagleEye.riemann
EagleEye uses Twisted for the Riemann UDP communication, and ProtoBuf for sending the metrics data over the wire. The protobuf.py shouldn't have to be changed - if it does, get the latest
.proto from the Riemann site and compile it with
mkdir pb && protoc --python_out pb proto.proto.
EagleEye.metricEagleEye.metric
EagleEye's reporting code is in eagleeye/metric.py and consists of a class with fun decorator stuff inside it. (It maybe relies on magic.)
TestsTests
EagleEye uses Twisted's wonderful Trial framework for running unit tests. To run them, cd to the top level project and run
tools/trial eagleeye. Tests are in
eagleeye/test/. Please note that the tests only test sending metrics - you should be looking at Riemann to make sure they show up on the other end, and edit the tests to point to your installation! | https://libraries.io/pypi/eagleeye | CC-MAIN-2022-21 | refinedweb | 630 | 51.55 |
Fl:
# What are machine tags?
Machine tags are tags that use a special syntax to define extra information about a tag. Many of you may already be familiar with machine tags by another name (triple tags) or because you are already using them, informally, in your code (for example, "geo:long=123.456"). Like tags, there are no rules for machine tags beyond the syntax to specify the parts of a machine tag....
# How do I query machine tags?
Via the API!
Specifically, using the "machinetags" parameter in the 'flickr.photos.search' method. Like tags, you can specify multiple machine tags as a comma separated list.
# Can I query the various part of a machine tag?
* Find photos using the 'dc' namespace :
{"machine_tags" => "dc:"}
In some ways it's a small announcement but yet it's another very useful step in adding structure and meaning to the ad-hoc user-defined parts of the web. And it certainly creates some interesting mashup possibilities.
[...] Source - Programmable Web [...]
[...] from O’Reilly Radar, ProgrammableWeb, and Dan Catt (who championed the concept at flickr, I [...] | https://www.programmableweb.com/news/flickr-introduces-machine-tags/2007/01/26 | CC-MAIN-2017-13 | refinedweb | 181 | 66.54 |
Using QualifiedName as the key type in WTF hash tables is currently a lot less efficient than it could be.
In the RoboHornet svgresize.html benchmark, I'm seeing ~200ms below QualifiedName instantiation and hash lookups on my MBP. I have a patch to cut this in half.
Created attachment 168815 [details]
Proposed patch
There is some debate as to if we should even be using QualifiedNames here. anyQName() is this odd hack that we seem to use to be the "null" QualifiedName, and it's basically only used for SVG code:
I think this patch is totally reasonable, and I can see it being a huge speed win. I'm just not sure if the concept of a null QualifieName (or using QualfiedName in a hash) even makes sense as we currently do it.
This patch makes me wonder if we shouldn't just ditch anyQName in prefernce for this nullQName anyway. :)
Love this patch. Nice work!
Comment on attachment 168815 [details]
Proposed patch
View in context:
> Source/WebCore/dom/QualifiedName.h:134
> + QualifiedNameComponents c = { name->m_prefix.impl(), name->m_localName.impl(), name->m_namespace.impl() };
> + name->m_existingHash = hashComponents(c);
I think it’d be nice to have this part be out of line to keep the inlined code tighter. Worth testing to see if it makes things faster or slower. If it’s not slower, we should do it.
Also, I’d name the local variable something other than c.
> Source/WebCore/svg/SVGElement.h:164
> + QualifiedNameComponents c = { nullAtom.impl(), key.localName().impl(), key.namespaceURI().impl() };
> + return hashComponents(c);
It’s be nice to name this something other than c.
I think we could cache this no-prefix form of the qualified name too; just add another field to QualifiedName for that.
Comment on attachment 168815 [details]
Proposed patch
View in context:
> Source/WebCore/ChangeLog:3
> + Clean up QualifiedName-as-hash-key scenario.
I would call this “optimize” not “clean up”.
anyQName is super mysterious to me. From the name and the "*" strings in it, I would expect it to have some sort of wildcard semantic, but it doesn't really. I wonder if Hyatt remembers what it's for?
Oddly enough anyQName was added by hyatt! Just hijacked by SVG for null purposes. :(
(In reply to comment #9)
> Oddly enough anyQName was added by hyatt! Just hijacked by SVG for null purposes. :(
Indeed, I remembered that he added it because I reviewed that patch. But I do not remember the original purpose.
Committed <>
(Darn, forgot to change the bug title in the ChangeLog.)
(In reply to comment #11)
> Committed <>
> (Darn, forgot to change the bug title in the ChangeLog.)
This progression is visible in various webkit-perf graphs (bigger is better):[[8195914,2001,173262]]&sel=1350758073445.8484,1350797740907.5178,0.9000000000000004,6&displayrange=7&datatype=running
Nice work! | https://bugs.webkit.org/show_bug.cgi?format=multiple&id=99394 | CC-MAIN-2019-30 | refinedweb | 469 | 67.25 |
(For more resources related to this topic, see here.)
Who should use Cognos Workspace Advanced?
With Cognos Workspace Advanced, business users have one tool for creating advanced analyses and reports. The tool, like Query Studio and Analysis Studio, is designed for ease of use and is built on the same platform as the other report development tools in Cognos. Business Insight Advanced/Cognos Workspace Advanced is actually so powerful that it is being positioned more as a light Cognos Report Studio than as a powerful Cognos Query Studio and Cognos Analysis Studio.
Comparing to Cognos Query Studio and Cognos Analysis Studio
With so many options for business users, how do we know which tool to use? The best approach for making this decision is to consider the similarities and differences between the options available. In order to help us do so, we can use the following table:
As you can see from the table, all three products have basic charting, basic filtering, and basic calculation features. Also, we can see that Cognos Query Studio and Cognos Workspace Advanced both have ad hoc reporting capabilities, while Cognos Analysis Studio and Cognos Workspace Advanced both have ad hoc analysis capabilities. In addition to those shared capabilities, Cognos Workspace Advanced also has advanced charting, filtering, and calculation features.
Cognos Workspace Advanced also has a limited properties pane (similar to what you would see in Cognos Report Studio). Furthermore, Cognos Workspace Advanced allows end users to bring in external data from a flat file and merge it with the data from Cognos Connection. Finally, Cognos Workspace Advanced has free-form design capabilities. In other words, you are not limited in where you can add charts or crosstabs in the way that Cognos Query Studio and Cognos Analysis Studio limit you to the standard templates.
The simple conclusion after performing this comparison is that you should always use Cognos Workspace Advanced. While that will be true for some users, it is not true for all. With the additional capabilities come additional complexities. For your most basic business users, you may want to keep them using Cognos Query Studio or Cognos Analysis Studio for their ad hoc reporting and ad hoc analysis simply because they are easier tools to understand and use. However, for those business users with basic technical acumen, Cognos Workspace Advanced is clearly the superior option.
Accessing Cognos Workspace Advanced
I would assume now that, after reviewing the capabilities Cognos Workspace Advanced brings to the table, you are anxious to start using it. We will start off by looking at how to access the product.
The first way to access Cognos Workspace Advanced is through the welcome page. On the welcome page, you can get to Cognos Workspace Advanced by clicking on the option Author business reports:
This will bring you to a screen where you can select your package. In Cognos Query Studio or Cognos Analysis Studio, you will only be able to select non-dimensional and dimensional packages based on the tool you are using. With Cognos Workspace Advanced, because the tool can use both dimensional and non-dimensional packages, you will be prompted with packages for both.
The next way to access Cognos Workspace Advanced is through the Launch menu in Cognos Connection. Within the menu, you can simply choose Cognos Workspace Advanced to be taken to the same options for choosing a package.
Note, however, that if you have already navigated into a package, it will automatically launch Cognos Workspace Advanced using the very same package.
The third way to access Cognos Workspace Advanced is by far the most functional way. You can actually access Cognos Workspace Advanced from within Cognos Workspace by clicking on the Do More... option on a component of the dashboard:
When you select this option, the object will expand out and open for editing inside Cognos Workspace Advanced.
Then, once you are done editing, you can simply choose the Done button in the upper right-hand corner to return to Cognos Workspace with your newly updated object.
For the sake of showing as many features as possible in this chapter, we will launch Cognos Workspace Advanced from the welcome page or from the Launch menu and select a package that has an OLAP data source. For the purpose of following along, we will be using the Cognos BI sample package great_outdoors_8 (or Great Outdoors).
When we first access it, we are prompted to choose a package. For these examples, we will choose great_outdoors_8:
We are then brought to a splash screen where we can choose Create new or Open existing. We will choose Create new.
We are then prompted to pick the type of chart we want to create. As we will see from the following screenshot, our options are:
Blank: It starts us off with a completely blank slate
List: It starts us off with a list report
Crosstab: It starts us off with a crosstab
Chart: It starts us off with a chart and loads the chart wizard
Financial: It starts us off with a crosstab formatted like a financial report
Existing...: It allows us to open an existing report
We will choose Blank because we can still add as many of the other objects as we want to later on.
Exploring the drag-and-drop interface and the right-click menu
Now that we have a blank template to start with, let's explore how to build a complex report and analysis using the drag-and-drop interface. With Cognos Report Studio, there is a concept of objects that can be brought onto your palette. Here, we will explore how to add these objects by clicking, holding, and moving them to the location that we want to place them in.
Adding objects to your report
The tool has a few main components. Each is listed and shown in the subsequent screenshot.
Toolbars: These toolbars provide additional options for controlling your report.
Palette: This is what will show up on your report. By default, the palette will load with data.
Insertable Objects: These are the objects that can be added to your report.
Text Item: A tool with which you can define the text that is used.
Block: This object is used for spacing and controlling where items appear on the report. A block is an area where other items can be inserted.
Table: A table can be used to separate items in the report or for inserting your own text.
Query Calculation: This item can be used to create a calculation based on data after it is aggregated in the query.
Intersection (Tuple): This allows you to add a single point of data based on dimensions and measures that you add through a wizard.
Image: This allows you to add an image to your report.
Crosstab Space: This option will add a blank row or column into an existing crosstab.
Crosstab Space (with fact cells): This option will add a column or row into an existing crosstab with fact cells to allow you to add additional information.
List: This option will insert a list data holder for adding levels, properties, or measures (similar to what you see when using Cognos Query Studio).
Crosstab: This option will insert a crosstab data holder for adding dimensions and measures (similar to what you see when using Analysis Studio).
Chart: This will allow you to add one of the new chart types to the report. Data will still need to be added to it, but this gives you a wizard for selecting the chart type that you want.
Hyperlink: This will allow you to create a link to another location on the Web or within your internal environment. You often see these used to link to a common area on your intranet.
Date: This will allow you to add a dynamic date to the report. Each time the report is run, it will be updated with the current system date.
Time: This will allow you to add a dynamic time to the report. Each time the report is run, it will be updated with the current system time.
Page Number: This will allow you to add a page number to your report. If you have multiple pages, they will be automatically updated with the correct page numbers.
We should start by looking at the objects that we can add to our report. The Insertable Objects pane has two main tabs, Source and Toolbox. Source is the package that you are working on, and toolbox is a list of objects that can be added to the palette.
We will start off by building our report from the Toolbox area. Here we can see the list of insertable objects as seen in the following screenshot:
For the purpose of this book, we will start by adding a table to the palette and choosing two columns and two rows. We will also check the Maximize width option to maximize the width so that the table takes up the entire screen:
We will now proceed to add additional objects into each quadrant of our table. In the upper-left quadrant, we will add a chart. This will give us our first look at the new charts that are available for inserting. To do this, drag-and-drop a chart object into the upper-left quadrant of the table. You will then be prompted to choose what chart type to insert:
The options that are available within the Insert Chart window are:
Column: This option allows you to choose between various column chart options, including standard column charts, cylinder column charts, and cone column charts
Line: This option allows you to choose between various clustered line charts
Pie, Donut: This option allows you to choose between various pie and donut chart options
Bar: This option allows you to choose between various bar chart options, including standard bar charts, cylinder bar charts, and cone bar charts
Area: This option allows you to choose a chart that is similar to a line chart; however, the area under the line is filled in
Point: This option allows you to create charts with data points only (no connecting lines)
Combination: This option allows you to create charts that have columns and lines
Scatter, Bubble: This option allows you to create reports with points that are dynamic in size based on a second measure
Bullet: This option allows you to create charts that reflect a measure compared to a target
Gauge: This option allows you to create various forms of gauge charts
Pareto: This option allows you to create Pareto charts for tracking individual data points and running totals
Progressive: This option is a column or bar chart and a running total as well
Advanced: This option contains 3D charts, radar charts, and heat map charts
For our purposes, we are going to start off by adding Clustered Cylinder with 3-D Effects from within the Column chart options to the upper-left quadrant. We will then drag in a second chart to the upper right-hand quadrant. For this chart, we will choose a Horizontal Bullet chart.
Let's continue by dragging a List to the lower-left quadrant and a Crosstab to the lower right-hand quadrant. When we are done dragging in our objects, our palette should look like:
At this point, we have all of the objects that we want in our report. We need to start adding the data that we need to make this report meaningful.
Adding data to your reports
In order to add data to the report, we need to toggle back to the Source tab in the Insertable Objects area. When we do so, we will see a member tree for the package that we are working with by default. This is because the package is built from a multidimensional source; however, we could have used a relational source as well.
We can choose to change between views using the options at the top. These options are:
View Members Tree: This option will show the metadata as members that can be added for multidimensional analysis
View Metadata Tree: This option will show you the metadata and properties that can be added to the objects meant for reporting
Create Sets for Members (currently inserting individual members): This option will allow you to toggle between inserting sets and individual members from a members tree
Insert Single Member / Insert Children / Insert Member with Children: This option allows you to choose what parts of an object to insert when inserting a member
For our purposes, let's start by inserting members to the areas where we want to perform analysis. We will use members for the cylinder chart and the crosstab. We can begin by clicking on the cylinder chart (in the upper left-hand corner) in order to see our available drop points.
Here we have drop areas for Categories (x-axis), Default measure (y-axis), and Series (primary axis):
We will drag in the Years dimension from Years | Years to Categories (x-axis). We will also drag in the Products dimension from Products | Products to Series (primary axis). Finally, we will drag in Revenue from Measures | Revenue to Default measure (y-axis). Note that once we drag in our measure, the chart is populated with data.
Our final chart in the upper left-hand quadrant should look like the following screenshot:
We can now begin populating our crosstab. We want to depict the same information in our crosstab. Therefore, we will drag in Years from Years | Years to Columns, Products from Products | Products to Rows, and Revenue from Measures | Revenue to Measures. The end result in our lower left-hand quadrant will look like:
Now, we will toggle over to View Metadata Tree so that we can build our reporting objects:
We will start by adding data to the bullet chart. If we select the bullet chart, we can see what data can be added:
The options available are as follows:
Bullet Measure: This is the measure that we are tracking and are interested in.
Target Measure: This is a measure that represents what our goal is for the bullet measure.
Default: This is the default measure.
Series (matrix rows): This represents rows of bullet charts that can be shown. This will do the same thing as thing as Categories until there are items dropped inuntil there are items dropped into both areas.
Categories (matrix columns): This represents columns of data that can be shown. This will do the same thing as thing as Categories until there are items dropped inuntil there are items dropped into both areas.
We are going to drag in Revenue from Measures | Revenue to Bullet Measure and Sales target from Measures | Sales target to Target Measure. When we are done with this, we will see data. To finalize our view, we drag in Product line from Products | Products | Product line to Categories (matrix columns). Our end result for the chart in the upper right-hand quadrant will look like:
Finally, we will build out our list report by dragging in Product line, Revenue, and Sales target as columns in the lower left-hand quadrant. Our list will look like this with the new items added:
We have now added to our report the data that we want to report on and analyze.
Drilling down
The key feature needed in order to perform an analysis is a drill down. Luckily, the hard work is done on the backend during the creation of the multidimensional data source. All we have to do is let the tool know that it is ok to allow drilling. This is accomplished from the Data menu under Drill Options....:
We are given two basic drilling options from this menu. We can choose Allow drill-up and drill-down, which we will be sure to check now for our reporting purposes. We can also choose Allow this report to be a package-based drill-through source. This means that, if there are drill-throughs defined in the package, we can access them.
With our drill-down enabled, let's go ahead and run the report for the first time by choosing the blue play button
at the top:
Our end result will look like:
This now looks like a report. However, because we have drill-up and drill-down enabled, we can click on any component of the report and drill to the next level of detail. We can also right-click to be taken to a menu that allows us to choose Drill Down, Drill Up, or Go To (drill through):
As you can see, this menu also allows us to download the chart (saves the chart as an image), read the glossary (provides definitions for some items), or view lineage (traces the item selected back to the source).
Creating calculations
We can now begin further enhancing our report and analysis with calculations. We are going to start off by adding a calculation to our list report that is in the lower left-hand quadrant.
First, we will highlight Revenue and Sales target.
Then, we will right-click to bring up our right-click menu and choose Calculate and then Custom. From there, we can choose % Difference from the drop-down list.
We can then choose % Difference (Sales target, Revenue). This will essentially give us a variance calculation.
We can choose to provide a different name for the default name as well. We will go ahead and name this one Variance. Once that is complete, you will see in the following screenshot, that it automatically formats the new column as a percentage:
In order to create a more complex calculation, you have to right-click on your new calculated query item and choose Edit Query Expression... from the list of available options. From this menu, you can freeform most calculations that Cognos BI supports.
In addition, the Functions tab will provide common functions, and each will show a tip if you click on it:
This is sort of a trick for getting the most out of Cognos Workspace Advanced. The default calculation menu will show only basic calculations; however, you are able to create more advanced calculations by editing the query in this way.
Understanding the other buttons
Now that we have covered the basics, it is important to understand our other options on the toolbar. Let's go from left to right. The first few buttons are all geared toward saving, opening, cutting, copying, pasting, and report-wide undo and redo functionality.
The buttons in the previous screenshot are:
New: This option will allow you to create a new report.
Open: This option will allow you to open an existing report.
Save: This option will allow you to save the report for future use or to be shared.
Cut: This option allows you to copy an item and move it to another place. It also erases the item from the original location.
Copy: This option allows you copy an item and create a duplicate for it in a new location.
Paste: This option is used to finalize the copy or cut actions with the creation of the new version of the item that was copied or cut.
Delete: This option will remove an item that is selected.
Undo: This option will reverse an action that was done.
Redo: This option will redo an action that was undone.
The next option is to run with different run options:
The next section has all the standard options that we have seen in the other two business-user studios:
The options are listed as follows:
Filter: This option will allow you to create a filter that limits the data being retrieved
Suppress: This option will allow you to remove rows or columns with zeros
Explore: This option allows you to perform analysis actions on your data
Sort: This option allows you to choose sorting options for your data
Summarize: This option will allow you to create summary aggregations for your various measures
Calculate: This option will allow you to create a calculation on any of your items
Group: This option will allow you to bundle within a data item
Pivot: This option will allow you to toggle from a list to a crosstab
Section: This option will allow you to create sections based on the contents of a data item or dimension
Swap: This option will allow you to swap columns and rows on a crosstab
Chart: This option will allow you to create a chart on your report
Layout: This option will allow you to choose a standard layout template for your report
In addition to these toolbars, there is a wealth of capabilities available in the menu bar and the formatting bar that can be explored for further enhancing your Cognos Workspace Advanced reports and analyses.
Using external data
Another way to expand the capabilities of this product is to bring in external data. External data is data that is not already included in the Cognos BI package that is being used. External data is typically some form of flat file (such as a CSV file). The ability to incorporate external data is a new feature in IBM Cognos Business Intelligence v10.x that is available only in Cognos Workspace Advanced and Cognos Report Studio.
In order to incorporate external data into your report, you will need to select the
icon in the Insertable Objects area that represents the external data option.
Once you click on the icon, you will be prompted with the External Data wizard:
This wizard will walk you through the process of creating a connection between data that is outside of Cognos BI and data that is within a package. The first step is to select the data that you want to bring in. This is done very simply by clicking on the Browse... button and finding the file with the information that you want to bring in:
You can then choose which columns from the file to bring in and what to name the new namespace that you are adding.
After you click on Next>, you will be able to choose how to perform your data mapping.
You can choose an existing report (this is typically the report that you are working on; however, that is not required) to map to the external data.
For our purposes, we will choose Product Revenue from the Go Sales and Retailers folder. Here we will create a new link between the external data and the existing report by clicking on the New Link button. We will then click on Next> again:
In the next section of the wizard, we are prompted to select the data attributes for the data that we have. This is possibly the most important part of this entire process.
Unfortunately, if we select the wrong data type for an item that is being linked to while on this screen, it can affect our ability to create the relationship, and we will get errors when trying to pull data from both locations at once. Once we have all the data type options set correctly, we can click on Next> and move on to the final step:
In the final step, we can choose the cardinality that we want for the relationship that ties in our external data. When we are done, we can click on Finish, and it will take us to a place where we can name our new package and publish it to a location of our choice:
We have now officially created a new package with external data.
The business case for Cognos Workspace Advanced
Cognos Workspace Advanced was designed for business users that want it all. So, if you have users that need both query creation capabilities and analysis capabilities, this is the tool for them. Cognos Workspace Advanced adds a tool that provides flashy graphics and an easy-to-use interface to make these tasks easier than ever before. This tool also gives the IT group the ability to better enable their business users to do the things that they have historically done for them. For the world of business intelligence, this tool changes the game for those users.
As an IT group, the best way to convince the business of the value of this tool is to simply show it to them and then allow them to use it. As they find themselves more empowered to create their own reports and develop their own analysis, they will realize that this product decreases the time from question to answer for your business users.
Summary
Business Cognos Workspace Advanced adds the ability to perform queries and create analyses from one central location. It also further enhances the new Cognos Workspace product by allowing users to take an object from Cognos Workspace and further enhance it within this development product. In this article, we have compared Cognos Workspace Advanced to Query Studio and Analysis Studio. We have also looked at how to use the tool both from a basic and advanced perspective. With Cognos Workspace Advanced, you now have a one-stop shop for reporting and analysis for business users.
Resources for Article :
Further resources on this subject:
- Reporting Planning Data in IBM Cognos 8: Publish and BI Integration [Article]
- Feeds in IBM Lotus Notes 8.5 [Article]
- How to Set Up IBM Lotus Domino Server [Article] | https://www.packtpub.com/books/content/ibm-cognos-workspace-advanced | CC-MAIN-2016-50 | refinedweb | 4,235 | 65.35 |
SOME LIKE IT HOT & SOME LIKE IT HOTTER . . . as our intrepid reporter Robert Kiener discovered. Why do chile aficionados constantly seek out ever hotter chiles? "The easy answer," says one expert, "is that we're crazy."
Walking through the massive, bustling Abasto Market in Oaxaca City, Mexico, I am overwhelmed by the colorful, pungent mountains of fruit, vegetables, meat, fish, and fowl. As I pick my way over slippery floors and dodge scurrying customers, vendors offer me everything from exotic three-cow's-milk cheeses to the local specialty, chapulines (fried grasshoppers). I refuse to let them put me off my quest. I have come to Oaxaca City, one of Mexico's culinary centers, in my search for some of the world's hottest chile peppers.
RD | MONTH 2007
My guide, noted Mexican cooking expert and cookbook author Susana Trilling, leads me around a corner and we come upon stall after stall of chile peppers in every conceivable shape, size and color. They are piled high on wooden crates, spill out of plastic laundry baskets and overflow from brown burlap sacks. "We have a wider variety of chile peppers here than almost any other place in the world," Trilling declares. Indeed, chile pepper fanatics from around the world, known as "chile-heads," make pilgrimages to this market to worship at the altar of the humble, but addictive, fruit.

Trilling greets a chile pepper vendor like an old friend and begins pointing out the different varieties: dark brown chipotles, the spherical cascabel, the dried red guajillo, green and red serranos, the ten-centimetre-long mirasol, bullet-shaped piquins, the jalapeno. At another stall she shows me some habaneros – lantern-shaped, bright orange chiles that look beguilingly beautiful. As I reach for one, Trilling grabs my arm. "Don't touch that one unless you're wearing gloves," she tells me. "It's dangerously hot." If you don't wear gloves and absentmindedly rub your eyes, it will be very painful, she adds. I ask what would happen if I bit off a small sample. "It will rip off the top of your head!" she replies. Her tone and her suddenly stern, deep brown eyes convince me she's not kidding. Despite the warning, I won't be able to resist the lure of the habaneros for long.
It's a hot, hot world: harvesting chiles in Ahmedabad, India (above left), and selling them in Hanoi, Vietnam (above), and Sulawesi, Indonesia.
It is estimated that one in four people eats chiles every day. They are an integral part of diets around the world, from Mexico and the Middle East to Thailand and Korea. Chiles were first domesticated in South America, in what is now Bolivia, some 6000 years ago. Incas called the spicy fruit aji, and the Aztecs changed it to chilli. Thanks to Christopher Columbus, the newly christened chiles spread to Europe in the late 1400s. Portuguese and Spanish traders introduced them to Africa and Asia, where they were such a hit that locals soon viewed the
import as their own. Today, because of their popularity and ease of cross-pollination, there are thousands of varieties around the world. Thais eat more hot peppers – five grams per person per day – than anyone else in the world. India produces more chile peppers – over 2 millions acres worth – than any other country. Hungary’s famously pungent condiment, paprika (from the Latin for “pepper”), is derived from hot sweet red peppers. Paprika seeds are considered to be so valuable that the Hungarian government forbids their
PHOTOS: (OPENER) © CLINTON HUSSEY; (ABOVE) © AMIT DAVE/REUTERS/CORBIS, © BRUNO MORANDI, GETTY IMAGES (X2)
export. Chile peppers have long been a part of folklore around the world. They were used to deter vampires and werewolves in Eastern Europe and thought to ward off the “evil eye” by South and Central Americans. In northern Mexico chiles are still used in potions meant to make an enemy ill, and as a cure for hangovers.
RD
I
Everywhere you go, the first rule of chiles seems to be: the hotter the better. Recently, near the ancient Punjab town of Multan, a Pakistani friend told me over a dinner that featured an eye-watering variety of fiery chiles, “There’s no such thing as a chile that’s too hot.” Why do chile-heads crave hotter and hotter chiles? “The easy answer is that we’re crazy,” says Dave DeWitt (also known as the “Pope of Peppers”), author of a slew of chile pepper-related books and founding editor of Chili Pepper Magazine. But there’s also a physiological explanation. Your body releases endorphins to counter the pain brought on by a red-hot chile. “Those endorphins create a mild euphoria or a mini-high or a pepper-high,” says DeWitt. The Aztecs labeled chile peppers on a wonderfully descriptive scale that included “hot,” “brilliant hot” and “runaway hot.” More recently, scientists developed the Scoville scale to do the same job. Mild jalapenos can range up to 5,000 Scoville units, while the frighteningly hot habanero (the Mexican chile Susana Trilling warned me about) can top 300,000 Scovilles.
SOME LIKE IT HOT
MONTH 2007
Susana Trilling explores the many varieties of chiles at the Etla village market close to Oaxaca City, Mexico.
To see firsthand how powerful some peppers can be, I travel with Trilling some five hours west of Oaxaca City to Chalcatongo. “We’re going to meet Oaxaca’s ‘Queen of the Chiles’, Annalyse Ramirez Martinez,” Trilling explains as our car climbs higher and higher into the Sierra Madre Mountains. “She’s an expert cook and chile pepper dishes are her specialty.”
PHOTO: © ROBERT KIENER
When we arrive in the bustling market town, we buy a bagful of especially pungent chiles, the small, dried red chile costeno. They will be the main ingredient in a dish, salsa de barbacoa, Annalyse is making specially for us. Chile peppers get their kick from a chemical called capsaicin located in the inside wall of the pepper pod. When we arrive at Annalyse’s modest wood frame home, Trilling explains how potent capsaicin can be. The Incas burned red peppers to temporarily blind invading Spaniards, and the Mayans punished offenders by forcing them to inhale the acrid smoke of burning peppers. These days, police forces fight off bad guys with pepper spray and tear gas made from capsaicin. “And we roast some chile peppers to drive the snakes out of our homes,” adds petite, black-haired Annalyse. Suddenly, the deceptively mild-looking chile costeno, which had been bubbling on Annalyse’s gas range, starts smoking. The small kitchen fills with pungent, acrid smoke. My eyes immediately start watering and I am soon convulsed by fits of coughing. I can bear it for less than a minute before I dash out into the courtyard, drop to my knees and breathe in deep gulps of fresh air. And I didn’t even eat one of them, I think as I regain my composure. The story of the chile is not all pain and tears. Scientists have discovered that a raw pepper has more vitamin C than an orange. This year University
of Nottingham researchers published findings that show capsaicin may be able to kill cancerous tumors. Other studies suggest capsaicin reduces cholesterol and pain associated with arthritis, diabetes, and muscle and joint problems. Scientists at Germany’s Max Planck Institute claim chile peppers may prevent the formation of blood clots. Hot peppers can also ease the symptoms of the common cold by breaking up congestion and reducing mucus. “It seems like there’s no end to chile peppers’ versatility,” says Danise Coon, program coordinator of New Mexico State University’s Chili Pepper Institute. The institute recently hit the headlines when the Guinness Book of World Records confirmed that one of its professors, Paul Bosland, had measured the heat level of the world’s hottest chile pepper using the scientifically developed Scoville Scale. The Bhut Jolokia, with a rating of just over one million, is nearly twice as hot as the former record holder, the Red Savina pepper. My trip to Mexico is nearing its end, and I can put off the inevitable no longer. It’s time to sample Mexico’s hottest chile pepper, the habanero. A plastic bag containing a half dozen of them sits on the desk of my Oaxaca City hotel room. Recalling Trilling’s advice not to touch them with my bare hands, I pull on disposable plastic gloves. I’ve learned that water is useless in putting out the fire that hot chiles light
in your mouth. Instead, I have a quart of milk and some yogurt nearby; the proteins in milk will help neutralize the capsaicin. Like a surgeon, I carefully cut a tiny sliver of the flesh from one of the four-centimetre-long pods. Using a toothpick, I place it on my tongue… A bomb instantly goes off in my mouth. My tongue is on fire. My upper palate feels like it’s been hit with a flamethrower. “HOT, HOT, HOT!” I scream. My eyes water; I begin gasping for breath. Beads of perspiration roll down my forehead. Surely, I think, my head will soon explode. How can a
tiny slice of chile pepper pack such a punch? I gulp down the milk and some yogurt but it is several hours before my mouth returns to something resembling normal. I don’t remember an endorphin rush. Later that day I decide to reward myself with a bowl of mango ice cream at a stall in the Benito Juarez Market. The five-alarm habanero seems a distant memory. Then, as I raise the spoon to my lips, the stall’s owner holds out a small bowl and asks, “Would you like chile sauce on that?”
Some like it hot | https://issuu.com/robertkiener/docs/chile_peppers | CC-MAIN-2021-31 | refinedweb | 1,667 | 70.13 |
NAME
zip_set_archive_comment − set zip archive comment
LIBRARY
libzip (-lzip)
SYNOPSIS
#include <zip.h>

int
zip_set_archive_comment(zip_t *archive, const char *comment, zip_uint16_t len);
DESCRIPTION
The zip_set_archive_comment() function sets the comment for the entire zip archive. If comment is NULL and len is 0, the archive comment will be removed. comment must be encoded in ASCII or UTF-8.
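The semantics described above (set a comment, read it back, remove it by setting an empty one) are common to most zip libraries. As a runnable illustration — an analogy using Python's standard zipfile module, not libzip itself — the same operations look like this:

```python
import io
import zipfile

# Build a small archive in memory and attach an archive comment
# (the Python analogue of zip_set_archive_comment()).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "hello")
    zf.comment = b"written by example"  # must be ASCII/UTF-8 bytes

# Read the comment back from the finished archive.
with zipfile.ZipFile(io.BytesIO(buf.getvalue()), "r") as zf:
    print(zf.comment)  # b'written by example'

# Setting an empty comment removes it, much like passing NULL/0 to libzip.
with zipfile.ZipFile(buf, "a") as zf:
    zf.comment = b""
```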
RETURN VALUES
Upon successful completion 0 is returned. Otherwise, −1 is returned and the error information in archive is set to indicate the error.
ERRORS
zip_set_archive_comment() fails if:
SEE ALSO
libzip(3), zip_file_get_comment(3), zip_file_set_comment(3), zip_get_archive_comment(3)
HISTORY
zip_set_archive_comment() was added in libzip 0.7. In libzip 0.11 the type of len was changed from int to zip_uint16_t.
AUTHORS
Dieter Baron <dillo@nih.at> and Thomas Klausner <tk@giga.or.at>
I wanted to continue with the enhancements to my Mix09 talk “building business applications with Silverlight 3”. In this section I am going to show how to get data from a REST based web services rather than directly using Entity Framework or Linq to Sql.
Let’s focus on the cloud source of data. We will use the same sample from the previous parts and change only the data access part to go against ADO.NET Data Services as the data store.
This pattern might be useful if you do not control your database directly and need to go through a services layer to access it.
Start with the MyApp project from one of the previous parts. In the server project, delete the northwind.mdf and the Northwind.edmx files.
Add a new project to the solution to hold the data service. I used an ASP.NET Web Application project and called it MyApp.Service…
Next, add the Northwind.mdf file to the app_data folder of this new project and create an entity framework model as we showed in part 2.
Now let’s add our REST based service.
Here we set up the DataService to use the Entity Framework provider, then we enable access… notice the “*” is more of a demo-mode sort of thing; the best practice here is to list the tables directly.
public class SuperEmployeeDataService : DataService<NORTHWNDEntities>
{
public static void InitializeService(IDataServiceConfiguration config)
{
config.SetEntitySetAccessRule("*", EntitySetRights.All);
}
}
Now we are good to go with our service. To test it, set it as the startup project and hit F5…
We are all set to consume this now from RIA Services… Notice all the above is standard ADO.NET Data Services work… Learn more about ADO.NET Data Services.
Now, let’s get into the meat with RIA Services.. Go back to the web project and add a services reference to the service we just created.
First let’s go into SuperEmployeeDomainService.metadata.cs and tweak this class to work with our new service reference. Basically you just need to make sure it is a partial class of the proxy SuperEmployee class from our service reference. Do this by changing the namespace to MyApp.Web.SuperEmployeeDataServiceReference. This class gives you a chance to add validation metadata and other information to make the client consumption more clean. We will validate these on the client and before the data gets pushed back to the service. Some examples:
[ReadOnly(true)]
[Key]
public int EmployeeID;
[RegularExpression("^(?:m|M|male|Male|f|F|female|Female)$",
ErrorMessage = "Gender must be 'Male' or 'Female'")]
public string Gender;
[Range(0, 10000,
ErrorMessage = "Issues must be between 0 and 1000")]
public Nullable<int> Issues;
Now we need to make our SuperEmployeeDomainContext work against this new service.
[EnableClientAccess()]
public class SuperEmployeeDomainService : DomainService
{
NORTHWNDEntities Context = new NORTHWNDEntities(
new Uri(""));
Notice here we use the DomainService base class rather than the EFDomainService helper…
Methods such as GetSuperEmployee() work with no change! But they are now going over our REST based service to get to the database.
public IQueryable<SuperEmployee> GetSuperEmployees()
{
return this.Context.SuperEmployeeSet
.Where(emp=>emp.Issues>100)
.OrderBy(emp=>emp.EmployeeID);
}
UpdateSuperEmployee took a few more tweaks, as we need to propagate changes over to the instances that the ADO.NET client library is tracking.
public void UpdateSuperEmployee(SuperEmployee currentSuperEmployee)
{
var q = from emp in Context.SuperEmployeeSet
where emp.EmployeeID == currentSuperEmployee.EmployeeID
select emp;
var e = q.FirstOrDefault();
e.Name = currentSuperEmployee.Name;
e.Gender = currentSuperEmployee.Gender;
e.Issues = currentSuperEmployee.Issues;
e.LastEdit = currentSuperEmployee.LastEdit;
e.Origin = currentSuperEmployee.Origin;
e.Publishers = currentSuperEmployee.Publishers;
e.Sites = currentSuperEmployee.Sites;
Context.UpdateObject(e);
}
Finally, we need to override PersistChangeSet()… This gives us a chance to call SaveChanges() after all the changes in the change set have been processed (the call to the base method does that).
protected override void PersistChangeSet(ChangeSet changeSet)
{
base.PersistChangeSet(changeSet);
this.Context.SaveChanges();
}
Hit F5 and we are done! We now have exactly the same app, but with data coming from a service rather than directly from a database.
Also notice how easy this was to move from one model to another… That is another powerful reason to follow the RIA Services model: it enables you to more easily change backend data sources without changing a lot of code on the client.
Managed to get to page 8 of 7 when browsing records.
Hi,
That's a cool post. I think you have missed the screenshots of adding the RIA services.
Thanks,
Thani
I’m wondering if there is a clever way to regenerate the MyDomainService.metadata.cs file and preserve any attributes when my entity framework model changes (I add a table or column)? I have 100s of tables in my model. | https://blogs.msdn.microsoft.com/brada/2009/07/21/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-7-ado-net-data-services-based-data-store/ | CC-MAIN-2017-13 | refinedweb | 787 | 57.47 |
Makes 0% sense.
I understood and still understand how to make vectors, but now it doesn't work, what the hell?
push_back is acting like a dick; it gives me an error when I try to call it, even though it's been called like this before, pfft.. It's weird.
Yes we know this works, and push_back works.
Code:
#include <string>
#include <vector>
std::vector<int> vec;
Code:
vec.push_back(12);
But this doesn't work, and I even used vectors before, and even used this; now it doesn't work. Lol C++ is REEEEAALLY GREAT
Caching is typically the most effective way to boost an application's performance.
For dynamic websites, when rendering a template, you'll often have to gather data from various sources (like a database, the file system, and third-party APIs, to name a few), process the data, and apply business logic to it before serving it up to a client. Any delay due to network latency will be noticed by the end user.
For instance, say you have to make an HTTP call to an external API to grab the data required to render a template. Even in perfect conditions this will increase the rendering time which will increase the overall load time. What if the API goes down or maybe you're subject to rate limiting? Either way, if the data is infrequently updated, it's a good idea to implement a caching mechanism to prevent having to make the HTTP call altogether for each client request.
This article looks at how to do just that by first reviewing Django's caching framework as a whole and then detailing step-by-step how to cache a Django view.
Dependencies:
- Django v3.0.5
- django-redis v.4.11.0
- Python v3.8.2
- python-memcached v1.59
- Requests v2.23.0
Objectives
By the end of this tutorial, you should be able to:
- Explain why you may want to consider caching a Django view
- Describe Django's built-in options available for caching
- Cache a Django view with Redis
- Load test a Django app with Apache Bench
- Cache a Django view with Memcached
Django Caching Types
Django comes with several built-in caching backends, as well as support for a custom backend.
The built-in options are:
- Memcached: Memcached is a memory-based, key-value store for small chunks of data. It supports distributed caching across multiple servers.
- Database: Here the cache fragments are stored in a database. A table for that purpose can be created with one of the Django's admin commands. This isn't the most performant caching type, but it can be useful for storing complex database queries.
- File system: The cache is saved on the file system, in separate files for each cache value. This is the slowest of all the caching types, but it's the easiest to set up in a production environment.
- Local memory: Local memory cache, which is best-suited for your local development or testing environments. While it's almost as fast as Memcached, it cannot scale beyond a single server, so it's not appropriate to use as a data cache for any app that uses more than one web server.
- Dummy: A "dummy" cache that doesn't actually cache anything but still implements the cache interface. It's meant to be used in development or testing when you don't want caching, but do not wish to change your code.
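Whichever backend you pick, it is wired up through the CACHES setting in settings.py. As a minimal sketch (the key names are Django's; the local-memory backend shown here is also Django's default):

```python
# settings.py -- selecting a cache backend via CACHES;
# LocMemCache is the local-memory backend, suitable for development only.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "unique-snowflake",  # optional: names this particular locmem cache
    }
}
```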
Django Caching Levels
Per-site cache
This is the easiest way to implement caching in Django. To do this, all you'll have to do is add two middleware classes to your settings.py file:
MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',  # NEW
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',  # NEW
]
The order of the middleware is important here. UpdateCacheMiddleware must come before FetchFromCacheMiddleware. For more information take a look at Order of MIDDLEWARE from the Django docs.
You then need to add the following settings:
CACHE_MIDDLEWARE_ALIAS = 'default'  # which cache alias to use
CACHE_MIDDLEWARE_SECONDS = '600'  # number of seconds to cache a page for (TTL)
CACHE_MIDDLEWARE_KEY_PREFIX = ''  # should be used if the cache is shared across multiple sites that use the same Django instance
Although caching the entire site could be a good option if your site has little or no dynamic content, it may not be appropriate to use for large sites with a memory-based cache backend since RAM is, well, expensive.
Per-view cache
Rather than wasting precious memory space on caching static pages or dynamic pages that source data from a rapidly changing API, you can cache specific views. This is the approach that we'll use in this article. It's also the caching level that you should almost always start with when looking to implement caching in your Django app.
You can implement this type of cache with the cache_page decorator either on the view function directly or in the path within URLConf:

from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def your_view(request):
    ...

# or

from django.views.decorators.cache import cache_page

urlpatterns = [
    path('object/<int:object_id>/', cache_page(60 * 15)(your_view)),
]
The cache itself is based on the URL, so requests to, say, object/1 and object/2 will be cached separately.
It's worth noting that implementing the cache directly on the view makes it more difficult to disable the cache in certain situations. For example, what if you wanted to allow certain users access to the view without the cache? Enabling the cache via the URLConf provides the opportunity to associate a different URL to the view that doesn't use the cache:

from django.views.decorators.cache import cache_page

urlpatterns = [
    path('object/<int:object_id>/', your_view),
    path('object/cache/<int:object_id>/', cache_page(60 * 15)(your_view)),
]
Template fragment cache
If your templates contain parts that change often based on the data you'll probably want to leave them out of the cache.
For example, perhaps you use the authenticated user's email in the navigation bar in an area of the template. Well, If you have thousands of users then that fragment will be duplicated thousands of times in RAM, one for each user. This is where template fragment caching comes into play, which allows you to specify the specific areas of a template to cache.
To cache a list of objects:
{% load cache %}

{% cache 500 object_list %}
  <ul>
    {% for object in objects %}
      <li>{{ object.title }}</li>
    {% endfor %}
  </ul>
{% endcache %}
Here, {% load cache %} gives us access to the cache template tag, which expects a cache timeout in seconds (500) along with the name of the cache fragment (object_list).
Low-level cache API
For cases where the previous options don't provide enough granularity, you can use the low-level API to manage individual objects in the cache by cache key.
For example:
from django.core.cache import cache

def get_context_data(self, **kwargs):
    context = super().get_context_data(**kwargs)
    objects = cache.get('objects')
    if objects is None:
        objects = Objects.all()
        cache.set('objects', objects)
    context['objects'] = objects
    return context
In this example, you'll want to invalidate (or remove) the cache when objects are added, changed, or removed from the database. One way to manage this is via database signals:
from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

@receiver(post_delete, sender=Object)
def object_post_delete_handler(sender, **kwargs):
    cache.delete('objects')

@receiver(post_save, sender=Object)
def object_post_save_handler(sender, **kwargs):
    cache.delete('objects')
With that, let's look at some examples.
Project Setup
Clone down the base project from the cache-django-view repo, and then check out the base branch:
$ git clone --branch base --single-branch
$ cd cache-django-view
Create (and activate) a virtual environment and install the requirements:
$ python3.8 -m venv venv
$ source venv/bin/activate
(venv)$ pip install -r requirements.txt
Apply the Django migrations, and then start the server:
(venv)$ python manage.py migrate
(venv)$ python manage.py runserver
Navigate to in your browser of choice to ensure that everything works as expected.
You should see:
Take note of your terminal. You should see the total execution time for the request:
Total time: 2.23s
This metric comes from core/middleware.py:
import logging
import time

def metric_middleware(get_response):
    def middleware(request):
        # Get beginning stats
        start_time = time.perf_counter()

        # Process the request
        response = get_response(request)

        # Get ending stats
        end_time = time.perf_counter()

        # Calculate stats
        total_time = end_time - start_time

        # Log the results
        logger = logging.getLogger('debug')
        logger.info(f'Total time: {(total_time):.2f}s')
        print(f'Total time: {(total_time):.2f}s')

        return response
    return middleware
Take a quick look at the view in apicalls/views.py:
import datetime

import requests
from django.views.generic import TemplateView

BASE_URL = ''
This view makes an HTTP call with requests to httpbin.org. To simulate a long request, the response from the API is delayed for two seconds. So, it should take about two seconds for to render not only on the initial request, but for each subsequent request as well. While a two second load is somewhat acceptable on the initial request, it's completely unacceptable for subsequent requests since the data is not changing. Let's fix this by caching the entire view using Django's Per-view cache level.
Workflow:
- Make full HTTP call to httpbin.org on the initial request
- Cache the view
- Subsequent requests will then pull from the cache, bypassing the HTTP call
- Invalidate the cache after a period of time (TTL)
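Stripped of Django, the workflow above is plain cache-aside with a TTL. It can be sketched with a toy in-memory cache (this is an illustration only, not Django's cache API):

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-key expiry (illustration only)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL elapsed -> invalidate
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

cache = TTLCache()
calls = 0

def fetch_data():
    """Stands in for the slow HTTP call to the external API."""
    global calls
    calls += 1
    return {"status": "ok"}

def view():
    data = cache.get("apicall")
    if data is None:  # miss: do the expensive work, then cache the result
        data = fetch_data()
        cache.set("apicall", data, ttl=300)
    return data

view()
view()
print(calls)  # 1 -> only the first request hit the "API"
```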
Baseline Performance Benchmark
Before adding cache, let's quickly run a load test to get a benchmark baseline using Apache Bench, to get rough sense of how many requests our application can handle per second.
Apache Bench comes pre-installed on Mac.
If you're on a Linux system, chances are it's already installed and ready to go as well. If not, you can install via APT (
apt-get install apache2-utils) or YUM (
yum install httpd-tools).
Windows users will need to download and extract the Apache binaries.
Add Gunicorn to the requirements file:
gunicorn==20.0.4
Kill the Django dev server and install Gunicorn:
(venv)$ pip install -r requirements.txt
Next, serve up the Django app with Gunicorn (and four workers) like so:
(venv)$ gunicorn core.wsgi:application -w 4
In a new terminal window, run Apache Bench:
$ ab -n 100 -c 10
This will simulate 100 connections over 10 concurrent threads. That's 100 requests, 10 at a time.
Take note of the requests per second:
Requests per second: 1.69 [#/sec] (mean)
Keep in mind that Django Debug Toolbar will add a bit of overhead. Benchmarking in general is difficult to get perfectly right. The important thing is consistency. Pick a metric to focus on and use the same environment for each test.
Kill the Gunicorn server and spin the Django dev server back up:
(venv)$ python manage.py runserver
With that, let's look at how to cache a view.
Caching a View
Start by decorating the ApiCalls view with the @cache_page decorator like so:
import datetime

import requests
from django.utils.decorators import method_decorator  # NEW
from django.views.decorators.cache import cache_page  # NEW
from django.views.generic import TemplateView

BASE_URL = ''

@method_decorator(cache_page(60 * 5), name='dispatch')  # NEW
Since we're using a class-based view, we can't put the decorator directly on the class, so we used a method_decorator and specified dispatch (as the method to be decorated) for the name argument.
The cache in this example sets a timeout (or TTL) of five minutes.
Alternatively, you could set this in your settings like so:
# Cache time to live is 5 minutes
CACHE_TTL = 60 * 5
Then, back in the view:
import datetime

import requests
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from django.views.generic import TemplateView

BASE_URL = ''

CACHE_TTL = getattr(settings, 'CACHE_TTL', DEFAULT_TIMEOUT)

@method_decorator(cache_page(CACHE_TTL), name='dispatch')
Next, let's add a cache backend.
Redis vs Memcached
Memcached and Redis are in-memory, key-value data stores. They are easy to use and optimized for high-performance lookups. You probably won't see much difference in performance or memory usage between the two. That said, Memcached is slightly easier to configure since it's designed for simplicity and ease of use. Redis, on the other hand, has a richer set of features so it has a wide range of use cases beyond caching. For example, it's often used to store user sessions or as message broker in a pub/sub system. Because of its flexibility, unless you're already invested in Memcached, Redis is much better solution.
For more on this, review this StackOverflow answer.
Next, pick your data store of choice and let's look at how to cache a view.
Option 1: Redis with Django

To use Redis with Django, we first need to install django-redis.
Add it to the requirements.txt file:
django-redis==4.11.0
Install:
(venv)$ pip install -r requirements.txt
Next, update the CACHES setting in core/settings.py to use the django-redis backend.
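The original settings snippet does not appear in this excerpt; a typical django-redis configuration — consistent with the redis://127.0.0.1:6379/1 location referenced later in the article — looks like this (assumed, not copied from the post):

```python
# core/settings.py -- typical django-redis setup (sketch)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # Redis database number 1
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
```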
With the server up and running, navigate to.
The first request will still take about two seconds. Refresh the page. The page should load almost instantaneously. Take a look at the load time in your terminal. It should be close to zero:
Total time: 0.01s
Curious what the cached data looks like inside of Redis?
Run Redis CLI in interactive mode in a new terminal window:
$ redis-cli
You should see:
127.0.0.1:6379>
Run ping to ensure everything works properly:

127.0.0.1:6379> ping
PONG
Turn back to the settings file. We used Redis database number 1: 'LOCATION': 'redis://127.0.0.1:6379/1'. So, run select 1 to select that database and then run keys * to view all the keys:
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) ":1:views.decorators.cache.cache_header..17abf5259517d604cc9599a00b7385d6.en-us.UTC"
2) ":1:views.decorators.cache.cache_page..GET.17abf5259517d604cc9599a00b7385d6.d41d8cd98f00b204e9800998ecf8427e.en-us.UTC"
We can see that Django put in one header key and one cache_page key.
To view the actual cached data, run the get command with the key as an argument:
127.0.0.1:6379[1]> get ":1:views.decorators.cache.cache_page..GET.17abf5259517d604cc9599a00b7385d6.d41d8cd98f00b204e9800998ecf8427e.en-us.UTC"
You should see something similar to:
"\x80\x05\x95D\x04\x00\x00\x00\x00\x00\x00\x8c\x18django.template.response\x94\x8c\x10TemplateResponse \x94\x93\x94)\x81\x94}\x94(\x8c\x05using\x94N\x8c\b_headers\x94}\x94(\x8c\x0ccontent-type\x94\x8c\ x0cContent-Type\x94\x8c\x18text/html; charset=utf-8\x94\x86\x94\x8c\aexpires\x94\x8c\aExpires\x94\x8c\x1d Fri, 01 May 2020 13:36:59 GMT\x94\x86\x94\x8c\rcache-control\x94\x8c\rCache-Control\x94\x8c\x0 bmax-age=300\x94\x86\x94u\x8c\x11_resource_closers\x94]\x94\x8c\x0e_handler_class\x94N\x8c\acookies \x94\x8c\x0chttp.cookies\x94\x8c\x0cSimpleCookie\x94\x93\x94)\x81\x94\x8c\x06closed\x94\x89\x8c \x0e_reason_phrase\x94N\x8c\b_charset\x94N\x8c\n_container\x94]\x94B\xaf\x02\x00\x00 <!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <title>Home</title>\n <link rel=\"stylesheet\" href=\"\ "\n\">\n\n</head>\n<body>\n<div class=\"container\">\n <div class=\"pt-3\">\n <h1>Below is the result of the APICall</h1>\n </div>\n <div class=\"pt-3 pb-3\">\n <a href=\"/\">\n <button type=\"button\" class=\"btn btn-success\">\n Get new data\n </button>\n </a>\n </div>\n Results received!<br>\n 13:31:59\n</div>\n</body>\n</html>\x94a\x8c\x0c_is_rendered\x94\x88ub."
Exit the interactive CLI once done:
127.0.0.1:6379[1]> exit
Skip down to the "Performance Tests" section.
Option 2: Memcached with Django
Start by adding python-memcached to the requirements.txt file:
python-memcached==1.59
Install the dependencies:
(venv)$ pip install -r requirements.txt
Next, we need to update the settings in core/settings.py to enable the Memcached backend:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
Here, we added the MemcachedCache backend and indicated that Memcached should be running on our local machine on localhost (127.0.0.1) port 11211, which is the default port for Memcached.
Next, we need to install and run the Memcached daemon. The easiest way to install it, is via a package manager like APT, YUM, Homebrew or Chocolatey depending on your operating system:
# linux
$ apt-get install memcached
$ yum install memcached

# mac
$ brew install memcached

# windows
$ choco install memcached
Then, run it in a different terminal on port 11211:
$ memcached -p 11211

# test: telnet localhost 11211
For more information on installation and configuration of Memcached, review the official wiki.
Navigate to in our browser again. The first request will still take the full two seconds, but all subsequent requests will take advantage of the cache. So, if you refresh or press the "Get new data" button, the page should load almost instantly.
What's the execution time look like in your terminal?
Total time: 0.03s
Performance Tests
If we look at the time it takes to load the first request vs. the second (cached) request in Django Debug Toolbar, it will look similar to:
Also in the Debug Toolbar, you can see the cache operations:
Spin Gunicorn back up again and re-run the performance tests:
$ ab -n 100 -c 10
What are the new requests per second? It's about 36 on my machine!
Conclusion
In this article, we looked at the different built-in options for caching in Django as well as the different levels of caching available. We also detailed how to cache a view using Django's Per-view cache with both Memcached and Redis.
You can find the final code for both options, Memcached and Redis, in the cache-django-view repo.
--
In general, you'll want to look to caching when page rendering is slow due to network latency from database queries or HTTP calls.
From there, it's highly recommended to use a custom Django cache backend with Redis with a Per-view cache type. If you need more granularity and control, because not all of the data on the template is the same for all users or parts of the data change frequently, then jump down to the Template fragment cache or Low-level cache API.
The SAX interface.
Introduction to SAX2:
- A start tag occurs (<quote>).
- Character data (i.e. text) is found, "A quotation.".
- An end tag is parsed (</quote>).

The SAX Bookmarks Example illustrates how to subclass QXmlDefaultHandler to read an XML bookmark file (XBEL) and how to generate XML by hand.
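Qt's classes aside, this callback model is common to every SAX2 implementation. As a runnable sketch — using Python's standard xml.sax rather than Qt's QXmlContentHandler — the <quote> example above produces exactly the three events listed:

```python
import xml.sax

class QuoteHandler(xml.sax.ContentHandler):
    """Record each SAX event in the order the parser reports it."""
    def __init__(self):
        super().__init__()
        self.events = []

    def startElement(self, name, attrs):
        self.events.append(("start", name))

    def characters(self, content):
        if content.strip():  # ignore ignorable whitespace
            self.events.append(("chars", content))

    def endElement(self, name):
        self.events.append(("end", name))

handler = QuoteHandler()
xml.sax.parseString(b"<quote>A quotation.</quote>", handler)
print(handler.events)
```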
SAX2 Features
Namespace Support via Features
As we have seen in the previous section, we can configure the behavior of the reader when it comes to namespace processing. This is done by setting and unsetting the http://xml.org/sax/features/namespaces and http://xml.org/sax/features/namespace-prefixes features.
They influence the reporting behavior in the following way:
- Namespace prefixes and local parts of elements and attributes can be reported.
- The qualified names of elements and attributes are reported.
- QXmlContentHandler::startPrefixMapping() and QXmlContentHandler::endPrefixMapping() are called by the reader.
- Attributes that declare namespaces (i.e. the attribute xmlns and attributes starting with xmlns:) are reported.
Summary
QXmlSimpleReader implements the following behavior:
The behavior of the entries marked with an asterisk (*) is not specified by SAX.
Properties
How to Build a High-Availability MQTT Cluster for the Internet of Things
Create a scalable MQTT infrastructure using Node.js, Redis, HAProxy and nscale to make the deployment phase a piece of cake ☺
TL;DR
In this article I'll show you how to create a scalable MQTT cluster for the Internet of Things. Everything comes from the work made in Lelylan. If it's useful to you and your work, think about giving us a star on Github. It will help us to reach more developers.
Lelylan
Open Source Lightweight Microservices Architecture for the Internet of Things. For developers.
github.com
What are we talking about?
In a professional Internet of Things environment the availability and the scalability of your services is a key factor you need to take care of. For MQTT environments this means your broker needs a stable connection, always-on functionality and the capability of updating your private cloud infrastructure while it’s running in production. In this article we’ll share step by step all we’ve learned on building such an environment for Lelylan.
The main benefit of such a setup is that if one of your MQTT servers is not available, the still available brokers can handle the traffic. In other words, if one of the two (or more) nodes stop working the load balancer will reroute all incoming traffic to the working cluster node and you won’t have any interruptions on the client side.
We also wanted to simplify the deployment process. Using nscale, it was easy to reach a final result where we can deploy on 2 (or more) MQTT servers using just the following couple of commands.
$ nsd cont buildall
$ nsd rev dep head
Give me some tools
Follow the list of the used tools to reach the final result.
Mosca. A Node.js MQTT server/broker. It's MQTT 3.1 compliant and supports QoS 0 and QoS 1, together with storage options for offline packets and subscriptions.
HAProxy. A free, fast and reliable solution offering high availability, load balancing and proxying for TCP and HTTP based applications. It’s suited for very high traffic web sites.
Docker. Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications enabling apps to be quickly assembled from components and eliminating the friction between development and production environments.
Nscale. An open source project that makes it simple to configure, build and deploy a set of connected containers to constitute a working platform for distributed applications (using nscale you can easily formalize a process for deploying micro service based systems).
Redis. An open source, BSD licensed, advanced key-value cache and store.
Lelylan. Open Source Internet of Things
Lightweight microservices architecture for developers to build the Internet of Things
So, how do we start?
Here are the steps we're going to follow to create, piece by piece, a high availability MQTT cluster for the Internet of Things.
- Setting up the MQTT server.
- Dockerizing our MQTT server.
- Adding HAProxy as load balancer.
- Making MQTT secure with SSL.
- Configuring nscale to automate the deployment workflow.
- Final Considerations
1. Setting up the MQTT broker
MQTT is a machine-to-machine (M2M)/“Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging protocol and it is useful for connections with remote locations where a small code footprint is required and network bandwidth is at a premium.
The first time we looked for an MQTT solution was two years ago. We were searching for a secure (auth based), customisable (communicating with our REST API) and easy to use solution (we knew Node.js). We found in Mosca the right solution and, after two years, we’re happy with our choice ☺
The key metrics influencing your MQTT server choice could be different from ours. If so, check out this list of MQTT servers and their capabilities.
Give me some code chatter
We're not going to describe every single line of code, but we'll show you two main sections, illustrating how simple setting up an MQTT server can be.
The code we use to run the MQTT server on Lelylan is available on Github.
Setting up the MQTT server
The code below is used to start the MQTT server. First we configure the pub/sub settings using Redis, we pass the pub/sub settings object to our server and we are done.
If you ask yourself why Redis is needed as the pub/sub solution, read Q1 in the FAQ. In short, we need it to enable a communication channel between the MQTT server and the other microservices composing Lelylan.
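Since the bootstrap itself is only a few lines, here is a hedged, self-contained sketch of it. The field names follow Mosca's documented Redis backend settings; the commented-out lines show how the settings object would be handed to the broker (in the real setup you would also pass the redis module itself to the backend).

```javascript
// Build the settings object Mosca expects: the MQTT port plus a Redis-backed
// "backend" (ascoltatore) used as the pub/sub channel between microservices.
function buildSettings(redisHost, redisPort) {
  return {
    port: 1883,                 // port the MQTT broker listens on
    backend: {
      type: 'redis',            // use Redis as the pub/sub backend
      host: redisHost,
      port: redisPort,
      return_buffers: true      // keep binary payloads intact
    }
  };
}

// In the real app (requires the mosca and redis packages):
// var mosca  = require('mosca');
// var server = new mosca.Server(buildSettings('localhost', 6379));
// server.on('ready', function () { console.log('MQTT broker is up'); });
```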
Authenticating the physical objects
With Mosca you can authorize a client by defining three methods, each of them used to restrict the accessible topics for a specific client.
#authenticate
#authorizePublish
#authorizeSubscribe
In Lelylan we use the authenticate method to verify the client username and password. If the authentication is successful, the device_id is saved in the client object, and used later on to authorize (or not) the publish and subscribe functionalities.
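As a concrete, self-contained sketch of two of those hooks (the in-memory devices map below is a hypothetical stand-in for Lelylan's real API lookup; in Mosca the password argument arrives as a Buffer, hence the toString() call):

```javascript
// Hypothetical credential store standing in for the Lelylan devices API.
var devices = { 'device-user': { id: 'dev-1', secret: 'device-secret' } };

// Mosca calls this on CONNECT: accept or reject the client's credentials.
var authenticate = function (client, username, password, callback) {
  var device = devices[username];
  if (device && password.toString() === device.secret) {
    client.deviceId = device.id;   // saved on the client, reused below
    callback(null, true);          // credentials accepted
  } else {
    callback(null, false);         // connection rejected
  }
};

// Mosca calls this on PUBLISH: a device may only publish to its own topics.
var authorizePublish = function (client, topic, payload, callback) {
  callback(null, topic.indexOf('devices/' + client.deviceId) === 0);
};
```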
If you want to learn more about MQTT and Lelylan check out the dev center.
2. Dockerizing our MQTT server
Docker is an awesome tool to deploy production systems. It allows you to isolate your code in a clean system environment by defining a Dockerfile, an installation “recipe” used to initialize a system environment.
Cool! Lets getting started.
Container definition
To build a container around our application, we first need to create a file named Dockerfile. In here we’ll place all the needed commands Docker uses to initialize the desired environment.
In the Dockerfile used to create a container around the MQTT server we ask for a specific Node.js version (FROM node:0.10-onbuild), add all files of the repo (ADD ./ .), install the node packages (RUN npm install), expose the port 1883 (EXPOSE 1883) and finally run the node app (ENTRYPOINT [“node”, “app.js”]). That’s all.
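Putting those five steps together, the Dockerfile is just:

```dockerfile
# specific Node.js version
FROM node:0.10-onbuild
# add all files of the repo
ADD ./ .
# install the node packages
RUN npm install
# expose the MQTT port
EXPOSE 1883
# run the node app
ENTRYPOINT ["node", "app.js"]
```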
Run the Docker Image
Once we have a Dockerfile, we can build a Docker container (if you don't have Docker installed, do so now: it supports all major platforms, even Windows ☺).
# building a container
$ docker build -t lelylan/mqtt .
Which will eventually output
Successfully built lelylan/mqtt
Once we have built the container, we can run it to get a working image.
docker run -p 1883:1883 -d lelylan/mqtt
And we're done! We can now make requests to our MQTT server.
When starting with Docker, it's easy to get a little confused between containers and images. Read up on what each of them means to make things clearer.
# OSX
$(boot2docker ip):1883

# Linux
localhost:1883

If you're using OS X, Docker runs inside boot2docker, which is actually a Linux VM, so you need to use the $DOCKER_HOST environment variable to access the VM's localhost. If you're using Linux, just use localhost.
Other commands we were using a lot
While learning how to use Docker, we wrote down a list of commonly used commands. They are all basic, but we think it's good to have a reference to look at when needed.
Container related commands

# build and run a container without a tag
$ docker build .
$ docker run -p 80:1883 <CONTAINER_ID>

# build and run a container using a tag
$ docker build -t <USERNAME>/<PROJECT_NAME>:<V1> .
$ docker run -p 80:1883 -d <USERNAME>/<PROJECT_NAME>:<V1>

Image related commands

# Run interactively into the image
$ docker run -i <IMAGE_ID> /bin/bash

# Run image with environment variables (place at the beginning)
$ docker run -e "VAR=VAL" -p 80:1883 <IMAGE_ID>

# List all running images
$ docker ps

# List all running and not running images
# (useful to see also images that exited because of an error)
$ docker ps -a

Kill images

# Kill all images
docker ps -a -q | xargs docker rm -f

Log related commands

# See logs for a specific image
docker logs <IMAGE_ID>

# See logs using the tail mode
docker logs -f <IMAGE_ID>
3. Adding HAProxy as load balancer
At this point we have a dockerized MQTT server able to receive connections from any physical object (client). The missing piece is that it doesn't scale. Not yet ☺.
Here comes HAProxy, a popular TCP/HTTP load balancer and proxying solution used to improve the performance and the reliability of a server environment, distributing the workload across multiple servers. It is written in C and has a reputation for being fast and efficient.
Terminology
Before showing how we used HAProxy, there are some concepts you need to know when using a load balancer.
If curious, you can find a lot of useful info in this article written by Mitchell Anicas
Access Control List (ACL)
ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result. Use of ACLs allows flexible network traffic forwarding based on a different factors like pattern-matching or the number of connections to a backend.
# This ACL matches if the path of the user's request begins with /blog
# (e.g. a request for example.com/blog)
acl url_blog path_beg /blog
Backend
A backend is a set of servers that receives forwarded requests. Generally speaking, adding more servers to your backend will increase your potential load capacity and reliability by spreading the load over them. In the following example there is a backend configuration, with two web servers listening on port 80.
backend web-backend
balance roundrobin
server web1 web1.example.org:80 check
server web2 web2.example.org:80 check
Frontend
A frontend defines how requests are forwarded to backends. Frontends are defined in the frontend section of the HAProxy configuration and they put together IP addresses, ACLs and backends. In the following example, if a user requests example.com/blog, it's forwarded to the blog backend, which is a set of servers that run a blog application. Other requests are forwarded to web-backend, which might be running another application.
frontend http
  bind *:80
  mode http
  acl url_blog path_beg /blog
  use_backend blog-backend if url_blog
  default_backend web-backend
Stop the theory! Configuring HAProxy ☺
The code we used to run the HAProxy server on Lelylan is defined by a Dockerfile and a configuration file describing how requests are handled.
The code we use to run HAProxy is available on Github
Get your HAProxy container from Docker Hub
To get started download the HAProxy container from the public Docker Hub Registry (it contains an automated build ready to be used).
$ docker pull dockerfile/haproxy
At this point run the HAProxy container.
$ docker run -d -p 80:80 dockerfile/haproxy
The HAProxy container accepts a configuration file as data volume option (as you can see in the example below), where <override-dir> is an absolute path of a directory that contains haproxy.cfg (custom config file) and errors/ (custom error responses).
# Run HAProxy image with a custom configuration file
$ docker run -d -p 1883:1883 \
-v <override-dir>:/haproxy-override dockerfile/haproxy
This is perfect to test out a configuration file
HAProxy Configuration
Follows the configuration for our MQTT servers, where HAProxy listens for all requests coming to port 1883, forwarding them to two MQTT servers (mosca_1 and mosca_2) using the leastconn balance mode (selects the server with the least number of connections).
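As a rough sketch of such a listen block (the server IPs and timeout values below are placeholders and assumptions; the real file is the haproxy.cfg on Github mentioned in the notes):

```cfg
listen mqtt
  bind *:1883
  mode tcp                      # MQTT is plain TCP, not HTTP
  option clitcpka               # keep client TCP connections alive
  timeout client 3h             # MQTT connections are long-lived
  timeout server 3h
  balance leastconn             # pick the server with the fewest connections
  server mosca_1 <MOSCA_1_IP>:1883 check
  server mosca_2 <MOSCA_2_IP>:1883 check
```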
1. During the HAProxy introduction we described the ACL, backend and frontend concepts. Here we used listen, a shorter but less expressive way to define all these concepts together. We used it because of some problems we had using backend and frontend. If you find a working configuration using them, let us know. 2. To see the final configuration used by Lelylan, check out haproxy.cfg on Github.
To try out the new configuration (useful on development), override the default ones by using the data volume option. In the following example we override haproxy-override with the configuration file defined in /root/haproxy-override/.
$ docker run -d -p 80:80 -p 1883:1883 \
  -v /root/haproxy-override:/haproxy-override \
  dockerfile/haproxy
Create your HAProxy Docker Container
Once we have a working configuration, we can create a new HAProxy container using it. All we need to do is to define a Dockerfile loading the HAProxy container (FROM dockerfile/haproxy) to which we replace the configuration file defined in /etc/haproxy/haproxy.cfg (ADD haproxy.cfg /etc/haproxy/haproxy.cfg). We then restart the HAProxy server (CMD [“bash”, “/haproxy-start”]) and expose the desired ports (80/443/1883/8883).
NOTE. We restart HAProxy, not simply start, because when loading the initial HAProxy container, HAProxy is already running. This means that when we change the configuration file, we need to give a fresh restart to load it.
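The resulting Dockerfile mirrors that description:

```dockerfile
FROM dockerfile/haproxy
# replace the default configuration with ours
ADD haproxy.cfg /etc/haproxy/haproxy.cfg
# restart (not just start) HAProxy so the new configuration is loaded
CMD ["bash", "/haproxy-start"]
EXPOSE 80 443 1883 8883
```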
Extra tips for HAProxy
When having troubles with HAProxy, read the logs! HAProxy uses rsyslog, a rocket-fast system for log processing, used by default in Ubuntu.
# HAProxy log configuration file
$ vi /etc/rsyslog.d/haproxy.conf

# Files where you can find the HAProxy logs
$ tail -f /var/lib/haproxy/dev/log
$ tail -f /var/log/haproxy.log
4. Making MQTT secure with SSL
We now have a scalable MQTT infrastructure where all requests are proxied by HAProxy to two (or more) MQTT servers. The next step is to make the communication secure using SSL.
Native SSL support was implemented in HAProxy 1.5.x, which was released as a stable version in June 2014.
What is SSL?
SSL (Secure Sockets Layer) is the accepted standard for encrypted communication between a server and a client ensuring that all data passed between the server and client remain private and integral.
Creating a Combined PEM SSL Certificate/Key File
First of all you need an SSL certificate. To implement SSL with HAProxy, the SSL certificate and key pair must be in the proper format: PEM.
In most cases, you simply combine your SSL certificate (.crt or .cer file provided by a certificate authority) and its respective private key (.key file, generated by you). Assuming that the certificate file is called lelylan.com.crt, and your private key file is called lelylan.com.key, here is an example of how to combine the files creating the PEM file lelylan.com.pem.
cat lelylan.com.crt lelylan.com.key > lelylan.com.pem
As always, be sure to secure any copies of your private key file, including the PEM file (which contains the private key).
Load the PEM File using Docker volumes
Once we've created our SSL certificate, we can't save it in a public repo. You know, security ☺. What we have to do is place it on the HAProxy server, making it accessible from Docker through data volumes.
What is Docker data volumes?
A data volume is a specially-designated directory within one or more containers that provides useful features for shared data. You can add a data volume to a container using the -v flag to share any file or folder, using -v multiple times to mount multiple data volumes (we already used it when loading a configuration file for the HAProxy container).
Using data volumes to share an SSL certificate.
To share our SSL certificate, we placed it in /certs (in the HAProxy server), making it accessible through the /certs folder when running the Docker Container.
$ docker run -d -p 80:80 -p 443:443 -p 1883:1883 -p 8883:8883 \
  -v /certs:/certs \
  -v /root/haproxy-override:/haproxy-override \
  dockerfile/haproxy
Don’t forget to open the port 8883 (the default one for secure MQTT connections)
Once we have the SSL certificate available through a Docker data volume, we can access it through the HAProxy configuration file. All we need to do is add one line mapping the requests coming to port 8883 to the SSL certificate placed in /certs and named lelylan.pem.
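That line is an additional ssl-enabled bind; as a sketch (syntax per HAProxy 1.5, with the certificate path matching the volume mounted above):

```cfg
listen mqtt-ssl
  bind *:8883 ssl crt /certs/lelylan.pem
```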
We’re done!
At this point we have a Secure, High Availability MQTT Cluster for the Internet of Things. Below, you can see an image representing the final result.
At this point, there’s one thing to make the architecture complete: we need a simple way to deploy it.
5. Configuring nscale to automate the deployment workflow
To make this possible we’ll use nscale, an open source project to configure, build and deploy a set of connected containers.
While we’ll describe some of the most important commands used by nscale, here you can find a guide describing step by step how nscale works.
Where do we deploy all of this stuff?
Digital Ocean is a simple cloud hosting service, built for developers. For our deployment solution, all the droplets we'll use are based on Ubuntu and have Docker already installed.
Do not have a Digital Ocean account? Sign up through this link and get 10$ credit.
The first thing we had to do was to create 5 droplets, each of them dedicated to a specific app: 1 management machine (where the nscale logic will live), 1 HAProxy load balancer, 2 MQTT Mosca servers and 1 Redis server.
List of Droplets created for this tutorial on Digital Ocean.
Installing nscale
We're now ready to install nscale on the management machine defined on Digital Ocean. We could also have used our local machine, but having a dedicated server for this makes it simple for all team members to deploy new changes.
Installation
Install Node.js via nvm (Node Version Manager).
curl | bash
Logoff, login and run the following commands.
# install needed dependencies
apt-get update
apt-get install build-essential

# install node and npm
nvm install v0.10.33
nvm alias default v0.10.33
npm install npm@latest -g --unsafe-perm

# install nscale
npm install nscale -g --unsafe-perm
The installation could take a while, it’s normal ☺
Github user configuration
To use nscale you need to configure GIT.
git config --global user.name "<YOUR_NAME>"
git config --global user.email "<YOUR_EMAIL>"
Create your first nscale project
Once all the configurations are done, login into nscale.
$ nsd login
At this point we can create our first nscale project, where you’ll be asked to set a name and a namespace (we used the same name for both of them).
$ nsd sys create
1. Set a name for your project: <NAME>
2. Set a namespace for your project: <NAMESPACE>
This command will result in an automatically generated project folder with the following structure (don't worry about all the files you see; the only ones we need to take care of are definitions/services.js and system.js).
|— definitions
| |— machines.js
| `— services.js *
|— deployed.json
|— map.js
|— npm-debug.log
|— README.md
|— sudc-key
|— sudc-key.pub
|— system.js *
|— timeline.json
`— workspace
...
At this point use the list command to see if the new nscale project is up and running. If everything is fine, you’ll see the project name and Id.
$ nsd sys list
Name Id
lelylan-mqtt 6b4b4e3f-f22e-4516-bffb-e1a8daafb3ea
Secure access (from nscale to other servers)
To access all the servers nscale will configure, it needs a new SSH key with no passphrase for secure authentication.
ssh-keygen -t rsa
Type no passphrase, and save it with your project name. In our case we called it lelylan-key (remember that the new ssh key needs to live in the nscale project root, not in ~/.ssh/). Once the ssh key is created, set up the public key on all the servers nscale needs to configure: haproxy, mosca 1, mosca 2 and redis.
This can be done through the Digital Ocean dashboard or by adding the nscale public key to the authorized_keys with the following command.
cat lelylan-key.pub | \
  ssh <USER>@<IP-SERVER> "cat >> ~/.ssh/authorized_keys"
If some problems occur, connect first to the server through SSH
ssh <USER>@<IP-SERVER>
SSH Agent Forwarding
One more thing you need to do on your management server (where the nscale project is defined), is to set the SSH Agent Forwarding. This allows you to use your local SSH keys instead of leaving keys sitting on your server.
# ~/.ssh/config
Host *
ForwardAgent yes
There is an open issue about this for nscale. If you do not set this up, the deployment with nscale will not work.
nscale configuration
We can now start configuring nscale, starting from the nscale analyzer, which defines the authorizations settings used to access the target machines. To make this possible edit ~/.nscale/config/config.json by setting the specific object from:
{
...
"modules": {
...
"analysis": {
"require": "nscale-local-analyzer",
"specific": {
}
}
...
}
to:
{
...
"modules": {
...
"analysis": {
"require": "nscale-direct-analyzer",
"specific": {
"user": "root",
"identityFile": "/root/lelylan/lelylan-key"
}
}
}
Adjust this config if you named your project and your key differently.
All we did was populate the specific object with the user (root) and the identity file (ssh key). This step will likely not be needed in a future release.
Processes definition
In nscale we can define different processes, where every process is a Docker container identified by a name, a Github repo (with the container source code) and a set of arguments Docker uses to run the image.
If you noticed that redis does not have a Github repo, congrats! At this point in the article the reason shouldn't be hard to guess ☺. For Redis we do not need a Github repo, as we directly use the redis image defined on Docker Hub.

In this case we have 3 different types of processes: haproxy, mqtt and redis.
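Since every process boils down to those three pieces of information, the definition can be pictured as follows. This is a shape illustration only, not nscale's exact services.js schema (check the nscale docs for that), and the repo URLs are hypothetical:

```javascript
// name -> { Github repo with the container source, Docker run arguments }
var processes = {
  haproxy: {
    repo: 'github.com/lelylan/mqtt-haproxy',           // hypothetical URL
    args: '-p 1883:1883 -p 8883:8883 -v /certs:/certs -d'
  },
  mqtt: {
    repo: 'github.com/lelylan/mqtt',                   // hypothetical URL
    args: '-p 1883:1883 -d'
  },
  redis: {
    repo: null,  // no repo: the redis image is pulled straight from Docker Hub
    args: '-p 6379:6379 -d'
  }
};
```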
System Definition
Now that we’ve defined the processes we want to run, we can tell nscale where each of them should live on Digital Ocean through the system.js definition.
As you can see, system.js defines every machine setup. For each of them, we define the running processes (you need to use one of the ones previously defined in services.js), the machine IP address, the user that can log in and the ssh key name used to authorize the access.
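Pictured the same way for the droplets above (again a shape illustration rather than nscale's real system.js schema; the IPs are placeholders):

```javascript
// one entry per droplet: which process it runs, plus access details
var machines = [
  { process: 'haproxy', ip: '10.0.0.1', user: 'root', sshKey: 'lelylan-key' },
  { process: 'mqtt',    ip: '10.0.0.2', user: 'root', sshKey: 'lelylan-key' },
  { process: 'mqtt',    ip: '10.0.0.3', user: 'root', sshKey: 'lelylan-key' },
  { process: 'redis',   ip: '10.0.0.4', user: 'root', sshKey: 'lelylan-key' }
];
```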
What if I want to add a new MQTT server
Add a new machine to the nscale system.js definition, add the new server to the HAProxy configuration and you're ready to go.
It’s deploying time☺
We can now compile, build and deploy our infrastructure.
# Compile the nscale project
nsd sys comp direct

# Build all containers
# (grab a cup of coffee while nscale builds everything)
nsd cont buildall

# Deploy the latest revision on Digital Ocean
nsd rev dep head
While we described the configurations needed to deploy on Digital Ocean, nscale is also good to run all services locally.
You’re done!
Once the setup is done, with the previous three commands we're ready to deploy a high availability MQTT cluster for the Internet of Things, adding new MQTT servers and scaling our infrastructure in a matter of minutes.
Conclusions
This article comes from the work I've done on Lelylan, an Open Source Cloud Platform for the Internet of Things. If you found this article useful, give us a star on Github (it will help us reach more developers).
Lelylan
Open Source Lightweight Microservices Architecture for the Internet of Things. For developers.
github.com
Source Code
In this article we showed how to build a high availability MQTT cluster for the Internet of Things. All of the code we use in production is now released as Open Source, as follows.
- Lelylan MQTT HAProxy (TCP load balancer)
- Lelylan MQTT Server (Mosca implementation)
We'll soon release the nscale project too (right now it contains some sensitive information and we need to remove it from the repo).
Many thanks to nearForm and Matteo Collina (author of Mosca and part of the nscale team) for helping us answer any questions we had about nscale and the MQTT infrastructure.
Building, testing and securing such an infrastructure took several months of work. We really hope that releasing it as Open Source will help you guys on building MQTT platforms in a shorter time.
Want to learn more?
Not satisfied? If you want to learn more about some of the topics we covered, check out the following articles!
Hello,
I have a working C++ code which is compiled using cmake. I am trying to call a CUDA wrapper function declared in a .cu file with the corresponding kernel, but I get the error: undefined reference to "cuda_wrapper()"
I am including the corresponding .h of the .cu inside the .cpp file where the wrapper is called, and for some reason I still get the error. An interesting thing is that I have tried to call this wrapper with the same modifications in other .cpp files in the same folder, and for some of them compiling completes successfully. I don't know why it works for some .cpp files but not the one I need, even though they are all in the same folder and I include everything the exact same way.
Here there are some simplified versions of my files to make it clearer:
cuda.cu looks like this
#include "cuda.h" __global__ void kernel(a,b) { ... some calculations... } void cuda_wrapper() { ... kernel<<<dimGrid, dimBlock>>>(a,b); ... }
cuda.h looks like this
void cuda_wrapper();
file.cpp looks like this
#include "cuda.h" ... cuda_wrapper(); // Calling the wrapper from C++ file -> undefined reference error
I am not able to make the cuda_wrapper() callable from the C++ file; what can I do to make it work?
#include <iostream>
using namespace std;

int main() {
    if (true) {
        int b = 3;
label_one:
        cout << b << endl;
        int j = 10;
        goto label_one;
    }
}
In the code above, goto jumps to label_one, making the variable j be destroyed and reconstructed in each cycle. But what happens to the b variable? Is it destroyed and reconstructed too, or is it never destroyed? According to the C++ ISO standard:
Transfer out of a loop, out of a block, or back past an initialized
variable with automatic storage duration involves the destruction of
objects with automatic storage duration that are in scope at the point
transferred from but not at the point transferred to.
My interpretation is that all variables in the if scope should be destroyed, but if that's the case, when are they re-initialized (the variable b in my code)?
Source: Windows Questions C++ | https://windowsquestions.com/2021/09/16/how-does-happen-the-destruction-of-variables-with-a-goto-statement-in-c/ | CC-MAIN-2022-05 | refinedweb | 137 | 65.56 |
I'm trying to compress SQL Server backup (.bak) files with WinRAR from a Groovy script:

def fileType = "*.bak"
"cmd /c \"${rarCmd}\" a ${rarName} ${parameters} ${sourceDir} ${fileType}".execute()

The source directory contains files like:
Basket_backup_2014_07_30_010007.bak
Basket_backup_2016_07_31_010007.bak
Basket_backup_2016_08_05_010007.bak
Start WinRAR and click in the menu Help on the menu item Help topics. On the Contents tab open the list item Command line mode, click first on Command line syntax, and you will see on the opened help page:
WinRAR <command> -<switch1> -<switchN> <archive> <files...> <@listfiles...> <path_to_extract\>
Now let us compare this line with your code line:
"${rarCmd}" a ${rarName} ${parameters} ${sourceDir} ${fileType}
The mistake in your code is obvious: after the command you specify first the archive file name and then the switches, instead of first the switches and then the archive file name.
And there should be no space between ${sourceDir} and ${fileType}, but a backslash character.
Then open in the contents list the sublist Switches and click on Alphabetic switches list. Build your parameters using this list while reading it from top to bottom. The most interesting switches for you are most likely:

-cfg- -ep1 -ibck -inul -m5 -r- -tl -tn23h -y --
-tn23h means the last modification date of the file is within the last 23 hours (file time newer than current time minus 23 hours). You could also use -tn1d for files last modified within 1 day.
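Putting those corrections together, the Groovy line from the question could be reordered like this (illustrative only: keep your own rarCmd, rarName and sourceDir values, and pick only the switches you actually need):

```groovy
// switches first, then the archive name, then <sourceDir>\<fileType>
"cmd /c \"${rarCmd}\" a -ibck -r- -tn23h -y ${rarName} ${sourceDir}\\${fileType}".execute()
```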
In case of using the console version Rar.exe instead of the GUI version WinRAR.exe, use the text file Rar.txt in the program files folder of WinRAR, as this is the manual for the console version. Some switches differ between the console and GUI versions.
atropa-string
JavaScript utilities for manipulating strings.
Installation:

npm install atropa-string
Usage
In node:
var string = require('atropa-string').string;
console.log(string);
In the browser this module is attached to the global namespace atropa, which will be created if it does not exist.

Include ./browser/atropa-string_web.js in your page and atropa.string will be available in your page.
For full documentation see docs/jsdoc. For Visual Studio intellisense files see docs/vsdoc.
Tests

To run the tests, open the atropa-string_tests.html file from the browser/tests folder in your favorite web browser.
To edit the tests for both the browser and Node, edit the jasmine test files in browser/tests. For tests specific to Node edit the files in the specs directory.
Hacking

If you edit atropa-string.js, please run the srcFormat, lint, and buildDocs scripts on it before submitting a pull request.
Author
Matthew Kastor
License
The license, gpl-3.0, can be found in the
License folder or online at | https://www.npmjs.com/package/atropa-string | CC-MAIN-2015-11 | refinedweb | 154 | 61.83 |
ProgAndy Posted February 19, 2010 (edited)

The AutoItObject team is proud to announce that the first version of our AutoItObject UDF is complete and ready to use. The project page is located at [currently missing]. Please report bugs and any other issues at our [currently missing], and not here.

An overview of all the functions can be found in the online documentation [currently missing] or in the offline .chm documentation file which is included with the [currently missing]. If Origo has problems providing the download, the current version will be mirrored here.

The UDF requires the current AutoIt version v3.3.4.0!

AutoItObject 1.2.8.2.exe
AutoItObject 1.2.8.2.zip

Please leave your comments and experiences here.

Regards,
- trancexx
- ProgAndy
- monoceres
- Kip

Our work is published under the Artistic License 2.0.

A copy of the FAQ to answer your most urgent questions right away (it can also be found in the online documentation):

Q. What is Object Orientation and why do I need it?
OO is a programming paradigm that is widely used in many programming languages. In procedural programming (AutoIt, C, etc.) you pass data around to different functions. In OO, however, you use objects that contain both the functions and the data. It's especially useful when dealing with larger projects where you need to structure your code. OO also offers tools such as inheritance to save yourself from rewriting code.

Q. How does this work?
AutoIt already has support for COM objects. What this library does is create dynamic COM objects at runtime that execute AutoIt code.

Q. How does this affect my script performance wise?
There is a minor speed difference between calling methods and calling functions directly. However, the difference is minor and you'll probably never notice it. When execution reaches AutoIt code it continues at normal speed.

Q. Does the library support inheritance?
Yes. _AutoItObject_Create() has an optional $parent parameter.

Q. What about multiple inheritance?
No problem. Calls to _AutoItObject_Create() can be nested.

Q. Why isn't it possible to pass arguments as ByRef to methods?
This is a limitation within AutoIt. It's not possible to overcome this problem by directly calling the member functions, but that goes against the OO thinking and will not be covered here (use common sense).

Q. Why can't I use variables of ptr-type in arguments to methods?
See previous answer.

Q. Can I use arrays as properties?
Yes. However, it's slower than usual.

Q. Does this mean that the objects I create are available from other programs since they're actually COM-objects?
No. The objects are created at runtime and for AutoIt's eyes only.

Q. My GUI freezes! Why and how do I fix it?
All methods are essentially dllcallbacks. Unfortunately this means that messages are not processed while your methods are being executed. As long as you keep your main loop outside any method, you'll be fine.

Some helper functions: when using the Wrapper, these are some simple methods to get a return value from the resulting array.
; #FUNCTION# ====================================================================================================================
; Name...........: _AIOResult
; Description ...: Returns the return value of the Call to a WraperObject function
; Syntax.........: _AIOResult(Const $aResult [, $vError=0] )
; Parameters ....: $aResult - the resulting array
;                  $vError - [optional] value to be returned if result is no array (default: 0)
; Return values .: Success - Returnvalue ($aResult[0])
;                  Failure - $vError, @error set to 1
; Author ........: Prog@ndy
; Modified.......:
; Remarks .......:
; Related .......:
; Link ..........;
; Example .......;
; ===============================================================================================================================
Func _AIOResult(Const $aResult, $vError=0)
    ; Author: Prog@ndy
    If IsArray($aResult) Then Return $aResult[0]
    Return SetError(1,0,$vError)
EndFunc

; #FUNCTION# ====================================================================================================================
; Name...........: _AIOParam
; Description ...: Returns the parameter value of the Call to a WraperObject function
; Syntax.........: _AIOParam(Const $aResult, $iParam, $vError=0)
; Parameters ....: $aResult - the resulting array
;                  $iParam - The parameterindex to return (0: result, 1: first parameter, 2: 2nd parameter, ...)
;                  $vError - [optional] value to be returned if result is no array (default: 0)
; Return values .: Success - Parameter value
;                  Failure - $vError, @error set to 1
; Author ........: Prog@ndy
; Modified.......:
; Remarks .......:
; Related .......:
; Link ..........;
; Example .......;
; ===============================================================================================================================
Func _AIOParam(Const $aResult, $iParam, $vError=0)
    ; Author: Prog@ndy
    If UBound($aResult)-1 < $iParam Then Return SetError(1,0,$vError)
    Return SetExtended($aResult[0], $aResult[$iParam])
EndFunc

Edited September 10, 2012 by ProgAndy
PyQt by Example (Session 1)
Introduction
This series of tutorials is inspired by two things:
LatinoWare 2008, where I presented this very app as an introduction to PyQt development.
A lack of (in my very humble opinion) PyQt tutorials that show the way I prefer to develop applications.
The second item may sound a bit belligerent, but that's not the case. I am not saying the other tutorials are wrong, or bad, I just say they don't work the way I like to work.
I don't believe in teaching something and later saying "now I will show you how it's really done". I don't believe in toy examples. I believe that you are smart enough to only learn things once, learning the real thing the first time.
So, in this series, I will be developing a small TODO application using the tools and procedures I use in my actual development, except for the Eric IDE. That is because IDEs are personal preferences, and for a project of this scope one really doesn't add much.
One other thing I will not add is unit testing. While very important, I think it would distract from actually doing. If that's a problem, it can be added in a later version of the tutorial.
Requirements
You must have installed the following programs:
Python: I am using 2.6, I expect 2.5 or even 2.4 will work, but I am not testing them.
Elixir: This is needed by the backend. It requires SQLAlchemy and we will be using SQLite as our database. If you install Elixir everything else should be installed automatically.
PyQt: I will be using version 4.4
Your text editor of choice
This tutorial doesn't assume knowledge of Elixir, PyQt or databases, but does assume a working knowledge of Python. If you don't know python yet, this is not the right tutorial for you yet.
You can get the full code for this session here: Sources (click the "Download" button).
Since this tutorial is completely hosted in GitHub you are free to contribute improvements, modifications, even whole new sessions or features!
Session 1: The basics. Our backend will be built with Elixir, an ORM (Object Relational Mapper). What that means is "a way to create objects that are automatically stored in a database".
Here is the code, with comments, for our backend, called todo.py. Hopefully, we will not have to look at it again until much later in the tutorial!
# -*- coding: utf-8 -*-
"""A simple backend for a TODO app, using Elixir"""
import os
from elixir import *

dbdir = os.path.join(os.path.expanduser("~"), ".pyqtodo")
dbfile = os.path.join(dbdir, "tasks.sqlite")

class Task(Entity):
    """A task for your TODO list."""
    using_options(tablename='tasks')
    text = Field(Unicode, required=True)
    date = Field(DateTime, default=None, required=False)
    done = Field(Boolean, default=False, required=True)
    tags = ManyToMany("Tag")

    def __repr__(self):
        return "Task: " + self.text

class Tag(Entity):
    """A tag we can apply to a task."""
    using_options(tablename='tags')
    name = Field(Unicode, required=True)
    tasks = ManyToMany("Task")

    def __repr__(self):
        return "Tag: " + self.name

saveData = None

def initDB():
    if not os.path.isdir(dbdir):
        os.mkdir(dbdir)
    metadata.bind = "sqlite:///%s" % dbfile
    setup_all()
    if not os.path.exists(dbfile):
        create_all()

    # This is so Elixir 0.5.x and 0.6.x work.
    # Yes, it's kind of ugly, but needed for Debian
    # and Ubuntu and other distros.
    global saveData
    import elixir
    if elixir.__version__ < "0.6":
        saveData = session.flush
    else:
        saveData = session.commit

def main():
    # Initialize database
    initDB()

    # Create two tags
    green = Tag(name=u"green")
    red = Tag(name=u"red")

    # Create a few tasks and tag them
    tarea1 = Task(text=u"Buy tomatos", tags=[red])
    tarea2 = Task(text=u"Buy chili", tags=[red])
    tarea3 = Task(text=u"Buy lettuce", tags=[green])
    tarea4 = Task(text=u"Buy strawberries", tags=[red, green])
    saveData()

    print "Green Tasks:"
    print green.tasks
    print
    print "Red Tasks:"
    print red.tasks
    print
    print "Tasks with l:"
    print [(t.id, t.text, t.done) for t in Task.query.filter(Task.text.like(ur'%l%')).all()]

if __name__ == "__main__":
    main()
The Main Window
Now, let's start with the fun part: PyQt!
I recommend using designer to create your graphical interfaces. Yes, some people complain about interface designers. I say you should spend your time writing code for the parts where there are no good tools instead.
And here is the Qt Designer file for it: window.ui. Don't worry about all that XML, just open it on your designer ;-)
This is how it looks in designer:
The main window, in designer.
What you see is a "Main Window". This kind of window lets you have a menu, toolbars, status bars, and is the typical window for a standard application.
The "Type Here" at the top is because the menu is still empty, and it's "inviting" you to add something to it.
The big square thing with "Task" "Date" and "Tags" is a widget called QTreeView, which is handy to display items with icons, several columns, and maybe a hierarchical structure (A tree, thus the name). We will use it to display our task list.
You can see how this window looks by using "Form" -> "Preview" in designer. This is what you'll get:
The main window preview, showing the task list.
You can try resizing the window, and this widget will use all available space and resize alongside the window. That's important: windows that can't handle resizing properly look unprofessional and are not adequate.
In Qt, this is done using layouts. In this particular case, since we have only one widget, what we do is click on the form's background and select "Layout" -> "Layout Horizontally" (Vertically would have had the exact same effect here).
When we do a configuration dialog, we will learn more about layouts.
Now, feel free to play with designer and this form. You could try changing the layout, add new things, change properties of the widgets, experiment at will, learning designer is worth the effort!
Using our Main Window
We are now going to make this window we created part of a real program, so we can then start making it work.
First we need to compile our .ui file into python code. You can do this with this command:
pyuic4 window.ui -o windowUi.py
Now let's look at main.py, our application's main file:
# -*- coding: utf-8 -*-
"""The user interface for our app"""
import os, sys

# Import Qt modules
from PyQt4 import QtCore, QtGui

# Import the compiled UI module
from windowUi import Ui_MainWindow

# Create a class for our main window
class Main(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        # This is always the same
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)

def main():
    # Again, this is boilerplate, it's going to be the same on
    # almost every app you write
    app = QtGui.QApplication(sys.argv)
    window = Main()
    window.show()
    # It's exec_ because exec is a reserved word in Python
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
As you can see, this is not at all specific to our TODO application. Whatever was in that .ui file would work just fine with this!
The only interesting part is the Main class. That class uses the compiled ui file and is where we will put our application's user interface logic. You never edit the .ui file or the generated python file manually!
Let me put that in these terms: IF YOU EDIT THE UI FILE (WITHOUT USING DESIGNER) OR THE GENERATED PYTHON FILE YOU ARE DOING IT WRONG! YOU FAIL! EPIC FAIL! I hope that got across, because there is at least one tutorial that tells you to do it. DON'T DO IT!!!
You just put your code in this class and you will be fine.
So, if you run main.py, you will make the application run. It will do nothing interesting, because we need to attach the backend to our user interface, but that's session 2.
Nice work. I'm impatient to see the next part.
Note: The elixir link is wrong.
Thanks
Fixed, thanks!
Very good. Keep it up!
Great!
I got tired of asking you for this while we were over there.
BTW, congratulations on getting back to the blog. I happened to drop by these parts again and I see you're back, and sharp!
Well... time to dig into PyQt, it seems.
Late but unsure, as always :-)
Thanks so much for this. Just finished #1, and going to do #2 tonight.
BTW, for those using an older version of Elixir like me (I'm just using the one that comes with Ubuntu Intrepid), you need to do session.flush() instead of session.commit()
Hmmm.... maybe I should fix that in the code so it works for both versions.
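The version-dependent save logic that ended up in todo.py's initDB() can be sketched and sanity-checked in plain Python. Here, pick_save_method is a hypothetical stand-in, not part of the tutorial code; note that the tutorial's plain string comparison ("0.5" < "0.6") happens to work for these versions, but lexical comparison of version strings is fragile (for example "0.10" < "0.6" is True), so comparing numeric components is safer:

```python
# Sketch of the Elixir version check: 0.5.x uses session.flush,
# 0.6.x and later use session.commit.
def pick_save_method(version):
    # Compare numeric components instead of raw strings.
    major, minor = (int(part) for part in version.split(".")[:2])
    return "flush" if (major, minor) < (0, 6) else "commit"

print(pick_save_method("0.5"))   # flush
print(pick_save_method("0.6"))   # commit
print(pick_save_method("0.10"))  # commit -- lexical comparison would get this wrong
```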
Just followed session #1. Had to change the following statement to get it compiled correctly:
from windowUi import Ui_MainWindow
into:
from Ui_window import Ui_MainWindow
since the file is named Ui_window.py.
For the rest, looks very interesting, especially the "preparation" I had to do with Elixir, setuptools an SqlAlchemy. That made me learning a few more nice Python apps.
@Geert, the .py file will be called whatever you name it when you call pyuic. The tutorial says
pyuic4 window.ui -o windowUi.py
but if you made it work, all is well :-)
Following up on the above remark, it's important to mention that I'm using Eric4 to work on the project. Hence, the resulting UI file is called Ui_window.py (named by Eric4) instead of windowUi.py as given in the article.
So, no issue...
@geert: Oh, eric! I will change it so it's compatible with eric's naming, since it's the same for me.
Is it possible to install Elixir on Windows?
I solved the problem using easy_install.
Roberto, note that when you run the tests in todo.py you mix variable names in Spanish and English (surely the translator's fault, of course). You create two tags (verde and rojo) and then do green.tasks and red.tasks. Obviously that raises an error when you run it.
By the way, great tutorial.
When I saw that line where you transform the .ui file into a .py one, you took a huge weight off my shoulders. Seriously, I'm just taking my first steps, and something for the future is to look into this PyQt thing, browsing around casually. I found that Designer (sorry if I'm spelling it wrong) doesn't generate the .py file. In my case I come from working with wxPython and a program that, although I really don't like it (wxGlade), is what I know how to use, because it generated the windows as .py. Honestly, I feel much lighter now. Thanks so much.
A great article! I am currently learning PyQt and find this a great beginner article. Thanks!
Hey:
I just want to know one little thing: I have a QMessageBox with its standard buttons, and these are 'OK', 'Cancel', 'No'. How do I change them to Spanish? Is it easy or not? Thanks man. Also, how do you install Elixir? God bless you.
The translation part is easy. Look at the beginning of main() at this link: ...
On line 1433 a translation of the application is loaded; on line 1437 the translation of Qt's standard dialogs is loaded (that's where that OK and Cancel live).
To install Elixir: the details depend on which operating system you're using, but there is information on the Elixir page: ...
The materials are great, but you know guys, you were supposed to make a tour guide manual for Python. My first experience is great and I feel it's better than C++, but because I'm used to C++, which has the simplest tour guide, that makes C++ feel easier even though it's more difficult than Python. It would be good to make this easier for my fellow self-learners, because it isn't offered here. With the help of Sarah, I will continue to work on it, hoping we keep in touch with each other.
Really, I was supposed to do that? Good to know!
I tried with the latest PyQt4 (PyQt-Py3.1-gpl-4.7.7-1).
I needed to change the line:
"from MainWindow import Ui_MainWindow" to "from windowUi import Ui_MainWindow", since the generated UI file is named windowUi.py.
With that change, this code works fine in the latest PyQt version, 4.7.7.
Very good. Excellent!
Many thanks, I'm glad you have a clear, step-by-step tutorial like this. Thank you so much!
Objective
The Microchip Xpress board has an EMC1001 temperature sensor module on-board. This application will show how to access that device and communicate with it to display the temperature in a terminal window on a PC.
The project uses:
- MPLAB® Xpress Cloud-Based Integrated Development Environment with MPLAB Code Configurator Plug-in:
- PIC16F18855 Microcontroller:
- MPLAB Xpress Development Board:
- EMC1001 Temperature Sensor (on-board):
To follow along with these steps, MPLAB® Xpress should be open and the user should be logged in so that the MPLAB Code Configurator plug-in can be used. If you need help with the set-up, it is explained in a previous module. You should see a screen similar to the one below to move on to step 1:
If you have not completed the set-up yet, this will walk you through the process. Begin by opening a new project under File > New Project.
Choose Microchip Embedded and Standalone Project then choose Next
The Xpress Development Board uses the PIC16F18855 device. Next
Lastly, name your project. For this project, use the name TemperatureSensor.
Materials
Hardware Tools (Optional)
Software Tools
Additional Files
Procedure
1
Open the MPLAB® Code Configurator under the Tools>Embedded menu of MPLAB Xpress.
If you do not see this option, make sure you are logged in to your mymicrochip account.
Follow the steps to open MPLAB Code Configurator (MCC) in MPLAB Xpress.
2
The Master Synchronous Serial Port (MSSP) enables us to use Serial Peripheral Interface (SPI) and Inter-Integrated Circuit (I2C) serial communication. Serial communication is used for communication with other microcontrollers, as well as between microcontrollers and external peripherals. The temperature sensor is incorporated into the Xpress board and requires I2C serial communication to interface with it from the PIC16F18855 microcontroller on the board. Therefore, the MSSP block must be set up accordingly. Select MSSP2 from Device Resources.
Although this happens automatically, note that the Mode is set to I2C Master because the microcontroller controls the peripheral.
Change Slew Rate Control to Standard Speed and the Baud Rate Generator Value to 0x4. It should look like the window below:
3
The second block we need is the EUSART block. This will handle the communication between the microcontroller and Tera Term to display the ambient temperature on your computer screen.
Choose EUSART under Device Resources. Check Enable Transmit and Redirect STDIO to USART, and increase the Baud Rate to 19200. Your block should look like the one below:
Redirect of STDIO to USART enables use of the STDIO.h library in your software, allowing easier programming of USART commands.
4
Next, connect the necessary pins. According to the schematic, pin RC4 is the SCL line connected to SCL on the EMC1001 and RC3 is the SDA line connected to SDA on the EMC1001. Therefore, lock RC4 to MSSP2 SCL1 input and output, as well as locking RC3 to MSSP2 SDA1 input and output.
We also need to connect the transmitting line, TX, of the EUSART to pin RC0. This is seen below:
7
This project makes heavy use of the i2c1.h library, and this step will teach you the functions necessary to complete this project. The image below gives a general flowchart of the code:
First, select main.c under the TemperatureSensor Project:
Now we will break down the code for this tutorial.
The code begins:
#include "mcc_generated_files/mcc.h"

#define EMC1001_ADDRESS 0x49   // slave device address
This section defines the slave device's I2C address (0x49 for the EMC1001) and includes the general header file.
uint8_t EMC1001_Read(uint8_t reg, uint8_t *pData)
{
    I2C2_MESSAGE_STATUS status = I2C2_MESSAGE_PENDING;
    static I2C2_TRANSACTION_REQUEST_BLOCK trb[2];

    I2C2_MasterWriteTRBBuild(&trb[0], &reg, 1, EMC1001_ADDRESS);
    I2C2_MasterReadTRBBuild(&trb[1], pData, 1, EMC1001_ADDRESS);
    I2C2_MasterTRBInsert(2, &trb[0], &status);
    while (status == I2C2_MESSAGE_PENDING);  // blocking
    return (status == I2C2_MESSAGE_COMPLETE);
}
Because there is no function currently included to read specifically from the sensor, one must be written into the code. This code contains a couple of important functions to enable the I2C communication process.
The Transaction Request Block (TRB) refers to the data type that needs to be built to handle any I2C communication, and is used to inform the driver how to handle the process.
The WriteTRBBuild and ReadTRBBuild are called to correctly form the TRB block, changing the register if it is a read or write command. However, these only format the TRB. In order to send the transaction, TRBInsert must be called.
The while loop blocks until the transaction is no longer pending; the driver updates the status as the transfer progresses, and the function returns whether it finished with I2C2_MESSAGE_COMPLETE.
// EMC1001 registers
#define TEMP_HI 0   // temperature value high byte
#define TEMP_LO 2   // low byte containing 1/4 deg fraction
This section defines the EMC1001 register addresses used to read the high and low bytes of the current temperature.
void main(void)
{
    uint8_t data;
    int8_t temp;
    uint8_t templo;

    SYSTEM_Initialize();
    INTERRUPT_GlobalInterruptEnable();
    INTERRUPT_PeripheralInterruptEnable();
This section defines variables and initializes the code.
    while (1)
    {
        printf("\x0C"); // comment out if terminal does not support Form Feed
        puts("Temperature Sensor Demo\n");
        if (EMC1001_Read(0xfd, &data))
            printf("Product ID: EMC1001%s\n", data ? "-1" : "");
        if (EMC1001_Read(0xfe, &data))
            printf("Manufacturer ID: 0x%X\n", data);
        if (EMC1001_Read(0xff, &data))
            printf("Revision : %d\n", data);
The continuous part of the code begins by sending information to the USART, which can be read via Tera Term. After a general header, the previously written EMC1001_Read() function is called to read the Product ID, Manufacturer ID, and Revision number from the sensor.
        if (EMC1001_Read(TEMP_HI, (uint8_t*)&temp))
        {
            EMC1001_Read(TEMP_LO, &templo);    // get lsb
            templo = templo >> 6;
            if (temp < 0) templo = 3 - templo; // complement to 1 if T negative
            printf("\nThe temperature is: %d.%d C\n", temp, templo * 25);
        }
The code now reads the actual temperature, with a resolution of a quarter of a degree, and prints it to the USART.
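The quarter-degree arithmetic above can be sanity-checked outside the firmware. This is a plain-Python sketch of the same decoding; decode_temp is a hypothetical helper, not part of the MCC-generated code:

```python
# Decode EMC1001 temperature bytes the way the firmware does: the high byte
# is the signed whole degrees, and the top two bits of the low byte hold the
# number of quarter-degree steps.
def decode_temp(temp_hi, temp_lo):
    temp = temp_hi if temp_hi < 128 else temp_hi - 256  # interpret as int8_t
    quarters = temp_lo >> 6
    if temp < 0:
        quarters = 3 - quarters  # same complement step as the firmware
    return temp, quarters * 25   # whole degrees and hundredths, as printed

print(decode_temp(24, 0b10000000))  # (24, 50) -> printed as "24.50 C"
```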
        if (EMC1001_Read(4, &data))
            printf("\nThe Conversion rate is: %x\n", data);
        if (EMC1001_Read(5, &data))
            printf("The high limit is: %d C\n", data);
        if (EMC1001_Read(7, &data))
            printf("The low limit is: %d C\n", data);

        __delay_ms(1000);
    }
}
Lastly, the program reads the conversion rate, which is the number of times per second the sensor delivers data, as well as the high and low temperature limits that are set.
Now it's time to program!
8
Generate a .hex file by clicking Make and Program Device:
Program the MPLAB Xpress board by dragging the generated .hex file onto the XPRESS drive that appears when the board is connected.
Results
In order to test successful completion, open the Tera Term window. Change the Baud Rate from 9600 to 19200 and you will see something similar to this:
Conclusions
After completing this tutorial, you should have a basic understanding of the EUSART module as well as the EMC1001 temperature sensor on the Xpress board. This can be used to monitor CPU heat under high processing loads, or for other applications as they arise. If the characters appearing on the screen seem to be gibberish or different from what you were expecting, make sure the baud rates of the microcontroller and the computer terminal agree.
How to: Filter Items that Do Not Have Categories
This topic shows a code sample that uses a DAV Searching and Locating (DASL) query to filter items in the current folder that do not have any category assigned to them. Note that filtering items with an empty string in their categories requires a DASL query; the Microsoft Jet syntax does not support such filters.
When filtering an empty string with a DASL query, you can use the Is Null keyword. Is Null operations are useful to determine if a string property is empty or if a date property has been set. For more information, see Filtering Items Using Query Keywords.
The code sample sets up a DASL filter on the Categories property, which in the DASL query is expressed in the Office namespace as urn:schemas-microsoft-com:office:office#Keywords. The filter compares the value of the Categories property with an empty string using the Is Null keyword. The code sample then applies the filter to items in the current folder, and prints the number of items in the current folder that have been found to have no categories.
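As a sketch, a restriction using that property and keyword might be assembled like this. Plain Python is used only to build the string; the @SQL prefix and quoting follow the conventional Outlook filter form and are an assumption here, not copied from the sample:

```python
# Build a DASL restriction that matches items with no category assigned.
# The property name comes from the article; the @SQL wrapper is the usual
# way such a filter string is passed to Outlook's Items.Restrict.
KEYWORDS = "urn:schemas-microsoft-com:office:office#Keywords"
dasl_filter = '@SQL="' + KEYWORDS + '" Is Null'
print(dasl_filter)
```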
Unreal Engine 4 Toon Outlines Tutorial
In this Unreal Engine 4 tutorial, you will learn how to creating toon outlines using inverted meshes and post processing.
When people say "toon outlines", they are referring to any technique that renders lines around objects. Like cel shading, outlines can help your game look more stylized. They can give the impression that objects are drawn or inked. You can see examples of this in games such as Okami, Borderlands and Dragon Ball FighterZ.
In this tutorial, you will learn how to:
- Create outlines using an inverted mesh
- Create outlines using post processing and convolution
- Create and use material functions
- Sample neighboring pixels
- Part 1: Cel Shading
- Part 2: Toon Outline (you are here!)
- Part 3: Custom Shaders Using HLSL
- Part 4: Paint Filter
Getting Started
Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip it and navigate to ToonOutlineStarter and open ToonOutline.uproject. You will see the following scene:
To start, you will create outlines by using an inverted mesh.
Inverted Mesh Outlines
The idea behind this method is to duplicate your target mesh. Then, make the duplicate a solid color (usually black) and expand it so that it is slightly larger than the original mesh. This will give you a silhouette.
If you use the duplicate as is, it will completely block the original.
To fix this, you can invert the normals of the duplicate. With backface culling enabled, you will see the inward faces instead of the outward faces.
This will allow the original to show through the duplicate. And because the duplicate is larger than the original, you will get an outline.
Advantages:
- You will always have clean lines since the outline is made up of polygons
- Appearance and thickness are easily adjustable by moving vertices
- Outlines shrink over distance. This can also be a disadvantage.
Disadvantages:
- Generally, does not outline details inside the mesh
- Since the outline consists of polygons, they are prone to clipping. You can see this in the example above where the duplicate overlaps the ground.
- Possibly bad for performance. This depends on how many polygons your mesh has. Since you are using duplicates, you are basically doubling your polygon count.
- Works better on smooth and convex meshes. Hard edges and concave areas will create holes in the outline. You can see this in the image below.
Generally, you should create the inverted mesh in a modelling program. This will give you more control over the silhouette. If working with skeletal meshes, it will also allow you to skin the duplicate to the original skeleton. This will allow the duplicate to move with the original mesh.
For this tutorial, you will create the mesh in Unreal rather than a modelling program. The method is slightly different but the concept remains the same.
First, you need to create the material for the duplicate.
Creating the Inverted Mesh Material
For this method, you will mask the outward-facing polygons. This will leave you with the inward-facing polygons.
Navigate to the Materials folder and open M_Inverted. Afterwards, go to the Details panel and adjust the following settings:
- Blend Mode: Set this to Masked. This will allow you to mark areas as visible or invisible. You can adjust the threshold by editing Opacity Mask Clip Value.
- Shading Model: Set this to Unlit. This will make it so lights do not affect the mesh.
- Two Sided: Set this to enabled. By default, Unreal culls backfaces. Enabling this option disables backface culling. If you leave backface culling enabled, you will not be able to see the inward-facing polygons.
Next, create a Vector Parameter and name it OutlineColor. This will control the color of the outline. Connect it to Emissive Color.
To mask the outward-facing polygons, create a TwoSidedSign and multiply it by -1. Connect the result to Opacity Mask.
TwoSidedSign will output 1 for frontfaces and -1 for backfaces. This means frontfaces will be visible and backfaces will be invisible. However, you want the opposite effect. To do this, you reverse the signs by multiplying by -1. Now frontfaces will output -1 and backfaces will output 1.
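The sign flip is easy to check with plain numbers. This is a hypothetical sketch outside Unreal, assuming the engine's default Opacity Mask Clip Value of 0.3333:

```python
# TwoSidedSign is 1 for frontfaces and -1 for backfaces; multiplying by -1
# flips which side survives the opacity mask test.
def opacity_mask(two_sided_sign, clip=0.3333):
    return (two_sided_sign * -1) > clip  # True = pixel kept

print(opacity_mask(1))   # False: frontfaces are masked out
print(opacity_mask(-1))  # True:  backfaces stay visible
```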
Finally, you need a way to control the outline thickness. To do this, add the highlighted nodes:
In Unreal, you can move the position of every vertex using World Position Offset. By multiplying the vertex normal by OutlineThickness, you are making the mesh thicker. Here is a demonstration using the original mesh:
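What World Position Offset does here can be sketched numerically. In this hypothetical snippet, plain tuples stand in for Unreal's vectors: each vertex moves along its own normal by OutlineThickness, inflating the mesh uniformly:

```python
# Push a vertex along its (unit) normal by a given thickness, the same idea
# as wiring Normal * OutlineThickness into World Position Offset.
def offset_vertex(pos, normal, thickness):
    return tuple(p + n * thickness for p, n in zip(pos, normal))

print(offset_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.5))  # (1.5, 0.0, 0.0)
```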
At this point, the material is complete. Click Apply and then close M_Inverted.
Now, you need to duplicate the mesh and apply the material you just created.
Duplicating the Mesh
Navigate to the Blueprints folder and open BP_Viking. Add a Static Mesh component as a child of Mesh and name it Outline.
Make sure you have Outline selected and set its Static Mesh to SM_Viking. Afterwards, set its material to MI_Inverted.
MI_Inverted is an instance of M_Inverted. This will allow you to adjust the OutlineColor and OutlineThickness parameters without recompiling.
Click Compile and then close BP_Viking. The viking will now have an outline. You can control the color and thickness by opening MI_Inverted and adjusting the parameters.
That’s it for this method! See if you can create an inverted mesh in your modelling program and then bring it into Unreal.
If you want to create outlines in a different way, you can use post processing instead.
Post Process Outlines
You can create post process outlines by using edge detection. This is a technique which detects discontinuities across regions in an image. Here are a few types of discontinuities you can look for:
Advantages:
- Can apply to the entire scene easily
- Fixed performance cost since the shader always runs for every pixel
- Line width stays the same at various distances. This can also be a disadvantage.
- Lines don’t clip into geometry since it is a post process effect
Disadvantages:
- Usually requires multiple edge detectors to catch all edges. This has an impact on performance.
- Prone to noise. This means edges will show up in areas with a lot of variance.
A common way to do edge detection is to perform convolution on each pixel.
What is Convolution?
In image processing, convolution is an operation on two groups of numbers to produce a single number. First, you take a grid of numbers (known as a kernel) and place the center over each pixel. Below is an example of a 3×3 kernel moving over the top two rows of an image:
For every pixel, multiply each kernel entry by its corresponding pixel. Let’s take the pixel from the top-left corner of the mouth for demonstration. We’ll also convert the image to grayscale to simplify the calculations.
First, place the kernel (we’ll use the same one from before) so that the target pixel is in the center. Afterwards, multiply each kernel element with the pixel it overlaps.
Finally, add all the results together. This will be the new value for the center pixel. In this case, the new value is 0.5 + 0.5 or 1. Here is the image after performing convolution on every pixel:
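The per-pixel step just described can be reproduced in a few lines of plain Python (a sketch, not engine code):

```python
# One convolution step: place a 3x3 kernel over a target pixel, multiply
# element-wise, and sum. Border pixels are skipped for brevity.
def convolve_pixel(image, x, y, kernel):
    total = 0.0
    for ky in range(3):
        for kx in range(3):
            total += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
    return total

laplacian = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

flat = [[0.5] * 3 for _ in range(3)]
print(convolve_pixel(flat, 1, 1, laplacian))  # 0.0 -- no variation, no edge
```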
The kernel you use determines what effect you get. The kernel from the examples is used for edge detection. Here are a few examples of other kernels:
To detect edges in an image, you can use Laplacian edge detection.
Laplacian Edge Detection
First, what is the kernel for Laplacian edge detection? It’s actually the one you saw in the examples from the last section!
This kernel works for edge detection because the Laplacian measures the change in slope. Areas with greater change diverge further from zero, indicating an edge.
To help you understand it, let’s look at the Laplacian in one dimension. The kernel for this would be:
First, place the kernel over an edge pixel and then perform convolution.
This will give you a value of 1 which indicates there was a large change. This means the target pixel is likely to be an edge.
Next, let’s convolve an area with less variance.
Even though the pixels have different values, the gradient is linear. This means there is no change in slope and indicates the target pixel is not an edge.
Below is the image after convolution and a graph with each value plotted. You can see that pixels on an edge are further away from zero.
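The one-dimensional behavior can be reproduced the same way, using hypothetical pixel values:

```python
# 1D Laplacian with kernel [1, -2, 1]: large magnitude at a step edge,
# zero on a linear ramp (constant slope).
def laplacian_1d(values, i):
    return values[i - 1] - 2 * values[i] + values[i + 1]

step = [0.0, 0.0, 1.0, 1.0]    # hard edge between index 1 and 2
ramp = [0.0, 0.25, 0.5, 0.75]  # constant slope

print(laplacian_1d(step, 1))  # 1.0 -- likely an edge
print(laplacian_1d(ramp, 1))  # 0.0 -- no change in slope
```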
Phew! That was a lot of theory but don’t worry — now comes the fun part. In the next section, you will build a post process material that performs Laplacian edge detection on the depth buffer.
Building the Laplacian Edge Detector
Navigate to the Maps folder and open PostProcess. You will see a black screen. This is because the map contains a Post Process Volume using an empty post process material.
This is the material you will edit to build the edge detector. The first step is to figure out how to sample neighboring pixels.
To get the position of the current pixel, you can use a TextureCoordinate. For example, if the current pixel is in the middle, it will return (0.5, 0.5). This two-component vector is called a UV.
To sample a different pixel, you just need to add an offset to the TextureCoordinate. In a 100×100 image, each pixel has a size of 0.01 in UV space. To sample a pixel to the right, you add 0.01 on the X-axis.
However, there is a problem with this. As the image resolution changes, the pixel size also changes. If you use the same offset (0.01, 0) in a 200×200 image, it will sample two pixels to the right.
To fix this, you can use the SceneTexelSize node which returns the pixel size. To use it, you do something like this:
Since you are going to be sampling multiple pixels, you would have to create this multiple times.
Obviously, this will quickly become messy. Fortunately, you can use material functions to keep your graph clean.
In the next section, you will put the duplicate nodes into the function and create an input for the offset.
Creating the Sample Pixel Function
First, navigate to the Materials\PostProcess folder. To create a material function, click Add New and select Materials & Textures\Material Function.
Rename it to MF_GetPixelDepth and then open it. The graph will have a single FunctionOutput. This is where you will connect the value of the sampled pixel.
First, you need to create an input that will accept an offset. To do this, create a FunctionInput.
This will show up as an input pin when you use the function later.
Now you need to specify a few settings for the input. Make sure you have the FunctionInput selected and then go to the Details panel. Adjust the following settings:
- InputName: Offset
- InputType: Function Input Vector 2. Since the depth buffer is a 2D image, the offset needs to be a Vector 2.
- Use Preview Value as Default: Enabled. If you don’t provide an input value, the function will use the value from Preview Value.
Next, you need to multiply the offset by the pixel size. Then, you need to add the result to the TextureCoordinate. To do this, add the highlighted nodes:
Finally, you need to sample the depth buffer using the provided UVs. Add a SceneDepth and connect everything like so:
Summary:
- Offset will take in a Vector 2 and multiply it by SceneTexelSize. This will give you an offset in UV space.
- Add the offset to TextureCoordinate to get a pixel that is (x, y) pixels away from the current pixel
- SceneDepth will use the provided UVs to sample the appropriate pixel and then output it
That’s it for the material function. Click Apply and then close MF_GetPixelDepth.
Next, you need to use the function to perform convolution on the depth buffer.
Performing Convolution
First, you need to create the offsets for each pixel. Since the corners of the kernel are always zero, you can skip them. This leaves you with the left, right, top and bottom pixels.
Open PP_Outline and create four Constant2Vector nodes. Set them to the following:
- (-1, 0)
- (1, 0)
- (0, -1)
- (0, 1)
Next, you need to sample the five pixels in the kernel. Create five MaterialFunctionCall nodes and set each to MF_GetPixelDepth. Afterwards, connect each offset to their own function.
This will give you the depth values for each pixel.
Next is the multiplication stage. Since the multiplier for neighboring pixels is 1, you can skip the multiplication. However, you still need to multiply the center pixel (bottom function) by -4.
Next, you need to sum up all the values. Create four Add nodes and connect them like so:
If you remember the graph of pixel values, you’ll see that some of them are negative. If you use the material as is, the negative pixels will appear black because they are below zero. To fix this, you can get the absolute value which converts any inputs to a positive value. Add an Abs and connect everything like so:
Summary:
- The MF_GetPixelDepth nodes will get the depth value for the center, left, right, top and bottom pixels
- Multiply each pixel by its corresponding kernel value. In this case, you only need to multiply the center pixel.
- Calculate the sum of all the pixels
- Get the absolute value of the sum. This will prevent pixels with negative values from appearing as black.
Click Apply and then go back to the main editor. The entire image will now have lines!
There are a few problems with this though. First, there are edges where there is only a slight depth difference. Second, the background has circular lines due to it being a sphere. This is not a problem if you are going to isolate the edge detection to meshes. However, if you want lines for your entire scene, the circles are undesirable.
To fix these, you can implement thresholding.
Implementing Thresholding
First, you will fix the lines that appear because of small depth differences. Go back to the material editor and create the setup below. Make sure you set Threshold to 4.
Later, you will connect the result from the edge detection to A. This will output 1 (indicating an edge) if the pixel’s value is higher than 4. Otherwise, it will output 0 (no edge).
Next, you will get rid of the lines in the background. Create the setup below. Make sure you set DepthCutoff to 9000.
This will output 0 (no edge) if the current pixel’s depth is greater than 9000. Otherwise, it will output the value from A < B.
Finally, connect everything like so:
Now, lines will only appear if the pixel value is above 4 (Threshold) and its depth is lower than 9000 (DepthCutoff).
Click Apply and then go back to the main editor. The small lines and background lines are now gone!
The edge detection is working pretty well. But what if you want thicker lines? To do this. you need a larger kernel size.
Creating Thicker Lines
Generally, larger kernel sizes have a greater impact on performance. This is because you have to sample more pixels. But what if there was a way to have larger kernels with the same performance as a 3×3 kernel? This is where dilated convolution comes in handy.
In dilated convolution, you simply space the offsets further apart. To do this, you multiply each offset by a scalar called the dilation rate. This defines the spacing between each kernel element.
As you can see, this allows you to increase the kernel size while sampling the same number of pixels.
Now let’s implement dilated convolution. Go back to the material editor and create a ScalarParameter called DilationRate. Set its value to 3. Afterwards, multiply each offset by DilationRate.
This will place each offset 3 pixels away from the center pixel.
Click Apply and then go back to the main editor. You will see that your lines are a lot thicker. Here is a comparison between multiple dilation rates:
Unless you’re going for a line art look for your game, you probably want to have the original scene show through. In the final section, you will add the lines to the original scene image.
Adding Lines to the Original Image
Go back to the material editor and create the setup below. Order is important here!
Next, connect everything like so:
Now, the Lerp will output the scene image if the alpha reaches zero (black). Otherwise, it will output LineColor.
Click Apply and then close PP_Outline. The original scene will now have outlines!
Where to Go From Here?
You can download the completed project using the link at the top or bottom of this tutorial.
If you’d like to do more with edge detection, try creating one that works on the normal buffer. This will give you some edges that don’t appear in a depth edge detector. You can then combine both types of edge detection together.
Convolution is a wide topic that has many uses including artificial intelligence and audio processing. I encourage you to explore convolution by creating other effects such as sharpening and blurring. Some of these are as simple as changing the values in the kernel! Check out Images Kernels explained visually for an interactive explanation of convolution. It also contains the kernels for some other effects.
I also highly recommend you check out the GDC presentation on Guilty Gear Xrd’s art style. They also use the inverted mesh method for the outer lines. However, for the inner lines, they present a simple yet ingenious technique using textures and UV manipulation.
If there are any effects you’d like to me cover, let me know in the comments below! | https://www.raywenderlich.com/92-unreal-engine-4-toon-outlines-tutorial | CC-MAIN-2019-43 | refinedweb | 2,989 | 67.45 |
There's not much to say about Labour Day at all as far as I'm concerned, save that it was another opportunity for resting, reading, puttering about the house and doing some casual surfing:
- The much-hyped ABC Full Episode Streaming page is (predictably) blocked to European users. US-based proxies would be welcome if I didn't plan to get the shows on DVD later and watch them as I please (i.e., without ads at all, on whatever device I feel like using).
- I stay away from Politics as much as I possibly can (and then some), but there was no way I was going to miss out on Stephen Colbert's amazing address (video, transcript).
- I found out about AutoViewer, another very nice (if bandwidth-intensive) image viewer that can superimpose captions on images (haven't tested accented characters, but apparently some are not included in the embedded Flash font). See more in my Flash/Snippets node.
- I've been tinkering with Textpander (via Hawk Wings). Don't think I'll be using it for much, but it's interesting.
And since I haven't posted any code in a while, here's a first stab at human-readable time strings in Python - this outputs the usual posted 2 years, 1 month ago stuff, enhanced to have an arbitrary time range and detail - for instance, you can specify 3 levels of detail and get 3 days, 2 hours, 14 minutes ago:
def timeSince(older=None,newer=None,detail=2): """ Human-readable time strings, based on Natalie Downe's code from Assumes time parameters are in seconds """ intervals = { 60 * 60 * 24 * 365: 'year', 60 * 60 * 24 * 30: 'month', 60 * 60 * 24 * 7: 'week', 60 * 60 * 24: 'day', 60 * 60: 'hour', 60: 'minute', } chunks = intervals.keys() # Reverse sort using a lambda (for Python 2.3 backwards compatibility) chunks.sort(lambda x, y: y-x) if newer == None: newer = time.time() interval = newer - older if interval < 0: raise ValueError('Time interval cannot be negative') output = '' for steps in range(detail): for seconds in chunks: count = math.floor(interval/seconds) unit = intervals[seconds] if count != 0: break if count > 1: unit = unit + 's' if count != 0: output = output + "%d %s, " % (count, unit) interval = interval - (count * seconds) output = output[:-2] return output
And yes, I can do the sorting and other subtleties in other ways, but I'm still stuck in Tiger's bundled Python 2.3 and want to make sure this runs there. | http://the.taoofmac.com/space/blog/2006/05/01/2150 | CC-MAIN-2014-15 | refinedweb | 414 | 66.17 |
Part 2: The JavaFX Scene API and the Finite State Manager
Welcome back to my multi-part tutorial on building a Games Engine with the JavaFX framework. Last time we were here we discussed the topics of Threading & Software Design Patterns and how they mattered within the scope of the project. This time we're going to look at JavaFX, a rich client platform built on Java (but you already knew that). The advantage that JavaFX really offers us is the native hardware acceleration and a rich media interface, among other useful features such as the Scene API. We're also going to explore the beginnings of our Finite State Manager - and make a return to those dastardly Singletons!
As a footnote, this part of the series assumes familiarity with the HashMap, EventHandler and KeyEvent classes. If you aren’t familiar with these, I’d give them a Google and get up to speed.
Humble Beginnings - The JavaFX Scene Graph
JavaFX, released in 2008 (v1.0), is a rich client API primarily designed for use with data-driven applications and those that need to leverage the power of modern audio and graphics hardware (!). It's also an impressive UI toolkit, akin to the Windows Presentation Foundation which Microsoft released in 2006 (.NET 3.0), that offers an expanse of customisability through CSS styles and code. We're not really interested in the UI toolkit or the use of data; what we're interested in is the Scene API - a powerful layout engine based on a tree paradigm. In fact, we're interested in one teeeeeeeny portion of it: The Node class.
The Scene Graph is a series of Nodes. Each node can be one of three types:
- The Root: The first node in the scene graph, which cannot have a parent.
- A Parent: Nodes with children, but not the root node. These are Groups, Regions and Controls.
- A Child: Child nodes are specific subclasses, such as Rectangles and ImageViews – specialised nodes for performing very particular tasks.
Each node can exist pretty much anywhere within a tree within the scene graph, but what we're particularly concerned with is nodes attached to a scene - making them eligible for rendering.
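Those three roles are easy to model in miniature. The sketch below is a toy, JavaFX-free stand-in (the Node class and all of the names here are invented for illustration – this is not the JavaFX API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the scene graph's three node roles: a root (no parent),
// parents (nodes with children) and leaf children.
public class SceneGraphDemo {
    static class Node {
        final String name;
        Node parent;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        void add(Node child) { child.parent = this; children.add(child); }
    }

    public static void main(String[] args) {
        Node root = new Node("root");   // the root: no parent
        Node group = new Node("group"); // a parent node
        Node image = new Node("image"); // a leaf child
        root.add(group);
        group.add(image);

        // Walk up from the leaf to show the tree shape
        StringBuilder path = new StringBuilder(image.name);
        for (Node n = image.parent; n != null; n = n.parent) {
            path.insert(0, n.name + " > ");
        }
        System.out.println(path); // root > group > image
    }
}
```

Attaching a child wires up both directions of the relationship, much as JavaFX tracks parents for you when you add to a node's child list.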
A little bit of House Keeping
Before we move further, we’ll define our base Engine class. It will become quickly apparent why we do this, however we won’t explore it just yet. Copy the code below into a file named “Engine.java” (it’s a bare skeleton for now – we’ll flesh it out over the rest of this part):

public class Engine extends Application {

    private static Engine singleton;
    private static final Object syncLock = new Object();

    public static final int WIDTH = 800;
    public static final int HEIGHT = 450;

    public Engine() {
        super();
    }

    public static Engine get() {
        synchronized (syncLock) {
            if (singleton == null) {
                singleton = new Engine();
            }
            return singleton;
        }
    }

    @Override
    public void start(final Stage primaryStage) throws Exception {
    }
}
If you got here from our previous ventures, you’ll recognise a pattern here. That’s right, it’s a Singleton.* More importantly, we define a few things.
- WIDTH, the Width of the Window in pixels. We default to 800px.
- HEIGHT, the Height of the Window in pixels. We default to 450px.
- A Constructor
- An overridden method responsible for starting up the Application class.
Of these the more important pair, and the ones I’d like you to take note of, are WIDTH and HEIGHT. Let’s keep moving, try and keep up!
*Props to user CasiOo for the syncLock suggestion.
Setting the Scene
So far I’ve loosely discussed how JavaFX’ Scene Graph works. Let’s now look at implementing the basics of the Finite State Manager. We’ll get started out with the basic “scene” component, which we’re going to call “BaseScene”. The BaseScene class will inherit from JavaFX’ Scene class, a class responsible for a ‘view’ so to speak – depending on your individual background. The Scene is a node, second only to the Application node – the root node – and parent node to all of our graphics data. Here’s a pretty diagram to explain that a little better:
The Engine is the root node. At any given time, the engine can be managing a number of scenes (represented by the beautiful ‘scene’ box). Inside the most wonderful scene box are four scenes, one of which is the current child – that child is both a child and a parent and of type scene. Lastly, each scene can have several children, for which I’ve used Images as an example – those are ImageView classes, for the uninitiated. Largely, that’s how the engine looks at a given snapshot in time. Okay, now for some code.
public abstract class BaseScene extends Scene {

    /**
     * The group of items to be rendered on screen
     */
    private Group nodes;

    /**
     * Default Constructor
     * @param fill
     */
    public BaseScene(Paint fill) {
        super(new Group(), Engine.WIDTH, Engine.HEIGHT, fill);
        nodes = (Group) super.rootProperty().get();
    }
}
To make this a little quicker, as we have a lot to cover, I’m going to explain these classes with bullet points – worth a shot, really. My joke was terrible and I feel terrible.
- We import Group, Node and Scene from javafx.scene – the Group will handle our children, the Node is there for casting and our BaseScene will extend the Scene class.
- BaseScene is abstract. We can’t create an instance of it, it just provides all of the basic services the Scenes will need to provide to the Engine.
- BaseScene extends the Scene class, we’ll have to override a couple things later.
- We declare a private Group – we’ll use this as a local reference to the Scene’s “root” node. This Group will store and manage our child nodes in the scene.
- Lastly, a default constructor. We call the super class constructor, declaring that the root node should be a Group with nothing in it, the Scene should have the Width and Height variables we defined in the Engine class earlier and then we just pass in the fill from the constructor!
Let’s think about what basic functionality the Scene will need. It’s a game engine, so we’ll want to update the scene contents. We’ll also want to be able to respond to keyboard input and add children to the node group. Let’s throw a few functions and fields in.
/**
 * The keyboard reader assigned to this scene
 */
private Keyboard keys;

public Keyboard getKeys() {
    return keys;
}

/**
 * Gets and processes input from the core Engine loop
 */
protected void takeInput() {
    keys.poll();
}

/**
 * Updates the BaseScene's contents
 */
public void update() {
    takeInput();
}

public void addItem(Node e) {
    nodes.getChildren().add(e);
}
As we’ve said above we want to take input (takeInput), update the scene (update) and add children to the group node (addItem). We also add in a Getter for a Keyboard variable named keys. But wait…
Keyboard? That’s not in the API!?
Right you are, we’re going to define it ourselves. The good thing about the JavaFX Scene class is that it behaves very similarly to a Swing JFrame, by which I mean we can set things such as KeyListeners etc. on them. What we’re going to do next is define our own style of Key Listener. We’ll listen to the scene to find out which Keys are currently up and down and use that information to decide what state the key is in. With that out of the way, let’s get down to coding!
First, a little enumeration:
public enum KeyState {
    UP,
    DOWN,
    HIT,
}
Our enumeration serves only as a container for the states, or KeyStates, that a key can be in. Those are UP, meaning not currently held; DOWN, meaning held for more than one frame; and HIT, meaning the key was UP last frame and has just gone DOWN this frame. That last one might be flagging a question for some; the answer is that sometimes we only want to register the press once (think ESC key), whereas for others we want to see that it is held (i.e. WASD).
public class Keyboard {

    public static final int KEY_COUNT = 255;

    private HashMap<KeyCode, KeyState> states;
    private HashMap<KeyCode, Boolean> flags;

    private KeyState getKeyState(KeyCode key) {
        return states.get(key);
    }

    private void setKeyFlag(KeyCode key, Boolean flag) {
        flags.put(key, flag);
    }

    public Keyboard(BaseScene scene) {
    }

    public boolean isKeyUp(KeyCode key) {
        return (getKeyState(key) == KeyState.UP);
    }

    public boolean isKeyDown(KeyCode key) {
        return (getKeyState(key) == KeyState.DOWN);
    }

    public boolean isKeyHit(KeyCode key) {
        return (getKeyState(key) == KeyState.HIT);
    }
}
Here we’ve got the basic outline of our Keyboard class. We define a couple of things:
- A static final int KEY_COUNT, set to 255, defining the number of possible keys.
- A HashMap to contain the KeyCode and its current KeyState.
- A second HashMap containing KeyCodes and their Boolean flags – the raw pressed/released signal recorded by the event handlers.
- A Getter and Setter for the current KeyState and Key Flag respectively.
- 3 functions for pulling whether or not a particular key is current Up, Down or Hit.
- Lastly, the Constructor, which takes a BaseScene as its argument (we’ll see why in a moment).
Moving forward, let’s take a look at the constructor
public Keyboard(BaseScene scene) {
    states = new HashMap<KeyCode, KeyState>();
    flags = new HashMap<KeyCode, Boolean>();

    scene.setOnKeyPressed(new EventHandler<KeyEvent>() {
        public void handle(KeyEvent event) {
            setKeyFlag(event.getCode(), true);
            event.consume();
        }
    });

    scene.setOnKeyReleased(new EventHandler<KeyEvent>() {
        public void handle(KeyEvent event) {
            setKeyFlag(event.getCode(), false);
            event.consume();
        }
    });
}
First and foremost, we initialise the values of states and flags to their respective parameterised HashMaps. The following two blocks of code perform similar actions, so let’s look at the first block:
scene.setOnKeyPressed(new EventHandler<KeyEvent>() {
    public void handle(KeyEvent event) {
        setKeyFlag(event.getCode(), true);
        event.consume();
    }
});
We take the BaseScene we passed into the constructor and look to its setOnKeyPressed setter, which takes a parameterized EventHandler of type KeyEvent as its sole argument. We’re not going to need to reference the event handler again later, so we define it anonymously, inline.
public void handle(KeyEvent event) {
    setKeyFlag(event.getCode(), true);
    event.consume();
}
The function handle appears within EventHandler. Overriding the handle function allows us to define what should happen when the onKeyPressed and onKeyReleased events fire. We want to set the key flags for our Keyboard class, so we write the line setKeyFlag(event.getCode(), true); for onKeyPressed and change our true to false for onKeyReleased. The actual state processing happens a little later in a function we’re going to write named poll.
So, moving swiftly onward!
public synchronized void poll() {
    Boolean b;
    KeyState s;

    for (KeyCode k : flags.keySet()) {
        b = flags.get(k);
        s = states.get(k);

        if (b) {
            if (s == null || s == KeyState.UP) {
                states.put(k, KeyState.HIT);
            } else {
                states.put(k, KeyState.DOWN);
            }
        } else {
            states.put(k, KeyState.UP);
        }
    }
}
If you’re not registered to vote then this probably doesn’t apply to you. In the poll function, we take all of our Boolean flags and convert them to their new representations. The beauty of this function is that we don’t have to have any data in place where states are concerned. Starting at the beginning:
Boolean b; KeyState s;
We define a pair of temp variables – one for the Boolean of a KeyCode and one for the associated KeyState.
for (KeyCode k : flags.keySet())
This style of loop is Java’s answer to the for each found in many other languages. We tell Java that for every KeyCode in the flags map’s key set, we want to pull that key out into a variable named k.
b = flags.get(k); s = states.get(k);
Once we have the KeyCode in our k variable, we want to pull the values. We pull the Boolean flag into our b variable and the KeyState into the s variable. It’s worth noting here that when setting s we can end up with s being null. This is completely fine, in fact it’s useful information.
if (b) {
    if (s == null || s == KeyState.UP) {
        states.put(k, KeyState.HIT);
    } else {
        states.put(k, KeyState.DOWN);
    }
} else {
    states.put(k, KeyState.UP);
}
Why, you ask? Because if the key hasn’t been touched before then we know what to do with it! First and foremost, we want to know if the key has been flagged before: if it has we want to check which way to go:
if (s == null || s == KeyState.UP) {
    states.put(k, KeyState.HIT);
} else {
    states.put(k, KeyState.DOWN);
}
If we’ve never touched the key before, or it’s currently in its UP state, then we want to set its state to HIT. Otherwise, it’s being held down.*
} else { states.put(k, KeyState.UP); }
If the key flag is false, then we know the key is up.
*A Key is HIT before it is classed as DOWN. We have to check two frames before confirming it is being held DOWN. If we didn’t, we could introduce some weird behaviour.
Every time the poll function is called, we update the states. Then, we could do something like
getKeys().isKeyUp(KeyCode.A);
To check whether the A key is currently not pressed.
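The whole flag-to-state dance can be exercised end to end without booting JavaFX. Here is a stripped-down sketch of the same poll logic – String key names stand in for KeyCode and the class name is made up for the demo:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of Keyboard's flag-to-state logic, using String key
// names instead of JavaFX KeyCode so it runs without the JavaFX runtime.
public class KeyStateDemo {
    enum KeyState { UP, DOWN, HIT }

    static Map<String, Boolean> flags = new HashMap<>();
    static Map<String, KeyState> states = new HashMap<>();

    static void poll() {
        for (String k : flags.keySet()) {
            boolean b = flags.get(k);
            KeyState s = states.get(k);
            if (b) {
                // First polled frame down -> HIT; held after that -> DOWN
                states.put(k, (s == null || s == KeyState.UP) ? KeyState.HIT : KeyState.DOWN);
            } else {
                states.put(k, KeyState.UP);
            }
        }
    }

    public static void main(String[] args) {
        flags.put("A", true);   // key pressed
        poll();
        System.out.println(states.get("A")); // HIT on the first polled frame
        poll();
        System.out.println(states.get("A")); // DOWN while still held
        flags.put("A", false);  // key released
        poll();
        System.out.println(states.get("A")); // UP again
    }
}
```

Holding the press across two polls shows the HIT-before-DOWN behaviour described in the footnote above.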
ANYWHO. That pretty much covers the keyboard input. That was fun! Let’s get back on the scene.
MEANWHILE, IN BASESCENE.JAVA!
Firstly we need to make a minor alteration but an important one nonetheless – inside our BaseScene constructor. Add the following line to the end:
keys = new Keyboard(this);
By this point, I expect you to know what that’s doing so I won’t be explaining it.
Another of the functions that BaseScene will be responsible for is drawing the game scenes to the screen. JavaFX offers us something called an ImageView. ImageView is a node designed to output an image to the screen so we’re going to use it as if it were a full-screen quad.
private ImageView output;
Inside the constructor, add the following code –
output = new ImageView();

Renderer.setFunction(new IRenderAction() {
    @Override
    public void doRender(Graphics2D g) {
        draw(g);
    }
});

this.addItem(output);
And inside the update function add
output.setImage(Renderer.render());
And lastly throw a -
public abstract void draw(Graphics2D g);
- at the end of our class. In order of definition:
- We define an ImageView called output which we will use to send our rendered scenes to the screen.
- We construct the ImageView within the BaseScene constructor.
- We add the ImageView to the BaseScene’s group node.
- We set a function on the Renderer using an Interface to define the function anonymously.
- We tell the engine to render the image to the ImageView during the update loop.
Renderer you say? What’s this new-fangled tech I’m hearing of all of a sudden?
Caught me again, I really need to improve my subtlety. Renderer is our class responsible for shifting images to the screen. First, let’s define the interface –
public interface IRenderAction {
    void doRender(Graphics2D g);
}
In here we are simply providing the anonymous function for use by the renderer. Anonymous functions mean that any given scene can be drawing itself at once – only the scene will know what’s happening. This simply decouples the rendering process. However, I’ve implemented this with an incredible lack of elegance and I encourage you to improve on it. Moving swiftly on, let’s start with a code listing for renderer.
public class Renderer {

    private static Renderer singleton;

    private BufferedImage clearbuffer;
    private BufferedImage backbuffer;
    private Graphics2D device;
    private IRenderAction function;

    public synchronized static void setFunction(IRenderAction function) {
        get().setRenderFunction(function);
    }

    private void setRenderFunction(IRenderAction function) {
        this.function = function;
    }

    private synchronized static Renderer get() {
        if (singleton == null) {
            singleton = new Renderer();
        }
        return singleton;
    }

    private Renderer() {
        super();
        createClearBuffer();
        createBackBuffer();
        Engine.trace("Successfully initialised Renderer");
    }

    public synchronized static Image render() {
        get().beforeRender();
        get().doRender();
        return get().getBuffer();
    }

    private void beforeRender() {
        backbuffer.setData(clearbuffer.getRaster());
    }

    private void doRender() {
        function.doRender(device);
    }

    public synchronized static void dispose() {
        get().clearbuffer.flush();
        get().backbuffer.flush();
        get().device.dispose();
    }
}
Renderer is another singleton. *cue complaints*. So we’ll start by defining an instance variable. Continuing on we define two buffered images – one which we’ll put the rendered contents into and another which represents a clear screen. The last two fields are our Graphics2D instance to draw with and an IRenderAction for anonymous drawing.
Next down the list is the get function. Similar to the engine class, get returns the single instance of the renderer. I won’t labour this with another explanation of how or why this works – if you’re still unsure I encourage you to take a look at part 1!
private Renderer() {
    super();
    createClearBuffer();
    createBackBuffer();
    Engine.trace("Successfully initialised Renderer");
}
We also have a constructor – again it’s private so we can’t construct an instance elsewhere. Note the two functions createClearBuffer and createBackBuffer. We’ll explore the trace function a little later. Onward!

private void createBackBuffer() {
    backbuffer = new BufferedImage(Engine.WIDTH, Engine.HEIGHT, BufferedImage.TYPE_INT_ARGB);
    device = backbuffer.createGraphics();
}

private void createClearBuffer() {
    clearbuffer = new BufferedImage(Engine.WIDTH, Engine.HEIGHT, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = clearbuffer.createGraphics();
    g.setColor(Color.BLACK);
    g.fillRect(0, 0, Engine.WIDTH, Engine.HEIGHT);
    g.dispose();
}
Inside createBackBuffer we begin by instantiating the BufferedImage with the engine width and height variables as well as a defining the surface type as ARGB. Next we create an instance of Graphics2D from the backbuffer image – this is critical as this is how we’ll be drawing graphics, but it also defines where we draw them to. Inside createClearBuffer we again instantiate the BufferedImage – same variables. This time round, however, we want to set the content of the image and leave it at that so we create a new Graphics2D object, set the colour to black, fill a rectangle as big as the screen and then get rid of the Graphics2D object. Straight forward enough.
private void beforeRender() {
    backbuffer.setData(clearbuffer.getRaster());
}

private void doRender() {
    function.doRender(device);
}
Next up, before and do Render. The former gets the contents of the clearBuffer and shoves it into the backbuffer image. The latter is clever – the anonymous function we define gets called by function.doRender. The renderer will never know which scene it is actually rendering which makes it incredibly versatile.

private Image getBuffer() {
    ColorModel cm = backbuffer.getColorModel();
    boolean premultiplied = cm.isAlphaPremultiplied();
    WritableRaster raster = backbuffer.copyData(null);
    return SwingFXUtils.toFXImage(new BufferedImage(cm, raster, premultiplied, null), null);
}
getBuffer is an interesting function as it concerns the conversion of data suitable for one framework to data suitable for another. It’s incredibly inefficient and not something that should be even remotely necessary. Let’s get on with it anyway. Firstly, we need to pull some information from the BufferedImage, namely its ColorModel, a value telling us whether the alpha channel is premultiplied and a copy of its actual data – the raster. The Swing interop utilities (SwingFXUtils) include a handy toFXImage function. We can use this function to then return the correct type of image.
public synchronized static Image render() {
    get().beforeRender();
    get().doRender();
    return get().getBuffer();
}
Extending on from here we have the render function – this simply makes calls to the beforeRender, doRender and getBuffer functions – performing a full render loop. Notably, this function is synchronized static. This is because, if you recall, our scenes need to output something to their ImageViews – to do this, a simple call to Render is made. Neato!
The last two functions are setRenderFunction – which does what it says on the tin – and setFunction, a thread safe version of setRenderFunction. Note how setRenderFunction is private so that it can’t be accessed anywhere else. You may have noticed, in fact, that all of the functions which are not static are private – because we can’t instantiate the class we don’t want them exposed.
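To see why handing the renderer an anonymous function is so handy, here is the same idea in miniature with plain Java – a StringBuilder stands in for Graphics2D, and all of the names are invented for the demo:

```java
// The "renderer" never knows who is drawing; it just invokes whatever
// function was handed to it, exactly like Renderer and IRenderAction.
interface RenderAction {
    void doRender(StringBuilder surface); // StringBuilder stands in for Graphics2D
}

public class CallbackDemo {
    static RenderAction function;

    static String render() {
        StringBuilder surface = new StringBuilder(); // fresh "backbuffer" each frame
        function.doRender(surface);                  // anonymous draw call
        return surface.toString();
    }

    public static void main(String[] args) {
        function = surface -> surface.append("title scene");
        System.out.println(render()); // title scene
        function = surface -> surface.append("game scene");
        System.out.println(render()); // game scene
    }
}
```

Swapping the function swaps what gets drawn, with no change to the render loop itself – that is the decoupling the engine relies on.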
Ending on Beginnings – Back to the Engine
When we started we defined the basic portions of the engine class. Now we’re going to improve on this and add in some actual functionality. Let’s start with a few variables and helper functions.
private int fps = 0;
private boolean shouldTrackFrames = true;

private Stage staging;
private List<BaseScene> scenes = new ArrayList<BaseScene>();

public Stage stage() {
    return staging;
}

private void setupFrameTracker() {
    Timer fpsTime = new Timer();
    fpsTime.scheduleAtFixedRate(new TimerTask() {
        @Override
        public void run() {
            staging.setTitle("Phobos XRT: Singleton Experiment [FPS: " + fps + "]");
            fps = 0;
        }
    }, 1000, 1000);
}

public void quit() {
    Engine.trace("deinitialising...");
    System.exit(0);
}

public static void trace(final String msg) {
    System.out.print("XRT: ");
    System.out.println(msg);
}

public static final Timeline create(final KeyFrame frame) {
    return TimelineBuilder.create().cycleCount(Animation.INDEFINITE).keyFrames(frame).build();
}
You must be thinking “woaaaaah that’s a lot of code”. You’d be right. That’s pretty much the entirety of the engine class. We’ll run through this bit by bit, starting with code that explain itself.
public void quit() {
    Engine.trace("deinitialising...");
    System.exit(0);
}

public static void trace(final String msg) {
    System.out.print("XRT: ");
    System.out.println(msg);
}
Quit outputs a message and then calls exit(0) on the program, 0 being “terminated correctly”. Trace, a function we saw earlier, outputs a message prefixed with XRT to the standard output – I used this for debugging portions of the engine as I could add a prefix inside the message to show where it came from.

public void setScene(BaseScene scene) {
    staging.setScene(scene);
}

public void kill() {
    quit();
}

@Override
public Object clone() throws CloneNotSupportedException {
    throw new CloneNotSupportedException("No cloning a singleton!");
}

public static void Launch() {
    Application.launch(Engine.class);
}
These are, again, relatively self-explanatory. In setScene, we set the stage’s scene value. In kill, we make a call to quit (dat function overhead). In clone, we give the user a grilling because they shouldn’t be attempting to clone a singleton. Lastly, Launch makes a call to the Application class’s launch function – this starts off the engine and its subsystems.
private void setupFrameTracker() {
    Timer fpsTime = new Timer();
    fpsTime.scheduleAtFixedRate(new TimerTask() {
        @Override
        public void run() {
            staging.setTitle("Phobos XRT: Singleton Experiment [FPS: " + fps + "]");
            fps = 0;
        }
    }, 1000, 1000);
}
setupFrameTracker simply creates an fps counter. We do this by defining a Timer called fpsTime. We tell that timer to schedule a task at a fixed rate (scheduleAtFixedRate) and pass it a new TimerTask since we won’t need to reuse it. The TimerTask gives us another useful anonymous function, run, in which we will tell the engine to set the title of the window to display the fps. We will also instruct it to reset the fps counter for the next run. Lastly, we tell the TimerTask that it should run once a second, with its first run starting a second from now!

public void addScene(BaseScene scene) {
    scenes.add(scene);
    int sceneNum = scenes.size() - 1;
    setScene(scenes.get(sceneNum));
}

public void removeScene() {
    int sceneNum = scenes.size() - 2;
    if (sceneNum < 0) {
        System.exit(0);
    } else {
        setScene(scenes.get(sceneNum));
        scenes.remove(scenes.size() - 1);
    }
}
addScene and removeScene are a little bit more complicated. Starting with addScene – we first add the BaseScene ‘scene’ onto our list. After that, we calculate its position (which will ALWAYS be the end of the list) using size – 1. Then we make a call to setScene to finish off the job by setting the current scene on the stage.
Remove scene, on the other hand, requires a little bit more in the way of logistics. Firstly, we must calculate the index of the screen before the current screen – we do this with size – 2. Next, we check to see if the number we have calculated is less than zero. In the event it is, there will be no scene to replace this one with and so we should exit – once again we terminate with exit code 0. If there’s another scene to be worked with we should set it on the stage – its index will be the value of sceneNum – and then remove the top-most scene.
It’s important to understand that, whilst awkward, this process must be carried out carefully else the JVM will start throwing errors at us like a monkey throws faeces.
public static final Timeline create(final KeyFrame frame) {
    return TimelineBuilder.create()
                          .cycleCount(Animation.INDEFINITE)
                          .keyFrames(frame)
                          .build();
}
The create function creates a timeline that will run indefinitely, playing our single update frame each cycle.
This brings us to our endgame: the start function. This is, quite possibly, the most important part of the engine class – it gets everything going and throws us into glorious 2D fantastimazing stuff. I really didn’t have something for that…
@Override
public void start(final Stage primaryStage) throws Exception {
    //singleton = this;
    // TODO Auto-generated method stub
    //1) perform renderer initialisation routines
    staging = primaryStage;
    stage().setTitle("Phobos XRT: Singleton Experiment");

    final Duration oneFrameAmt = Duration.millis(1000/30);
    KeyFrame oneFrame = new KeyFrame(oneFrameAmt, new EventHandler<ActionEvent>() {
        @Override
        public void handle(javafx.event.ActionEvent event) {
            //update scene stuff
            if(scenes.size() > 0) {
                scenes.get(scenes.size()-1).update();
            }
            fps++;
        }
    }); // oneFrame

    create(oneFrame).play();
    if (shouldTrackFrames) setupFrameTracker();
    staging.show();
}
The very first thing we must do is set the value of staging to the value passed in the start function – primaryStage. We also set the title, but that’s not really of any huge importance. We define a Duration variable equal to a 30th (1000/30) of a second – this limits the engine to running at a maximum of 30 frames a second, but feel free to experiment with different timing values.
The next portion we’ll dissect. We start with a Keyframe –
KeyFrame oneFrame = new KeyFrame(oneFrameAmt...
When defining the frame we set its duration to that of oneFrameAmt which we previously defined as a 30th of a second (or whatever playful number you choose to use, you devilishly adventurous programmer you).
new EventHandler<ActionEvent>
The KeyFrame takes an EventHandler too. The event in question is what happens when an action is performed – in this case, the action is that every 30th of a second the engine will attempt to run the handler function defined – if we’re not processing something else still, that is.
@Override
public void handle(javafx.event.ActionEvent event) {
    //update scene stuff
    if(scenes.size() > 0) {
        scenes.get(scenes.size()-1).update();
    }
    fps++;
}
It gets easier, though. Inside the handle function we only want to do 2 things, really. Firstly, we need to ensure that we don’t try to update anything that doesn’t exist, so we check that the size of scenes is at least 1 by asking if scenes.size() > 0. If it is, we run an update on the topmost scene. Either way, at the end of this function we increment the fps counter – this will top out at 30fps if we’re not updating any scenes.
create(oneFrame).play();
if (shouldTrackFrames) setupFrameTracker();
staging.show();
Last, but not least, we create the timeline for our frame and play it! That sets the ball in motion and really gives the engine the kick it needs to start updating (it’s funny because… never mind). All we do after this is decide whether to actually run the fps counter and then show the stage.
And that’s that.
Wasn’t hard, was it?
At this point we’ve covered the basic components our engine is going to need to run. We implemented the basic scene class and a renderer to handle the graphics output. We implemented the base engine class to handle the scenes and the application life cycle. We looked at singletons again, making notes of some pros and cons.
Next time, we’ll implement a menu system and some menus, as well as a really basic “game” on the engine. It’s gonna be fun on a bun! See you then!
Full Code Listing.
In addition to providing a lengthy listing, you can have a neatly packaged jar. I'm just that lovely. The code contained within the jar is the full code listing for this tutorial. If you run it, you should be presented with a black 800x450 window.
sandeep gunda (2,093 Points)
printing each element in a list -flask
For a Flask app, I am trying to print each item in a list using template code inside the HTML. There is something wrong with the code, and the Treehouse editor is telling me that not all elements in the list are printed.
Any help is much appreciated!
from flask import Flask, render_template
from options import OPTIONS

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('options.html', options=OPTIONS)
<ul>
  {% for item in options %}
    <li>item['name'] </li>
  {% endfor %}
</ul>
1 Answer
Luis Onate (13,222 Points)
Perhaps try wrapping item['name'] in {{ }}. Also, depending on whether you have a list of individual items or a list of dict items, you may be printing the wrong thing. If options is a list of strings, I believe you would just want {{item}}. However, if it's a list of dicts that each have a 'name' property, {{item['name']}} should do the trick.
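In other words, assuming options is a list of dicts that each have a 'name' key (as the Python code above suggests), the template loop would become:

```html
<ul>
  {% for item in options %}
    <li>{{ item['name'] }}</li>
  {% endfor %}
</ul>
```

Without the double braces, Jinja treats item['name'] as literal text rather than an expression to evaluate.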
Let me know if this helps. | https://teamtreehouse.com/community/printing-each-element-in-a-list-flask | CC-MAIN-2020-40 | refinedweb | 167 | 72.66 |
Re: os.system question
- From: stanleyxu <no_reply@xxxxxxxxxxxxx>
- Date: Fri, 28 Dec 2007 20:32:44 +0100
kyosohma@xxxxxxxxx wrote:
On Dec 28, 12:57 pm, stanleyxu <no_re...@xxxxxxxxxxxxx> wrote:
To note, this problem occurs when debugging the script in the IDLE editor.
When I double click on my_script.py, all outputs will be printed in one
console.
--
___
oo // \\
(_,\/ \_/ \ Xu, Qian
\ \_/_\_/> stanleyxu2005
/_/ \_\
Why are you using os.system for these commands in the first place? You
should be using the os and shutil modules instead as they would be
more cross-platform friendly.
Something like this:
# untested
for new_folder, old_folder in folder_array:
    os.mkdir(new_folder)
    shutil.copytree(old_folder, new_folder)
Adjust the path as needed in the mkdir call.
See shutil's docs for more info:
And here's some folder manipulation docs:
By the by, the subprocess module is supposed to be used in place of
the os.system and os.popen* calls:
Mike
Thanks Mike,
you have provided another option.
But my question has not been answered yet. The reason why I use os.system() is that I want to avoid accidental file deletion when writing a script. My real script looks like:
# 1. Function to execute a command in DOS-console
def execCommand(cmd):
    if DEBUG_MODE:
        print 'DOS> ' + cmd;
    else:
        os.system(cmd);

# 2.1 Creates temp folder. Removes it first, if it exists.
if os.path.exists(tmp_folder):
    execCommand('RD "' + tmp_folder + '" /S /Q');
execCommand('MD "' + tmp_folder + '"');

# 2.2 Copies all files to the temp folder, that are going to be put in package.
for source_folder, dest_folder in folders_array:
    if not os.path.exists(dest_folder):
        execCommand('MD "' + dest_folder + '"');
    execCommand('XCOPY "' + source_folder + '" "' + dest_folder + '" /Y');
The benefit is that, when I set DEBUG_MODE=True, I can see what will be executed. So that I can make sure that my script will not delete any other important files by accident.
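For what it's worth, the dry-run idea from the script above can be kept while still using shutil instead of shelling out to the DOS console. The sketch below uses made-up folder names; note that shutil.copytree creates the destination folder itself (and fails if it already exists), so no separate MD step is needed before the copy.

```python
import os
import shutil

def build_steps(tmp_folder, folders_array):
    """Mirror the DOS commands as (description, action) pairs."""
    steps = []
    if os.path.exists(tmp_folder):
        steps.append(('RD ' + tmp_folder, lambda: shutil.rmtree(tmp_folder)))
    steps.append(('MD ' + tmp_folder, lambda: os.makedirs(tmp_folder)))
    for source_folder, dest_folder in folders_array:
        # copytree creates dest_folder itself and fails if it already exists
        steps.append(('XCOPY %s -> %s' % (source_folder, dest_folder),
                      lambda s=source_folder, d=dest_folder: shutil.copytree(s, d)))
    return steps

def run(steps, debug=True):
    for description, action in steps:
        if debug:
            print('DOS> ' + description)  # dry run: only show what would happen
        else:
            action()

run(build_steps('tmp_pkg', [('src', 'tmp_pkg/src')]))
```

In debug mode nothing touches the filesystem; flipping debug to False executes the recorded actions in order.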
#include <rtt/DataPort.hpp>
Inherits RTT::DataPortBase< T >.
Use connection() to access the data object. If the port is not connected, connection() returns null.
Definition at line 130 of file DataPort.hpp.
Construct an unconnected Port to a readable data object.
Definition at line 142 of file DataPort.hpp.
Get the current value of this Port.
Definition at line 149 of file DataPort.hpp.
Get the current value of this Port.
result is unmodified if !this->connected()
Definition at line 170 of file DataPort.hpp.
Create a new connection object using a buffered connection implementation.
Reimplemented in CorbaPort.
Definition at line 187 of file PortInterface.

Get the data object to read from.
The Task may use this to read from a Data Object connected to this port.
Definition at line 106 of file DataPort.hpp.
Get the data object to write to.
The Task may use this to write to a Data Object connected to this port.
Definition at line 113 of file DataPort.hpp.
Change the name of this unconnected Port.
One can only change the name when it is not yet connected.
Definition at line 50 of file PortInterface.cpp.
References PortInterface::connected(). | http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.6.x/api/html/classRTT_1_1ReadDataPort.html | crawl-003 | refinedweb | 194 | 54.59 |
post the class then we can discuss how to implement the >> operator.
>> post the class then we can discuss how to implement the >> operator.

aName(char *aSurname, char *aFirstname);

void set(char *aSurname, char *afirstname)
{
    if (aSurname == "")
        surname = "";
    else
        surname = aSurname;

    if (afirstname == "")
        firstname = "";
    else
        firstname = afirstname;
};
Please use code tags. You can find what you're looking for here:
Warning! This line
surname=aSurname;
just copies the pointer not the content!!!
>>surname=aSurname;
First you have to allocate space for surname then call strcpy() to copy it
surname = new char[strlen(aSurname)+1];
strcpy(surname, aSurname);
The two friend functions must return a reference to the istream and ostream objects. Here's a correction:
#include <iostream>
#include <string>
#include <cstring>
using namespace std;

#pragma warning( disable: 4996) // only needed for VC++ 2008 compiler

class aName
{
    char* surname;
    char* firstname;
public:
    aName() { surname = NULL; firstname = NULL; }

    aName(char *aSurname, char *aFirstname)
    {
        surname = NULL;
        firstname = NULL;
        set(aSurname, aFirstname);
    }

    void set(char** name, const char* aName)
    {
        if (*name != NULL)
            delete[] *name;
        *name = NULL;
        if (aName != NULL)
        {
            *name = new char[strlen(aName)+1];
            strcpy(*name, aName);
        }
    }

    void set(const char *aSurname, const char* afirstname)
    {
        set(&surname, aSurname);
        set(&firstname, afirstname);
    }

    friend istream& operator>>(istream& in, aName& nm);
};

istream& operator>>(istream& in, aName& nm)
{
    // TODO: complete this function.
    return in;
}

int main()
{
    aName n;
    cin >> n;
}
[edit]Oops! I just noticed I did half your homework:( | https://www.daniweb.com/programming/software-development/threads/203283/overloading-operator | CC-MAIN-2016-50 | refinedweb | 216 | 54.52 |
Is there a way to say on any event fire
$('#foo').bind('click', function() {
alert($(this).text());
});
I am trying to test a piece of code for a certain event and it's not going in there. I just want it to fire for ANY event.
$('#foo').bind('ANY', function() {
alert($(this).text());
});
There's no shorthand for listening to all events.
The closest thing you can get out of the box would be specifying them manually:
$('#foo').bind('blur change click dblclick focus focusin focusout hover keydown ...', function() {
    alert($(this).text());
});
Note that plugins may fire their own events, perhaps namespaced. You can't listen to these events without knowing them and manually specifying them.
You can specify every event separated by spaces in the first argument of the bind function. This will bind to all events. To view all events covered by jquery visit the documentation.
I don't think that is possible, but you can use Firebug to view all events that are bound to a page element.
Open up Firebug and then in the console write the following:
$("#foo").data("events");
That should display any event that is bound to the #foo element. | http://www.dlxedu.com/askdetail/3/fb5320320c7e925e4a6d002474453293.html | CC-MAIN-2018-43 | refinedweb | 194 | 77.13 |
A linear regression line is of the form w1x+w2=y and it is the line that minimizes the sum of the squares of the distances from each data point to the line. So, given n pairs of data (xi, yi), the parameters that we are looking for are w1 and w2, which minimize the error

E(w1, w2) = sum over i of (w1*xi + w2 - yi)^2
and we can compute the parameter vector w = (w1, w2)^T as the least-squares solution of the following over-determined system (one equation per data point):

xi*w1 + w2 = yi, for i = 1, ..., n
Let's use numpy to compute the regression line:
from numpy import arange,array,ones,linalg
from pylab import plot,show

xi = arange(0,9)
A = array([ xi, ones(9)]) # linearly generated sequence
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]
w = linalg.lstsq(A.T,y)[0] # obtaining the parameters

# plotting the line
line = w[0]*xi+w[1] # regression line
plot(xi,line,'r-',xi,y,'o')
show()

We can see the result in the plot below.
You can find more about data fitting using numpy in the following posts:
from numpy import arange,array,ones#,random,linalg
from pylab import plot,show
from scipy import stats

xi = arange(0,9)
A = array([ xi, ones(9)]) # linearly generated sequence
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]

slope, intercept, r_value, p_value, std_err = stats.linregress(xi,y)

print 'r value', r_value
print 'p_value', p_value
print 'standard deviation', std_err

line = slope*xi+intercept
plot(xi,line,'r-',xi,y,'o')
show()
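If numpy/scipy aren't handy, the same slope and intercept can also be computed directly from the closed-form least-squares formulas w1 = cov(x, y) / var(x) and w2 = mean(y) - w1 * mean(x). Here is a plain-Python sketch using the same data as above:

```python
def linreg(x, y):
    """Least-squares fit of y = w1*x + w2 via the closed-form solution."""
    x = list(x)
    n = float(len(x))
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # w1 = cov(x, y) / var(x), w2 = mean_y - w1 * mean_x
    cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    w1 = cov_xy / var_x
    w2 = mean_y - w1 * mean_x
    return w1, w2

x = range(9)
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]
slope, intercept = linreg(x, y)
print(slope, intercept)  # roughly 0.72 and 19.19, matching the numpy fit
```

This is exactly what lstsq and linregress compute for the single-variable case; their advantage is that they generalize to multiple regressors and also report the extra statistics.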
Possible Bugs: x_lst is unused and w[] is undefined?
Thanks Steve, I fixed it. I changed the code at the end to make it consistent with the notation.
Another method is to use scipy.stats.linregress()
In the particular case when y=w1*x+w2 you can use both linregress and lstsq as shown above.
When you want to regress to multiple x-s, e.g.
y=w1*x1+w2*x2+w3*x3 you need to use lstsq.
What is the r_value and the p_value in the second program? What do they represent?
r_value is the correlation coefficient and p_value is the p-value for a hypothesis test whose null hypothesis is that the slope is zero.
For more information about correlation you can find my last post:
And you can find more about p-value here:
how can i get the sum squared error of the regression from this function ??
You can compute the mean square error as follows:
err=sqrt(sum((line-xi)**2)/n)
thank you, and what's n in this case? the number of rows in the 2D list ??
n is the number of samples that you have: n = len(y).
Btw, there's a mistake in my last comment, the squared error is err=sqrt(sum((line-y)**2)/n)
Is there a way to calculate the maximum and minimum gradient, given multiple pairs of (x,y) measurements at each point e.g. repeated trials? Thanks!!
The following function is quite nice: scipy.stats.linregress
It provides the p-value and r-value without extra work.
Awesome! just what I was looking for.
I stumbled upon this fine piece of work, and it seemed to work just fine.
I although came across a problem, once the slope (from the updated code) turned either negative or below zero which meant that the "line" list became empty. To solve this, I simply did the following instead which solved my issue:
line = A[0]+intercept
Update:
While the data now provided is correct, I ran into yet another issue. When I tried to plot the line for a negative coefficient, it didn't plot the slope as going downwards, but rather upwards. I couldn't get to solve the issue so instead, I ended up using the following library:
coefficients = np.polyfit(xi, y, 1)
polynomial = np.poly1d(coefficients)
ys = polynomial(xi)
plot(xi, ys, 'r-', xi, y, 'o')
show()
Source:
* Note that this requires that you: import numpy as np
I will investigate this issue further and hopefully find away around this issue as I like the extra variables which the original code from this blog post provides. Until then, I will use the two libraries together to avoid any further issues, as I am still interested in the "r"- and "p"-values as well the standard error.
Alright, so I finally managed to find a solution which combines the best of both worlds basically hassle-free. The approach is basically the same as from the updated original code:
slope, intercept, r_value, p_value, std_err = stats.linregress(xi,y)
Instead of calculating the "line", I managed to solve my issues as explained above doing the following:
polynomial = np.poly1d([slope, intercept])
line = polynomial(xi)
plot(xi, line, 'r-', xi, y, 'o')
show()
And there you have it; a solution which also works when the coefficient is below 1! This also means, that you no longer have to use the "A" matrix as implemented in the original code; which doesn't seem to be used anyhow.
Thanks for the great work, to the original author!
Hi David, at the moment I'm using the implementation provided by sklearn, maybe you could find it helpful:
How about a 2D linear regression? Can you please suggest what's the easiest way to perform the same analysis on a 2D dataset?
Hi Adviser, you could try the linear regression module provided by sklearn. You can find the link some comments above.
Is there an easy way to plot a regression line that would be based only part of the y data. For example plot the whole y but plot regression line only for:
[20.5, 21.5, 22, 23, 23, 25.5, 24]
It should be very simple, you create your shorter version of y and you apply the regression to this data. Then, you plot the regression line and the the points of the original data as showed in the post.
std_err is not standard deviation, but the error of the estimated slope!
Is it necessary to add "ones(9)"? I usually have just the independent variable x, and the dependent one y,... I don't know how, why and when should I add ones column to my independent var (x). regards
Just one more,... how to predict the new data set with my new set of points? Y use Xtrain and Y train, the model W=inv(Xtrain.T*Xtrain)*Xtrain.T*Ytrain , with np.dot, of course... so when predicting y use Ypred = Xvalid*W, ... but it's not working to me :(
Hi Javier, given your two questions I'd recommend you to check
This takes care of the "ones" and the prediction for you. | http://glowingpython.blogspot.it/2012/03/linear-regression-with-numpy.html | CC-MAIN-2017-30 | refinedweb | 1,132 | 63.49 |
This article is an entry in our Windows Azure Developer Challenge. Articles in this sub-section are not required to be full articles so care should be taken when voting. Create your free Azure Trial Account to Enter the Challenge.
Building on Scott Hanselman's excellent post regarding hosting a two day virtual conference in the cloud for under $10, why not make the ability to host a conference and stream it live available to everyone? We're going to use the same principle as dotnetConf, but build on it so that anyone can create their own conference with speakers and presentations, then record and stream to a live audience. We'll also include built-in chat to allow interaction with the presenter, membership, search, and plenty of other useful features to make the site easy to use.
Website: The YouConf web site is publicly available at
Source code: All source code, including history, is available at my GitHub repository -. I tagged the code at the end of each challenge so you can view the code as it was at the end of each stage. In addition, I've uploaded a copy of my solution to CodeProject just in case you can't get to GitHub - Download youconf-final.zip
Video: Following the competition, I was fortunate enough to be interviewed by Brian Hitney and Chris Caldwell for a Microsoft DevRadio episode. For an overview of the whole competition, the YouConf solution, and my thoughts on Azure, check out the video (34 min)
When I visited the dotNetConf site and saw Scott Hanselman's blog post about it, I thought that this could be useful to a much wider audience, particularly smaller groups such as .Net User groups who might like to record and stream their presentations. Having seen the Azure developer contest brief, I figured it would be a good chance to learn more about Azure, MVC, and the .Net technology stack. Hence my entry into this competition.
I hope that this article can serve as a guide to others, and help all developers get started with the Azure Platform. I also aim to provide solutions to common issues one might face when getting started with Azure, and show how I dealt with them.
Azure allows me to focus on what I do best - development - and not have to worry about the intricacies of infrastructure or hosting concerns. It provides a robust, scalable platform which allows me to scale-up my app/site as needed, and stay in control of costs through increased visibility. With Azure and automated deployments from GitHub I've been able to streamline the development process so I can make and deploy changes rapidly, with much less overhead than in than in the past. Finally, it provides a full capability set to help grow my applications, such as cloud services, virtual machines, storage, and more.
The contest involves five separate challenges, and this article contains a separate section for each challenge. Each section contains:
Note that in addition to the sections above, I've recorded daily progress as I go, in the History section of this article. For more detail on some of the daily items I covered, please read that section as I'll be referring to it in other parts of this article.
The following diagram gives a high-level overview of the components involved in the final version (i.e. at the end of challenge five) of the YouConf web application.
As you can see, the solution takes advantage of a number of Azure capabilities. This is not an exhaustive list, but highlights the main components and their relevance to each individual challenge.
A few additional points to note:
From here on, we'll go over the individual challenges, and how I completed each one. If you have any questions or comments, please feel free to point them out using the comments section. Let's begin!
For this challenge I built the YouConf website, and deployed it to Azure using automated deployments from GitHub. The application had a number of initial goals, which I managed to achieve as follows:
By the end of this challenge the site was up & running in Azure. Here's a screenshot of YouConf homepage:
Note - If you'd like more details on how I completed some of the tasks for challenge two, please have a look through the History section.
The rest of this section explains how I achieved the goals above; please follow along and see how I went!
The first thing I needed was a website. I opened up Visual Studio 2012, and followed along with the following tutorial on how to build an MVC4 website, naming my project/solution YouConf. Note that since I'm not using SQL for this part of the competition I left the membership parts out (by commenting out the entire AccountController class so it doesn't try to initialize the membership database). Whilst this means that users won't be able to register, they will still be able to create and edit conferences, it's just that they will all be publicly available for editing. More detail on this is in my daily progress report.
Once I had it building locally, the next step was to get it into Azure. To do this, I went to the Azure Management Portal, selected the Web Sites node, and hit the New button. I wanted the url to start with YouConf, so I entered youconf in the url field, and selected West US as the region since it's closest to me (I'm in New Zealand!) as per the screenshot below:
Once I'd hit the Create site button I had a new site up & running just like that!
Next up I wanted to deploy to it, which required me to download the publish profile and import it into Visual Studio. To do so, I clicked on my YouConf site in the Azure Management Portal, then selected the Download the publish profile link. This opened up a window with the publish profile details, which I saved locally.
I then right-clicked on my YouConf web project in Visual Studio, and hit Publish. In the Publish dialog, I selected Import publish profile, and selected the .publishsettings file I'd saved earlier. I validated the connection using the button, chose Web Deploy as the publishing option, hit Next, and in the Settings section chose Release as the Configuration. I hit Next again, then hit Publish, and after about a minute I was able to browse my site in Azure. Now wasn't that easy?!
Next up was getting source-control in place so that it would deploy automatically to Azure. I chose to use Git, mainly because I haven't used it before and thought this would be a good opportunity to learn about it. I also wanted to be able to have a publicly-available source repository available for anyone to view, and having seen GitHub being used for this by others, thought I'd give it a go. Make no mistake, I love TFS, and use it on every other project, but for this I really wanted to push myself (although Azure makes it so easy that this wasn't quite the case as you'll see).
In order to get this working, I downloaded the Git explorer from, and setup a local youconf repository. I committed my changes locally, then synced my local changes to Git using the explorer. My Git repository is available at if you'd like to see the code.
Rather than pushing local changes directly to Azure, I wanted them first to go to GitHub so they'd be visible to anyone else who might want to have a poke around. To accomplish this I followed the steps in this article under the heading "Deploy files from a repository web site like BitBucket, CodePlex, Dropbox, GitHub, or Mercurial".
*IMPORTANT* After publishing my changes to Git I realised that I'd included all of my publish profile files as well, which contained some sensitive Azure settings (not good). To remove them, I did a quick search and found an article describing how to purge files from Git history, and ran the commands it suggested in the Git shell.
I also added an entry to my .gitignore file so that I wouldn't accidentally check in anything in the publish profile folder again. It took next to no time, and left me set to focus on development, as I'd set out to do from the beginning.
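For reference, a .gitignore entry along these lines does the trick. The *.publishsettings pattern matches the file type mentioned earlier; the folder name is illustrative, and depends on where Visual Studio stores your publish profiles.

```
# keep Azure publish settings out of source control
*.publishsettings
PublishProfiles/
```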
From here on, most of my time was spent on building the functionality of the web app, which as I mentioned earlier was an MVC 4 web application. I started building some basic forms for creating/editing/deleting a conference, and was faced with my next challenge - where to store data? I wanted persistent storage with fast access, and an easy API to use. Since SQL wasn't available (till challenge 3), Azure Table Storage seemed like the logical option. See this daily progress update for more on why I chose this.
Azure Table Storage, so many options....
As per this daily progress update, I got set up and read about Partition and Row Keys, and found a very helpful article on the subject. There are plenty of tutorials available about Azure Table storage, and I created a table by following them.
Azure allows you to use the storage emulator when developing locally, and then update your settings for Azure so that your app will use Azure Table storage when deployed to the cloud. I added the following line to my appsettings in web.config to tell Azure to use the development storage account locally:
<add key="StorageConnectionString" value="UseDevelopmentStorage=true" />
I created a YouConfDataContext class (link to GitHub) and accessed this connection string using the following code:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
Things seemed to be going well, but once I tried to save a conference I soon realized that I didn't quite understand table storage as well as I'd thought! Basically I planned to store each conference, including speakers and presentations, as a single table entity, so that I could store/retrieve each conference in one go (as you would in a document-oriented database). Table entities only support simple property types, though, so I ended up serializing each conference to JSON and storing it in a single string property of a custom entity:
public class AzureTableEntity : TableEntity
{
public string Entity { get; set; }
}
An advantage of this approach is that it makes it easy to visualize conference data as well. To view my data in the Azure storage emulator, I downloaded the wonderful Azure Storage Explorer and viewed my Conferences table as shown below (note that I can see each conference serialized as JSON easily):
So now I had my data being stored using Azure Table Storage locally, how could I get it working when deployed in the cloud? I just had to set up a storage account and update my Azure cloud settings accordingly.
I created a storage account named youconf, then copied the primary access key. I then went to the websites section, selected my youconf site, clicked Configure, then added my StorageConnectionString to the app settings section with the following value:
DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey]
Now when I deployed to Azure I could save data to table storage in the cloud.
Note that I ran into an issue when updating a conference's hashtag, as this is also used for the rowkey in Azure Table storage, and in order to make an update I first had to delete the existing record, then insert the new one (with the new hashtag/rowkey). See this daily progress report for more details.
As mentioned earlier, most of my time was spent on working with MVC and finding/fixing issues with the site as they arose, rather than having any issues with Azure itself. The following section outlines some of the application highlights, and how they address the goals described in the introduction. Feel free to go to the YouConf site and create your own conference if you'd like to give it a try.
The conference listing page - - lists available conferences, and allows users to drill into the conference/speaker/presentation details if they wish to. It also provides users with an SEO-friendly url for their conference, based on their chosen conference hashtag. In order to achieve this I had to add a custom route for conferences which automatically routed the request to the Conference Controller when applicable, and also a route constraint to ensure that this didn't break other controller routes. The code for adding my custom route is below (from the /App_Start/RouteConfig.cs file - abbreviated for brevity):
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "ConferenceFriendlyUrl",
url: "{hashTag}/{action}",
defaults: new { controller = "Conference", action = "Details" },
constraints: new { hashTag = new IsNotAControllerNameConstraint() }
);
}

and the end result:
I used a number of techniques to help make it easier for those running conferences to maintain them. For example:
Both of these involved obtaining code from Google/Twitter which created an embedded widget on the conference live page, based on the hangout id/twitter widget id associated with the conference. The dotNetConf site uses Jabbr for chat, however, I thought that I'd try and go for something that allowed for chat to be on the same page as the video feed. One of the commenters on my article suggested Twitter, which seemed like a good choice as it's already so widely used. In the next stage I might also look at using SignalR for this if time permits.
The image below shows an example of a page with embedded video and chat (note that I used the hangout id for one of the dotNetConf videos for demonstration, and had to shrink the screenshot to fit into the CodeProject window). To push a new video url out to viewers watching the live page, I added a SignalR hub:
public class YouConfHub : Hub
{
public Task UpdateConferenceVideoUrl(string conferenceHashTag, string url)
{
//Only update the clients for the specific conference
return Clients.All.updateConferenceVideoUrl(url);
}
public Task Join(string conferenceHashTag)
{
return Groups.Add(Context.ConnectionId, conferenceHashTag);
}
}

My first thought was that I could then update the hangout id from my controller with code like this:
var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
context.UpdateConferenceVideoUrl("[conference hashtag]", "[new hangout id]");
Sadly, it turns out that you can't actually call methods on the hub from outside the hub pipeline. You can, however, call methods on the Hub clients and groups. So, in my conference controller's edit method, I was able to notify the clients in the specific conference group that they should update their url.
Responsive design is all the rage these days, and fair enough too given the proliferation of web-enabled devices out there. I won't spend too long on this, except to say I've implemented a number of specific styles using media queries to make the bulk of the site look good on desktop, tablet, and mobile device resolutions. There's a huge amount of information out there about responsive design, and I found articles by the Filament Group and Smashing Magazine very helpful in both understanding and fixing some of the issues. An example of one of my media queries for device widths below 760px (mobiles or small tablets) is below:
/********************
* Mobile Styles *
********************/
@media only screen and (max-width: 760px) {
.main-content aside.sidebar, .main-content .content {
float: none;
width: auto;
}
.main-content .content {
padding-right: 0;
}
}
I've included a screenshot below to show the homepage on a mobile device. It looks good, but there's still work to do for future challenges...
For Azure websites, you're only charged for outbound traffic, hence it makes sense both financially, and for usability, to reduce the amount of bandwidth your site consumes. I used a number of techniques to achieve this:
For example, in the code below I try to load the jQuery library from the Microsoft Ajax CDN if possible, but if it's not available, fallback to a local copy, which has already been minified to reduce bandwidth:
<script src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.9.1.min.js"></script>
<script>window.jQuery || document.write('<script src="/Scripts/jquery-1.9.1.min.js"><\/script>');</script>
I do the same for other CSS/Javascript too - see my code on GitHub for examples.
Being able to log and retrieve exceptions for further analysis is key for any application, and it's easy to get good quality logging setup in Azure, along with persistent storage of the logs for detailed analysis.
I've written quite a large article up on how I implemented logging in this daily progress report, so please see it for further technical details. In brief, I used Elmah for logging errors, with a custom logger that persisted errors to Azure Table storage. This means I can view my logs both on the server, and locally using Azure Storage explorer. Awesome!
As with logging, the bulk of the implementation details are included in this daily progress report. I managed to get the blog up & running without too much fuss, but thought I'd better not move all my content there, as it would mean having to cross-post content, and possibly make it harder to assess my article if content was in different places. Here's a screenshot:
It's been quite an adventure this far, but I think I've managed to complete what I set out to achieve for challenge two, namely getting the site up & running in Azure with source control integration, and delivering the main features it was required to. I've used table storage both in the emulator and the cloud, and become much more familiar with the Azure platform as a whole. I've also gone through the process of setting up a blog, which was even easier than I thought it would be.
Finally - where are my tests? You may have noticed a distinct lack of unit tests, which I'm ashamed to say is at least partially intentional. Thus far my api has been changing so often that I felt adding tests would slow me down more than it was worth. I know this would drive TDD purists insane, but in my experience it's sometimes helpful to wait till one's api is more stable before adding tests, particularly when it comes to testing controllers. In addition to this, I'm going to be swapping out my table-based data access layer for SQL in challenge 3, so things are likely to change a lot more throughout the application. I will, however, at least add tests for my controllers at the start of challenge 3, so that I can verify I haven't broken anything once I start adding SQL membership etc.
So what's next?
At the end of challenge two, there were a number of additional features I had in mind:
The goal of this challenge is to learn as much as possible about SQL Azure, and use it to power the YouConf website. My initial plans for using SQL were as follows:
By the end of this challenge, the solution will make use of the following Azure features:
I've provided details of the discoveries I made, and issues encountered, in the sections below. As with challenge two, I've been recording daily progress as I go, in the History section of this article. For more detail on the daily items I covered, please read that section as I'll be referring to it in other parts of this article. Note that in order to help those viewing this article for the first time get up-to-speed, I've left the daily progress reports for challenge two intact. I've also added a separate history section for Challenge three - click here to go straight to it.
SimpleMembership comes baked into MVC 4, making it really easy to get started with. It uses SQL to store membership data, and with the MVC 4 Internet Application template is automatically set up to store data using SQL Server LocalDB.
If you recall from challenge 2, I commented out the entire AccountController class as I didn't want it to be used, since I wasn't implementing Membership. I left the /views/account/*.cshtml files alone, however, as I knew I'd need them for this part. This time, I uncommented the AccountController code again and dived on in. The template initializes the membership database as follows:
using (var context = new UsersContext())
{
if (!context.Database.Exists())
{
// Create the SimpleMembership database without Entity Framework migration schema
((IObjectContextAdapter)context).ObjectContext.CreateDatabase();
}
}
WebSecurity.InitializeDatabaseConnection("DefaultConnection",
"UserProfile", "UserId", "UserName", autoCreateTables: true);
<connectionStrings>
<add name="DefaultConnection"
connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-YouConf-
20130430121638;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\
aspnet-YouConf-20130430121638.mdf" providerName="System.Data.SqlClient" />
</connectionStrings>
This tells SimpleMembership to use the DefaultConnection connection string to access the db, and the UserProfile table to store user data. I didn't want the localdb file to be checked in to source control, so I added an entry to my .gitignore file to exclude the whole /App_Data folder from Git.
I won't go into too much detail here, but please see my daily progress update for details on how I went about setting up external authentication for both Microsoft and Google, and took advantage of the OAuth and OpenId features that come with SimpleMembership. Please also see this article, which covers the whole topic of external authentication really well.
The end result is a login screen that looks like this:
Now that authentication is working locally, how about getting it working with a real SQL Azure database?
As I mentioned earlier, when I'm developing locally I can use localdb for my database (I could also use SQL Server, SQL Express, or SQL CE if I really wanted to). However, when deploying to Azure I need access to a real database in the cloud, so before pushing my source code to GitHub and triggering a deployment, I went about setting one up. With the free Azure trial, you get to create one database, which is all I need at this stage. I completed the fields as below:
So now I had a new database named YouConf - easy as pie!
I wanted to have the connection string available to the website when deployed to Azure, so I needed to add the connection string for my cloud database to my website's configuration in the management portal. With that done, the YouConf web site can access the database when it's deployed to Azure. What if I want to access it myself and run queries though? It turns out that you can access the database both from SQL Server Management Studio, and from within the Azure Management Portal.
This was particularly important to me, as I wanted to secure the error log viewer page (created in challenge 2) so that only users in the Administrators role could view it. In order to make myself a member of that role, I needed to run a sql query against my Azure database. I'll show you how I did it using both the portal, and SQL Server Management Studio. You can find more details on how I secured the error logs with Elmah in this daily progress update.
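The query itself isn't shown above, so here's roughly what it looks like, assuming SimpleMembership's default table names (webpages_Roles, webpages_UsersInRoles, UserProfile); 'myusername' is a stand-in for my actual username:

```sql
-- Create the Administrators role if it doesn't already exist
INSERT INTO webpages_Roles (RoleName)
SELECT 'Administrators'
WHERE NOT EXISTS (SELECT 1 FROM webpages_Roles WHERE RoleName = 'Administrators');

-- Add my user (looked up by UserName) to the Administrators role
INSERT INTO webpages_UsersInRoles (UserId, RoleId)
SELECT u.UserId, r.RoleId
FROM UserProfile u
JOIN webpages_Roles r ON r.RoleName = 'Administrators'
WHERE u.UserName = 'myusername';
```

Either the portal's query window or SQL Server Management Studio will run this happily.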
The tools that come built-in to the management portal made this easy: the portal offered to automatically add a firewall exception for my IP address, which I accepted by confirming the dialog that popped up. I then logged in as follows:
I'm now an administrator, so if all goes to plan I should be able to view the error logs page remotely. Let's give it a try:
Two ways of achieving the same goal, once again made very easy by the dedicated folk who built SQL Azure - I raise my glass to you ladies and gentlemen!
Azure allows you to back up your database to your cloud storage account, so that you can then download a copy and restore it locally if needed (or do whatever else you might need to with it). This was particularly useful for me when debugging data-related issues. In order to take a backup of my database, I followed the tutorial in this article. In brief, the steps involved exporting the database as a BACPAC file to my blob storage account, then downloading it so I could restore it locally.
By the end of all this I was pretty familiar with SQL Azure and how it worked, and was confident that it would meet the needs of my application. I found it easy to setup and access my cloud databases, and was also able to take backups when I needed. Needless to say that knowing there are multiple redundant copies of my database in the cloud should one node fail also leaves me feeling pretty confident that my database is in good hands - Go SQL Azure! Now, what else did I learn during this challenge? Read on...
I decided early on that I'd like to move the conference data to SQL storage as soon as possible, and that's what I did. I used Entity Framework + code first migrations and found them very helpful to get up & running fast, and also to keep the database in-sync as I made updates to my model classes. I've covered the detailed steps I went through in this progress update, so please read this for the full details. A few highlights are as follows:
Database.SetInitializer(new System.Data.Entity.MigrateDatabaseToLatestVersion<YouConfDbContext, YouConf.Migrations.Configuration>());
As an example, here's the Presentation class after making the above modifications. Note that it's not very different to how it was at the end of challenge two; it just has the additional validation attributes and navigation properties.
If you recall from challenge 2, I'm using SignalR to keep users' live video feeds up to date. If I were to scale the site onto multiple servers, I'd need to make sure that SignalR can communicate with all the servers in order to broadcast messages to all users, regardless of which server they're connected to.
In order to transmit messages between server nodes in an Azure web farm, SignalR can use service bus topics. This requires you to set up a service bus namespace in the management portal, which I managed to do without too much fuss, as the configuration is fairly simple. Here's what I did:
Added the service bus namespace in the Azure Management Portal:
Added SignalR Service bus to my website project via NuGet in Visual Studio:
Copied the value of the service bus connection string from the management portal as below:
...and pasted it into my web.config file:
<add key="Microsoft.ServiceBus.ConnectionString"
value="Endpoint=sb://yourservicebusnamespace.servicebus.windows.net/;
SharedSecretIssuer=owner;SharedSecretValue=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />
IMPORTANT - don't forget this, especially if you're using more than one Azure environment (e.g. for testing vs production): update the application settings for your cloud website in the management portal, so that it uses the correct settings for that environment.
Using a separate development branch in Git was particularly applicable after challenge two, where I wanted to start development for challenge 3, but still wanted my source from challenge two to be available for the judges and anyone else to look at. I also didn't want to introduce breaking changes into the live site when I checked changes in. So, what did I do? I created and pushed a dev branch:
git branch dev
git checkout dev
git push -u origin dev
I also found out how to exclude NuGet packages from source control, which allowed me to drastically reduce the size of my public source control repository and make it easier for others to download. See this daily progress update for details.
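Concretely, the relevant .gitignore entries end up looking something like this (NuGet package restore pulls the packages folder back down at build time, and the App_Data exclusion keeps the localdb file out of Git):

```text
# Local database files
App_Data/

# NuGet packages - restored at build time
packages/
```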
At this stage I'd set up Git so that I have a Master and Dev branch, with Master being configured to auto-deploy to the live site. Initially I'd been merging my dev changes into Master and testing locally before pushing them to GitHub, but I wanted to be able to test my dev changes in the cloud before deploying them to production. The good news is that it really isn't that hard to set up a replica environment in Azure. You just have to make sure that you have the same services (e.g. database, storage, queues etc) set up as you do in your live environment. So, what did I do?
You've seen the detailed steps I went through to setup my production Azure environment in challenge two, and in earlier progress reports, so I won't repeat them in detail here. I will, however, summarize the steps I went through to create a replica test environment below.
All the source for my dev branch is available on GitHub, so feel free to view it there (but please don't use that branch when assessing challenge 3, as the Master branch is where the stable code for the live site lives).
In the end I decided to leave things as they are, since I already have a recognizable domain at youconf.azurewebsites.net. Another good thing is that Azure automatically provides an SSL certificate for *.azurewebsites.net, which gives us the security we require for logins etc.
In future I think I might move my source code into TFS and start using a web and worker role, but you'll have to wait till challenge four for that! I really hope that SSL for custom domains is available out-of-the-box soon for Azure websites too!
Regardless of whether I used a custom domain name and SSL certificate or not, I wanted to secure the authentication process for my site. So I added a url rewrite rule to my web.config (yes url rewrite 2.0 comes built-in to Azure!) to force all AccountController methods to be redirected to https as follows:
<rewrite>
<rules>
<rule name="HTTP to HTTPS redirect for account/admin pages" stopProcessing="false" enabled="true">
<match url="(.*)" />
<conditions>
<add input="{HTTPS}" pattern="^OFF$" />
<add input="{PATH_INFO}" pattern="^/account(.*)" />
<add input="{SERVER_PORT}" pattern="80" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>
</rules>
</rewrite>
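To make the rule's behaviour concrete, here's a rough Python model of the redirect logic (illustration only - the real work is done by IIS URL Rewrite, and the host name is hypothetical):

```python
def redirect_for(host, path, https_on):
    """Model of the rewrite rule above: redirect plain-http requests
    for /account pages to https, and leave everything else alone."""
    if not https_on and path.lower().startswith("/account"):
        return "https://" + host + path
    return None  # no redirect; the request passes through as-is

print(redirect_for("youconf.azurewebsites.net", "/account/login", False))
# -> https://youconf.azurewebsites.net/account/login
```

Requests that are already https, or that target other pages, fall through untouched.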
I also updated the authentication element to only issue cookies over SSL:
<authentication mode="Forms">
<forms loginUrl="~/Account/Login" timeout="2880" path="/" requireSSL="true" />
</authentication>
If you're serious about securing your .Net web apps, I would highly recommend this Pluralsight course, which covers many scenarios and how to protect against them.
An essential requirement when implementing membership is having some sort of password reset functionality available for users who forget their passwords. I managed to implement basic functionality for this on top of SimpleMembership. I won't go into all the details here, but if you're interested, you can find them all in this daily progress update.
One thing I would like to mention is that part of this involved setting up emails using Sendgrid. I ended up sending the password reset emails in-process, i.e. directly from my controller method, rather than using a separate windows service/console app/worker role. This is not ideal from a reliability or scalability point of view, due to the possibility of SMTP errors disrupting the user, so in future (challenge 4) I'll move this into a worker role and just use the controller to generate the email body and put it on the Azure Service Bus for sending. As I mentioned earlier, I'm likely to move my website into a web role in future too, so challenge four would be a good time to do all of this. If you'd like to see how I plan to do it, have a look at this amazing article. Seriously, if that article were part of this competition I think it would win!
As I mentioned in the introduction, after challenge 2 I wanted to add more tests, particularly for my controllers, to make the app more robust. Check out this daily progress update for how I went about implementing unit tests initially, and also this progress update for how I setup integration tests using SQL CE to create a dummy database for integration testing. I'm still not quite there yet, and have a lot more tests to write before this competition is over. Thankfully there are still two more challenges after this, so I'll continue my testing efforts, including adding some UI/smoke tests which I can run after each deployment to verify the test and production sites are working as expected.
As I mentioned earlier, I made plenty of other discoveries during this challenge, which I've documented in daily progress reports. A few notable mentions are:
As with challenge two, what started out as a fairly focused challenge (SQL Azure) ended up becoming quite a large exercise in how best to develop and maintain an Azure solution. As I've completed various tasks my understanding of both SQL Azure and the Azure environment as a whole has improved greatly. I've moved all my conference data into SQL, along with membership data, and now have a site that allows for secure user registration including emails. I've also found what I think is a good solution for setting up a test version of the site and using a separate development branch in source control to allow me to develop AND test without impacting the production site.
In addition I made a few discoveries about the limitations of Azure websites with regard to custom domain name mapping and SSL certificates. I'm fine with leaving the site on the .azurewebsites.net domain for now though. I'm still not 100% sure what I'll do about my dilemma regarding publicly available source code and the issues involved with deploying to Azure cloud services from GitHub; however, I suspect that in the next challenge I'll:
For future challenges, there are a number of additional features to focus on:
This challenge focused on using Virtual Machines (VMs) in Azure, which tied in nicely to one of the functions I'd been keen to build for my site for some time now - search. In the past I've used SQL Server full-text search, and also Lucene.Net, to provide search capability for my apps. However, for this challenge I wanted to try out Apache Solr, which is a well-known, high performance search engine built on top of Lucene. In the event that the YouConf site starts to gain popularity, I wanted to ensure that I had a robust search solution in place, and using an Azure VM running Apache Solr is a great way to help achieve that.
Having worked in Windows-only environments for a long time, I hadn't been able to use Solr before, as it requires running Apache on a Linux operating system. The good news is that Azure VMs allow you to run not only Windows systems, but also Linux-based ones such as Ubuntu, Debian, and others! This means that I could create an Ubuntu VM on Azure, install Apache/Solr on it, and then call into it from my app to add documents and perform searches. I'll describe how I went about doing this in the following section, and also how I was able to add documents (with the help of the SolrNet library) and perform searches directly from the client's browser (using the Ajax Solr library).
In addition to adding search functionality, I also created a separate worker role to handle sending emails, which I was doing in-process at the end of section three. Once this was in place, I updated the web project to use Azure service bus to communicate with the worker role. I also moved the functionality for adding/updating/removing documents in the Solr index into the worker role, so as to improve the performance and robustness of the YouConf web site.
Finally, if you recall from challenge three, I was planning on moving my app to TFS for source control. I decided against this, as it would have meant squeezing even more into the fairly short window for the challenge, and I didn't want to run the risk of not completing my article on-time due to source control issues.
As with previous sections, if you'd like more details on how I completed some of the tasks for this challenge, please have a look through the History section. It's a bit light for this part, as I spent most of my time on the article content, and less on the daily progress reports...
As per the Azure documentation, with Azure Virtual machines "you get choice of Windows Server and Linux operating systems in multiple configurations on top of the trustworthy Windows Azure foundation". This is great, as I needed to setup an Ubuntu VM running Apache Solr. Initially, I thought about creating one from scratch, however, after some searching I found that there were already a couple of pre-built solutions which help you get started with Apache Solr and Azure fast:
I chose the second option, as it looked very easy to set up, and was already available in the Azure VM Depot. I also thought that the first option looked like overkill for my needs, as it created three VMs - however, if the site absolutely takes off, then maybe I'll revisit that option in future. So, how did I set up my VM? I followed along with the BitNami tutorial, and have documented my steps below.
Firstly, I needed to create my own VM image based on the one in the VM depot. This would later be used as the basis for the actual virtual machine instance. To do this, I logged into the Azure Management portal and selected the Virtual Machines tab as below:
I selected the Images tab, then hit the Browse VM Depot button:
I selected the Apache Solr image from Bitnami:
... selected my YouConf storage account (that I'd set up in challenge two):
...and clicked the little tick button to confirm. I then had to wait a while as Azure copied the VM image (30 GB) from the VM image gallery to my storage account, as shown below:
Once it was complete, the screen refreshed and told me that I needed to register the image:
So I hit the Register button at the bottom of the screen, as follows:
Now that my image was registered, I needed to create a VM based on it.
I selected the New button in the bottom-left of the screen, and chose Virtual Machine > From Gallery as follows:
I named it youconf-solr, and made it extra small, so it would cost as little as possible to run. A good thing about Azure VMs is that you can always change their size later on, so if I find it's running slowly, I can scale up to a larger instance. I also selected a username, and chose the Provide Password option so I could login to the machine once it was created, as follows (Note: make sure you remember your username and password, as you'll need them to login later on):
Next I needed to configure DNS. At this stage I'm just using a stand-alone machine, which I gave the dns name youconfsearch, which seemed like a logical name for it.
When asked to configure availability, I didn't create an availability set, as I just want my machine to run on its own for now, and am not quite so concerned about high availability as I would be if I were running a large app for a client. Again, if the site were to suddenly become popular, I would likely revisit this and configure an availability set so the site is more fault-tolerant.
After confirming this, Azure began provisioning the VM, which took a few minutes to complete, as below:
Once provisioning was complete, my VM was up & running, with a single endpoint for SSH as shown below (after selecting the Endpoints tab):
To enable access to the VM from the web, I needed to add a public endpoint on port 80, which I did by clicking Add, and then completing as below:
Now my additional endpoint was displayed as below:
At this stage, I should be able to browse to my vm on the web, which I was able to do using the url provided on the Dashboard screen for my vm. And it worked!
I then selected the Access my application link, and was taken to the Solr management screen for my Solr instance:
As I've said before in this competition - isn't it nice when things just work! I had a fully functional Solr instance up & running on Azure, and it really wasn't that tricky to complete, thanks to the BitNami tutorial I referred to earlier, and the Azure Management Portal's ease-of-use. For more information on setting up Azure VMs, I'd recommend reading the official documentation.
Note: I used the Azure Management Portal GUI to perform the setup tasks above such as creating the VM image. You can also perform all of the same steps via the command-line if you prefer. If you'd like to find out more, I'd recommend reading this blog post by Scott Hanselman where he creates a VM server farm, all via the command line.
If you're interested in learning more about Apache Solr, I'd recommend reading about it on the Apache website. One can use the admin interface shown above to query documents and manage the Solr instance, which makes it really easy to use. Now I had my Solr instance running, what I needed to do next was add some documents to the index.
I wanted to be able to connect to the Solr VM from my Azure site, and add conference data, which could later be searched on. Rather than writing my own code to handle HttpRequests etc. to the Solr instance, I used the SolrNet library, which is a .Net wrapper that makes it easy to manage Solr documents using .Net code. I initially tried installing the NuGet package for it; however, I discovered that the NuGet package doesn't contain the latest version of SolrNet, which I needed in order to connect to my Solr instance (v 4.3.0). Thankfully, the source code for SolrNet is available on GitHub, and since I already had GitHub explorer installed on my machine, I was able to clone the repository and build the latest binaries myself.
If you're interested in using SolrNet, I'd recommend reading its documentation. I'll show you how I set it up below.
After adding references to the above binaries in my web project in Visual Studio, I first added a ConferenceDTO class as per the SolrNet documentation, which would represent the data that I'd be sending to the Solr index for later retrieval. The class is as follows:
public class ConferenceDto
{
[SolrUniqueKey("id")]
public int ID { get; set; }
[SolrField("hashtag")]
public string HashTag { get; set; }
[SolrField("title")]
public string Title { get; set; }
[SolrField("content")]
public string Content { get; set; }
[SolrField("cat")]
public ICollection<string> Speakers { get; set; }
}
The attributes on each property correspond to fields in the Solr index. Note that by default, the Solr index already contains all of the fields above, except for the hashtag field. I needed to add it to Solr manually, and will come back to that later to show you how I did it by modifying the Solr schema.xml file...
I wanted to add/update the conference data in the Solr index whenever a conference was changed in the YouConf site, so I updated my ConferenceController to do that. First, I updated the constructor:
ISolrOperations<ConferenceDto> Solr { get; set; }
public ConferenceController(IYouConfDbContext youConfDbContext, ISolrOperations<ConferenceDto> solr)
{
if (youConfDbContext == null)
{
throw new ArgumentNullException("youConfDbContext");
}
if (solr == null)
{
throw new ArgumentNullException("solr");
}
YouConfDbContext = youConfDbContext;
Solr = solr;
}
Next I added an AddConferenceToSolr method to add/update the conferences in Solr, which I called from my Create and Edit methods. The code is as follows:
private void AddConferenceToSolr(Conference conference)
{
// Map the conference to a Solr document and add (or update) it in the index
Solr.Add(new ConferenceDto()
{
ID = conference.Id,
HashTag = conference.HashTag,
Title = conference.Name,
Content = conference.Abstract + " " + conference.Description,
Speakers = conference.Speakers
.Select(x => x.Name)
.ToList()
});
Solr.Commit();
}
Finally, since I'm using Ninject for dependency injection, I added an entry to my Ninject bootstrapper file in /App_Start/NinjectWebCommon.cs to create an instance of the ISolrOperations interface, which would be passed into the controller:
kernel.Load(new SolrNetModule(""));
In the above I provide the url for my Solr VM which I configured earlier. Now, when conferences are added/updated, the changes will automatically propagate to the search index in Solr.
Note: As I mentioned in the introduction, by using the AddConferenceToSolr method in my controller, I made the UI less responsive, as this was making an external call to Solr in-process. Later on I'll show you how I moved this to a separate worker process to speed things up again.
At this stage, conference data was ready to be propagated to Solr. I also updated the Speaker and Presentation controllers to update Solr when they were changed. I still had work to do though before I could actually save some conferences, because if I tried to run the above code, Solr would complain that it didn't know about the hashtag field. Remember how I mentioned earlier that I needed to update the Solr schema.xml file to add the hashtag field? That's what I'll show you next.
Since I was running an Ubuntu VM, I couldn't remote desktop into it with a GUI like I would be able to with a Windows Server VM (actually it turns out it is possible, I just didn't get onto it early enough!). Instead, I had to use SSH from the command-line, which is worth reading up on if you haven't used it with Azure Linux VMs before.
In my case, I went to the management portal and obtained the SSH details for the VM, then used KiTTY (a variant of PuTTY) to log in to the VM using the username/password I'd created when I set up the VM, as shown below:
Now I was connected to the VM and could make changes via the console. To update the Solr Schema.xml file, I needed to find and open the file, then add an additional field for the hashtag. Once I'd navigated to the right directory, I used the command sudo nano schema.xml to open the file in the nano text editor, with administrative rights, as shown below:
I then added the hashtag field:
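The field definition itself was along these lines (a minimal sketch - the exact type name depends on the schema in use):

```xml
<field name="hashtag" type="string" indexed="true" stored="true"/>
```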
...then hit Ctrl+X to exit the file, saving when prompted. For the changes to take effect, I needed to restart Solr as follows:
I then exited KiTTY, opened up a web browser and went to my Solr instance, and confirmed that the hashtag field had been added successfully, as shown below:
Great - it's there! Now when I performed any CRUD operations on the YouConf website, the Solr index would be kept up-to-date automatically. There was one thing missing though - a search page! I'll show you how I added that now...
Adding a search page
I wanted a nice simple search page, which retrieved results directly from my Solr instance, rather than going via the YouConf website. This would mean better performance for the YouConf site, as it didn't have to relay query requests to the Solr VM. To get started, I added a new SearchController, which simply displayed a plain page with a search box on it as shown below:
Next, I wanted to add the functionality to perform the actual search. As it turns out, there's already a library to do just that - Ajax Solr. To use it, I downloaded the relevant javascript files and followed its Reuters tutorial. Using this, I was able to add basic search, paging, and hit highlighting. I won't go into all of the technical details here, as the tutorial does a better job of that, and I recommend reading it if you're looking to implement this functionality yourself. If you'd like to dig into the source code that drives the YouConf search page functionality, it's all available either via download at the top of this article, or on GitHub, with the search page code being a good starting point.
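Under the hood, Ajax Solr just issues HTTP GETs against Solr's /select handler. As a rough sketch of the kind of request involved (the host is hypothetical, and the field names are the ones from ConferenceDto above), a query with paging and hit highlighting can be built like so:

```python
from urllib.parse import urlencode

def build_solr_query(base_url, terms, page=0, rows=10):
    # Assemble a Solr /select URL with paging and hit highlighting.
    params = {
        "q": terms,                # the user's search terms
        "wt": "json",              # ask Solr for a JSON response
        "start": page * rows,      # paging offset into the result set
        "rows": rows,              # page size
        "hl": "true",              # enable hit highlighting
        "hl.fl": "title,content",  # fields to highlight matches in
    }
    return base_url + "/select?" + urlencode(params)

# Hypothetical Solr endpoint - substitute your own VM's address
print(build_solr_query("http://youconfsearch.example.net/solr", "azure", page=2))
```

Ajax Solr builds equivalent requests for you client-side; this just shows what travels over the wire.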
The end result was the search page with results displayed below, which I thought was quite nice.
Note: I really wanted to add autocomplete, and I found a number of tutorials on how to use NGram and EdgeNGram filters to accomplish this, however, after a few goes at it I still couldn't get it working, and I felt that my time would be better spent on the rest of this article. In future I'll look to incorporate this though for sure!
Feel free to try out the search page (hint: search for terms such as Azure, or other terms that appear in the conference descriptions). Just make sure to access it over plain http, as your browser will likely block the ajax search requests to the Solr VM if the page is served over https, due to them being non-https (mixed content). In the next section I'll discuss how I had planned to fix this, and how I didn't quite get there in the end...
At this point I thought about security, and wanted to secure my Solr instance so only authorized users could make edits/updates and access the admin UI, whilst still leaving the querying functionality available to be consumed by the client-side javascript on the YouConf search page. I also wanted to add SSL to it. I read both the Apache documentation and the BitNami documentation, tried adding and modifying .htaccess files, and also tweaked the Solr configuration, but still couldn't get it to work! I've shown a screenshot below of my updated Solr configuration file which I thought would work, but didn't:
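For reference, the general shape I was attempting looked something like the following. This is a sketch only - the paths and realm name are placeholders, I was never able to verify it end-to-end, and on Apache 2.2 you'd need Order/Allow/Satisfy directives instead of Require:

```apache
# Sketch only - untested. Require auth for everything under /solr...
<Location /solr>
    AuthType Basic
    AuthName "Solr admin"
    AuthUserFile "/opt/bitnami/apache2/conf/solr-users"
    Require valid-user
</Location>

# ...then re-open just the read-only select handler for the search page.
<LocationMatch "^/solr/[^/]+/select">
    Require all granted
</LocationMatch>
```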
I'm no Apache expert, and I'm guessing there's something simple that I was missing here. I'll leave it as an exercise for you the reader for now, but if anyone reading this knows what's wrong, please let me know by posting in the comments section. In future I'll continue to try and get this working, as it's an essential part of securing the app so malicious users can't get in and fiddle with my Solr index.
Another thing I'd like to do (once I've set up auth and SSL on the VM) is to take an image of it, so that in the event I want to create a new Solr VM, I can use my pre-configured image to get started quickly. To do so, I read over the steps in the relevant article. The process is fairly simple, so I'll leave it to you the reader to have a go at doing it once you've set up your VM.
Moving on - I now had a fully functional search implementation using Apache Solr, which was working like a charm. I could have left it like this, however as I mentioned earlier, all of the updating of the Solr index was being performed from within the web app in-process, and I wanted to move it into a background worker process to make it more robust. That's what I'll show you next.
Using background services to perform offline tasks can help improve both the performance and robustness of your application, and Azure makes it easy to add these background services using Worker Roles. I initially thought about adding another Windows VM to perform this functionality, however, worker roles are perfectly suited to this task, and I wouldn't feel comfortable trying to shoehorn this into a VM-based solution, as it wouldn't fit with my goal of giving proper guidance to readers of this article.
If I did use a VM, it would also mean I couldn't have automated deployments (if/when I move to TFS), and would make it harder to keep track of monitoring data etc., which comes out-of-the-box with worker roles. In saying that, if you're reading this and you require a solution where you need complete control over the operating system upon which your background service is running, or you need to move an existing background service to the cloud quickly without creating a worker role, then VMs might be the option for you. The beauty of the Azure platform is that it gives you the power to choose which option suits you best, so you can pick the solution that's best for your needs.
As I've mentioned earlier, there's a comprehensive rundown available on how worker roles can be used to perform background tasks, and I highly recommend reading it if you're not familiar with the concept of worker roles or why you would need them. The YouConf web site had two tasks that were candidates for offline processing: sending email notifications, and updating the Solr search index.
To add a worker role, I opened the YouConf solution in Visual Studio, and then hit Add New > Windows Azure Cloud Service as shown below:
I selected Worker Role with service bus, and named the project YouConfWorker:
With the worker role project in place, I went about moving the functionality for sending emails and updating the Solr index out of the web project, and into the worker role. Note: As mentioned earlier, I've written a separate article describing the best practices I followed when creating the worker role.
If you're interested, please check out my other CodeProject article.
As an example, I'll show you how I moved the Solr index updating functionality across into the worker project.
The first thing to do was to remove the code that updates the Solr Index from the web project, and replace it with code that simply puts a message on a service bus queue, with the details of the update to be made. To accomplish this, I added a new base controller class, with common functionality for sending queue messages, and updated the ConferenceController to inherit from it. The code for the BaseController class was as follows:
public class BaseController : Controller
{
const string QueueName = "ProcessingQueue";
protected void SendQueueMessage<T>(T message)
{
// Get the Service Bus connection string from configuration
// (setting name as per the default Service Bus worker role template)
var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var client = QueueClient.CreateFromConnectionString(connectionString, QueueName);
// Create message, passing a string message for the body
BrokeredMessage brokeredMessage = new BrokeredMessage(message);
brokeredMessage.Properties["messageType"] = message.GetType().AssemblyQualifiedName;
client.Send(brokeredMessage);
}
protected void UpdateConferenceInSolrIndex(int conferenceId, SolrIndexAction action)
{
var message = new UpdateSolrIndexMessage()
{
ConferenceId = conferenceId,
Action = action
};
SendQueueMessage(message);
}
}
Note the UpdateConferenceInSolrIndex method, which simply creates a new UpdateSolrIndexMessage which specifies which conference needs to be updated, and the action to be performed (either Update or Delete). The SendQueueMessage<T> method is responsible for creating an actual BrokeredMessage and putting it on the queue, and specifying the type of the message to help with retrieval in the worker role.
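The dispatch idea - tagging each message with its type on the way in, then routing it to a matching handler on the way out - is language-agnostic. Here's a minimal sketch in JavaScript, purely for illustration (the real worker does this in C#, using the messageType property shown above):

```javascript
// Minimal sketch of type-based message dispatch: handlers are registered
// by message type name, and each incoming message is routed accordingly.
function createDispatcher() {
  var handlers = {};
  return {
    register: function (messageType, handler) {
      handlers[messageType] = handler;
    },
    dispatch: function (message) {
      var handler = handlers[message.properties.messageType];
      if (!handler) {
        throw new Error("No handler registered for " + message.properties.messageType);
      }
      return handler(message.body);
    }
  };
}
```

The payoff of this pattern is that the receive loop stays generic: adding a new background task means registering one more handler, not touching the queue-polling code.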
The relevant code in the Create method of the ConferenceController then became:
....
//Save conference to db etc...
....
UpdateConferenceInSolrIndex(conference.Id, Common.Messaging.SolrIndexAction.Update);
...
Now that I had this in place, I needed to add the actual Solr Index update functionality into the worker role.
You've seen the pattern I used for accessing the Azure Queue and accessing strongly-typed messages in the article I referred to earlier, so I won't repeat it here. What I will do, however, is show you the code from the message handler responsible for pushing updates to Solr, stored in the YouConfWorker/MessageHandlers/UpdateSolrIndexMessageHandler.cs class:
namespace YouConfWorker.MessageHandlers
{
public class UpdateSolrIndexMessageHandler : IMessageHandler<UpdateSolrIndexMessage>
{
public IYouConfDbContext Db { get; set; }
public ISolrOperations<ConferenceDto> Solr { get; set; }
public UpdateSolrIndexMessageHandler(IYouConfDbContext db, ISolrOperations<ConferenceDto> solr)
{
Db = db;
Solr = solr;
}
public void Handle(UpdateSolrIndexMessage message)
{
if (message.Action == SolrIndexAction.Delete)
{
Solr.Delete(message.ConferenceId.ToString());
}
else
{
var conference = Db.Conferences.First(x => x.Id == message.ConferenceId);
Solr.Add(new ConferenceDto()
{
ID = conference.Id,
HashTag = conference.HashTag,
Title = conference.Name,
Content = conference.Abstract + " " + conference.Description,
Speakers = conference.Speakers
.Select(x => x.Name)
.ToList()
});
}
Solr.Commit();
}
}
}
This code is very similar to the original code in the ConferenceController, and simply calls the Solr VM using SolrNet. It also retrieves the conference from the database if needed when doing updates.
In addition to the changes above, I moved some of the common messaging and database-related functionality into a new project called YouConf.Common. I'd recommend checking out the source code if you'd like to find out more. After debugging locally, I just needed to deploy the role to Azure...
As I mentioned in challenge three, I initially thought of moving the whole solution to TFS, as this allows integrated continuous deployments to cloud services from TFS. When using GitHub, however, this is only available for Azure web sites. After some thinking I decided to leave the code in GitHub for now, as it means that it will remain available for everyone to access. That meant that in order to deploy the worker role to Azure, I had to publish directly from my local machine.
To do this, I first created a local deployment package, by right-clicking on my YouConfCloudService project in Visual Studio, and hit Package (selecting Cloud and Release in the service and build configuration boxes), which created an initial deployment package on my machine. I then went to the Azure management portal and created a new cloud service as shown below:
Since I'd selected the Deploy a cloud service package box, I was able to select my local package that I'd created for deployment to the cloud.
This then created the cloud service in Azure and deployed my package to it, as shown below:
I was also able to publish directly from Visual Studio after downloading the publish profile (which I originally did in challenge two), by selecting the cloud service project in Visual Studio, and selecting Publish, as shown below:
This then published the cloud service to Azure, with the result shown in the output window:
So there you have it - offline processing and background tasks being performed in an Azure worker role - a win for performance, robustness, and appropriate use of the technologies available! Note that I selected Extra Small as the virtual machine size, as I didn't want to go over the usage limit for my Azure subscription (bear in mind that I'm also running an extra-small VM, which means I'm just within the monthly limit for the free Azure subscription). I can always scale this up if needed though.
What a challenge this was! By the end of it, the application was finally at a state where all the pieces were together, with the right tools being used for the right tasks. My Apache Solr VM was running well in an Extra small instance, and easily handled all the search requests I threw at it. If needed, I could easily have scaled it up to a larger instance depending on the load. The functionality for sending emails and updating the Solr index was sitting within a worker role, where it should be, and the web app was communicating with the worker role using Azure service bus, making it robust and reliable.
I didn't manage to solve the authentication/authorization issues with Apache, but this will be an ongoing task, which hopefully feedback from others will help solve. I also had to deploy the worker role directly from Visual Studio, rather than having it auto-deploy from GitHub, however this was fairly easy to manage. I just had to make sure that I didn't accidentally check-in my production connection strings.
You might also recall from challenge three that I was keen on adding SSL and a custom domain name to my Azure web site. The good news is that SSL has recently become available for Azure websites, however, it requires running your site in reserved mode, and is fairly costly to use (at least I thought so). Thus, my view at the end of challenge three - that it would be better to move the web site into a dedicated web role - remains the same. I may decide to implement this in the next challenge, however, at this stage I'm happy with the site remaining on the .azurewebsites.net domain till it starts getting more traffic.
One last item worth mentioning - in challenge three I discussed my method for keeping sensitive configuration settings out of GitHub with Azure Web Sites. I felt it was worth an article in its own right, so if you're interested, please check it out.
Only one to go now! The focus for challenge four is on responsive design, so that's what I'll target next. I'll also aim to complete some of the outstanding tasks from earlier challenges, namely:
It's the final stage of the contest, and what better to work on than one of the major challenges facing all web developers building modern websites - mobile access. My main goal for this challenge was to make the YouConf website usable across a range of devices, including smartphones, tablets, desktops, and all things in-between. With this in mind, I chose to use responsive design to optimize the site to give the best user experience regardless of the device on which the user was browsing. In the following sections I'll explain why I chose this approach, and also look at the pros/cons of other approaches available. I'll then look at responsive design in detail, and go through the steps I took to make the YouConf site responsive - including how to test it, and the issues I had to overcome for specific pages. Finally, I'll wrap up with a few highlights from my time in this competition, and a list of future steps that I plan to take for the YouConf web application.
One point worth mentioning is that whilst I didn't end up needing to make use of Azure Mobile Services for this challenge, I still did some research into their usage, and can see how useful they would be if you were building a mobile app and required features such as push notifications, backend services, and scheduling. Maybe I'll have the chance to build a mobile app and try them out in a future competition....
As with previous sections, if you'd like more details on my day to day progress, please have a look through the History section. I admit it is very light for this part, as I spent nearly all my time on the article content due to time constraints.
For those who can't wait, here's a sneak peek at the finished product:
Before I go into the details of how I went about completing the tasks and building a responsive site, let's first look at the options available when designing for mobile. I did some research on building mobile-capable websites, and found that there are three main options available when you want to start developing for mobile:
Pros: These are similar to the advantages of building a separate mobile site, with the additional benefit of allowing users to take advantage of specific features of the particular device that the mobile app is targeting. For example, taking a picture and uploading it directly, accessing local file storage, performing actions offline etc.
Cons: Again, these are similar to those for a separate mobile site, with the additional cons of:
As I said earlier, I wanted the YouConf site to be usable on a range of devices, not just those with a specific brand or screen size. I also wanted the same features of the desktop website to be available on mobile devices. Finally, since there's only one of me, and my resources (both dev and testing) are severely limited, I wanted to choose the option that gave the greatest return on investment. Hence my decision to use responsive web design! Note: if you're building a web application and considering what to do to make your site available on mobile, it's worth reviewing the articles I mentioned earlier, as your goals may be different to mine, and hence you may need to evaluate the other two approaches in more detail. In my case, however, responsive design seemed like the obvious choice.
Now that I'd decided on my approach, let's take a look at responsive design in detail.
Responsive design involves making your site flexible so that it presents the best user experience possible in any context, be it desktop or mobile. If you're new to the whole concept, I'd recommend reading Ethan Marcotte's 2010 article which gives a great explanation of what responsive design is, along with a quick demo of how to make an example site responsive. Responsive design is a huge topic, and though I'll only cover a small portion of it in this article, I hope to give you an indication of some of the steps involved in making a site responsive, and also show the great results you can achieve without having to make too many changes.
Sometimes when building a responsive site for public usage, you might be given a specific device to target, such as "we want it to work on mobile, so we'll just test it on the iPad". Whilst this allows you to focus your efforts on a specific device, it doesn't necessarily mean your site is the best it could be, as there are so many devices out there that it's impossible to code for every specific device, not to mention every orientation on every device! A better approach, as outlined in this article, is to take a device-agnostic approach and focus on how your key content displays at a range of screen sizes. You can then see where content breaks, and adjust your design to make the site usable across a whole range of screen sizes, rather than just on a single device.
This is the approach we'll take with the YouConf site, and we'll start by testing across multiple resolutions/devices to see what happens to our content as the browser is resized. From there we'll identify some possible breakpoints (based on our key content/navigation) where we need to adjust our layout to optimize the site for the given resolution. Note: We'll still need to test the site in specific devices to make sure it displays properly, and also to get an idea of the common resolutions that we'll need to pay attention to if we want to make the site usable for the widest possible audience. However, by making the site flexible based on target screen sizes (as opposed to target devices) we should end up with a result that naturally works well across a range of devices. Note also that I'm not an expert in this field, and I'd recommend doing plenty of your own research and reading the articles that I've mentioned thus far.
The first thing we'll do is look at the site as it appears today at a number of screen sizes on a number of devices. Often it's easy to test different browser sizes by simply resizing our browser window, as this gives a quick guide as to whether the site is performing as we expect it to in our chosen browser at a given resolution. However, if we want to test specific screen sizes/devices, we need to use something that resizes to those specific screen sizes and is as close to the native device as possible (of course, we could go and purchase a heap of mobile devices, but that could get pretty costly!). It would also be useful if we could test using the same OS as users will have on their devices. Thankfully, there's already a solution for this - BrowserStack. Scott Hanselman has a great blog post on BrowserStack integration into Visual Studio, which I thoroughly recommend reading to familiarize yourself with it. In short, BrowserStack provides a virtual environment that allows you to test your site on various devices and operating systems, meaning you can test more reliably and efficiently.
To get started with BrowserStack, I first visited the Modern.IE site and clicked on the link to 'Try BrowserStack'. (As an aside, the Modern.IE site has some useful links for testing your site, and is well worth a read. Chris Maunder has a good write up on it as well).
I signed up for the free trial, which provides 3 months of free testing in Windows-based environments, and 30 minutes of non-windows testing time. I've shown screenshots of the signup process below:
As per Scott's post, I then installed the Visual Studio extension which allows me to debug using BrowserStack right from Visual Studio 2012, as shown below:
Once I started debugging, it allowed me to choose which OS/browser I'd like to test in:
At the next step I chose to debug an internal url using a tunnel, and received a warning saying I had to install Java in order for it to run, as shown below:
I then clicked on the Download the latest java version link and installed java:
After reloading the page, I was able to specify the local url and port to test (which is what I've been using already when debugging locally) and then let BrowserStack do its magic:
The end result is shown below - I'm remotely debugging code that's running on my local machine, using a cloud emulator running Windows 8/IE10 - which is pretty amazing IMHO!
Now that we've got that working, what's next? Well, let's go back and test on some devices that have screen sizes in the range that we want to support.
Before we look at mobile and tablets, let's look at the desktop site and the browsers we want to support. Ideally, I'd like the site to be fully functional in all of the latest Chrome/Safari/Firefox variants, plus IE8/9/10. However, given the competition time constraints, there's no way I could test the site thoroughly in all of them. If I had more time I'd have given the site a run-through on all of them to ensure it was fully functional! IE9 and IE10 support CSS3 media queries (as do the latest versions of Chrome/Firefox/Safari), which allow us to use specific CSS styles depending on the browser viewport width. IE8, however, does not. To fix that, I included respond.js, which is a script that helps make media queries work in IE6/7/8. Note that I'm not supporting IE6/7, as past experience tells me that coding for these two rogues is more pain than it's worth. Plus by coding for them, I'm not keeping in line with the guidance from Microsoft which encourages users to upgrade their browser to IE8 if they're on XP, or IE10 if they're on Windows 7.
It turns out that respond.js was already included in the default Visual Studio MVC 4 internet application template that I'd started with back in challenge two, in the /scripts/modernizr-2.6.2-respond-1.1.0.min.js file. So all I needed to do was update my /Views/Shared/_Layout.cshtml file to reference this script file instead of the default modernizer.js file, as follows:
<head>
.....
<script src="/scripts/modernizr-2.6.2-respond-1.1.0.min.js"></script>
</head>
Now IE8 will support media queries, which is essential for IE8 users with desktop resolutions below 1024 * 768px.
I'd like the site to work on tablets (big and small), smaller desktops, and mobile devices. When viewing in an iPad or a Microsoft Surface in portrait mode, the viewport width is usually 768px, so I'll test that. There are also a range of other tablets out there, not to mention mobile phones in landscape mode, which we want to accommodate. Finally, many mobile devices have screen widths of 320px, so I'll test at that width too. Once we get below 320px things get pretty cramped, and given most of the newer mobile devices out there are 320px and above, I won't test at screen sizes below this.
Remember what I said earlier about the content being the most important part to focus on when using responsive design? Well, on our site we have a number of key content pages we'll focus on:
For each of the above pages, I'll test it at various screen sizes, and then apply specific responsive design techniques in order to make it work properly in tablets/mobiles. Note that whilst I'll focus on the above pages, I'll also look at the site as a whole, and make modifications to other pages if required.
With that in mind, let's have a look at the live conference video page in an iPad 2 (768px wide):
As you can see, when looking on a large tablet such as the iPad 2, things aren't too bad. There are a number of issues though:
Once we get down to the small tablets, however, it's another story. For quick testing on my local machine, I've downloaded a browser extension for Chrome called Viewport, which allows me to automatically resize the browser window to common device widths. At 480px wide (the same as an iPhone in landscape mode) the result is as below:
Now things are starting to look a bit off. We have the following issues:
Finally, let's have a look at what the page above looks like on an iPhone 4 (320px):
Oh dear, it looks like we have a few issues to fix, which are much the same as those we found at 480px. Note that the phone has adjusted the scale of the page to compensate for the large video, and the result isn't very nice! I know that, due to bandwidth limitations (at least in NZ), users are less likely to be viewing a video on their mobile. However, as mobiles become more powerful and networks improve, this will become more popular over time. So I'd like to get the video working at 320px as well if possible. So, what can we do? Let's look at each specific page I mentioned earlier and fix the issues.
If I were using a responsive CSS framework such as Twitter Bootstrap, Skeleton, or Foundation, some of the boilerplate CSS required to make the site scale properly on different devices would already be included. However, since I started with the vanilla MVC 4 internet application template, there is only a small amount of code included for responsive design by default. Whilst this could be seen as a bad thing, I see it as a positive, as it means that I will have to learn more about using CSS media queries properly to achieve a website that will be truly responsive. It also means I'll be in control and have a better understanding if things don't work quite as intended.
Right, enough talk, let's get down to the code and start addressing the issues mentioned earlier!
As mentioned in this incredibly helpful article, the first step is to add the viewport meta tag to the page head, so that mobile browsers render the page at the device's actual width instead of zooming out to a desktop-sized viewport:
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
The code for embedding the YouTube video was initially as follows:
<div id="video">
<iframe width="630" height="473" src="//youtube.com/embed/@Model.HangoutId" frameborder="0" allowfullscreen></iframe>
</div>
Note the fixed width of the iframe, which is what causes the video to break the layout once the screen width drops below 630px. To fix this, we'll update the code so that the video automatically scales to fill all of the available screen width. That will not only make it work properly on mobile, but also give desktop and tablet users a larger video area to watch. One of THE most helpful resources I found for getting up-to-speed quickly on responsive design included a tutorial that proved particularly useful when trying to implement proper scaling for video and images.
First, we add the following CSS classes to our stylesheet:
#video {
position: relative;
padding-bottom: 56.25%; /* 16:9 aspect ratio - height is 9/16 of the width */
padding-top: 30px; /* extra room for the player chrome */
height: 0;
overflow: hidden;
}
#video iframe,
#video object,
#video embed {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
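Where does the 56.25% come from? Percentage padding in CSS is always calculated against the element's width, so for a 16:9 video the bottom padding needed to reserve the right height is 9/16 = 56.25%. A quick sketch of the arithmetic:

```javascript
// Percentage padding-bottom needed to preserve a given aspect ratio,
// since CSS percentage padding resolves against the element's width.
function aspectRatioPadding(width, height) {
  return (height / width * 100) + "%";
}
```

aspectRatioPadding(16, 9) gives "56.25%", and a 4:3 video would need "75%" instead.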
We also remove the width and height declarations on the iframe, so it becomes:
<div id="video">
<iframe src="//youtube.com/embed/@Model.HangoutId" frameborder="0" allowfullscreen></iframe>
</div>
Now the video will scale to fit the screen automatically. Onto our next issue:
The width of our main content area when viewing the page on a desktop is 960px. As long as the user's browser is wider than this, the page will have a margin on the left and right-hand sides to stop the content pushing up against the edges of the browser. However, once the browser window is smaller than 960px wide, the content hits the edges of the screen, as shown in the mobile screenshots above. To fix this, we'll add a media query to our stylesheet, so that if the browser width is less than 960px, a small margin will be added to the outside of the main content blocks, as follows:
@media only screen and (max-width: 960px) {
.content-wrapper {
margin: 0 1%;
}
}
Note that rather than setting fixed width margin using pixels, we're following the guidance from this article, and making our margin fluid so it will scale with the browser window. Onto our next issue:
As the browser gets smaller, the h1 element containing the conference title looks proportionately larger. To fix this, we'll add some more entries after our .content-wrapper class mentioned above, to reduce the size of the headings on screens less than 960px wide, as follows:
h1 {
font-size: 1.6em;
}
h2 {
font-size: 1.3em;
}
h3 {
font-size: 1.1em;
}
With that in place, the headings should still stand out from the body text, but not take up too much space on smaller devices. I also added an additional media query to reduce the font size further for mobile devices with a width of less than 480px, as follows:
@media only screen and (max-width: 479px) {
h1 {
font-size: 1.3em;
}
h2 {
font-size: 1.2em;
}
}
Whilst the header navigation links don't look too bad on the iPad 2, on smaller devices they float up against the right-hand side of the screen and drop below the banner logo. To fix this, I initially looked at creating a dropdown or expandable menu for mobile devices, such as the one discussed in this article, or the one that comes out of the box with Twitter bootstrap. However, at this stage there are only a few items in the navigation, and they all fit on one line even on mobile devices at 320px width. Hence, all I did was make them center-aligned, and remove the floats, along with the login links and logo. Note: If I need to add any more items to the navigation in future, then I'll revisit this and go with one of the pulldown menu options mentioned above.
For clarity, the html for the header and nav is included below:
<header class="site-header">
<div class="content-wrapper clear-fix" style="position: relative;">
<section id="login">
@Html.Partial("_LoginPartial")
</section>
<div class="float-left">
<a href="/" title="Home">
<img src="/images/logo-full.png" alt="YouConf logo" /></a>
</div>
<div class="float-right">
<nav>
<ul id="menu">
<li>@Html.ActionLink("Home", "Index", "Home")</li>
<li>@Html.ActionLink("Conferences", "All", "Conference")</li>
<li>@Html.ActionLink("Help", "Index", "Help")</li>
<li><a href="/search" title="Search"><img src="~/images/search-nav.png" class="search-button" alt="Search" /></a></li>
</ul>
</nav>
</div>
</div>
</header>
And the code to achieve the center-aligned menu, login links, and logo was as follows:
@media only screen and (max-width: 767px) {
/* header
----------------------------------------------------------*/
header .float-left,
header .float-right {
float: none;
text-align: center;
}
/* logo */
header .site-title {
margin: 10px;
text-align: center;
}
/* login links */
#login {
font-size: .85em;
margin: 0 0 12px;
text-align: center;
}
ul#menu {
margin: 0;
padding: 0;
text-align: center;
}
ul#menu li {
margin: 0;
padding: 0;
}
}
Note the additional media query, which will only apply these styles if the browser width is less than 768px.
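With three max-width breakpoints now in the stylesheet (960px, 767px, and 479px), it's worth remembering that max-width queries overlap: a 320px phone matches all three, so their rules cascade together. A small sketch of which queries fire at a given viewport width:

```javascript
// The max-width breakpoints used in the stylesheet so far.
var breakpoints = [960, 767, 479];

// Returns the breakpoints whose media query would match the given width.
// max-width is inclusive, so narrower viewports match more of them.
function matchingBreakpoints(viewportWidth) {
  return breakpoints.filter(function (bp) {
    return viewportWidth <= bp;
  });
}
```

This is why rules in the narrower queries only need to express the differences from the wider ones, rather than restating everything.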
While I was doing that, I also added additional code so that any images would scale to fit the browser window, as discussed in this article. I didn't need to add a media query for this, and simply added it to the main section of my stylesheet as follows:
img {
max-width: 100%;
height: auto;
}
@media \0screen {
img {
width: auto; /* for ie 8 */
}
}
Finally, as the logo was taking proportionately more vertical space once the screen width dropped below about 480px, I added an additional rule so that once the screen width was less than 480px, it would take up a maximum of 80% of the screen width (with height scaling automatically) as follows:
@media only screen and (max-width: 479px) {
.logo {
max-width: 80% !important;
}
}
As I made each change, I resized my desktop browser for quick feedback to see if it had worked or not. Once I had all of the changes in place, I went back to BrowserStack and checked again. As you can see, things were looking a lot better.
On the iPad 2:
... and the iPhone 4:
Not too bad eh? The h1 text was still a bit too large on the iPhone 4, however the video itself was still visible above the fold, which I felt was satisfactory. Now that I had that page working, let's look at our other two key pages.
Believe it or not, after making those adjustments to the header, nav, and headings for the live video page, the conference detail page actually looked pretty good on the iPad, iPhone, and desktop alike, so I didn't have to do anything to it! Fingers crossed I wasn't just dreaming... One more page to go...
From the outset I thought this would be the page that required the most work, as it has a number of large block elements, including the hero banner with two links on the right-hand side, and also the three info tiles in the center. With the header and nav already taken care of, I just had to take care of the remaining unique elements on this page. First, let's see what it looked like before I started on it.
This time on an iPad 3 at 768px wide:
Next - on a Samsung Galaxy Note 2 at 480px wide (note I scrolled down to show the hero banner and info tiles):
and finally, on an iPhone 4S at 320 * 480:
This time it was mainly the iPhone (320px wide) device that had issues, namely:
Once again, let's go about fixing the issues one by one:
The html for the hero banner is as follows, with two main columns:
<section class="content-wrapper hero-wrapper clear-fix">
<div id="hero" class="hero box">
<div class="grid">
<div class="col-7-10">
<div class="teaser">Prepare. Present. Engage.</div>
<h1>YouConf - your conference online</h1>
</div>
<div class="col-3-10">
<ul>
<li><a href="@Url.Action("Index", "Help")" class = "button"><span class="arrow">Get started</span></a></li>
<li><a href="@Url.Action("All", "Conference")" class = "button"><span class="info">More info</span></a></li>
</ul>
</div>
</div>
</div>
</section>
To improve the look of this on smaller devices, we'll first reduce the font size of the text if the browser width is less than 960px, as follows:
.hero {
font-size: 0.8em;
padding: 3%;
}
Note that I added the above CSS rule to the existing section of my stylesheet targeting browsers with a max-width of less than 960px.
Before I took the above screenshots, I'd also already added some code to remove the float so the buttons move below the heading text once the browser width falls below 768px, and adjusted the padding to scale with the browser, by adding additional rules to the section of the stylesheet targeting browsers with a max-width of 767px:
.hero .grid div {
width: auto;
float: none;
}
.hero .button {
padding: 3%;
}
Now the hero banner should adjust to fit the window nicely on mobile devices.
The tiles are arranged so each one takes up one-third of the available space and sits alongside the others. The html for the tiles is as follows:
<section class="content-wrapper clear-fix">
<div class="grid landing-panels">
<div class="col-1-3">
<div class="box clearfix">
<img src="/images/conferencescreenshot.png" alt="Setup and manage your conference screen" />
<div>
<h2>Prepare</h2>
<p>Setup your conference, and invite people to visit your conference page with a recognizable url. We make it easy to manage presentations, speaker, and conference details.</p>
</div>
<div style="clear: both;"></div>
</div>
</div>
<div class="col-1-3">
<div class="box clearfix">
<img src="/images/vsscreenshot.png" alt="Live embedded video feeds with Google Hangouts" />
<div>
<h2>Present</h2>
<p>Broadcast live using Google Hangouts. With YouConf you can embed your live video feeds for both conferences and individual presentations, providing an integrated viewing experience.</p>
</div>
<div style="clear: both;"></div>
</div>
</div>
<div class="col-1-3">
<div class="box">
<img src="/images/twitterscreenshot.png" alt="Integrated chat with Twitter alongside your video" />
<div>
<h2>Engage</h2>
<p>Engage and interact with your audience using Twitter live chat feeds alongside your video. Viewers can comment on your presentation in realtime!</p>
</div>
<div style="clear: both;"></div>
</div>
</div>
</div>
</section>
What I thought would work best was the following:
.landing-panels [class*='col-']{
width: auto;
float: none;
padding: 0;
}
.landing-panels > div{
padding: 0 !important;
}
.landing-panels .box {
padding: 1em;
height: auto;
}
.landing-panels [class*='col-'] img {
float: left;
width: 40%;
margin-right: 1em;
}
.landing-panels .box div {
overflow: auto;
padding: 0;
}
I then added selectors to the CSS section targeting browsers narrower than 480px, as follows:
@media only screen and (max-width: 479px) {
.landing-panels [class*='col-'] img {
display: none;
}
.landing-panels .box div {
margin-left: 0;
}
}
With these rules in place, let's see the results!
On the iPad 3:
On the Galaxy Note 2 (note the tiles with images alongside the text):
And finally, the iPhone 4S (note the tile images are now hidden):
Fantastic! I have to say I was pretty chuffed with the end result for the three key pages, particularly given that prior to this competition I had no idea of the possibilities for multi-device support that responsive web design offers. Now that you've seen how I went about testing and adjusting the key pages, I hope you have an appreciation for some of the items that I had to deal with when trying to design for desktop, tablet, and mobile. Note that I could still continue to refine the site at various resolutions to make it look even better, but thought it would be better to focus on getting this article completed!
I had to make a few other adjustments to the site elsewhere, but I won't go into detail on those, as the steps I took were much the same as the ones above. Some notable mentions were:
Finally - As I mentioned in the introduction, I didn't need to create a specific Azure mobile service for this challenge, however, I am looking at possibly using one in future to handle executing scheduled tasks such as sending conference reminders (once I add that feature).
Most of these are ongoing tasks, and rest assured, this isn't the end for YouConf! I still have a few things to work on...:
What a competition this has been! I initially set out to try and replicate the dotNetConf website, with a few additional features to make it available to the public, and didn't really think too far beyond that (as evidenced by my poor showing in challenge one). However, I ended up becoming well and truly absorbed in Windows Azure and the competition itself. I learned a tonne about the many aspects of Azure, and with each challenge I found myself appreciating the Azure platform more and more, due to its ability to support every development scenario that I could throw at it. I can't recommend it highly enough!
I hope that this article will live on beyond this competition and provide guidance to anyone who is trying out Azure - both those that are trying it out for the first time, and those who are looking for additional tips on how to perform certain tasks. It's taken enough blood sweat & tears on my part that it would be a shame if it didn't help a few folk out there. If you're reading this now and you've learned a thing or two that's helpful, please let me know as it will put a smile on my face and inspire me to continue on with similar conquests like this :)
Finally - A slightly scary thought is that in spite of spending the last two months learning everything I possibly can about it, I've still only just scratched the surface in terms of what Azure is capable of. There's so much more to learn, and I encourage you to go and learn for yourself. If you're a developer, be it .Net, PHP, RoR, C++, or anything else, and you haven't tried out Azure, get on there and give it a try! You won't regret it.
Part one: I've registered for the free Azure offering of up to 10 websites and just realised how generous the offer really is. Up to 10 websites!!! Hopefully we won't need all of those, but you never know....
*I'll try and post daily project updates, but if there are no entries for a given day, I either didn't find time to work on the project, or was so caught up in working on the project that I forgot to post an update.
Challenge 2
Was a bit worried about what I'd gotten myself into, thinking things like - "You mean you're trying to improve on something that Scott Hanselman built? Are you crazy?!" I then thought about what a good opportunity to learn this competition is, and calmed down a little... Spent the rest of the day reading up on Google Hangouts and how they work, SignalR, and TFS and Git integration into Azure using VS2012.
Time to build a website! I'm following the tutorial on how to build an MVC4 website, but since I'm not going to be using SQL for this part of the competition I'll leave the membership stuff out for now (by commenting out the entire AccountController class so it doesn't try to initialize the membership database). Managed to deploy the sample MVC4 website to Azure using the built-in Visual Studio publishing mechanism, after I'd downloaded my publish profile from Azure.
Note: I had a bit of an issue at one stage with Azure as per the screenshot - I couldn't seem to access my website, even though I could see the site live at ...... After about half an hour this seemed to go away, so I'm not quite sure what was happening....
Now let's setup Git so I can publish directly to Azure when I checkin, using the steps in this article.
I've downloaded the Git explorer, set up a local youconf repository, and published my local changes to Git. Rather than pushing local changes directly to Azure, I'd rather they were first pushed to my GitHub repository so they're visible to anyone else who might want to have a poke around. To accomplish this I'm following the steps in the article under the heading "Deploy files from a repository web site like BitBucket, CodePlex, Dropbox, GitHub, or Mercurial". Now we're all set up to write some code...
Next up: What will my site look like?
I want a site that looks good, so will do a bit of searching and see if I can find something that's nice, and free (Creative Commons licence or similar).
I've decided to build the next Facebook! Just kidding, but dreams are free right? Happy May day!
After looking at various free CSS templates yesterday I got a bit stuck, as I wasn't sure whether I should go for the one that looked really good, but which had some very complicated-looking CSS that I couldn't get my head around in a short time, OR whether I should start with something simple like a grid layout and build up from there. My dilemma is that I'm not very good at producing graphics, and they take me a long time to make, so by getting a pre-built template I can avoid all the hassle. In saying that, I like to know what's going on in the CSS in case I need to modify it. Maybe I'll have to use a bit of both? Moving on....
Membership (Not for now)
I'd like users to register in order to create conferences, however, since we're not going to be building any sort of membership mechanism for this part, I'm going to allow anyone to create conferences without registration. I did find a membership provider for table storage in the Azure code samples, however, it didn't include the Facebook/Twitter authentication which I know comes bundled with SimpleMembership. I think I'll wait till I get SQL before I go further with membership.
Conferences
I need to be able to record conference details (sessions, speakers etc) and will try and get something working for that today. In Scott's article he mentioned using an xml file stored in DropBox for the data, which seemed like a pretty good idea for a single conference. However, given that we're building a site to (hopefully) host lots of online conferences, and because this is a good chance for me to learn about Azure, I'm going to look into using one of the Azure Storage options (Queue, Table, SQL). Since I'm trying to avoid using SQL till the third part of this challenge, I think I'll go with table storage as it's fast, easy to setup, and gives me a chance to learn a new tool.
I started reading up on Partition/Row keys, and found one article in particular very helpful.
Azure Table Storage, so many options....
I got set up and created a table, however, I soon realized that I didn't understand table storage as well as I'd thought! Basically I planned to store each conference, including speakers and presentations, as a single table entity, so that I could store/retrieve each conference in one go.
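The single-entity idea boils down to serializing the whole conference graph into one column of one table row. Here's a rough in-memory sketch of that approach in plain JavaScript - the store, function names, and object shapes are all hypothetical stand-ins, not the real Azure table storage API:

```javascript
// Sketch: store a whole conference (speakers, presentations and all)
// as a single table entity, keyed by partition + hashtag.
const store = new Map(); // "PartitionKey|RowKey" -> entity

function upsertConference(conference) {
  const entity = {
    PartitionKey: "Conferences",
    RowKey: conference.hashTag,
    // The entire object graph is serialized into one string column
    Entity: JSON.stringify(conference),
  };
  store.set(`${entity.PartitionKey}|${entity.RowKey}`, entity);
}

function getConference(hashTag) {
  const entity = store.get(`Conferences|${hashTag}`);
  return entity ? JSON.parse(entity.Entity) : null;
}

upsertConference({
  hashTag: "youconf",
  name: "YouConf 2013",
  speakers: [{ id: 1, name: "Jane" }],
  presentations: [{ title: "Intro", speakerIds: [1] }],
});

const loaded = getConference("youconf");
console.log(loaded.speakers.length); // 1 - the whole graph round-trips in one read
```

The appeal is that one read or write covers the whole conference; the trade-off, as I found out below, is that the row key becomes load-bearing.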
Progress at end of day: I've managed to insert some conferences, but am still a bit stuck getting the UI looking nice, as I'm flip-flopping between using a custom CSS template or just building from scratch with a grid layout.
Currently working on the input screens for conferences and speakers. I really love the MVC framework, both how easy it is to use for common scenarios such as validation, and also how easy it is to extend through ModelBinders, DisplayTemplates etc. Some cool things I've discovered:
Display Templates/Editor Templates
Each conference has a TimeZoneId, such as (UTC-04:00) Atlantic Time (Canada). This is stored as a string property on the Conference, e.g.
public class Conference
{
public string TimeZoneId { get; set; }
...
}
The advantage of just storing this as a string rather than a TimeZoneInfo is that I don't need to write a custom modelbinder or custom validator as it's just a plain old string, so the framework can take care of binding and validating it when it's a mandatory field etc.
When adding/editing a conference I want to be able to display a dropdown list of all timezones, and have this automatically bound to the conference. To achieve this, I used code from a blog post and omitted the custom ModelBinder as I didn't need it. I created a new Editor Template named TimeZone in /Views/Shared/EditorTemplates, and also one in /Views/Shared/DisplayTemplates, as follows:
@* Thanks to*@
@model string
@{
var timeZoneList = TimeZoneInfo
.GetSystemTimeZones()
.Select(t => new SelectListItem
{
Text = t.DisplayName,
Value = t.Id,
Selected = Model != null && t.Id == Model
});
}
@Html.DropDownListFor(model => model, timeZoneList)
@Html.ValidationMessageFor(model => model)
This will handle displaying a dropdown with all timezones, however, I needed to tell the framework that when rendering the TimeZoneId property on a Conference it should use this template... and it turned out to be really easy! I just had to add a UIHint to the TimeZoneId property and it automagically wired it up. E.g.
[Required]
[UIHint("TimeZone"), Display(Name = "Time Zone")]
public string TimeZoneId { get; set; }
And that's it! Now when I call .DisplayFor or .EditorFor in my views for the TimeZoneId property it automatically renders this template. In the view it looks like this:
<div class="editor-label">
@Html.LabelFor(model => model.TimeZoneId)
</div>
<div class="editor-field">
@Html.EditorFor(model => model.TimeZoneId)
</div>
and on-screen:
BOOM!!!
Validation
Well that turned out to be as easy as adding the right attributes to the properties I wanted to validate. You'll see above I added the [Required] attribute to the TimeZoneId property, which ensures a user has to enter it. I also added the [Display] attribute with a more user-friendly property name.
Azure Table storage issues when updating a conference
When storing conferences, I used "Conferences" as the PartitionKey, and the conference HashTag as the RowKey, as each conference should have a unique HashTag. My UpsertConference code simply serializes the conference into an entity and calls TableOperation.InsertOrReplace.
Unfortunately this means that if I were to update a conference's HashTag, a new record would be inserted, as the InsertOrReplace call thinks it's a completely new entry. To work around this, I had to find the old conference record first using the old HashTag, delete it, then insert the conference again with the new HashTag. It feels a bit clunky, especially since it's not wrapped in a transaction or batch, but as I mention in my comments, this is something I'll be refactoring to use SQL Server in Part 3 of the competition, so I'm not stressing too much over it at the moment.
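The delete-then-reinsert workaround is easy to see with an in-memory map standing in for the table (hypothetical names and shapes, not the Azure SDK): an upsert keyed on the *new* RowKey can never find the record stored under the old one.

```javascript
// Sketch of why changing the RowKey needs delete + insert:
// InsertOrReplace keyed on the new key can't see the old record.
const table = new Map(); // rowKey -> conference

function upsert(conference) {
  table.set(conference.hashTag, conference); // InsertOrReplace equivalent
}

function updateConference(oldHashTag, conference) {
  if (oldHashTag !== conference.hashTag) {
    // RowKey changed: remove the record stored under the old key first,
    // otherwise both the old and new rows would remain.
    table.delete(oldHashTag);
  }
  upsert(conference);
}

upsert({ hashTag: "oldtag", name: "YouConf" });
updateConference("oldtag", { hashTag: "newtag", name: "YouConf" });

console.log(table.size);          // 1 - no orphaned row
console.log(table.has("oldtag")); // false
```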
CRUD
I've found it fairly easy to perform simple CRUD operations using Table storage thus far, with the minor issue relating to updating an entity's RowKey. While developing locally I used development storage by setting my web.config storage connection string as follows: <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />. In order to get this working in the cloud I just had to set up a storage account and update my Azure cloud settings with the following connection string:
DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey]
Date/time and TimeZone fun
I've had to do a bit more work than expected with the date/times, given that when a conference is created, the creator can select a start/end date/time, and also a timezone. The same goes for a Presentation, which has a start date/time, duration, and timezone.
Initially I was going to store them in local format, along with the timezone Id (as they appear to be stored in dotNetConf, from reading Scott's blog post). However, after doing some reading on the subject of storing date/time information, I gathered that it's best to store datetimes in UTC, then convert them into either the user's timezone, or your chosen timezone (such as the event timezone), as close to the UI as possible. This allows for easier comparisons in server-side code, and also makes it easy to order Conferences and presentations by date/time. E.g.
@foreach (var presentation in Model.Presentations.OrderBy(x => x.StartTime))
There's one article that I keep coming back to whenever I do anything involving datetimes and different timezones, and I read it once again to re-familiarise myself with how to go about things.
So, a user enters the datetime in their chosen timezone, selects the timezone from a dropdown list, and hits Submit. In order to store the date in UTC I have code such as this in the Controller, or possibly in a ModelBinder (I haven't tried using a custom ModelBinder yet, though):
var conferenceTimeZone = TimeZoneInfo.FindSystemTimeZoneById(conference.TimeZoneId);
conference.StartDate = TimeZoneInfo.ConvertTimeToUtc(conference.StartDate, conferenceTimeZone);
conference.EndDate = TimeZoneInfo.ConvertTimeToUtc(conference.EndDate, conferenceTimeZone);
... then to render it back out again in the local timezone, I created a custom EditorTemplate called LocalDateTime.cshtml. Note that I also add a date class onto the input field, so that I can identify any date fields using jQuery when wiring up a date time picker (more on that later).
@model DateTime
@{
var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById((string)ViewBag.TimeZoneId);
var localDateTime = Model.UtcToLocal(localTimeZone);
}
@Html.TextBox("", localDateTime.ToString(),
new { @class = "date",
@Value = localDateTime.ToString("yyyy-MM-dd HH:mm") })
.. and to use this template, I can either decorate the relevant properties on my Conference/Presentation classes with a UIHint, or specify the editor template directly from another view. For example, here's some of the code from /Views/Conference/Edit.cshtml:
@Html.LabelFor(model => model.StartDate)
@Html.EditorFor(model => model.StartDate, "LocalDateTime",
new { TimeZoneId = Model.TimeZoneId }) @Html.ValidationMessageFor(model => model.StartDate)
Note the 2nd parameter, which specifies the editor template that I want to use. I also pass in the TimeZoneId of the conference as a parameter to the LocalDateTime editor template.
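The whole round trip (store UTC, convert to the conference timezone only at the edges) can be sketched in plain JavaScript, with a simple minute offset standing in for the full timezone lookup. This is illustrative only - the real code uses TimeZoneInfo on the server, and the offset arithmetic below ignores DST transitions:

```javascript
// Sketch: persist date/times in UTC, convert at the edges.
// A fixed minute offset stands in for a real timezone database.
function localToUtc(localMs, offsetMinutes) {
  return localMs - offsetMinutes * 60 * 1000;
}

function utcToLocal(utcMs, offsetMinutes) {
  return utcMs + offsetMinutes * 60 * 1000;
}

// Conference entered at 09:00 wall-clock time in a UTC+12 timezone (e.g. NZ)
const offset = 12 * 60;
const enteredLocal = Date.UTC(2013, 4, 1, 9, 0); // treat as wall-clock time
const storedUtc = localToUtc(enteredLocal, offset);

// The stored value is 21:00 the previous day in UTC...
console.log(new Date(storedUtc).getUTCHours()); // 21
// ...and round-trips back to 09:00 local for display.
console.log(new Date(utcToLocal(storedUtc, offset)).getUTCHours()); // 9
```

Because everything in storage is UTC, comparisons and OrderBy calls work without any timezone juggling; the conversion only ever happens right before rendering.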
The UI - How to display date/times?
I was investigating how best to render date/times, and was initially looking at using dual input boxes, with one holding the date and one holding the time, as per yet another of Scott's articles. However, after getting partway through implementing that, I discovered an amazing jQuery datetimepicker plugin which extends the existing jQuery datepicker.
By using that I was able to get away with having a single input box containing both the date AND time, along with a nice picker to help users. It really is cool, and only takes a single line of code to add:
$(function () {
$(".date").datetimepicker({ dateFormat: 'yy-mm-dd' });
});
... and the resulting UI looks pretty good to me!
Not too much to report today as I've been hacking away at the stylesheet to try and make the site look nice. I had a go with the Twitter Bootstrap CSS template, but eventually decided not to use it as it might not work well with jQuery UI (and validation etc). Still struggling away at the end of the day....
More CSS and UI tidy-up. Things are starting to look better now - have a look at the live site to see it coming together.
JSON Serialization
When adding the functionality to delete a speaker, I ran into an issue where I would delete the speaker, but they would not be removed from the actual presentation. Here's a snippet of the code from the Presentation class:
...
[Display(Name="Speaker/s")]
public IList<Speaker> Speakers { get; set; }
...
Now in the Delete method of my speaker controller, I have code like this to delete the speaker:
...
//Remove the speaker
conference.Speakers.Remove(currentSpeaker);
//Also remove them from any presentations...
foreach (var presentation in conference.Presentations)
{
var speaker = presentation.Speakers.FirstOrDefault(x => x.Id == currentSpeaker.Id);
presentation.Speakers.Remove(speaker);
}
YouConfDataContext.UpsertConference(conferenceHashTag, conference);
return RedirectToAction("Details", "Conference", new { hashTag = conferenceHashTag });
...
Note the line which says presentation.Speakers.Remove(speaker)... with my default setup this wasn't actually deleting the speaker, because by default JSON.Net doesn't preserve object references when serializing (remember that we're serializing the entire conference when we save it to table storage, then deserializing it on the way back out). This means that the speaker object that I retrieved on the line beforehand is not actually the same instance as the one in the presentation.Speakers collection, so Remove fails to find it.
Initially I was going to override Equals on the Speaker class to have it compare them by Id, but then I did some googling and found that, sure enough, others had already run into this problem. And it turns out JSON.Net (written by the coding fiend aka James Newton-King, who also happens to be in Wellington, NZ) already handles this situation and allows you to preserve object references! Basically I just had to specify the right option when serializing the conference before saving, in my UpsertConference method:
var entity = new AzureTableEntity()
{
PartitionKey = "Conferences",
RowKey = conference.HashTag,
//When serializing we want to make sure that object references are preserved
Entity = JsonConvert.SerializeObject(conference,
new JsonSerializerSettings {
PreserveReferencesHandling = PreserveReferencesHandling.Objects })
};
TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
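The reference-equality pitfall behind all this is language-agnostic; here's the same trap reproduced in a few lines of JavaScript, where a JSON round trip produces a value-equal but reference-distinct copy:

```javascript
// After a serialize/deserialize round trip, "the same" speaker
// becomes a distinct object, so removal by reference fails.
const speaker = { id: 1, name: "Jane" };
const presentation = { speakers: [speaker] };

// Round-trip through JSON: the clone is equal by value, not by reference
const clone = JSON.parse(JSON.stringify(speaker));
console.log(presentation.speakers.indexOf(clone)); // -1: not found!

// Removing by identity (the Id) instead works:
const idx = presentation.speakers.findIndex(s => s.id === clone.id);
if (idx >= 0) presentation.speakers.splice(idx, 1);
console.log(presentation.speakers.length); // 0
```

JSON.Net's PreserveReferencesHandling solves this from the other direction, by keeping the deserialized graph wired up as a single set of shared instances so reference-based removal keeps working.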
Setting the width of textareas in MVC
Remember earlier how I said I could make MVC automatically render a textarea for a property by simply decorating the property with the [DataType(DataType.MultilineText)] attribute? Well, what if I want to specify the height/width of the textarea? CSS to the rescue!
The framework automatically adds the multi-line class to any textareas that it renders using the default editortemplate, which means I was able to add a style for this class and achieve the desired result. E.g.
.multi-line { height:15em; width:40em; }
Spent most of my time doing CSS and UI enhancements, and making the homepage look pretty. I tend to struggle with CSS and making things look beautiful at the best of times, particularly when I start to run into cross-browser issues. However, I think that I've come up with something that looks quite nice now - check it out on the live site.
A few things I found helpful along the way...
jQuery UI comes with a button widget, which "Enhances standard form elements like buttons, inputs and anchors to themeable buttons with appropriate hover and active styles." It makes them look quite nice, and since I already had jQuery UI included in the project (it comes bundled with the MVC4 Internet web application template) I thought I'd use it. One line of javascript was all that was needed:
$("#main-content input[type=submit], #main-content a:not(.no-button), #main-content button")
.button();
Note that I've scoped it to only include items within the main-content element to improve selector performance. The before and after are shown below:
On the subject of buttons, it's often nice to have icons for various buttons, not to mention in either your header or logo. I found a couple of sites that provide free icons released under the Creative Commons attribute licence, and so I used a few of them (and included the relevant link-back in my site footer). The sites were:
Find Icons and Icon Archive.
I also found a very cool logo generator, which I used to generate the text in the YouConf logo.
It's fairly easy to include links for the big social networks since they provide code that you can embed either via iFrame or javascript. Unfortunately they seem to take quite a long time to load though, so this can result in flickering of the icons as one populates after the other. To hide this I hacked away and ended up hiding the section with the buttons in it till 5 seconds after the DOM had loaded. E.g.
setTimeout(function () {
$("#social").show();
}, 5000);
I'm sure there's a better way to do this, but I'm not sure I have time to find out just yet! Thanks to the post that gave me the idea anyhow...
Isn't it nice when things just work? All this time that I've been stressing away fixing bugs and getting my site looking nice, I haven't had a single issue with Git publishing to TFS. I simply check in my changes to my local repository as I complete features, and try to sync to GitHub a few times a day. Each time I sync to GitHub my changes are automatically pushed to my Azure website, usually within a few minutes. I've been able to focus on building my website and not fret over versioning or deployment issues. Phew!
Quite a bit to report on today.....
I found the above post, and a few others, which set up WordPress blogs, so I thought why not try a different one to make things a bit more interesting. In the end I went with Drupal, as an old workmate of mine used to rave about it. I found an article for guidance on installing WordPress, and used this as a guide. Here's what I did:
And now I have a nice themed Drupal site!
I then added a couple of blog entries for day one & two, by copying & pasting the html code from my CodeProject article into the blog entry.
What, wait a minute, aren't we supposed to avoid duplication?
After getting my second day's progress blog post into my Drupal site, I realized that if I was to copy & paste all the articles, I'd be maintaining the same content in two places.
In light of the above, I left my two initial blog posts intact, and decided that for now I'll only post updates in my CodeProject article, since the goal of setting up the blog was to see if it really was as easy as others had made out (whilst learning along the way), which indeed it was. I'll leave the blog in place though, as it deserves to be part of my entry for challenge two as one of the other 9 websites.
Usually one of the first things I do when creating a project is to set up error logging. Sometimes it's to a text file, sometimes to xml, sometimes to a database, depending on the application requirements. My favourite logging framework for .Net web apps is Elmah, as it takes care of catching unhandled exceptions and logging them to a local directory right out-of-the-box. It has an extension for MVC too, which is awesome.
Elmah allows you to specify in your web.config the route url you'd like to use for viewing errors. It also allows you to restrict access to the log viewer page if needed, using an authorization filter so you can specify which user roles should have access. At this stage I haven't implemented membership, and so can't restrict access via roles. Thus I'm going to leave remote access to the logs off (which it is by default). For part 3, when I implement membership, I'll update this. Note that for any production application I'd never leave the error log page open to the public, as it would give away far too much to anyone who happens to come snooping.
Right - to setup Elmah logging I did the following:
By default Elmah logs exceptions in-memory, which is great when you're developing, but not so good when you deploy to another environment and want to store your errors so you can analyze them later. So, how do we setup persistent storage?
In the past I've used a local xml file, which is really easy to configure in Elmah by adding the following line to the <elmah></elmah> section of your web.config:
<elmah>
<errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
</elmah>
This is fine if you're working on a single server, or can log to a SAN or similar and then aggregate your log files for analysis. However, in our case we're deploying to Azure, which means there are no guarantees that our site will stay on a single server for its whole lifetime. Not to mention that the site will be cleared each time we redeploy, along with any local log files. So what can we do?
One option is to set up Local Storage in our Azure instance. This would give us access to persistent storage that will not be affected by things like web role recycles or redeployments, at the cost of some extra configuration.
The above solution would work fine, however, since I'm already using Azure Table storage, I thought why not use it for storing errors as well? After some googling I came upon a package for using table storage with Elmah, but upon downloading the code I realized it wasn't up-to-date with the Azure Storage v2 SDK. It was easy to modify though, with the end result being the class below.
namespace YouConf.Infrastructure.Logging
{
/// <summary>
/// Based on
/// using-elmah-in-windows-azure-with-table-storage/
/// Updated for Azure Storage v2 SDK
/// </summary>
public class TableErrorLog : ErrorLog
{
private string connectionString;
public const string TableName = "Errors";
private CloudTableClient GetTableClient()
{
// Retrieve the storage account from the connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the table client.
return storageAccount.CreateCloudTableClient();
}
private CloudTable GetTable(string tableName)
{
var tableClient = GetTableClient();
return tableClient.GetTableReference(tableName);
}
public override ErrorLogEntry GetError(string id)
{
var table = GetTable(TableName);
// Note: entities are stored with TableName as the PartitionKey,
// so we must retrieve with the same partition key (not an empty string)
TableOperation retrieveOperation = TableOperation.Retrieve<ErrorEntity>(TableName, id);
TableResult retrievedResult = table.Execute(retrieveOperation);
if (retrievedResult.Result == null)
{
return null;
}
return new ErrorLogEntry(this, id,
ErrorXml.DecodeString(((ErrorEntity)retrievedResult.Result).SerializedError));
}
public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)
{
var count = 0;
var table = GetTable(TableName);
TableQuery<ErrorEntity> query = new TableQuery<ErrorEntity>()
.Where(TableQuery.GenerateFilterCondition(
"PartitionKey", QueryComparisons.Equal, TableName))
.Take((pageIndex + 1) * pageSize);
//NOTE: Ideally we'd use a continuation token
// for paging, as currently we're retrieving all errors back
//then paging in-memory. Running out of time though
// so have to leave it as-is for now (which is how it was originally)
var errors = table.ExecuteQuery(query)
.Skip(pageIndex * pageSize);
foreach (var error in errors)
{
errorEntryList.Add(new ErrorLogEntry(this, error.RowKey,
ErrorXml.DecodeString(error.SerializedError)));
count += 1;
}
return count;
}
public override string Log(Error error)
{
var entity = new ErrorEntity(error);
var table = GetTable(TableName);
TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
table.Execute(upsertOperation);
return entity.RowKey;
}
public TableErrorLog(IDictionary config)
{
Initialize();
}
public TableErrorLog(string connectionString)
{
this.connectionString = connectionString;
Initialize();
}
void Initialize()
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
var tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("Errors");
table.CreateIfNotExists();
}
}
public class ErrorEntity : TableEntity
{
public string SerializedError { get; set; }
public ErrorEntity() { }
public ErrorEntity(Error error)
: base(TableErrorLog.TableName,
(DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19"))
{
PartitionKey = TableErrorLog.TableName;
RowKey = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19");
this.SerializedError = ErrorXml.EncodeString(error);
}
}
}
This will log all errors to the Errors table in Azure table storage, and also take care of reading them back out again.
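One detail worth calling out in ErrorEntity above is the reverse-tick RowKey: table rows come back in ascending key order, so subtracting the timestamp from a maximum value and zero-padding to a fixed width makes the newest error sort first. A quick sketch of the idea in JavaScript, with a stand-in constant playing the role of DateTime.MaxValue.Ticks:

```javascript
// Sketch: fixed-width "reverse timestamp" keys sort newest-first
// under plain lexicographic (ascending) ordering, which is how
// Azure table storage returns rows within a partition.
const MAX = 1e13; // stand-in for DateTime.MaxValue.Ticks

function reverseKey(timestampMs) {
  // Pad to a fixed width (like C#'s ToString("d19")) so that string
  // comparison behaves like numeric comparison.
  return String(MAX - timestampMs).padStart(19, "0");
}

const older = reverseKey(Date.UTC(2013, 4, 1));
const newer = reverseKey(Date.UTC(2013, 4, 2));

// An ascending sort on the keys puts the newer error first:
const sorted = [older, newer].sort();
console.log(sorted[0] === newer); // true
```

Without the fixed-width padding, "9" would sort after "10" and the trick would break, which is why the "d19" format string matters as much as the subtraction.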
I also had to update my web.config to use the new logger class as follows:
<elmah>
<errorLog type="YouConf.Infrastructure.Logging.TableErrorLog, YouConf" />
</elmah>
Now if I generate an error I'll still see it on the Elmah log view page, but I can also see it in my table storage. I'm using dev storage locally, so I can fire up the wonderful Azure Storage Explorer and view my Error Log table as shown below:
and also on-screen:
Lovely!
Today I spent most of my time writing up the final article content for challenge two. I also implemented the SignalR functionality for keeping the live video url up-to-date, as shown below.
My article is now almost complete, with just a few touchups required. I'll probably spend the next day or two tidying up the site's css, javascript etc and making sure I haven't missed anything!
Carried on updating my article and tidying up my code to fix all those little things such as extraneous files that were no longer necessary. Also added an Easter Egg for the spot challenge!!
Continued to make a few text changes and minor tidyup as I realised in NZ we're actually 16 hours ahead of the timezone the conference is being judged in. Thus the challenge one deadline for me was actually about 4pm on May 13th NZT!
Since I'd been spending quite a lot of time on this during challenge two, I thought I'd have a bit of a break and keep away from the computer for a bit. I kept up with the comments and forum, but didn't do any development work. I did learn how to do tagging (or 'labelling' in TFS-speak) and branching in Git though. Tagging is particularly applicable in my current situation, where I'd like to start development for challenge 3, but still want my source from challenge two to be available for the judges and anyone else to look at. I also don't want to introduce breaking changes into the live site when I check changes in. So, what did I do? I tagged the final challenge-two commit; the tag points at that exact commit, and thus won't change!
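The tagging workflow is just a couple of standard Git commands (the tag name here is only an example, and the demo runs in a throwaway repo rather than the real project):

```shell
# Demonstration in a throwaway repo; in the real project you'd run the
# tag command against your existing clone.
cd "$(mktemp -d)"
git init -q
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "challenge two final commit"

# An annotated tag records who created it and when, and always points
# at the same commit - exactly what we want for a judged snapshot.
git tag -a challenge-2 -m "Source as submitted for challenge two"

# Push it to GitHub so others can browse that exact snapshot:
#   git push origin challenge-2

git tag    # prints: challenge-2
```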
One of my tasks from last time was to add some tests for my controllers, to ensure that they're doing what they're meant to be doing. Given that most of the logic is in the Conference, Speaker, and Presentation controllers, I'll start with them. I'm not too keen on testing to the n-th degree when it comes to what is essentially a CRUD-based system, however, there is some specific logic that I think should be tested so we can be confident we're not breaking anything going forward...
To get started, I've installed a couple of packages that I find useful for testing, as shown in the screenshots below.
Note that I would have liked to use the Visual Studio Fakes framework, and I did try; however, every time I've used it in the recent past I've ended up in situations where something can't be mocked for whatever reason, and I'm left without a clue how to fix it. For example, I followed the documented steps and added the fakes assembly for YouConf, but after building it couldn't generate a fake for the IYouConfDataContext. Given the tight timeframes for this competition, I really didn't have time to look any further, so I went with Moq, which I knew would work.
I tend to use either Rhino Mocks or Moq on my projects, because they both do what they're supposed to do and have lots of useful help and tutorials available. As an aside, now that Ayende won't be actively maintaining Rhino Mocks, I wonder who will?.....
I won't go into the details of the tests too much, except to say that I'll try and add tests for what I see as the important bits of my controllers as I go. You can always check the source code if you'd like to see more. An example test for the All() method on my ConferenceController is as follows:
[TestClass]
public class ConferenceControllerTests
{
[TestMethod]
public void All_Should_ReturnOnlyPublicConferences()
{
//Setup a stub repository to return three public conferences and one private
var stubRepository = new Mock<IYouConfDataContext>();
stubRepository
.Setup(x => x.GetAllConferences())
.Returns(new List<Conference>(){
new Conference(){ AvailableToPublic = true},
new Conference(){ AvailableToPublic = true},
new Conference(){ AvailableToPublic = true},
new Conference(){ AvailableToPublic = false}
});
var conferenceController = new ConferenceController(stubRepository.Object);
var result = conferenceController.All()
.As<ViewResult>();
result.Model
.As<IEnumerable<Conference>>()
.Should().HaveCount(3);
}
}
I downloaded the source for my project from GitHub at the end of challenge one (to make sure it worked), and found (to my surprise) that the download size was ~60mb! After checking where most of the files were, I found that it was due to the large number of NuGet packages in my solution. I thought to myself "wouldn't it be nice if these could be automagically downloaded during the build process on GitHub" and it turns out that this problem has already been solved! See this article for details on how to instruct the build server to automatically download missing NuGet packages - in my case I had to do the following:
This added a new .nuget folder to my solution, as below:
Finally, I updated my .gitignore file to exclude the entire packages folder from source control (note that there was already a line for it in there by default, so I just uncommented it):
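For reference, the relevant part of the .gitignore ends up looking something like this (the comment is illustrative; in the default template the `packages/` line simply ships commented out):

```
# NuGet packages folder - restored automatically at build time,
# so there's no need to keep it in source control
packages/
```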
And now we no longer have packages getting checked in to source control! I also ran a command to delete the packages folder from my GitHub remote repository, but not my local machine as I wanted to keep the existing packages in my dev environment, as follows, and then checked in my changes.
git rm -r --cached packages
Today I'll try and get the basic membership functionality that comes with the example MVC 4 Web application template working. In order to do that, I'll need SQL.... and guess what - that's the focus for challenge 3!
SimpleMembership comes baked into MVC 4, making it really easy to get started with.
In challenge 2, I commented out the entire AccountController class as I didn't want it to be used, since I wasn't implementing membership. I left the /views/account/*.cshtml files alone, however, as I knew I'd need them for this part. I've now uncommented the AccountController code again, and want to point out a few things. The attribute in SimpleMembershipAttribute.cs configures the built-in ASP.NET web security to use the DefaultConnection connection string and the UserProfile table. The localdb database file it creates shouldn't be checked in to source control, so I'll add another entry to my gitignore file to exclude the whole /App_Data folder for now.
I really don't want poor users to have to remember yet another username/password for my site, so I'll allow them to login using external providers as listed below:
Again, support for this comes built-in to MVC 4, and I highly recommend reading the linked article, as it contains information on how to support all of the above providers. I now have to go and register YouConf with each of the above providers so I can get an API key/secret. Again, the article shows how to go about that task as well.
Now that authentication is working locally, how about getting it working with a real SQL Azure database?
SQL Azure - your database in the cloud
As I mentioned earlier, when I'm developing locally I can use localdb for my database. However, what about when I deploy this to Azure? I'll need to access a real database at that stage, so before I go any further I think I should set one up. With the free Azure trial, you get to create one database, which is all I'll need. The result is below:
So now I had a new database named YouConf - easy as pie! I wanted to have the connection string available to the website, so I added it to my configuration first; now I can access my database from the YouConf web site. I can also manage it and run queries directly from the Azure Management Portal by clicking the Manage button with the database selected. Note that I had to allow my IP address in the firewall rules in order to do this, which I did by simply accepting the prompt that came up when I clicked the Manage button.
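For reference, a SQL Azure connection string in web.config follows the usual format below (server name, user, and password here are placeholders, not my real values):

```xml
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=tcp:myserver.database.windows.net,1433;Database=YouConf;User ID=myuser@myserver;Password=mypassword;Trusted_Connection=False;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```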
I can also access it from my local SQL server management studio if needed! Again, this requires a firewall entry to allow my ip address to have access. More on this later...
What about my Conferences - will I move them over from table storage?
I certainly plan to, and will do so over the coming days. This will take a bit of work though, so at this stage I'll leave them as-is since they're working fine using table storage. What I will do, however, is rename the UsersContext to YouConfDbContext as below:
public class YouConfDbContext : DbContext
{
public YouConfDbContext()
: base("DefaultConnection")
{
}
public DbSet<UserProfile> UserProfiles { get; set; }
}
I updated references to it, moved it to my /Data folder, and also moved the UserProfile class to its own file. Remember my source is all available on GitHub in the dev branch, so feel free to view it.
From this point forward, it's important to know a bit about Entity Framework migrations if I need to change my UserProfile class, so I'd recommend the following two articles if you aren't familiar with EF or code-first migrations:
If you've looked at my code, you'd realise I've used a repository pattern to hide the implementation details for accessing Azure Table storage from my controllers. I felt this was a good approach to take, as there are a number of specific details (managing partition/row keys, update strategy when the conference hashtag changes etc) which are best performed by a specific repository class - in this case the YouConfDataContext - and not other classes.
Taking a step back in time, (before I started using NHibernate, and then Entity Framework for db access), I was a big fan of using repositories or other data access strategies such as ActiveRecord or DAOs in order to hide the database implementation details from my UI and other code. However, with the advent of such powerful OR-mapping tools, I often find these days (particularly with small sites like YouConf) that it's easier to avoid having a repository or data access layer, and just use the Entity Framework Data Context (or ISession for NHibernate) directly from the controller.
For those who argue that doing this for EF is not testable - it actually is - you just have to create an interface that your DbContext derives from and pass that as a dependency into your controllers, much like I'm already doing with the IYouConfDataContext. I'm an avid follower of Oren Eini's blog, and admit to having been swayed by some of his views on this, particularly when it comes to eager-loading of object graphs, and the need for this to be transparent so we don't go creating select n+1 issues etc. I'd recommend reading some of his posts if you're interested:
The upshot of all this is that I'll need to create an IYouConfDbContext interface, which YouConfDbContext implements, and use this from the controllers; without requiring an additional abstraction layer between it and the controllers. You'll see what I mean when I start writing some code!
Sorry for the lack of updates, but as I'm sure you're aware, the more stuff I add to the app, the more I have to write about it, and as a result I end up spending far too much time in front of the small screen and not doing other things! I might have to shorten some of the daily updates so I can fit them in more easily, but here's a brief list of the things I've been working on over the past few days...
Right, time to start writing up some of the details of the items I mentioned earlier, here goes...
I went to the Microsoft developer site and set up an external OAuth account for my app, and was going to do the same with Google, but then found that since I didn't need to access any of the user's information in their Google account, I could get away with not setting up an OAuth account with them. Note that in the MVC 4 default application, the Google external provider actually uses OpenID for authentication, not OAuth. This wasn't a concern for me so I didn't worry about it, but if you have to have OAuth it would be worth noting.
I then added the code to enable the Microsoft and Google external providers in my /App_Start/AuthConfig.cs files as follows:
public static void RegisterAuth()
{
// To let users of this site log in using their accounts
// from other sites such as Microsoft, Facebook, and Twitter,
// you must update this site. For more information
// visit
Dictionary<string, object> microsoftSocialData =
new Dictionary<string, object>();
microsoftSocialData.Add("Icon", "/images/icons/social/microsoft.png");
OAuthWebSecurity.RegisterMicrosoftClient(
clientId: ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientId"],
clientSecret: ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientSecret"],
displayName: "Windows Live",
extraData: microsoftSocialData);
Dictionary<string, object> googleSocialData = new Dictionary<string, object>();
googleSocialData.Add("Icon", "/images/icons/social/google.png");
OAuthWebSecurity.RegisterGoogleClient("Google", googleSocialData);
}
Note that I've also added an additional piece of data for the icon to display, to make the login page a little prettier by showing an icon for each provider, rather than just a button with text. Thanks to the icon designers for the icons! The result is that we get buttons like this on the login screen (note that I may try and remove the gray border at some stage too...):
You may have noticed that when setting up my Microsoft external provider, I used code such as ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientId"] to retrieve the private keys from the web.config file. Given that the web.config file is checked into source control, I didn't want the values for these to be publicly available for all to see, so I had to find a way to hide them. Note that these are slightly different to my local db connection strings, as I don't have a problem with other users seeing my local db connection string in the web.config. For settings like this, however, I want them to be available on my local machine, but don't want them going into source control. So, what did I do?
UPDATE: During challenge four I found a better way to handle sensitive config settings with Azure Websites, and keep them out of GitHub. I've documented this in a full article.
As you may be aware, you can store appsettings in a separate file if you wish, using the file attribute on the appSettings element. Any values in the additional file that you specify will overwrite the existing values for the same key in the web.config, or just be added if they weren't present in the web.config. So in my case, I pointed the appSettings element at a HiddenSettings.config file and excluded that file from source control:
<appSettings file="HiddenSettings.config">
<add key="Auth-MicrosoftAuthClientId" value="thisvalueneedstobeupdatedinthecloudconfig"/>
<add key="Auth-MicrosoftAuthClientSecret" value="thisvalueneedstobeupdatedinthecloudconfig"/>
</appSettings>
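The HiddenSettings.config file itself (which stays out of source control) then looks something like this, with the real values in place (the values shown here are obviously placeholders):

```xml
<appSettings>
  <add key="Auth-MicrosoftAuthClientId" value="my-real-client-id"/>
  <add key="Auth-MicrosoftAuthClientSecret" value="my-real-client-secret"/>
</appSettings>
```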
I implemented the standard password reset functionality where one has to enter an email address, click a button, then get sent an email with a reset token in the querystring. Upon clicking that, they get taken to the site to reset their password. This required me to send emails, and for that I used Sendgrid. After setting up an account with them, I copied the username and password values, and added them to my HiddenSettings.config file. I also added a system.net entry to my web.config file as follows:
<system.net>
<mailSettings>
<!-- Method#1: Configure smtp server credentials -->
<smtp from="no-reply@youconf.azurewebsites.net">
<network enableSsl="true" host="smtp.sendgrid.net"
port="587" userName="empty@thiswillgetoverwritten"
password="thiswillgetoverwritten" />
</smtp>
</mailSettings>
</system.net>
I added an email sender class to send the emails, along with an interface, and configured Ninject to inject it into the constructor of the AccountController. The method for sending emails is as follows (note that I didn't use the Sendgrid library, just plain old .Net code):
public void Send(string to, string subject, string htmlBody)
{
MailMessage mailMsg = new MailMessage();
// To
mailMsg.To.Add(new MailAddress(to));
// From
mailMsg.From = new MailAddress("no-reply@youconf.azurewebsites.net", "YouConf support");
// Subject and multipart/alternative Body
mailMsg.Subject = subject;
string text = "You need an html-capable email viewer to read this";
string html = htmlBody;
mailMsg.AlternateViews.Add(
AlternateView.CreateAlternateViewFromString(text, null, MediaTypeNames.Text.Plain));
mailMsg.AlternateViews.Add(
AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html));
// Init SmtpClient and send
SmtpClient smtpClient = new SmtpClient();
System.Net.NetworkCredential credentials = new System.Net.NetworkCredential(
CloudConfigurationManager.GetSetting("Sendgrid.Username"),
CloudConfigurationManager.GetSetting("Sendgrid.Password"));
smtpClient.Credentials = credentials;
smtpClient.Send(mailMsg);
}
Generating the email body
I wanted to generate the body of the emails using Razor views, so I could pass in models, parameters etc, and have them nicely formatted. To do that, I added the nuget package for the MvcMailer library, and configured it to have a UserMailer class with a PasswordReset method and corresponding view. Scott Hanselman has a good blog post on this which I recommend reading.
To use the UserMailer class, I simply call it from my controller:
string token = WebSecurity.GeneratePasswordResetToken(user.UserName);
//Send them an email
UserMailer mailer = new UserMailer();
var mvcMailMessage = mailer.PasswordReset(user.Email, token);
MailSender.Send(user.Email, "Password reset request", mvcMailMessage.Body);
and the code in the UserMailer class...
public virtual MvcMailMessage PasswordReset(string email, string token)
{
ViewBag.Token = token;
return Populate(x =>
{
x.Subject = "Reset your password";
x.ViewName = "PasswordReset";
});
}
When I complete the forgot password process, I receive an email as follows:
Magnifique!!!
Note: I'm sending the email in-process here, which is not recommended as it can slow down the user's browsing experience and is less resilient to faults connecting to smtp etc. In future I'll look to move this into an Azure worker role, but for now I'll leave it and move on as I have other issues with worker roles which I'll explain later. See my section on some of the issues I had when looking at domains, ssl, web/worker roles for more on this....
Moving Conferences to SQL
I thought I'd bite the bullet and do this now, so that I didn't end up scrambling to finish it on the last day of the challenge. In the end it wasn't too hard, as I was able to use the same data model and entity classes with Entity Framework as I had for table storage e.g. The Conference, Presentation, and Speaker classes. What I did first was add more validation attributes such as Max length validators, so these would automatically be applied when the tables were being created.
I also made sure to add bi-directional navigation properties where they were needed. For example, at the end of challenge two, the Conference class contained a list of speakers and a list of presentations; however, there was no Conference property in either the Speaker or Presentation class to navigate the other way. In order to get Entity Framework to generate the tables as I'd like them, I had to add the properties on both ends. Likewise for the relationship between Speaker and Presentation, where a presentation can have 0..* speakers. To give an example, below is the code for the Speaker class (the Presentation class follows the same pattern):
public class Speaker{
public int Id { get; set; }
[Required]
[MaxLength(200)]
public string Name { get; set; }
[Required]
[DataType(DataType.MultilineText)]
public string Bio { get; set; }
[MaxLength(250)]
public string Url { get; set; }
[MaxLength(150)]
public string Email { get; set; }
[Display(Name = "Avatar Url")]
[MaxLength(250)]
public string AvatarUrl { get; set; }
[Required]
public int ConferenceId { get; set; }
public virtual Conference Conference { get; set; }
public virtual IList<Presentation> Presentations { get; set; }
}
IMPORTANT: This gets me every time!!!! Make sure you mark your navigation properties as virtual, otherwise EF won't be able to lazy-load them! I got bitten by this yet again as I hadn't set them up as virtual, and as a result was wondering why my presentations had no speakers.... Hopefully I don't forget again...
I'm using Code-First, and since the database had already been setup automatically with just the Membership tables by the SimpleMembership attribute I didn't have to recreate it. What I did do was remove the initializer in the SimpleMembershipAttribute.cs class, and add one in the Global.asax.cs class to automatically migrate the database to the latest version on app startup as follows:
//Tell Entity Framework to automatically update
// our database to the latest version on app startup
Database.SetInitializer(
new System.Data.Entity.MigrateDatabaseToLatestVersion<YouConfDbContext,
YouConf.Migrations.Configuration>());
As I mentioned earlier, I'd created a YouConfDbContext which inherited from the EF DBContext for accessing the database. The code for this is as follows:
public class YouConfDbContext : DbContext, IYouConfDbContext
{
public YouConfDbContext()
: base("DefaultConnection")
{
}
public DbSet<UserProfile> UserProfiles { get; set; }
public DbSet<Conference> Conferences { get; set; }
public DbSet<Speaker> Speakers { get; set; }
public DbSet<Presentation> Presentations { get; set; }
}
I had to enable code-first migrations, and add my initial migration, as follows:
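The commands for this are run in the Package Manager Console (the migration name matches the generated class shown below; the context type name is my own):

```powershell
PM> Enable-Migrations -ContextTypeName YouConfDbContext
PM> Add-Migration AddConferenceDataToStoreInDatabaseInsteadOfTableStorage
```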
which resulted in the following code (note that I commented out the UserProfile table as it was already created by SimpleMembership):
public partial class AddConferenceDataToStoreInDatabaseInsteadOfTableStorage : DbMigration
{
public override void Up()
{
//CreateTable(
// "dbo.UserProfile",
// c => new
// {
// UserId = c.Int(nullable: false, identity: true),
// UserName = c.String(),
// })
// .PrimaryKey(t => t.UserId);
CreateTable(
"dbo.Conferences",
c => new
{
Id = c.Int(nullable: false, identity: true),
HashTag = c.String(nullable: false, maxLength: 50),
Name = c.String(nullable: false, maxLength: 250),
Description = c.String(),
Abstract = c.String(nullable: false),
StartDate = c.DateTime(nullable: false),
EndDate = c.DateTime(nullable: false),
TimeZoneId = c.String(nullable: false),
HangoutId = c.String(maxLength: 50),
TwitterWidgetId = c.Long(),
AvailableToPublic = c.Boolean(nullable: false),
})
.PrimaryKey(t => t.Id);
CreateTable(
"dbo.Presentations",
c => new
{
Id = c.Int(nullable: false, identity: true),
Name = c.String(nullable: false, maxLength: 500),
Abstract = c.String(nullable: false),
StartTime = c.DateTime(nullable: false),
Duration = c.Int(nullable: false),
YouTubeVideoId = c.String(maxLength: 250),
ConferenceId = c.Int(nullable: false),
})
.PrimaryKey(t => t.Id)
.ForeignKey("dbo.Conferences", t => t.ConferenceId, cascadeDelete: true)
.Index(t => t.ConferenceId);
CreateTable(
"dbo.Speakers",
c => new
{
Id = c.Int(nullable: false, identity: true),
Name = c.String(nullable: false, maxLength: 200),
Bio = c.String(nullable: false),
Url = c.String(maxLength: 250),
AvatarUrl = c.String(maxLength: 250),
ConferenceId = c.Int(nullable: false),
Presentation_Id = c.Int(),
})
.PrimaryKey(t => t.Id)
.ForeignKey("dbo.Conferences", t => t.ConferenceId, cascadeDelete: true)
.ForeignKey("dbo.Presentations", t => t.Presentation_Id)
.Index(t => t.ConferenceId)
.Index(t => t.Presentation_Id);
}
public override void Down()
{
DropIndex("dbo.Speakers", new[] { "Presentation_Id" });
DropIndex("dbo.Speakers", new[] { "ConferenceId" });
DropIndex("dbo.Presentations", new[] { "ConferenceId" });
DropForeignKey("dbo.Speakers", "Presentation_Id", "dbo.Presentations");
DropForeignKey("dbo.Speakers", "ConferenceId", "dbo.Conferences");
DropForeignKey("dbo.Presentations", "ConferenceId", "dbo.Conferences");
DropTable("dbo.Speakers");
DropTable("dbo.Presentations");
DropTable("dbo.Conferences");
//DropTable("dbo.UserProfile");
}
}
When I fired up the debugger in Visual Studio and ran the app, my tables were automatically created by Entity Framework, and I was able to keep developing using SQL!
As I made updates to my entity classes I added additional migrations in order for the changes to propagate to the database, such as when I added an Email field to the UserProfile class, so I could store the user's email address.
A common issue when using MVC is how to handle the mapping from form parameters to your domain objects for saving to the database. E.g. You might have the following method signature in a controller:
public ActionResult Edit(string currentHashTag, Conference conference)
MVC can take care of binding form fields to the conference parameter, but how do you map those values onto the existing entity retrieved from the database? Often in this situation it can be helpful to use viewmodels to restrict the properties that can be updated, and make mapping easier; however, even if we use viewmodels, we still have the same issue.
The good news is that AutoMapper helps make this issue easy to resolve! I'd recommend reading the documentation to find out more, but in my case I had to create a map for each of my entity types and then call Mapper.Map in my controllers to copy the posted values onto the existing entity.
For example, in global.asax.cs I have a method called ConfigureAutoMapper as follows (note that I don't want to override the existing collection properties so I ignore them):
private static void ConfigureAutoMapper()
{
Mapper.CreateMap<Speaker, Speaker>()
.ForMember(x => x.Presentations, x => x.Ignore())
.ForMember(x => x.Conference, x => x.Ignore());
Mapper.CreateMap<Presentation, Presentation>()
.ForMember(x => x.Speakers, x => x.Ignore())
.ForMember(x => x.Conference, x => x.Ignore());
Mapper.CreateMap<Conference, Conference>()
.ForMember(x => x.Presentations, x => x.Ignore())
.ForMember(x => x.Speakers, x => x.Ignore())
.ForMember(x => x.Administrators, x => x.Ignore());
}
and in my ConferenceController edit method:
public ActionResult Edit(string currentHashTag, Conference conference)
{
....
var existingConference = YouConfDbContext.Conferences
.FirstOrDefault(x => x.Id == conference.Id);
if (conference == null)
{
return HttpNotFound();
}
...
Mapper.Map(conference, existingConference);
YouConfDbContext.SaveChanges();
...
}
One line of code to do the mappings, and only a few lines to configure it - looks a bit like magic to me!
In order to transmit messages between server nodes in an Azure web farm, SignalR uses Service Bus topics. See the SignalR scaleout documentation for more details, but the configuration is fairly simple. You just need to create a service bus namespace, add the SignalR Service Bus package to your project in Visual Studio, and then tell SignalR to use your service bus namespace, as shown below:
Adding the service bus namespace in the Azure Management Portal (see the Azure documentation for specific details):
Add SignalR Service bus via NuGet:
Copy the value of the service bus connection string from the management portal as below:
Then paste it into your web.config file, or in my case, my HiddenSettings.config file:
<add key="Microsoft.ServiceBus.ConnectionString"
value="Endpoint=sb://yourservicebusnamespace.servicebus.windows.net/;
SharedSecretIssuer=owner;SharedSecretValue=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />
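The code side of "telling SignalR to use your service bus namespace" is essentially a one-liner at startup. A minimal sketch (the topic prefix argument is arbitrary; I'm just using the app name here):

```csharp
// In Application_Start, before the SignalR hub routes are registered:
// point SignalR's message bus at the Service Bus backplane so all
// web farm nodes see each other's messages.
string connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
GlobalHost.DependencyResolver.UseServiceBus(connectionString, "YouConf");
```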
Important: Don't forget to update your application settings in your cloud service configuration as well! Domains, SSL, and some more discoveries:
I now think I'll leave things as they are, since I already have a recognizable domain at youconf.azurewebsites.net, and there's already an SSL certificate for *.azurewebsites.net automatically provided by Azure, which gives us the security we require for logins etc. I suspect I'll have to revisit this in future....
5000+ hits on the article?!!!! I'm wondering if those stats are right or if maybe someone's playing a trick on me... Anyway, hopefully if you are one of the 5000 mystery viewers you've learned a thing or two, or perhaps learned what not to do!
In the 2nd challenge, you might recall that I didn't make the error logs page public as I couldn't secure it using role-based authentication. Now that I've included SimpleMembership, I can enable remote access to administrators. To do so, I first had to update my web.config as follows (note that I've made the Elmah error log url a bit shorter this time).
First, I enabled remote access:
<elmah>
<errorLog type="YouConf.Infrastructure.Logging.TableErrorLog, YouConf" />
<security allowRemoteAccess="1" />
</elmah>
Second, I enabled authentication, set the allowed role to Administrators, and changed the url:
<add key="elmah.mvc.requiresAuthentication" value="true" />
<add key="elmah.mvc.allowedRoles" value="Administrators" />
<add key="elmah.mvc.route" value="viewerrorlogs" />
So now any user in the Administrators role can browse to the error logs at /viewerrorlogs. Note that I normally would do my best to avoid letting anyone know the url for my error logs/admin pages, but in this case it's worth doing for the benefit of the article.
One thing you might be asking is how does a user become an administrator? In the old-school asp.net web apps, one might have a role management section and be able to assign users to roles using the built-in functionality. However, since we don't have that luxury I'm going to go one better and run some SQL to assign myself to the Administrators group. How do I run SQL against my live database? As I mentioned earlier, you can use the web-based database management tools from within the management portal, or you can connect directly to your database using SQL Management Studio. I'll show you both below.
First, the built-in management tools: I clicked Manage with the database selected and logged in as follows (note that I'd already allowed the management portal to automatically add an IP restriction for my IP address earlier, so didn't need to add it again).
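The SQL itself was along the lines of the following, using the role tables that SimpleMembership generates (the username is a placeholder, and this is a sketch rather than the exact script I ran):

```sql
-- Create the Administrators role if it doesn't exist yet
INSERT INTO webpages_Roles (RoleName)
SELECT 'Administrators'
WHERE NOT EXISTS
    (SELECT 1 FROM webpages_Roles WHERE RoleName = 'Administrators');

-- Add my user to the role, joining through the UserProfile table
INSERT INTO webpages_UsersInRoles (UserId, RoleId)
SELECT u.UserId, r.RoleId
FROM UserProfile u
CROSS JOIN webpages_Roles r
WHERE u.UserName = 'myusername'
  AND r.RoleName = 'Administrators';
```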
It's getting late and I don't quite have time to get screenshots, but when I wanted to get a copy of the database that I had in production, I followed the tutorial in this article, which walks through the steps involved.
I'm going to start writing up my official article for Challenge 3 today, so that I don't end up running out of time on the last days. Before I do so, however, there's one more thing I did a while ago that I think is highly relevant to nearly every app that you expect to actually use for real in Azure. Namely, setting up a dedicated test environment in Azure, so I can test my dev changes in the cloud before I deploy them to the production site.
Setting up a separate test environment in Azure
As I mentioned in an earlier post, I setup Git so that I have a Master and Dev branch, with Master being configured to auto-deploy to the live site. Initially I'd been merging my dev changes into Master and testing locally before pushing them to GitHub, but what I really wanted was to test them in the cloud before deploying them to production. The good news is that it really isn't that hard to setup a replica environment in Azure. You just have to make sure that you have the same services (e.g. database, storage, queues etc) setup. So, what did I do?
You've seen the detailed steps I went through to setup my production Azure environment with website, database etc, so I won't repeat them in detail here; to summarize, I created a second copy of each of those services to act as my test environment!
Did some tidyup on the daily progress reports and started preparing the main article for challenge two. I should also mention that yesterday I got the 3rd Easter Egg challenge working, although after some discussion on the forums I'm not sure if I did it correctly or not. Hopefully I did!
Setup an integration test project with SQL CE, based on a helpful article I found. I've been swinging between whether it's better to try and mock the DB context when using EF, or use SQL CE or localdb and do integration tests. Given that I've been burned in the past when using FakeDbSets and found the behaviour isn't the same as when using the real EF SQL provider, I thought I'd go with the SQL CE option this time around and see how it goes. Yes, it means some of the setup is more verbose, given that the entities have to be valid before they can be inserted into the test db, but hopefully the tests end up being more reliable. Whilst they're 'integration' tests since they're hitting a real (albeit disposable) database, I'm going to treat them as unit tests and use the integration test project for doing most of my testing.
I'm hoping to setup some UI/smoke tests as well, which I can run after each time I deploy to test/production to verify everything is working as expected.
Spent a fair amount of time writing up my article for challenge three and making sure it all fitted together well. Just putting the finishing touches on it today, and will publish it tomorrow so it gets approved in time for the deadline. Wish me luck!
Managed to get the article submitted in time - here's hoping I can do well! Also did the Pascal's Pyramid challenge previously
Had a bit of a break after challenge 3 and did some thinking about how I'll use Azure Vms.
*WARNING - Begin Rant * Upon checking in again I saw the results for challenge 3 had been announced. Feeling a bit sore after seeing them to be honest, given the effort I'd put into challenge 3. Not to take away from the 3 winners - I read all three articles and they were very well written - good job guys However I felt like my approach of covering not only the Sql side of things, but also a whole heap of other relevant web app issues, would've been worthy of a top 3 spot. Sorry for ranting but I'm sure anyone who's entered any event, be it IT, sport, or other, and not made it onto the podium understands where I am at the moment.... Will see how things unfold over the next few days.* End Rant *
Given my reservations about entering the next challenge, (see my rant above), I still wasn't quite sure what to do. However, after some encouragement from fellow developers (thanks guys - you know who you are!) I snapped out of my melancholy mood, and am back in the game It was good to have a few days off anyhow, as it gave me time to think about how I'll implement my VM solution. I've begun work on it and made good progress, however, as I mentioned at the end of challenge 3 I'm going to wait till close to the end of the challenge before I post all of my updates for challenge 4, so as not to give everything away too early. Sorry to those of you who are following along, but I think this is the best option competition-wise..... Onwards and upwards, and good luck to everyone else involved!
What a day! Three big accomplishments:
Had a fairly quiet day today, struggling away with authentication in Apache. I've been trying to configure basic auth and just can't get it to work - doh! Maybe I'll have to run without auth for this challenge as I need to start writing up the article soon.
Wrote up an article describing the pattern I used for sending strongly-typed messages using Azure Service Bus, and also other best practices such as:
Check it our -. I thought about including it in the body of this article, but figured that it might count against me given that challenge four is supposed to be all about VMs, and this isn't a VM solution.
I also wrote up the bulk of the challenge four article on VMs and my background worker role.
Doing some final tidyup on the search screen, and having one last stab at configuring authentication on the VM. The article is pretty much ready to publish, so I'll do that sometime soon as well.
Time to publish what has become something of an epic! After a few little touch-ups I think it's ready for general availability :P
Apologies for the lack of updates.... I didn't want to modify the article until the judging for challenge four was complete. In any case, it looks like it was worth staying in the competition for challenge four after all, as my article made it back into the top 3 again - phew!
Now onto challenge 5 - mobile access and responsive design. This is something I've been looking forward to since day one, as up until not long ago I really had no idea what it was or how I could apply it to any of the sites I've worked in. Thankfully there are loads of helpful articles out there on the world wide web, and I think I've managed to grasp some of the fundamentals. I've been trying to think of the best way to go about documenting my progress for this section, as it's a bit tricky to decide how much detail to go into. There have also been some discussions around this on the forum, and my interpretation thus far is that you have to go into detail for at least part of your article to help those who are completely new to the topic, but also summarise well for those who just want a quick overview if what you did, and don't necessarily care exactly how you went about it. Gosh iit's tricky :)
Anyway, I'm going to carry on with updating YouConf to work on the big screen, small screen, and everything in between over the coming days. Hopefully I'll post a few updates as I go!
Day 51 - 52 (June 19 - 20)
10000 views on the article - and 23 votes - woohoo! That's exceeded even my wildest expectations (well almost, although dreams are free right?)
This mobile challenge is a tricky one, as the write-up seems to take much longer than the actual development/testing time. Hopefully I'll have the bulk of the article complete by eod tomorrow so I don't have to rush on the weekend :)
Day 53 * 54 (June 21 - 22)
Almost there! Did some tidyup of various items in the article that I'd been putting off for a while, including writing up some final thoughts on the competition and what it has meant to me.
37000+ words, 130+ screenshots, and only one more day left :)
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Error 1 The type or namespace name 'Diagnostics' does not exist in the namespace 'Microsoft.WindowsAzure' (are you missing an assembly reference?) C:\projects\youconf\YouConfWorker\WorkerRole.cs 10 30 YouConfWorker
Error 2 The type or namespace name 'ServiceRuntime' does not exist in the namespace 'Microsoft.WindowsAzure' (are you missing an assembly reference?) C:\projects\youconf\YouConfWorker\WorkerRole.cs 11 30 YouConfWorker
Error 3 The type or namespace name 'RoleEntryPoint' could not be found (are you missing a using directive or an assembly reference?) C:\projects\youconf\YouConfWorker\WorkerRole.cs 23 31 YouConfWorker
Error 4 'object' does not contain a definition for 'OnStart' C:\projects\youconf\YouConfWorker\WorkerRole.cs 153 25 YouConfWorker
Error 5 The type or namespace name 'DiagnosticMonitorConfiguration' could not be found (are you missing a using directive or an assembly reference?) C:\projects\youconf\YouConfWorker\WorkerRole.cs 158 13 YouConfWorker
Error 6 The name 'DiagnosticMonitor' does not exist in the current context C:\projects\youconf\YouConfWorker\WorkerRole.cs 158 53 YouConfWorker
Error 7 The name 'LogLevel' does not exist in the current context C:\projects\youconf\YouConfWorker\WorkerRole.cs 161 59 YouConfWorker
Error 8 The name 'DiagnosticMonitor' does not exist in the current context C:\projects\youconf\YouConfWorker\WorkerRole.cs 164 13 YouConfWorker
Error 9 'object' does not contain a definition for 'OnStop' C:\projects\youconf\YouConfWorker\WorkerRole.cs 195 18 YouConfWorker
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/584534/YouConf-Your-Live-Online-Conferencing-Tool?msg=4605199 | CC-MAIN-2016-44 | refinedweb | 25,712 | 58.32 |
A custom appFolder configuration is lost when creating a sub-class hierarchy of more than one level. E.g. Ext.app.Application > myapp.app.AbstractApplication > myapp.app.MyApplication.
If...
A custom appFolder configuration is lost when creating a sub-class hierarchy of more than one level. E.g. Ext.app.Application > myapp.app.AbstractApplication > myapp.app.MyApplication.
If...
@evan
thx.
@mitchell
thx for asking :)
I am happy with the chosen approach.
This 'feature' was introduced in 4.1.2. Maybe not the best time to do so in a maintenance release.
Also, I recommend finding a prominent place to document the fact that it is not possible to use...
Well,
a config or
template method or
a function Ext.app.Application#getApp() or
disable the feature by default or
...
I guess that is a decision that a Sencha developer should make.
It is correct that it will work in a development setup with Ext.Loader configured.
It will not work in a production setup.
This bug was introduced in 4.1.2, it worked in 4.1.1
REQUIRED INFORMATION
Ext version tested:
4.1.2
Description:
A custom class extending from Ext.app.Controller in a namespace that does **not** follow the pattern /^(.*)\.controller\./...
REQUIRED INFORMATION
Ext version tested:
Ext 4.1.2
Browser versions tested against:
not relevant
I need a appFolder value with a point like a domain name. /domainname.nl/waarde This doesnt work in designer. It is a bug.
I have a form panel width 2 columns. Both columns have a fieldset. Problem i am having is that on all pages width a fieldset the first field is not correctly displayed in Internet Explorer. In...
It looks very nice, but i think i have to agree a little bit with animal that beginners with ext like to use somthing like this. If you need advanced things it will require knowledge off the...
I like this plugin. Only problem i am having is the fact that the fields allowBlank option does not work. If i submit the form the validation off the fields does not work.
I have a menu on my webpage. Clicking the menu reloads a submenu. The page with the fushioncharts is still visible. Problem is that when i mask and unmask the viewport (i do this when loading the...
Maybe you can add a icon after the combobox width
Ext.DomHelper.insertAfter
Then based on the id off the new element add a handler
...
Remove the , after the }
{
fieldLabel: 'Room #',
name: 'room'
},
I have a license from extjs. But i have the same questions. The visitors off my website are paying me to access this site. I use extjs. But i don't change it, nor i distribute it. The users pay for...
I have included the shadowbox files. checked if they are included in firefox. I try to open shadow box with the hyperlink method, doesnt work. I tried to open shadowbox with the direct...
I am going to use flash charts in the future. I bookmark this one. Looking good. | https://www.sencha.com/forum/search.php?s=a1b662105f4853386cec849ecdc84ba2&searchid=19627412 | CC-MAIN-2017-39 | refinedweb | 510 | 69.48 |
Map Reduce -- How Cool is That?
From time-to-time I hear a few mentions of MapReduce; up until recently, I avoided looking into it. This month's CACM, however, is chock-full of MapReduce goodness. After reading some of the articles, I decided to look a little more closely at that approach to handling large datasets.
Python Implementation
Map-Reduce is a pleasant functional approach to handling several closely-related problems.
- Concurrency.
- Filtering and Exclusion.
- Transforming.
- Summarizing.
Map-Reduce on the Cheap
The basics of map reduce can be done several ways in Python. We could use the built-in map and reduce functions. This can lead to problems if you provide a poorly designed function to reduce.
But Python also provides generator functions. See PEP 255 for background on these. A generator function makes it really easy to implement simple map-reduce style processing on a single host.
Here's a simple web log parser built in the map-reduce style with some generator functions.
Here's the top-level operation. This isn't too interesting because it just picks out a field and reports on it. The point is that it's delightfully simple and focused on the task at hand, free of clutter.
def dump_log( log_source ): for entry in log_source: print entry[3]
We can improve this, of course, to do yet more calculations, filtering and even reduction. Let's not clutter this example with too much, however.
Here's a map function that can fill the role of log_source. Given a source of rows, this will determine if they're parseable log entries and yield up the parse as a 9-tuple. This maps strings to 9-tuples, filtering away anything that can't be parsed.
log_row_pat= re.compile( r'(\d+\.\d+\.\d+\.\d+) (\S+?) (\S+?) (\[[^\]]+?]) ("[^"]*?") (\S+?) (\S+?) ("[^"]*?") ("[^"]*?")' ) def log_from_rows( row_source ): for row in row_source: m= log_row_pat.match( row ) if m is not None: yield m.groups()
This log source has one bit of impure functional programming. The tidy, purely functional alternative to saving the match object, m, doesn't seem to be worth the extra lines of code.
Here's a map function that can participate as a row source. This will map a file name to an sequence of individual rows. This can be decomposed if we find the need to reuse either part separately.
def rows_from_name( name_source ): for aFileName in name_source: logger.info( aFileName ) with open(aFileName,"r") as source: for row in source: yield row
Here's a mapping from directory root to a sequence of filenames within the directory structure.
def names_( root='/etc/httpd/logs' ): for path, dirs, files in os.walk( root ): for f in files: logging.debug( f ) if f.startswith('access_log'): yield os.path.join(path,f)
This applies a simple name filter. We could have used Python's fnmatch, which would give us a slightly more extensible structure.
Putting it Together
This is the best part of this style of functional programming. It just snaps together with simple composition rules.
logging.basicConfig( stream=sys.stderr, level=logging.INFO ) dump_log( log_from_rows( rows_from_name( names_from_dir() ) ) ) logging.shutdown()
We can simply define a of map functions. Our goal, expressed in dump_log, is the head of the composition. It depends on the tail, which is parsing, reading a file, and locating all files in a directory.
Each step of the map pipeline is a pleasant head-tail composition.
Pipelines
This style of programming can easily be decomposed to work through Unix-style pipelines.
We
can cut a map-reduce sequence anywhere. The head of the composition
will get it's data from an unpickle operation instead of the original
tail.
The original tail of the composition will be used by a new head that pickles the results. This new head can then be put into the source of a Unix-style pipeline.
Parallelism
There are two degrees of parallelism available in this kind of map-reduce. By default, in a single process, we don't get either one.
However, if we break the steps up into separate physical processes, we get huge performance advantages. We force the operating to do scheduling. And we have processes that have a lot of resources available to them.
[Folks like to hand-wring over "heavy-weight" processing vs. threads. Practically, it rarely matters. Create processes until you can prove it's ineffective.]
Additionally, we can -- potentially -- parallelize each map operation. This is more difficult, but that's where a framework helps to wring the last bit of parallel processing out of a really large task.
Until you need the framework, though, you can start doing map-reduce today.
A Link:
From
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Evgeniy Karyakin replied on Tue, 2010/01/12 - 1:30am | http://java.dzone.com/articles/map-reduce-how-cool | CC-MAIN-2014-10 | refinedweb | 809 | 67.65 |
In the early days of programming, non-reentrancy was not a threat to programmers; functions did not have concurrent access and there were no interrupts. In many older implementations of the C language, functions were expected to work in an environment of single-threaded processes.
Now, however, concurrent programming is common practice, and you need to be aware of the pitfalls. This article describes some potential problems due to non-reentrancy of the function in parallel and concurrent programming. Signal generation and handling in particular add extra complexity. Due to the asynchronous nature of signals, it is difficult to point out the bug caused when a signal-handling function triggers a non-reentrant function.
This article:
- Defines reentrancy and includes a POSIX listing of a reentrant function
- Provides examples to show problems caused by non-reentrancy
- Suggests ways to ensure reentrancy of the underlying function
- Discusses dealing with reentrancy at the compiler level
What is reentrancy?
A reentrant function is one that can be used by more than one task concurrently without fear of data corruption. Conversely, a non-reentrant function is one that cannot be shared by more than one task unless mutual exclusion to the function is ensured either by using a semaphore or by disabling interrupts during critical sections of code. A reentrant function can be interrupted at any time and resumed at a later time without loss of data. Reentrant functions either use local variables or protect their data when global variables are used.
A reentrant function:
- Does not hold static data over successive calls
- Does not return a pointer to static data; all data is provided by the caller of the function
- Uses local data or ensures protection of global data by making a local copy of it
- Must not call any non-reentrant functions
Don't confuse reentrance with thread-safety. From the programmer perspective, these two are separate concepts: a function can be reentrant, thread-safe, both, or neither. Non-reentrant functions cannot be used by multiple threads. Moreover, it may be impossible to make a non-reentrant function thread-safe.
IEEE Std 1003.1 lists 118 reentrant UNIX® functions, which aren't duplicated here. See Resources for a link to the list at unix.org.
The rest of the functions are non-reentrant because of any of the following:
- They call
mallocor
free
- They are known to use static data structures
- They are part of the standard I/O library
Signals and non-reentrant functions
A signal is a software interrupt. It empowers a programmer to handle an asynchronous event. To send a signal to a process, the kernel sets a bit in the signal field of the process table entry, corresponding to the type of signal received. The ANSI C prototype of a signal function is:
void (*signal (int sigNum, void (*sigHandler)(int))) (int);
Or, in another representation:
typedef void sigHandler(int); SigHandler *signal(int, sigHandler *);
When a signal that is being caught is handled by a process, the normal sequence of instructions being executed by the process is temporarily interrupted by the signal handler. The process then continues executing, but the instructions in the signal handler are now executed. If the signal handler returns, the process continues executing the normal sequence of instructions it was executing when the signal was caught.
Now, in the signal handler you can't tell what the process was executing when the signal was caught. What if the process was in the middle of allocating additional memory on its heap using
malloc, and you call
malloc from the signal handler? Or, you call some function that was in the middle of the manipulation of the global data structure and you call the same function from the signal handler. In the case of
malloc, havoc can result for the process, because
malloc usually maintains a linked list of all its allocated area and it may have been in the middle of changing this list.
An interrupt can even be delivered between the beginning and end of a C operator that requires multiple instructions. At the programmer level, the instruction may appear atomic (that is, cannot be divided into smaller operations), but it might actually take more than one processor instruction to complete the operation. For example, take this piece of C code:
temp += 1;
On an x86 processor, that statement might compile to:
mov ax,[temp] inc ax mov [temp],ax
This is clearly not an atomic operation.
This example shows what can happen if a signal handler runs in the middle of modifying a variable:
Listing 1. Running a signal handler while modifying a variable
#include <signal.h> #include <stdio.h> struct two_int { int a, b; } data; void signal_handler(int signum){ printf ("%d, %d\n", data.a, data.b); alarm (1); } int main (void){ static struct two_int zeros = { 0, 0 }, ones = { 1, 1 }; signal (SIGALRM, signal_handler); data = zeros; alarm (1); while (1) {data = zeros; data = ones;} }
This program fills
data with zeros, ones, zeros, ones, and so on, alternating forever. Meanwhile, once per second, the alarm signal handler prints the current contents. (Calling
printf in the handler is safe in this program, because it is certainly not being called outside the handler when the signal happens.) What output do you expect from this program? It should print either 0, 0 or 1, 1. But the actual output is as follows:
0, 0 1, 1 (Skipping some output...) 0, 1 1, 1 1, 0 1, 0 ...
On most machines, it takes several instructions to store a new value in
data, and the value is stored one word at a time. If the signal is delivered between these instructions, the handler might find that
data.a is 0 and
data.b is 1, or vice versa. On the other hand, if we compile and run this code on a machine where it is possible to store an object's value in one instruction that cannot be interrupted, then the handler will always print 0, 0 or 1, 1.
Another complication with signals is that, just by running test cases you can't be sure that your code is signal-bug free. This complication is due to the asynchronous nature of signal generation.
Non-reentrant functions and static variables
Suppose that the signal handler uses
gethostbyname, which is non-reentrant. This function returns its value in a static object:
static struct hostent host; /* result stored here*/
And it reuses the same object each time. In the following example, if the signal happens to arrive during a call to
gethostbyname in
main, or even after a call while the program is still using the value, it will clobber the value that the program asked for.
Listing 2. Risky use of gethostbyname
main(){ struct hostent *hostPtr; ... signal(SIGALRM, sig_handler); ... hostPtr = gethostbyname(hostNameOne); ... } void sig_handler(){ struct hostent *hostPtr; ... /* call to gethostbyname may clobber the value stored during the call inside the main() */ hostPtr = gethostbyname(hostNameTwo); ... }
However, if the program does not use
gethostbyname or any other function that returns information in the same object, or if it always blocks signals around each use, you're safe.
Many library functions return values in a fixed object, always reusing the same object, and they can all cause the same problem. If a function uses and modifies an object that you supply, it is potentially non-reentrant; two calls can interfere if they use the same object.
A similar case arises when you do I/O using streams. Suppose the signal handler prints a message with
fprintf and the program was in the middle of an
fprintf call using the same stream when the signal was delivered. Both the signal handler's message and the program's data could be corrupted, because both calls operate on the same data structure: the stream itself.
Things become even more complicated when you're using a third-party library, because you never know which parts of the library are reentrant and which are not. As with the standard library, there can be many library functions that return values in fixed objects, always reusing the same objects, which causes the functions to be non-reentrant.
The good news is, these days many vendors have taken the initiative to provide reentrant versions of the standard C library. You'll need to go through the documentation provided with any given library to know if there is any change in the prototypes and therefore in the usage of the standard library functions.
Practices to ensure reentrancy
Sticking to these five best practices will help you maintain reentrancy in your programs.
Practice 1
Returning a pointer to static data may cause a function to be non-reentrant. For example, a
strToUpper function, converting a string to uppercase, could be implemented as follows:
Listing 3. Non-reentrant version of strToUpper
char *strToUpper(char *str) { /*Returning pointer to static data makes it non-reentrant */ static char buffer[STRING_SIZE_LIMIT]; int index; for (index = 0; str[index]; index++) buffer[index] = toupper(str[index]); buffer[index] = '\0'; return buffer; }
You can implement the reentrant version of this function by changing the prototype of the function. This listing provides storage for the output string:
Listing 4. Reentrant version of strToUpper
char *strToUpper_r(char *in_str, char *out_str) { int index; for (index = 0; in_str[index] != '\0'; index++) out_str[index] = toupper(in_str[index]); out_str[index] = '\0'; return out_str; }
Providing output storage by the calling function ensures the reentrancy of the function. Note that this follows a standard convention for the naming of reentrant function by suffixing the function name with "_r".
Practice 2
Remembering the state of the data makes the function non-reentrant. Different threads can successively call the function and modify the data without informing the other threads that are using the data. If a function needs to maintain the state of some data over successive calls, such as a working buffer or a pointer, the caller should provide this data.
In the following example, a function returns the successive lowercase characters of a string. The string is provided only on the first call, as with the
strtok subroutine. The function returns
\0 when it reaches the end of the string. The function could be implemented as follows:
Listing 5. Non-reentrant version of getLowercaseChar
char getLowercaseChar(char *str) { static char *buffer; static int index; char c = '\0'; /* stores the working string on first call only */ if (string != NULL) { buffer = str; index = 0; } /* searches a lowercase character */ while(c=buff[index]){ if(islower(c)) { index++; break; } index++; } return c; }
This function is not reentrant, because it stores the state of the variables. To make it reentrant, the static data, the
index variable, needs to be maintained by the caller. The reentrant version of the function could be implemented like this:
Listing 6. Reentrant version of getLowercaseChar
char getLowercaseChar_r(char *str, int *pIndex) { char c = '\0'; /* no initialization - the caller should have done it */ /* searches a lowercase character */ while(c=buff[*pIndex]){ if(islower(c)) { (*pIndex)++; break; } (*pIndex)++; } return c; }
Practice 3
On most systems,
malloc and
free are not reentrant, because they use a static data structure that records. However, if you know that the program cannot possibly use the stream that the handler uses at a time when signals can arrive, you are safe. There is no problem if the program uses some other stream.
Practice 4
To write bug-free code, practice care in handling process-wide global variables like
errno and
h_errno. Consider the following code:
Listing 7. Risky use of errno
if (close(fd) < 0) { fprintf(stderr, "Error in close, errno: %d", errno); exit(1); }
Suppose a signal is generated during the very small time gap between setting the
errno variable by the
close system call and its return. The generated signal can change the value of
errno, and the program behaves unexpectedly.
Saving and restoring the value of
errno in the signal handler, as follows, can resolve the problem:
Listing 8. Saving and restoring the value of errno
void signalHandler(int signo){ int errno_saved; /* Save the error no. */ errno_saved = errno; /* Let the signal handler complete its job */ ... ... /* Restore the errno*/ errno = errno_saved; }
Practice 5
If the underlying function is in the middle of a critical section and a signal is generated and handled, this can cause the function to be non-reentrant. By using signal sets and a signal mask, the critical region of code can be protected from a specific set of signals, as follows:
- Save the current set of signals.
- Mask the signal set with the unwanted signals.
- Let the critical section of code complete its job.
- Finally, reset the signal set.
Here is an outline of this practice:
Listing 9. Using signal sets and signal masks
sigset_t newmask, oldmask, zeromask; ... /* Register the signal handler */ signal(SIGALRM, sig_handler); /* Initialize the signal sets */ sigemtyset(&newmask); sigemtyset(&zeromask); /* Add the signal to the set */ sigaddset(&newmask, SIGALRM); /* Block SIGALRM and save current signal mask in set variable 'oldmask' */ sigprocmask(SIG_BLOCK, &newmask, &oldmask); /* The protected code goes here ... ... */ /* Now allow all signals and pause */ sigsuspend(&zeromask); /* Resume to the original signal mask */ sigprocmask(SIG_SETMASK, &oldmask, NULL); /* Continue with other parts of the code */
Skipping
sigsuspend(&zeromask); can cause a problem. There has to be some gap of clock cycles between the unblocking of signals and the next instruction carried by the process, and any occurrence of a signal in this window of time is lost. The function call
sigsuspend resolves this problem by resetting the signal mask and putting the process to sleep in a single atomic operation. If you are sure that signal generation in this window of time won't have any adverse effects, you can skip
sigsuspend and go directly to resetting the signal.
Dealing with reentrancy at the compiler level
I would like to propose a model for dealing with reentrant functions at the compiler level. A new keyword,
reentrant, can be introduced for the high-level language, and functions can be given a
reentrant specifier that will ensure that the functions are reentrant, like so:
reentrant int foo();
This directive instructs the compiler to give special treatment to that particular function. The compiler can store this directive in its symbol table and use it during the intermediate code generation phase. To accomplish this, some design changes are required in the compiler's front end. This reentrant specifier follows these guidelines:
- Does not hold static data over successive calls
- Protects global data by making a local copy of it
- Must not call non-reentrant functions
- Does not return a reference to static data, and all data is provided by the caller of the function
Guideline 1 can be ensured by type checking and throwing an error message if there is any static storage declaration in the function. This can be done during the semantic analysis phase of the compilation.
Guideline 2, protection of global data, can be ensured in two ways. The primitive way is by throwing an error message if the function modifies global data. A more sophisticated technique is to generate intermediate code in such a way that the global data doesn't get mangled. An approach similar to Practice 4, above, can be implemented at the compiler level. On entering the function, the compiler can store the to-be-manipulated global data using a compiler-generated temporary name, then restore the data upon exiting the function. Storing data using a compiler-generated temporary name is normal practice for the compiler.
Ensuring guideline 3 requires the compiler to have prior knowledge of all the reentrant functions, including the libraries used by the application. This additional information about the function can be stored in the symbol table.
Finally, guideline 4 is already guaranteed by guideline 2. There is no question of returning a reference to static data if the function doesn't have one.
This proposed model would make the programmer's job easier in following the guidelines for reentrant functions, and by using this model, code would be protected against the unintentional reentrancy bug.
Resources
- You can read or download IEEE Std 1003.1 from unix.org, a Web site of The Open Group (registration is required to view or download the document).
- Starting with Synchronization is not the enemy (developerWorks, July 2001), this series of three articles covers issues of threading and concurrency when programming in the Java™ language.
- PowerPC developers will appreciate the insights presented in Save your code from meltdown using PowerPC atomic instructions (developerWorks, November 2004); it describes techniques for safe concurrent programming in PowerPC assembly language.
- Good background for UNIX programmers includes UNIX Network Programming by W. Richard Stevens and Design of the Unix Operating System by Maurice J. Bach.
- Find more resources for Linux developers in the developerWorks Linux zone.
- Get involved in the developerWorks community by participating in developerWorks blogs.
- Browse for books on these and other technical topics.. | http://www.ibm.com/developerworks/linux/library/l-reent/index.html | CC-MAIN-2014-10 | refinedweb | 2,807 | 50.87 |
How to write a function that takes a list of strings and a list of characters as arguments and returns a dictionary whose keys are the characters and whose values are lists of the strings that start with that character.
a_func(['apple','orange','banana','berry','corn'],['A','B','C'])
{'A':['apple'], 'B':['banana','berry'], 'C':['corn]}
a = ['apple','orange','banana','berry','corn']
b = ['A','B','C']
d = {}
for k,v in zip(b,a):
d.setdefault(k, []).append(v)
print (d)
---------
#output is
{'A': ['apple']}
{'A': ['apple'], 'B': ['orange']}
{'A': ['apple'], 'C': ['banana'], 'B': ['orange']}
What you are doing wrong with zip. This is what happens when you call zip the way you are doing it:
>>> list(zip(['A','B','C'], ['apple','orange','banana','berry','corn'])) [('A', 'apple'), ('B', 'orange'), ('C', 'banana')]
Which, as you can see, clearly does not match up to any helpful result you can use to get to your required output.
A much easier way to do this is to make use of
defaultdict. Call your defaultdict with a
list, so that the initial value of the dictionary entry will be a list. That way all you have to do is check the first character of each word against the character list using in. From there, just simply append once you find that your first character exists in your character list.
Also, you seem to have lower case words with an uppercase character list, so you should set the casing accordingly.
from collections import defaultdict def a_func(words, chars): d = defaultdict(list) for word in words: upper_char = word[0].upper() if upper_char in chars: d[upper_char].append(word) return d res = a_func(['apple','orange','banana','berry','corn'],['A','B','C'])
Result:
defaultdict(<class 'list'>, {'A': ['apple'], 'B': ['banana', 'berry'], 'C': ['corn']}) | https://codedump.io/share/f87ndDn0Meaq/1/how-to-write-a-function-for-dictionary-from-2-lists-in-python-with-if-condition | CC-MAIN-2017-13 | refinedweb | 295 | 59.03 |
I have a simple RESTful service in IntelliJ IDEA 12.1.3 Ultimate.
I've tested it. It works. Now I want to create a Java client for this service and need a WADL.
Per the instructions at , I right clicked my class and went to "Web Services -> RESTful Web Services" only to find the menuitem "Generate WADL from Java Code" disabled.
What have I done wrong?
Here's the code:
package com.mybiz; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("/greeting") public class Greeter { @GET @Produces("text/plain") public String hello() { return "Hi!"; } }
Hmmm... it seems the instructions at the link I gave only work when the application server is set up just like in that page.
I am using TomEE Plus 1.5.2 and apparently it doesn't support generation of the WADL. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206206649-How-do-I-generate-WADL-from-RESTful-Java-code- | CC-MAIN-2021-04 | refinedweb | 145 | 70.19 |
Do you need the ability to manage your SQL Server data wherever you are?
Microsoft released a Web interface that can help you manage your SQL Server
databases. With this tool, called the SQL Server Web Data Administrator, you
can:
- Perform ad-hoc
queries against databases and save them to your file system.
- Create/edit
databases in SQL Server 2000 or MSDE 2000.
- View, create,
and edit stored procedures.
- Export and
import database schema and data.
- Manage users and
roles.
Where do I download?
You can download the SQL Server Web Data Administrator
from Microsofts Web site. Before installing it, you must meet the following
requirements:
- Windows 2000
SP4, Windows Server 2003, or Windows XP
- SQL Server 7.0
SP2 or higher
- Microsoft .NET SDK or Visual Studio.NET
- IE 5.5 or higher
- IIS
Note: These are the
requirements of the machine that the product is being installed:
- Create new
databases
- Edit, query, or
delete existing databases
- Import databases
- Export databases
- Create and edit
logins
- Create and edit
server roles
Figure C
At this point, you can
begin to see the power of the Web Data Administrator. Let’s dig a little deeper
into each of these areas to show you how you can effectively manage your SQL
Server. | https://www.databasejournal.com/ms-sql/the-sql-server-web-data-administrator/ | CC-MAIN-2022-21 | refinedweb | 210 | 66.13 |
JavaFX HTTP Networking and XML Parsing
The recently released JavaFX
platform allows developers to build rich internet applications
(RIA) that can include audio and video. Using JavaFX, it is
possible to create highly interactive applications. Moreover, it is
possible to easily create content for different devices (desktop,
mobile phone, television, and so on). JavaFX is a compiled
language, like Java, and is highly portable and is based on the
familiar paradigm “Write Once, Run Everywhere.”
JavaFX is focused on the client side, and aims to improve the
look and feel of Java GUIs so that users can experience more
attractive interfaces. Of course, many client applications need to
exchange information with a remote server. Nowadays, the HTTP protocol
and XML are widely accepted as the best choices to exchange
information, so we want to show how easy is in JavaFX to handle
HTTP communication details and how we can parse and extract
information from an XML data structure.
In the article we will assume you are already familiar with the
basic notions of the JavaFX language.
JavaFX Basic Language Concepts
While it is a compiled language, JavaFX mixes the features of
scripting languages with those inherited from Java. Scripting
languages allow for fast and easy application development, while
JavaFX's Java-based heritage allow it to be a robust language.
JavaFX proposes a new coding paradigm: as a declarative
language, it compels us to describe how we want our application to
behave without describing the specific control flow, as we do with
imperative languages. This paradigm is really powerful when we need
to develop GUIs. The basic idea that stands behind that JavaFX GUI
development model is that you "describe" what your interface should
look like. There is a strict relationship between the code and the
"visual structure." Moreover, the order used to declare objects, in
the code, reflects the order used to display them. The overall
result is an elegant way to create a GUI in fewer lines of code; this
makes applications easier to understand and maintain.
Another interesting feature of JavaFX is that it is a statically
typed language, meaning the data type of every variable, function,
and so on is known at compile-time. See the "#resources">Resources section for links to JavaFX tutorials
that explore this trait further.
JavaFX HTTP and XML Package Overview
To develop an application using HTTP protocol and XML, JavaFX
provides several packages, which are shown below:
javafx.io.httpfor handling HTTP
communication
javafx.data.pulland
javafx.data.xml
for XML parsing
The class diagram in Figure 1 shows the classes contained in
these packages.
Figure 1. Defining the channel rule for the RDBMS Event Generator (click for larger
view)
HTTP and JavaFX
To handle the HTTP protocol, we can use
HttpRequest
class in the
javafx.io.http package. This class makes
asynchronous HTTP requests to a remote server that supports the HTTP
protocol. The HTTP methods currently supported are:
GET
PUT
DELETE
This class is neutral in respect to the data format exchanged, so
we can invoke a remote server and send whatever type of
information we like, as long as we supply an
OutputStream containing the data that must be sent,
using the
POST or
PUT HTTP methods.
The
HttpRequest operation, related to each HTTP
method supported, has a specific lifecycle. We focus our attention
on the lifecycle in the case of HTTP
GET method; for other methods
(
PUT,
DELETE), the lifecycle is very similar. In the
case of an HTTP
GET request, the lifecycle is shown in Figure
2.
Figure 2: HTTP
GET method request lifecycle (click for larger
view)
As we can see from the diagram above, each state of the
lifecycle is defined by a specific value of the internal variables
of the
HttpRequest class. Related to each variable
transition, there is a corresponding method that is called during
the transition itself, so that we can control and handle different
states in the HTTP lifecycle. These methods have the same name of
the corresponding variable, prepended with
on. For
example, if we want to track when the request is trying to connect
to the server, we will use the
onConnecting
function.
It is time we start coding our JavaFX HTTP client. First of all
we have to declare a variable that contains our URL:
def url : String = "";
Then we create the HTTP request and define our callback
function, which is called when the HTTP request starts
connecting.
HttpRequest { location: url; onConnecting: function() { java.lang.System.out.println("Connecting"); } }.enqueue();
Notice the method
enqueue() that makes the
request.
Now we want to read the response body. We can do that using the
InputStream provided by the function
onInput. We need to add this piece of code to our
client.
onInput: function(is: InputStream) { try { var responseSize : Integer = is.available(); java.lang.System.out.println("Response size {responseSize}"); } finally { is.close(); } }
The last step is to handle any exceptions that can occur during
the HTTP request. The
HTTPRequest has a function that
is called whenever an exception occurs. So we can add the
exception-handling code below to our client.
onException: function(ex : Exception) { System.out.println("Error: {ex.getMessage()}"); }
If you run the client using NetBeans, you should see output
similar to Figure 3:
"Client log" />
Figure 3: Client log
In the package
javafx.io.http, there are two other
classes called
HttpHeaders and
HttpStatus. The first class defines a set of constants
that map the corresponding HTTP header value names. The second
class defines a set of constants corresponding to the possible HTTP
response codes.
XML API
As we said, many clients today send data over HTTP using an
XML format, and JavaFX offers the capability to easily parse an XML
document. We focus our attention now on the other
two packages, shown before in Figure 1:
javafx.data.xml
javafx.data.pull
The package
javafx.data.pull contains the classes
to parse an XML document, while the
javafx.data.xml
package defines some constants and handles qualified names. The
parser is event-based (similar to the "">
SAX parser) and it supports two different data formats:
- XML
- JSON
For this article, we'll focus our attention on the XML data
format.
The
PullParser class, the heart of JavaFX's
document parser, accepts several attributes that can be used to
control the parser. First of all, we need to declare the document
type we want to parse, which we do by using the class attribute
documentType. This string can have two values:
PullParser.XMLis used for parsing XML
PullParser.JSONis used for parsing JSON
After we declare the document type, we need to supply the input
document to parse. The parser accepts an input stream, and as we
will see later, this is very handy when we need to parse an XML
document retrieved from an HTTP request. To declare the input
stream we need to set the value of the
input
variable.
So it is time we create an instance of our
PullParser, as shown below:
parser = PullParser { documentType: PullParser.XML; input: xmlFileInputStream; }
While the parser analyzes the document, it generates a set of
events. We need to implement a callback function to be called in
response to these events. The callback function is called
onEvent and in its body, we implement our logic to
extract information from the document, which we will do later.
The function signature is
onEvent(event : Event),
where the
Event class belongs to the package
javafx.data.pull. This class contains all the
information related to the pull-parsing event, and we can use it to
extract the information we need. The
type declares the
type of event, as one of the values defined in
PullParser. We are interested in the following types of
events:
START_DOCUMENT: This event is generated at the
beginning of document parsing.
START_ELEMENT: This event is generated when the
parser finds a new starting element. We can use this event to read
the element attribute.
END_ELEMENT: This event is generated when the
parser finds the end of the element. We can use it to read the text
contained in the element.
END_DOCUMENT: This event is generated when the
parser reaches the end of the document.
There are other events that can be used for JSON
documents; if you're interested, have a look at the "">
PullParser documentation. At any rate, here's an
onEvent skeleton implementation to react to the
START_ELEMENT and
END_ELEMENT events.
onEvent: function(event : Event) { /* We start analyzing the different event types */ if (event.type == PullParser.START_ELEMENT) { /* Here we implement our logic to handle the start element event, for example to extract the attribute values and so on */ } else if (event.type == PullParser.END_ELEMENT) { /* Here we implement our logic to handle the end element */ } }
During the parsing process, some errors can occur. We can manage
them verifying the type of
Event generated by the
parser.
Integrating the HTTP and XML APIs
Now that we have described these two APIs, it is time we look at
the most interesting part: how we can integrate everything so that
we can code a complete XML-over-HTTP client. This can be useful if
we want to have a client that exchanges information with a remote
server.
Let's suppose that our JavaFX client application invokes a
servlet that returns an XML file with the structure shown
below:
<?xml version="1.0" encoding="UTF-8"?> <data> <person id="1"> <name>Mikey</name> <surname>Mouse</surname> </person> </data>
This is a simple XML file, but it is enough for the purpose of
our example. Our goal is for our client to connect to the test
servlet and retrieve the XML content, and then parse it and show the
extracted information. To do that, we need to change the
HttpRequest function
onInput so that
when we start receiving the XML document we parse it, too. The code
below shows how to do it:
onInput: function(is: InputStream) { try { PullParser { input: is; onEvent: function (event : Event) { // We handle the event } }.parse(); } finally { is.close(); } }
Notice how we have added the
PullParser to the
onInput function, and that we set the parser input
stream to the one received from the
HttpRequest. Now
we just need to handle the events as we described before:
.... if (event.type == PullParser.START_ELEMENT and event.level == 1) { java.lang.System.out.println("Start a new element {event.qname.name}"); var qAttr : QName = QName {name : "id"}; var attVal : String = event.getAttributeValue(qAttr); java.lang.System.out.println("Attribute ID value {attVal}"); } else if (event.type == PullParser.END_ELEMENT) { var nodeName : String = event.qname.name; java.lang.System.out.println("End element {nodeName}"); // Now we extract the text only if the node is name or surname if (nodeName == "name" or nodeName == "surname") { var textVal : String = event.text; java.lang.System.out.println("Text {textVal}"); } } ....
It is useful to analyze the code step by step. In the case of a
PullParser.START_ELEMENT event, we use the
event.level variable. This tells us at which line the
event occurs (starting from zero, the XML document root). We know
already that the
id attribute is present only on the
first line, so we limit the extraction to this line only. Then we
create a
QName object setting, the
name
variable to our attribute name, and then we extract the value.
In the case of
PullParser.END_ELEMENT, we want to
extract the node content. To do this, we use the
text
variable that contains the node value.
If everything works properly we will see the parsed items in the
console, as shown in Figure 4.
"470" alt="HTTP request with XML parsing" />
Figure 4. HTTP request with XML parsing
Conclusion
In this article, we explored some essential features of JavaFX,
focusing our attention on two important aspects: XML and HTTP. We
discovered how easy is to develop a simple client that makes an HTTP
request and parses the XML response. This is a basic example, but it
can be further expanded adding other features; for example,
connecting to a site and retrieving pictures.
Resources
- Sample code for this article
- JavaFX
SDK
- JavaFX
API
- JavaFX
- " "">Building
GUI Applications With JavaFX"
- "Learning
the JavaFX Script Programming Language"
- "">
Joshua Marinacci's blog
- Login or register to post comments
- Printer-friendly version
- 19082 reads
POJOs can do the same
by Anonymous - 2009-02-24 17:12Nice writeup! Motivated me to do a little comparison to plain old Java objects here:
POJOs can do the same
by Anonymous - 2009-03-31 02:35hi, Where is the binding to an opengl hardware accelerated visual container for the XML content... KArel
POJOs can do the same
by Anonymous - 2009-04-04 01:48Karel, do you refer to my blog post? It does not have such binding. My blog post only describes how to do HTTP requests and parse XML responses with POJOs. I.e., the same what is described here for JavaFX. Ulrich
just simplify a development environment
by Anonymous - 2009-02-25 02:55thanks for the article, as I read last, JavaFX was a great technology introduced, in fact environment development support, interoperability and simplifying programming model are also a main reason to implement this technology at period of time.
Nice Sample Article explaining HTTP and XML usage in JavaFX
by pragun - 2009-08-04 09:15Hi I am newbies in JavaFX and this article provides very nice initial understanding how JavaFX generates a HTTP request and parse the response XML I have a same small module in my project which generates a WebService and parse response XML It would be great if you let me know any more information on this same subject for my reference. Thanks
Hi, Does anyone have the
by valime - 2010-03-13 02:20Hi, Does anyone have the sample code for this article? I`m new to Java FX, I would like also to ask if someone can help me do a Sax Parser in FX, and build a DOM tree after that :) Thanks | https://today.java.net/node/219961/atom/feed | CC-MAIN-2015-35 | refinedweb | 2,330 | 61.46 |
Java provides an extensive bit manipulation operator for programmers who want to communicate directly with the hardware. These operators are used for testing, setting or shifting individual bits in a value. In order to work with these operators, one should be aware of the binary numbers and two's complement format used to represent negative integers.
The bitwise operators can be used on values of type long, int, short, char or byte, but cannot be used with boolean, float, double, array or object type.
The bitwise AND (&) bitwise OR (!) and bitwise Exclusive OR (^)are the three logical bitwise operators. These are the binary operators which compare the two operands bit by bit. If either of the operands with a bitwise operator is of type long then the result will be long, otherwise the result is of type int.
Bitwise AND (&)operator combines its two integer operands by performing a logical AND operation on their individual bits. It sets each bit in the result to 1 if corresponding bits in both operands are 1. One of the applications of bitwise AND operator is forcing selected bits of an operand to 0.
Bitwise OR (|)operator combines its two integer operands by performing a logical OR operation on their individual bits. It sets each bit in the result to 1if the corresponding bits in either or both of the operands are 1. One of the applications of bitwise OR is forcing selected bits of an operand to 1.
Bitwise Exclusive OR operator (^) combines its two integer operands by performing a logical XOR operands on their individual bits. It sets each bit in the result to 1 if the corresponding bits in two operands are different. One of the applications of bitwise Exclusive OR operator is to change 0's to 1's and l's to 0's.
Bitwise One's Complement operator (-) Bitwise complement operator (-) is a unary operator. It inverts each bit of its operand i.e. Is become Os and Os become Is. This operator can be used to
1. To encrypt the contents of a file which can later be decrypted.
2. To store negative numbers in some computers that supports one's complement method for storing negative number.
Let us consider an operand a = 0000 0000 0001 0010 then on performing (~a) we get
-a = 1111 1111 1110 1101
Bitwise shift left operator (<<) shifts the bits of the left operand to left by number of positions specified by the right operand. The high order bits of the left operand are lost and 0 bits are shifted in from the right .It has the following syntax
Operand1 << operand2
Here, operand1 is binary representation of a number to be the shifted and operand2 represents the number of positions by which it is shifted.
Bitwise Shift Right operator (>>)shifts the bits of the left operand to right by a number of positions specified by the right operand. The low order bits of the left operand are lost and the high order bits shifted in are either 0 or 1 depending upon whether the left operand is positive or negative. If the left operand is positive, Os are shifted into the high order bits and if the left operand is negative, 1's are shifted instead. It has the following syntax.
operand1>>operand2
Here, operand1 is the binary representation of a number to be shifted and operand2 represents the number of positions by which it is shifted.
Bitwise Shift Right with zero fill (>>>) operator is similar to bitwise shift right (>>) operator with the exception that it always shifts zero's into the high order bits of the result regardless of the sign of the left hand operand.
//program Showing bitwise Operators
import static java.lang.Long .*;
public class BitwiseOperators
{
public static void main (String [] args)
{
Short x=20,y=0xaf ;
Short z= -24;
System.out.println(" x & y --> " + (x & y));
System.out.println(" x | y --> " + (x | y));
System.out.println(" x ^ y --> " + (x ^ y));
System.out.println(" z << 2 --> " + (z<<2));
System.out.println(" z >>> 2 --> " + (z>>>2));
System.out.println(" z >> 2 --> " + (z> | http://ecomputernotes.com/java/what-is-java-operators-and-expressions/bitwise-operators | CC-MAIN-2018-47 | refinedweb | 682 | 54.93 |
#include <hallo.h> * Nico Golde [Sat, Jul 09 2005, 01:12:24PM]: > >. a) a system administrator has to be familiar with grep (don't throw in FUD like "awk, etc.", those are tools for people that like them). Learning grep's usage for this purpose is not much more complicated than learning apt-history's usage. b) it is a problem for _everyone_ if you create a _new_ package, which eats up disk space with its meta data and kills apt's performance for no good reason Regards, Eduard. -- Schon wieder Telefon... Ein noier Juser... :-) Aber der tippt so langsam, da kann ich locker bei ircen ]:-) -- Quelle bekannt | https://lists.debian.org/debian-devel/2005/07/msg00391.html | CC-MAIN-2014-15 | refinedweb | 108 | 75.71 |
On date Sunday 2010-12-12 15:45:19 +0100, Stefano Sabatini encoded: > --- > doc/filters.texi | 30 ++++++++++++++++++++++++++++++ > doc/libavfilter.texi | 28 ---------------------------- > 2 files changed, 30 insertions(+), 28 deletions(-) My main problem with this patch is that we're mentioning in the ff* man pages a tool which is not usually distributed with FFmpeg. So the problem is: do we want to install it with the other ff* tools? And in this case we should provide some prefix for avoiding possible namespaces clutter and for helping autocompletion. So the question is: should we install the tools/*, which ones and with which prefix? -- FFmpeg = Fast and Fanciful Miracolous Pitiful Exciting Gem | http://ffmpeg.org/pipermail/ffmpeg-devel/2010-December/084034.html | CC-MAIN-2015-06 | refinedweb | 109 | 65.42 |
haskell-logger
Fast & extensible logging framework for Haskell!
Overview
Logger is a fast and extensible Haskell logging framework.
Logger allows you to log any kind of message in both `IO` and pure code, depending on the information you want to log.
The framework is based on the idea of a logger transformer stack that defines the way it works. You can build your own stack to tailor the behaviour closely to your needs, starting with simple things like logging messages to a list, and ending with compile-time, priority-filtered messages logged from different threads and gathered in a separate logger thread.
Documentation
The following documentation describes how to use the framework, how it works under the hood, and how you can extend it.
Basics
This chapter covers all the basic information about logger transformers shipped with the framework.
BaseLogger
Let's start with a very simple example:
import System.Log.Simple

test = do
    debug "a debug"
    warning "a warning"
    return "Done"

main = print $ runBaseLogger (Lvl, Msg) test

-- output: "Done"
There are a few things to note here:

* We are importing the `System.Log.Simple` interface. It provides all the functions necessary to get started with the library. There is another interface, `System.Log.TH` (using TemplateHaskell to gather log location information), which provides similar functionality but additionally allows logging information such as the file or module name and the log's location inside the file.
* We are running the logger using the `runBaseLogger` function, providing a description of what type of information we want to gather with each call to `debug`, `warning`, etc. This is very important, because we can choose only the information we need, like messages and levels, and run the logger as pure code. If you try to run the example with another description, like `(Lvl, Msg, Time)`, it will fail, complaining that it needs the `IO` monad for that.
* The `BaseLogger` is the most basic logger transformer and it should be run as the base of every logger transformer stack. It does not log any messages under the hood; in fact, you cannot do anything sensible with it alone.
Like every logger transformer, `BaseLogger` has a corresponding transformer type called `BaseLoggerT`. You can use it just like any other monad transformer, to pipe computations to an underlying monad. Using the transformer we can ask our logger to also record information such as the time:
main = print =<< runBaseLoggerT (Lvl, Msg, Time) test
There is one very important design decision. All the logger transformers, apart from the base one, pass each newly registered log down to the underlying transformers. This way we can create a transformer that writes messages to disk and combine it with one that registers the logs in a list. There are some examples showing this behaviour later in this document.
WriterLogger
WriterLogger is just like the `Writer` monad - it gathers all the logs into a list and returns it:
main = print $ (runBaseLogger (Lvl, Msg) . runWriterLoggerT) test
As a result we get a tuple whose first element is the function's return value, while the second is a list of all `Log` messages. For now a log message is a not-very-friendly nested-tuple structure, but this will change in future versions of the library. To be clear, a single log currently looks like this:
Log {fromLog = (Data {recBase = Lvl, recData = LevelData 0 "Debug"},(Data {recBase = Msg, recData = "a debug"},()))}
WriterLogger should work as fast as the plain `WriterT` monad transformer with a `DList` used for gathering logs, because the library should introduce no additional overhead.
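The `DList` (difference list) trick mentioned above is what makes repeated log appends cheap. The following self-contained sketch (an illustration of the technique, not the library's actual internals) shows the idea: a list is represented as a function that prepends it, so appending two logs is just function composition and costs O(1) no matter how many logs have been gathered already.

```haskell
-- Minimal difference-list sketch (illustration only, not the
-- library's internals): a list is represented as a prepend function.
type DList a = [a] -> [a]

singleton :: a -> DList a
singleton x = (x :)

-- Appending is function composition: O(1) regardless of length.
append :: DList a -> DList a -> DList a
append = (.)

-- Materialize the accumulated logs by applying to the empty list.
toList :: DList a -> [a]
toList d = d []

-- Gathering logs left-to-right stays cheap, unlike (++) on plain lists.
gather :: [String] -> [String]
gather = toList . foldr (append . singleton) id
```

`gather ["a debug", "a warning"]` yields the logs in their original order, with each append costing constant time; naive `(++)` accumulation would be quadratic in the number of logs.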
HandlerLogger
`HandlerLogger` allows you to handle messages using handlers and log formatters. At last we get to see something useful from a logging library! To start, let's look at a simple example:
import System.Log.Simple

test = do
    addHandler $ printHandler Nothing
    debug "a debug"
    warning "a warning"
    return "Done"

main = print =<< ( runBaseLoggerT (Lvl, Msg)
                 . runHandlerLoggerT defaultFormatter
                 ) test
As a result, we get a colored output (on all platforms, including Windows):
[Debug] a debug
[Warning] a warning
"Done"
Ok, so what's happening here? The function `addHandler` registers a new log handler in the current logger monad. The `Nothing` just indicates that this handler does not need any special formatter and can use the default one provided when executing the monad - in this case, the `defaultFormatter`. We can of course define our own custom message formatters.
For now only the `printHandler` is provided, but it is straightforward to define custom handlers. Others will be added in future versions of the library.
Formatters
It is possible to define a custom message formatter. To do it, import the module `System.Log.Format` and use the so-called formatter builder. Let's see how the `defaultFormatter` is defined:
defaultFormatter = colorLvlFormatter ("[" <:> Lvl <:> "] ") <:> Msg
You might ask now what `Lvl` and `Msg` are. They are "data providers". You will learn more about them later; for now just remember that you can use them both while running loggers and while defining formatters. There is one very important thing to note here - you cannot use any data provider in your logger that was not declared to be gathered when the logger is run! In later chapters you will also learn how to create custom data providers.
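As an illustration, a custom formatter can be assembled with the same builder. The snippet below is a sketch, not taken from the library: it assumes that `Time` was declared when the logger was run (e.g. `runBaseLoggerT (Lvl, Msg, Time)`) and that it can be used as a data provider inside a formatter, per the rule above.

```haskell
-- Hypothetical custom formatter (a sketch): prefixes each message
-- with its colored level and the log time. `Time` must be declared
-- in the logger's data description, or this will not compile/run.
timedFormatter = colorLvlFormatter ("[" <:> Lvl <:> "] ") <:> Time <:> " " <:> Msg
```

Passing it in place of `defaultFormatter` to `runHandlerLoggerT` would then render every message with its level and timestamp.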
So what if we would like to output not only the message and its priority level, but also the module name and the location of the message in the source file? Such a formatter is also defined and it's called `defaultFormatterTH`. You cannot use it with the `Simple` interface, so for now let's just see how it is defined:
defaultFormatterTH = colorLvlFormatter ("[" <:> Lvl <:> "] ") <:> Loc <:> ": " <:> Msg
Its output is similar to:
[Debug] Main.hs:4: a debug
[Warning] Main.hs:5: a warning
PriorityLogger
The `PriorityLogger` is used to filter messages by priority level. It is important to note that `PriorityLogger` is able to filter them at compile time, so if constructing a log requires some `IO` actions, like reading the time or the process id, they will not be executed when the priority of such a log is too low. Let's see how we can use it:
test = do
    addHandler $ printHandler Nothing
    debug "a debug"
    setPriority Debug
    debug "another debug"
    warning "a warning"

main = print =<< ( runBaseLoggerT (Lvl, Msg)
                 . runHandlerLoggerT defaultFormatter
                 . runPriorityLoggerT Warning
                 ) test
As the output we get:
[Debug] another debug
[Warning] a warning
ThreadedLogger
The `ThreadedLogger` is a fancy one. It allows you to separate the actual logging from the program. The program is run on a separate thread, while the logs are gathered by the main thread. You can fork the program as many times as you want and all the logs will be sent to the log-gathering routine. This gives clean, non-interleaved output in the terminal or in files even when logging from different threads. The program stops after all the logs have been processed. Let's look at an example:
import System.Log.Simple
import qualified System.Log.Logger.Thread as Thread
import Control.Monad.IO.Class (liftIO)

test = do
    addHandler $ printHandler Nothing
    debug "a debug"
    setPriority Debug
    debug "another debug"
    warning "a warning"
    Thread.fork $ do
        liftIO $ print "Threaded print"
        debug "debug in fork"
    liftIO $ print "End of the test!"

main = print =<< ( runBaseLoggerT (Lvl, Msg)
                 . runHandlerLoggerT defaultFormatter
                 . runPriorityLoggerT Warning
                 . runThreadedLogger
                 ) test
As the output we get:
"Threaded print" "End of the test!" [Debug] another debug [Warning] a warning [Debug] debug in fork
The output may of course vary depending on how the threads are scheduled, because we use `Thread.fork`, which is just a simple wrapper around `forkIO`.
Exception handling
All the loggers behave properly when an exception is raised: the exception is evaluated only after all necessary logging has been done:
```haskell
test = do
    addHandler $ printHandler Nothing
    debug "debug"
    Thread.fork $ do
        fail "oh no"
        debug "debug in fork"
    warning "a warning"

print =<< ( runBaseLoggerT (Lvl, Msg)
          . runHandlerLoggerT defaultFormatter
          . runThreadedLogger
          ) test
```
Results in:
```
[Debug] debug
Main.hs: user error (oh no)
```
DropLogger
The `DropLogger` allows you to simply drop all logs from a function. It is useful when you want to execute a subroutine but discard all of its logging. The log messages are completely discarded; they are not even created.
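This section has no example of its own, so here is a minimal sketch in the style of the earlier ones. It assumes the transformer is run with a function named `runDropLogger` exported from a `System.Log.Logger.Drop` module; both names are guesses following the naming pattern of the other transformers shown here and may differ in the actual library:

```haskell
import System.Log.Simple
import System.Log.Logger.Drop (runDropLogger)  -- assumed module and function name

-- All logging inside this subroutine should be dropped, not just filtered:
-- the messages are never constructed at all.
noisySubroutine = do
    debug "this message is never even constructed"
    warning "neither is this one"

test = do
    addHandler $ printHandler Nothing
    debug "before the silent part"
    runDropLogger noisySubroutine
    debug "after the silent part"
```

Under these assumptions, only the two `debug` messages from `test` itself would reach the handler.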
TemplateHaskell interface
You can use a more advanced interface to log additional information, like the module name or the line number. To use it, import `System.Log.TH` instead of `System.Log.Simple` and use TemplateHaskell syntax to report logs:
```haskell
import System.Log.TH

test = do
    addHandler $ printHandler Nothing
    $(debug "a debug")
    setPriority Debug
    $(debug "another debug")
    $(warning "a warning")

print =<< ( runBaseLoggerT (Lvl, Msg, Loc)
          . runHandlerLoggerT defaultFormatterTH
          . runPriorityLoggerT Warning
          . runThreadedLogger
          ) test
```
Which results in the following output:
```
[Debug] Main:7: another debug
[Warning] Main:8: a warning
```
Filtering messages
The framework also allows you to filter messages after they have been created. This is slower than using the `PriorityLogger`, because the messages are created even when they are not needed. It is useful, for example, when you have many handlers and want to print only important logs to the screen while writing all of them to files. Here's a small example showing how it works:
```haskell
test = do
    addHandler $ addFilter (lvlFilter Warning) $ printHandler Nothing
    $(debug "a debug")
    $(warning "a warning")

print =<< ( runBaseLoggerT (Lvl, Msg, Loc)
          . runHandlerLoggerT defaultFormatterTH
          ) test
```
Which results in:
```
[Warning] Main:5: a warning
```
Extending the logger
It is possible to extend the logging framework in any way you want. All the functionality you have seen above consists of simple logger transformers, and you can modify them or create custom ones.
Custom priority levels
Defining a custom priority level is as easy as creating a new datatype that derives `Enum` and starting to use it. The default priorities are defined as:
```haskell
data Level = Debug     -- ^ Debug Logs
           | Info      -- ^ Information
           | Notice    -- ^ Normal runtime conditions
           | Warning   -- ^ General Warnings
           | Error     -- ^ General Errors
           | Critical  -- ^ Severe situations
           | Alert     -- ^ Take immediate action
           | Panic     -- ^ System is unusable
           deriving (Eq, Ord, Show, Read, Enum)
```
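As a sketch of a custom scale (the constructor names below are made up for illustration), any datatype whose constructors are declared from least to most severe will work, because priority filtering relies on the derived `Ord` and `Enum` instances:

```haskell
-- Hypothetical application-specific priority scale.
-- Constructors are declared from least to most severe, so the derived
-- Ord instance orders them correctly for priority comparisons.
data AppLevel
    = Trace    -- ^ Very chatty diagnostics
    | Verbose  -- ^ Developer output
    | Normal   -- ^ Regular runtime messages
    | Severe   -- ^ Something went wrong
    deriving (Eq, Ord, Show, Read, Enum, Bounded)

main :: IO ()
main = do
    print (Severe >= Normal)                       -- severity comparison via Ord
    print (fromEnum Verbose)                       -- numeric level via Enum
    print ([minBound .. maxBound] :: [AppLevel])   -- the whole scale via Bounded
```

Deriving `Bounded` as well is optional, but it makes it easy to enumerate the full scale.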
Custom data providers
It is possible to define custom data providers. Let's look at how the `Msg` data provider is defined in the library:
```haskell
data Msg = Msg deriving (Show)
type instance DataOf Msg = String
```
That's it; there is no more code to it. After creating such a new datatype you can give it a pretty-printing instance and use it just like all the other data, even in the formatter builder!
But how is the data registered? Let's look at how the `debug` function is defined in the `Simple` library:
```haskell
debug = log empty Debug
```
The `log` function is very generic and allows creating almost any logging functionality. If, for example, we wanted to add a new data provider `Foo` registering an `Int`, we could do it simply by:
```haskell
data Foo = Foo deriving (Show)
type instance DataOf Foo = Int

debugFoo i = log (appData Foo i empty) Debug

instance PPrint Foo where
    pprint = text . show

fooFormatter = defaultFormatter <:> " (" <:> Foo <:> ")"

test = do
    addHandler $ printHandler Nothing
    debugFoo 7 "my custom debug"

print =<< ( runBaseLoggerT (Lvl, Msg, Foo)
          . runHandlerLoggerT defaultFormatter
          ) test
```
Which results in:
```
[Debug] my custom debug (7)
```
A new function, `appData`, is used here. It attaches data to be registered when a log message is created. You can attach any data this way, and only the data explicitly requested when running a logger is actually used. If you run a logger asking for data that was not provided when constructing the log, the framework looks for its monad data provider (described later). If there is no such provider either, it fails at compile time.
In fact, if we look how the log function is defined, we will find some similarities:
```haskell
log rec pri msg = do
    [...]
    appendRecord $ appData Lvl (mkLevel pri)
                 $ appData Msg msg
                 $ rec
```
Monad data providers
What happens when some data, like `Time` data, is not provided when constructing the message? If data is not available at construction time, the logger looks for its `DataGetter` instance. A simple `Time` data provider could be defined as:
```haskell
import Data.Time.Clock  (getCurrentTime, UTCTime)
import Data.Time.Format (formatTime, defaultTimeLocale)

data Time = Time deriving (Show)
type instance DataOf Time = UTCTime

instance MonadIO m => DataGetter Time m where
    getData = do
        liftIO $ Data Time <$> getCurrentTime

instance Pretty UTCTime where
    pretty = text . formatTime defaultTimeLocale "%c"

defaultTimeFormatter = colorLvlFormatter ("[" <:> Lvl <:> "] ") <:> Time <:> ": " <:> Msg
```
That's it! You can use any function inside, both pure and `IO`. If you use a pure function, just return the value. If you execute `runBaseLogger`, it will be evaluated inside the `Identity` monad.
Custom logger transformers
It's also straightforward to define custom logger transformers. They just have to be instances of a few type classes. To learn more, look at the example transformers inside the `System.Log.Logger` module.
Conclusion
This is a new logging library written for fast logging between threads. It is still under development, so you can expect some API changes. Some functionality is still missing, like file handlers, but as you have seen, it is easy to define such things yourself. Any help is welcome.
Happy logging!