Cerb (7.2) is a major functionality update released on August 5, 2016. It contains 143 new features and improvements based on community feedback, with 85 additional improvements provided in 5 maintenance updates.
To check if you qualify for this release as a free update, view Setup » Configure » License. If your software updates expire on or after May 1, 2016, then you can upgrade without renewing your license.
We aim to ship a major functionality update every 3-4 months. The development cycle for 7.2 was longer than usual (about 7 months), since it evolved in tandem with the introduction of Cerb Cloud.
We’ve spent several months auditing the daily usage of Cerb Cloud instances with 7.2 and fine-tuning our code and queries. We revamped our platform to take proper advantage of cloud-based deployments (e.g. load balancers, multiple web servers, databases with read replicas, distributed filesystems, distributed caches, Elasticsearch, etc). A single instance of Cerb is now capable of painlessly scaling to hundreds of concurrent workers who field tens of thousands of messages per day. There is nothing proprietary in how we’re accomplishing that – even if you run Cerb on your own hardware, the “cloud” focus of the 7.2 release provides you with many major scaling and performance improvements.
Many other improvements in 7.2 are similarly focused on scalability: dashboard widgets load in parallel, bulk update on long worklists runs incrementally with visible progress rather than stalling without feedback, fulltext searches that return too many results now prompt the worker to be more specific, and the schema and database indexes were made more efficient by combining related fields and records.
The other major focus of 7.2 was on improving the flexibility and usability of worklists and Virtual Attendant behaviors, since they are the basis of almost everything interesting about Cerb. As we’ve consulted with teams who are deploying Cerb, they’ve asked for seemingly simple things that Cerb just couldn’t do easily: “How do I build a worklist with tickets I’m either the owner of OR a watcher of?”, “How do I build a worklist with a non-continuous range of values, like < 5 OR > 10?”, “How do I build a worklist of records with at least one watcher who is not myself (without explicitly filtering for every other worker on my 70-person team)?”, etc.
To that end, quick search has been significantly improved. You can now include the same filter multiple times, use lists and negation, and (most importantly) you can group filters with AND and OR. This makes so many new workflows possible.
Similarly, when we run through a training session with a new team, there’s a fairly universal pause as the huge list of 800+ placeholder options results in cognitive overload. Those placeholders are now presented in nested menus like Ticket » Latest Message » Sender » Organization » Name. It’s much, much simpler. We’re looking forward to breezing through those examples during training sessions in the future.
There are a lot of palpable improvements in this release. There are just as many “under the hood” improvements to keep things running efficiently, on the most modern hardware, with less frustration, as building blocks for all kinds of future functionality. We have a huge pile of feedback, plenty of big ideas, and no intention of slowing down.
To honor our commitment to 3-4 major updates per year for annual subscriptions, we are backdating the release date of 7.2 for licensing purposes to May 1, 2016. Visit the project website to purchase or renew a license.
If you use the Web API for integration, note that the is_waiting, is_closed, and is_deleted fields on ticket records are now stored in the status field.
Completely overhauled the quick search interpreter to enable more expressive queries. This is simpler for workers, saves a lot of work for third-party developers, promotes a consistent syntax across filters, and is far more flexible when adding new functionality.
Previously, all the filters added to a worklist were in a single AND group. Additional filters could only further narrow down the current results. There was no way to join the results of two independent filters.
It is now possible to add any number of filters in arbitrary groups using AND and OR keywords. By default, AND is used when an operator isn’t specified.
Parentheses are used to define a group of filters, and groups can be nested within other groups.
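For example, a hypothetical ticket query could join two independent filters with OR inside a group (the field names shown here are illustrative, not a guaranteed part of the syntax):

```
status:open (owner:me OR watcher:me)
```

Without grouping, every filter would further narrow the results; the parenthesized OR group matches tickets satisfying either condition.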
Previously, some filters allowed an array of values, but it was never made clear when this was possible, and the format wasn’t standardized.
Previously, you could only filter a numeric value using greater than or less than.
Previously, some filters supported negation, but this was also inconsistently handled.
Previously, each filter could only be used once per worklist. This made it difficult or impossible to handle some queries (e.g. “any value from 10 to 20, but not 15”).
You can now apply the same filter any number of times.
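Repeating a filter with negation can express a non-continuous range, as in this sketch (the exact range and negation tokens are illustrative):

```
importance:10...20 importance:!15
```

This covers queries like “any value from 10 to 20, but not 15” that were impossible when each filter could only appear once.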
Any search text without an explicit field: prefix is considered to be part of a text-based search. Quoted phrases in quick search queries may contain any characters other than quotation marks.
Boolean operators are now supported regardless of the search engine in use (e.g. word OR “phrase two”).
When performing a fulltext search on a worklist, meta information is now displayed above the worklist for each search (search engine, query time in milliseconds, total results). If more results were found than returned, a “try being more specific” message is given. This should help address usability issues when using search engines like Elasticsearch (returning 500-1000 results) where it wasn’t previously clear that partial results were being returned.
Previously, the hints for quick search were found in a drop-down menu that had to be opened and closed manually. Now the syntax and available filters are autocompleted while typing. Previously, inserting filters from the menu only appended to the end of the text box. Now, autocomplete suggestions will be inserted at the cursor position, and the suggestions are aware of the content around the cursor.
[CHD-4353] We’ve received many requests for the ability to add other built-in and custom fields to the card popups. Previously, the displayed fields were hardcoded and couldn’t be changed without hacking around in the code.
When viewing a contact card popup, a ‘Compose’ button is now available in the ‘Tickets’ section. Clicking this button opens the compose dialog and pre-fills the email address in the ‘To:’ field.
When viewing an organization card popup, a ‘Compose’ button is now available in the ‘Tickets’ section. Clicking this button opens the compose dialog and pre-fills the org name in the ‘Org:’ field, which provides autocomplete suggestions of the most common contacts from that org.
Implemented card popups for task records. Previously, the peek for task records always opened in edit mode.
When viewing a card popup you can now quickly view all the comments on the record. You can also add a comment using the new ‘Comment’ button at the top. After commenting, the comments timeline will automatically refresh.
Improved bulk update functionality on worklists. Previously, the bulk update attempted to complete in a single request, which often timed out on the server, leaving the worker unsure if it completed or not (whether it did depended on PHP/MySQL configuration). Now, bulk updates run small batches of records with real-time progress shown above the worklist. Upon completion, the worklist will refresh as expected. The new process properly handles the updating of thousands of records.
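The incremental approach can be sketched in Python (this is an illustration of the batching pattern, not Cerb’s actual code):

```python
def bulk_update(ids, apply_batch, batch_size=100):
    """Apply an update in small batches, reporting progress as it goes."""
    done = 0
    for i in range(0, len(ids), batch_size):
        batch = ids[i:i + batch_size]
        apply_batch(batch)  # one small request per batch, so none can time out
        done += len(batch)
        print("%d/%d updated" % (done, len(ids)))
    return done
```

Each batch is a separate, short request, so a server-side timeout can no longer leave thousands of records in an unknown state.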
When broadcasting from a worklist, the available placeholders now display in nested menus rather than a very long list. This significantly speeds up the process of finding the desired placeholder.
Implemented broadcast functionality in bulk update for contact worklists.
Implemented broadcast functionality in bulk update for worker worklists.
Implemented broadcast functionality in bulk update for organization worklists.
Significantly improved the performance of worklists that use many custom field columns. Previously, custom field columns had a linear cost in the database (more columns meant more complexity). Now this cost is constant, and the custom field values are “lazy loaded” (they are all loaded with a single query and only when requested, and then merged with the original search results). Any number of custom fields may now be displayed as columns without negatively impacting performance. If a custom field column is used to sort the worklist, we still retrieve it with the original search results to make that possible.
Improved the performance of multi-value custom fields when used as NOT IN filters on worklists. These fields have a special condition where the presence of any of their values in a “NOT IN” filter should exclude the record entirely, even if other values exist.
Dashboard widgets now load in parallel. Previously, dashboard widgets all loaded serially (i.e. one at a time). Now, up to three widgets will load at the same time.
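The concurrency-limited loading pattern can be sketched like this (a simplified illustration; Cerb performs this in the browser, not in Python):

```python
from concurrent.futures import ThreadPoolExecutor

def load_widgets(widgets, loader, max_parallel=3):
    """Load widgets up to three at a time instead of one after another."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # map() preserves the original widget order in the results
        return list(pool.map(loader, widgets))
```

With a cap of three, a slow widget delays at most its own slot rather than the whole dashboard.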
When exporting from a worklist, the field chooser now uses nested menus to simplify selection. The default fields for each record type are also automatically selected.
[CHD-4330] Contact-based worklists can be subtotaled on gender.
Worker-based worklists can be subtotaled on gender.
Removed the ‘Last Action’ column on ticket worklists in favor of ‘Last Wrote’. The same functionality can be added back through Virtual Attendant worklist behaviors.
Significantly improved the performance of filtering by watchers on worklists.
Improved the performance of ticket worklists when filtering by participants: (particularly when filtering for participants with a *@host wildcard).
Contact-based worklists can be quick searched by gender:.
Worker-based worklists can be quick searched by gender:.
Added a comments: filter for searching comments on contact worklists.
Added a va: filter for searching by Virtual Attendant in scheduled behavior worklists.
Added an inGroupsOfWorker: filter to ticket worklists.
On ticket worklists, the resolution.first: and response.first: filters now accept durations in natural language, like “< 1 hour” and “2 hours … 8 hours”.
On message worklists, the responseTime: filter now accepts durations in natural language, like “> 1 hour” and “2 hours … 8 hours”.
[CHD-4375] In message worklists, added a header.messageId: quick search filter for matching messages based on their ‘Message-Id:’ header. This is particularly useful for webhook behavior from Mailgun, where the message-id header is the only identifier available. The filter will only return exact matches.
On ticket worklists, the bulk update popup now uses linked dropdowns for group and bucket in the ‘Move To:’ option. Selecting a group in the first dropdown displays its buckets in the second dropdown. Previously this was a huge list of “Group:Bucket” options.
Removed the ‘pile sort’ option from ticket worklists. This can be handled with subtotals and bulk update in modern versions.
When using bulk update from an organization worklist, the country field now autocompletes using existing values.
[CHD-4370] Added a primary email address field to organization records.
Added ‘language’ and ‘timezone’ fields to contact records. These provide for localization in community portals, and improved segmentation in worklists and reports.
When bulk updating contact worklists, watchers can be added and removed.
When bulk updating a worker worklist, the following fields may now be set: title, location, gender, language, and timezone.
The “Who’s Online” section now has a smaller footprint. Previously, every worker was listed on a separate line with their IP, idle time, and last activity. This information (and much more) is now available in worker cards. Workers are now listed as links (to cards) in a paragraph and separated by commas.
The <is_waiting> and <is_closed> fields no longer exist in the schema. These should be replaced with <status> with a value of: open, waiting, closed, or deleted.
The <is_outgoing> element can be provided for all message records. This removes the requirement that sender addresses are set up beforehand and match.
The raw message headers can be provided in the <headers> element as a CDATA text block (rather than itemized elements per header/value pair).
The <org> element is available to create/link an organization to the ticket.
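Putting the new elements together, a record might look like the following sketch (the surrounding structure and values are illustrative; only the element names come from the notes above):

```xml
<ticket>
  <status>open</status> <!-- replaces is_waiting / is_closed -->
  <org>Example Corp</org> <!-- creates or links an organization -->
  <message>
    <is_outgoing>0</is_outgoing>
    <headers><![CDATA[
From: customer@example.com
To: support@example.com
Subject: A sample message
]]></headers>
  </message>
</ticket>
```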
When editing Virtual Attendants, the ‘Insert Placeholder’ button now opens a nested menu with related placeholders grouped together. This replaces a flat list with hundreds of options.
All worklists now have improved markup to simplify Virtual Attendant behaviors that target specific columns. A data-column attribute is available on each table cell.
Improved the performance of fulltext search filters when using the default MySQL Fulltext search engine. As of MySQL 5.6+, both MyISAM and InnoDB database tables support fulltext indexes. Prior to that only MyISAM supported fulltext, and that was how Cerb was developed and benchmarked.
Our testing has shown InnoDB to generally be faster when matching terms (‘all these words’), but significantly slower for matching phrases, especially in datasets with millions of rows (as is the case in many Cerb environments).
We’ve discovered many “exact phrase” searches in Cerb Cloud that took less than a second with MyISAM but are taking minutes in InnoDB. This has to do with the fact that InnoDB uses distributed indexes and doesn’t currently support a LIMIT clause to stop once the desired number of matches is returned.
By moving phrase searches into Cerb (using the index to match all the words in any order, then a LIKE for the exact phrase), we’re able to make those slower searches in InnoDB more than 100X faster. This change also speeds up non-phrase searches, even against MyISAM tables.
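The two-phase approach can be sketched in Python (a simplified illustration, not Cerb’s actual code): a cheap unordered all-words match narrows the candidates, then an exact substring check confirms the phrase.

```python
def match_phrase(documents, phrase):
    """Two-phase phrase search: all-words filter, then exact-phrase check."""
    needle = phrase.lower()
    words = needle.split()

    # Phase 1: keep docs containing every word in any order
    # (what a fulltext index answers quickly, with a LIMIT).
    candidates = [d for d in documents
                  if all(w in d.lower() for w in words)]

    # Phase 2: confirm the exact phrase (what Cerb does with a LIKE).
    return [d for d in candidates if needle in d.lower()]
```

The expensive exact check only runs against the small candidate set, which is why the combined approach can be over 100X faster than asking InnoDB to match the phrase directly.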
Previously, when building search indexes for the MySQL Fulltext engine, we heavily pre-processed content to convert to lowercase, remove punctuation, remove reply-quoted content, remove common (“stop”) words, remove accents, etc. This resulted in more efficient indexes, but it also reduced search accuracy. It also prevented some phrase searches from matching properly. Now all we do is strip reply-quoted lines and truncate content to the first 5KB. This only applies to content indexed after the 7.2 upgrade, so a full reindex should be performed if you need to search older content this way.
When using a fulltext search filter on a worklist using the MySQL Fulltext search engine, the query will now be pre-processed to conform to MySQL’s requirements. Terms that are smaller than 3 characters or larger than 82 are ignored, as are words that appear in the default InnoDB “stop words” list. This is because the presence of any of these terms in an “all these words” search would return 0 results instead of being ignored by MySQL.
A new APP_DB_ENGINE_FULLTEXT option is available in the framework.config.php file. This specifies the MySQL database engine that should be used for newly created fulltext search tables. In MySQL 5.6+, ‘InnoDB’ now supports fulltext indices. Previously, only ‘MyISAM’ tables supported them. This is particularly important at scale since InnoDB is recommended (or even required).
APP_DB_ENGINE_FULLTEXT now defaults to APP_DB_ENGINE when not explicitly set in framework.config.php. If this results in InnoDB being selected in MySQL versions prior to 5.6, then MyISAM is always used regardless of this setting.
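In framework.config.php this might look like the following (the define() form matches the file’s existing style; the value shown is illustrative):

```php
// Use InnoDB for newly created fulltext search tables (MySQL 5.6+ only)
define('APP_DB_ENGINE_FULLTEXT', 'InnoDB');
```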
The installer now checks whether the MyISAM storage engine is disabled when the MySQL version is less than 5.6 (before InnoDB gained fulltext indexing). If the MySQL version is >= 5.6, the installer will succeed with InnoDB handling everything.
Fixed an issue where newer versions of the Elasticsearch search engine couldn’t be configured.
Condensed search schema document indexing into a single ‘content’ field. This is what the MySQL Fulltext search engine did anyway, and Elasticsearch doesn’t need the inefficient _all field with this approach. Cerb already provides filters for everything else.
When configuring Elasticsearch, a “default query field” option is now available. This makes it possible to disable the _all field in an Elasticsearch index to save resources (up to a 50% reduction). In most cases, Cerb writes to an aggregated ‘content’ field rather than itemizing fulltext search fields.
Added default timeouts to searches using the Elasticsearch engine. The indexing timeout is 20 seconds and the query timeout is 5 seconds.
When performing fulltext searches using Elasticsearch, a ‘max results’ setting over 1000 now joins against a temporary table. When returning fewer than 1000 results, the more efficient inline IN(...) method continues to be used.
Added a helper script to install/extras/developers/search_dump_elasticsearch_json.php to assist with bulk importing message content from Cerb into Elasticsearch.
All search engines (MySQL FT, Sphinx, Elasticsearch) can now provide a ‘max results’ option.
When editing signatures on sender addresses in Setup, the ‘Insert Placeholder’ button now uses nested menus rather than a huge dropdown list of placeholders.
When editing signatures on bucket records, the ‘Insert Placeholder’ button now uses nested menus rather than a huge dropdown list of placeholders.
When editing HTML templates in Setup, the ‘Insert Placeholder’ button now uses nested menus rather than a huge dropdown list of placeholders.
Optimized the way the status field is stored for ticket records in the database. Previously, there were three different fields and indexes involved (is_waiting, is_closed, is_deleted), as a byproduct of continuous improvements over many years. These fields have been consolidated into a single status_id field. This should result in slightly faster ticket worklist results, slightly less filesystem space wasted, and much cleaner logic in the code.
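The consolidation collapses three mutually overlapping flags into one value; the equivalent logic can be sketched like this (the precedence order shown, with ‘deleted’ checked first, is an assumption):

```python
def status_from_flags(is_waiting, is_closed, is_deleted):
    """Collapse the legacy ticket flags into a single status value."""
    if is_deleted:   # deleted tickets were typically also flagged closed
        return 'deleted'
    if is_closed:
        return 'closed'
    if is_waiting:
        return 'waiting'
    return 'open'
```

A single status field also needs only one database index instead of three.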
Improved the performance of operations involving email message headers (e.g. to/from/subject/date/etc) and made them much more efficient in the database.
Previously, each header/value pair was saved as a separate row in the message_header table. Since each message has many headers, the headers table grew many times larger than the message table. For instance, one sampled production Cerb environment had 1.7 million messages and 42 million message headers (an average of 25 headers per message). The access times of the message_header table could be negatively impacted at scale. Additionally, since message headers are immutable (never modified and never appended to), it made little sense to store headers individually.
Cerb now stores all of the headers for a message together in a single row, making reading and writing a single operation and result. The 7.2 update automatically migrates existing records to the new format. The overall size of the data in the table should be relatively unchanged, but the size of the indexes is significantly reduced since the header names and first few characters of each value no longer need to be indexed.
Previously, when viewing a ticket profile, messages could be toggled to show “brief” or “full” headers above the content. However, the “full” version of these headers was not the original copy – it was decoded, de-duped, etc. The raw original headers are now stored for each message, and can be accessed from the “Show Full Headers” button in the extended “…” options at the bottom of each message. The original headers are shown unformatted in a popup, rather than expanding above the message, which makes it easier to inspect and copy them. Whenever these headers are used in Cerb (displayed on messages, in Virtual Attendants, etc) they are decoded in real-time. This allows the handling of header information to change and improve without modifying their original state.
When the ‘cron.parser’ scheduler job runs to process incoming mail, it will now log output for several important headers (e.g. To, From, Subject, Date, In-Reply-To, Delivered-To) to assist with later troubleshooting and forensics. Previously, only the file name of the parsed message was logged, unless Virtual Attendants ran conditions against specific fields.
Improved the performance of threading replies to conversations.
Previously, the value of the ‘Message-Id:’ header wasn’t guaranteed to be well distributed in the index (there could be millions of rows with the same prefix).
The Message-ID: header of each message is now converted to a SHA-1 hash and stored on the ‘message’ table. It’s now much faster when matching an incoming In-Reply-To: or References: header against these values. The SHA-1 hash is a fixed length (40 characters), and even very similar (but different) values will end up having very different hashes, which allows us to more efficiently index only the first few characters of the hashes for lookups. In high-volume environments with database contention (e.g. locks), message_header lookups were often implicated, and this should no longer be an issue.
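The hashing step can be sketched in Python (the strip/lowercase normalization here is an assumption for illustration):

```python
import hashlib

def hash_message_id(message_id):
    """Reduce a variable-length Message-Id: to a fixed 40-char SHA-1 hex digest."""
    normalized = message_id.strip().lower()
    return hashlib.sha1(normalized.encode('utf-8')).hexdigest()

a = hash_message_id('<1234567890.1@mail.example.com>')
b = hash_message_id('<1234567890.2@mail.example.com>')
print(len(a))  # 40
```

Even though the two inputs above differ by one character, their digests differ completely, so a short index prefix on the hash column distributes lookups evenly.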
Refactored the email parser to be more efficient. Replaced the direct usage of the mailparse_* API in PHP with Mailparse’s MimeMessage wrapper class. This simplified the ability to parse email messages as either files or text-based variables and cleaned up a lot of code. Previously, Cerb always wrote email messages to the filesystem before parsing (i.e. Support Center, REST API), which generated needless filesystem I/O (much slower than memory).
In the inbound email parser, the enforcement of the maximum attachment size is now more efficient. Previously, the attachment was always written to disk before the size was checked and the file potentially discarded. Now, if a Content-Disposition-Size: header is provided, this can be used to ignore an attachment before doing anything else with it (saving compute cycles, filesystem I/O).
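The early-rejection check can be sketched like this (a simplified illustration; the limit value is arbitrary):

```python
MAX_ATTACHMENT_BYTES = 10 * 1024 * 1024  # illustrative limit

def should_keep_attachment(headers, max_bytes=MAX_ATTACHMENT_BYTES):
    """Reject an oversized attachment from its declared size, before any disk I/O."""
    declared = headers.get('content-disposition-size')
    if declared is not None and int(declared) > max_bytes:
        return False  # skip without ever writing the content to disk
    return True  # unknown or acceptable size: process as before
```

When the header is absent, the parser falls back to the old behavior of checking the size after the fact.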
[CHD-4188] Mailbox records now keep track of their ‘last checked at’ timestamp. This is used to check mailboxes in the order of least recently checked first, which addresses issues where a slow or busy first mailbox could block other mailboxes from ever being checked.
Improved the usability of the ‘skip to bottom’ link on each message in a ticket profile. This link was designed to make it easy to jump to the message actions when viewing a large message; however, it displayed on all messages. This button will now only show up when the message actions (like ‘Reply’) are scrolled beyond the bottom of the browser’s current viewport. This saves some screen space and reduces clutter when reading shorter messages.
[CHD-4396] When sending HTML mail, improved the way that Markdown-formatted lists are converted back to plaintext.
[CHD-4329] When viewing a ticket timeline, if a message sender has an organization but not a contact, the organization details are now displayed along with a card popup.
Task records now have a built-in ‘owner’ field to simplify worker assignments. Previously this required the use of a worker-based custom field.
[CHD-4226] Task records now have a built-in ‘importance’ field that behaves like the same field on ticket records. This allows for simpler prioritization of tasks without requiring the use of custom fields.
When editing snippets, the ‘Insert Placeholder’ button now opens a nested menu rather than a flat list with hundreds of options.
[CHD-4415] When opening the ‘Create Snippet’ popup from compose or reply, any selected text in the current message will be used as the snippet’s default text. Previously, this content had to be copied manually in a second step.
Improved the usability of permalink buttons on profiles. Previously, clicking a permalink button redirected the browser instantly to the permalink URL, which was often undesirable (e.g. when in the middle of a reply). These buttons also didn’t allow copying the URL in all browsers (Firefox in particular). Now, clicking a permalink button opens a popup which displays the permalink URL, already selected and ready to copy. This no longer interrupts what a worker was doing, and it makes it much easier to share these URLs.
In the activity log, when a ticket is assigned to a worker, that worker’s name is now a clickable link to their profile.
Added a cache for community portal records. Previously, these results always came from the database.
Improved the performance of loading templates from plugins. Previously, the content of the referenced templates was being loaded every request. Now they are only loaded when their modification times have changed.
Previously, when custom templates in Community Portals (like the Support Center) were modified, the entire template cache was flushed. Now only the cache of that specific template is removed.
Added an error message when attempting to add a community portal and no portal extensions are enabled.
Custom templates in the Support Center now run in a secure sandbox.
[CHD-4266] The unread notifications popup now saves any changes made (e.g. filters, columns, subtotals, sorting).
Improved the organization of the Setup page. The ‘Configure’ menu now has subsections for System, Contexts (Record types), and Plugins. Virtual Attendants and Portals moved out of Configure into their own top-level menus.
In Setup, when creating or modifying mailboxes, a new ‘Disable PLAIN Authentication’ option is available. This is necessary for proper authentication in some environments (e.g. shared mailboxes in Exchange/Office365).
Added an ignore_internal=1 option to bare scheduler /cron URLs. This ignores the built-in scheduler jobs (e.g. heartbeat, mailbox, parser, search, etc) and only runs plugin-provided scheduler jobs. This is useful when scheduling jobs independently by type.
In the scheduler, added a max_mailboxes URL option to /cron/cron.mailbox to control how many mailboxes are checked per job.
In ‘Setup->Configure->Sessions’, the bulk update action for deleting sessions moved to a worklist action.
[CHD-4331] The ‘Worker History over Date Range’ and ‘Group Replies over Date Range’ reports now use the new chooser popups for filtering.
[CHD-4369] When creating a comment through the Web-API, attachments can now be added to the comment using one or more instances of the file_id parameter.
[CHD-3576] Implemented bulk update functionality on asset worklists.
When sending a broadcast from a domain worklist, file attachments can be added to the messages.
[CHD-582] The installer now automatically adds the client’s IP to AUTHORIZED_IPS_DEFAULTS in framework.config.php.
The Cerb automated upgrader will no longer close all worker sessions when updating. If a specific patch needs to force all workers to log back in (which should be rare), then it can explicitly do so. This change makes the upgrade process more seamless, less annoying for workers, and allows it to be automated.
[CHD-4398] Added a global ‘Timezone’ setting to Setup->Configure->Localization. This serves as the default when a worker doesn’t have a timezone set, or there is no session (Virtual Attendants, webhooks, etc). In many standalone deployments this wasn’t necessary since the system timezone was sufficient as a default, but in distributed and multi-tenant environments we needed a per-instance default.
Worker ‘last activity’ is now determined by the activity log and sessions, and the separate mechanism from 4.x that wrote simple activity info directly to worker records has been removed. This is more efficient since it’s not constantly invalidating the worker cache (previously even several times per minute). On worker peeks, when a worker has been active within 15 minutes, they’re “currently active” rather than “active 49 seconds ago”, which allows sessions to update more efficiently as well.
Optimized the database indexes for: address, attachment_link, comment, context_link, message, and ticket. This should reduce disk space usage and speed up writes.
Added APP_DB_OPT_CONNECTION_RECONNECTS and APP_DB_OPT_CONNECTION_RECONNECTS_WAIT_MS options to framework.config.php. These control how many times Cerb will attempt to reconnect to a prematurely closed database connection (i.e. one that had connected successfully at the beginning of the request), and how long to wait between attempts. The first database connection when Cerb answers a new request ignores these options, and will fail instantly if the database is unavailable. These options are specific to situations where MySQL severs connections (“MySQL has gone away”, “MySQL is shutting down”, a long-running query is killed from the MySQL console, etc). The default is to retry 10 times with 1 second between attempts. In high-traffic environments this could be tuned to fail quicker (and not hold load balancer or proxy connections open longer), and in lower-traffic environments it could be tuned to retry longer (especially for a known flaky database host). Previously, if a database connection closed during a Cerb request, the subsequent queries in the same request would fail. It is now possible for Cerb to retry the previously failed query (when caused by the database host connection rather than SQL syntax errors) and resume normally.
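The retry-with-wait behavior can be sketched in Python (a simplified model of the pattern, not Cerb’s PHP implementation):

```python
import time

def query_with_reconnect(run_query, reconnect, retries=10, wait_ms=1000):
    """Retry a query after a dropped connection, waiting between attempts.

    Mirrors the APP_DB_OPT_CONNECTION_RECONNECTS / _WAIT_MS defaults
    (10 retries, 1000ms apart)."""
    for attempt in range(retries + 1):
        try:
            return run_query()
        except ConnectionError:
            if attempt == retries:
                raise  # retries exhausted: surface the failure
            time.sleep(wait_ms / 1000.0)
            reconnect()  # re-establish the connection before retrying
```

Only connection-level failures are retried; a query that fails for other reasons (e.g. a syntax error) would raise a different exception and propagate immediately.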
When the database isn’t running in master/slave mode (read/write splitting), or when the slave connection fails and it falls back to master, Cerb is now more efficient about continuing with that assumption for the remainder of the request. Previously, the connection was re-checked multiple times.
Added some sanitization checks when writing to the server-side cache. Previously, it was possible for the cache to become poisoned with invalid results in some rare situations (e.g. “MySQL has gone away”). This could lead to difficult to troubleshoot issues, like legitimate logins returning “Invalid password”.
The templates_c path where Smarty compiles templates is now configurable in framework.config.php from the APP_SMARTY_COMPILE_PATH option. This is useful in multi-tenant environments where the template compile cache can be shared between instances (which reduces I/O on disk and memory usage in opcache). It also allows the template compile cache to be separated from other temp files, since the compile cache is frequently accessed and rarely changed, and it may not perform well on some shared filesystems (NFS, etc). Previously, this path was always located in APP_TEMP_PATH.
When Cerb displays an error and shuts down (e.g. “Cache not writeable”, “Can’t connect to database”, “Access denied”), a proper HTTP status code is returned. Previously, a lot of these error messages still returned HTTP 200 OK, which caused problems with detecting errors from distributed services like proxies, load balancers, and monitoring.
Added APP_DB_OPT_MASTER_CONNECT_TIMEOUT_SECS and APP_DB_OPT_SLAVE_CONNECT_TIMEOUT_SECS options in framework.config.php for independently controlling the timeout when connecting to the MySQL database(s). This previously used the system default, which was often 30+ seconds (far too long in distributed environments). The default timeout is now 5 seconds for the master and 1 second for the read slaves. This improves the gracefulness of fail-over in distributed environments.
When a connection to the slave database fails (if defined), Cerb will now revert to the master. Previously a fatal error was returned which prevented the app from working at all. This change improves graceful fail-over in distributed environments.
Added a DEVBLOCKS_CACHE_ENGINE_PREVENT_CHANGE option to framework.config.php for preventing changes to cache configuration from Setup. When enabled, it also hides Setup->Configure->Cache. This option is useful when combined with DEVBLOCKS_CACHE_ENGINE to prevent Cerb’s cache from ever touching the filesystem in a clustered environment.
Added DEVBLOCKS_CACHE_ENGINE and DEVBLOCKS_CACHE_ENGINE_OPTIONS options to framework.config.php for overriding the default cache in the platform. In a distributed environment with many web nodes and a shared filesystem, it can be inefficient for the initial cache (plugins, extensions, classloader, etc) to always be read from and written to disk-based storage, even when a memory-based cache (Redis/Memcache) is enabled. This occurs because the platform uses the cache before it has started up (cache engines are implemented as plugins, and plugins need to be loaded to use them). The DEVBLOCKS_CACHE_ENGINE option bypasses this. In our multiple web node tests against shared filesystems (NFS/EFS/RedisFS), connecting to Redis/Memcache directly rather than using an NFS/FUSE mount was over 400% faster. This is particularly useful for high-volume, high-availability installations and cloud hosting.
Added a DEVBLOCKS_STORAGE_ENGINE_PREVENT_CHANGE option to framework.config.php for preventing changes to storage configuration from Setup. When enabled, it also hides Setup->Storage->Profiles. This is particularly useful in multi-tenant environments (like Cerb Cloud).
Added a DEVBLOCKS_SEARCH_ENGINE_PREVENT_CHANGE option to framework.config.php for preventing changes to search configuration from Setup. When enabled, it also hides Setup->Configure->Search. This is particularly useful in multi-tenant environments (like Cerb Cloud).
Added a CERB_FEATURES_PLUGIN_LIBRARY option to framework.config.php for enabling/disabling the Plugin Library feature in Setup. When disabled, it hides Setup->Plugins->Library, and doesn’t attempt to fetch/install plugin updates during upgrades. This is useful in multi-tenant (Cerb Cloud) or intranet environments where the available plugins are curated manually, and potentially shared locally between multiple instances of Cerb. It is also a more user-friendly way to prevent Cerb from downloading automatic code updates in highly secure environments (previously, these outgoing connections had to be firewalled).
Added an APP_SMARTY_COMPILE_USE_SUBDIRS option to framework.config.php to toggle whether the template compile cache uses subdirectories for hashing (in ./storage/tmp/templates_c/). Enabling this option is more efficient than having thousands of cache files in a single directory.
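Taken together, these Setup lock-down toggles are simple constants in framework.config.php; a hedged sketch for a curated multi-tenant deployment (all four are assumed to be boolean toggles):

```php
// framework.config.php: lock down Setup pages in a multi-tenant or
// intranet deployment
define('DEVBLOCKS_STORAGE_ENGINE_PREVENT_CHANGE', true); // hides Setup->Storage->Profiles
define('DEVBLOCKS_SEARCH_ENGINE_PREVENT_CHANGE', true);  // hides Setup->Configure->Search
define('CERB_FEATURES_PLUGIN_LIBRARY', false);           // hides Setup->Plugins->Library
define('APP_SMARTY_COMPILE_USE_SUBDIRS', true);          // hash compiled templates into subdirs
```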
Previously, Cerb checked the contents of the storage/_version file to detect changes to the underlying files (i.e. upgrades). This file just contained the numeric APP_BUILD as of the last time the upgrade process ran. Some OSes cached this content, but it could be inefficient when constantly loaded from distributed filesystems like NFS. This file has been renamed to storage/version.php so that it can be cached in shared memory via PHP opcache. It is recommended that the corresponding php.ini setting (opcache.enable_file_override=1) also be enabled to make the file_exists() checks more efficient.
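The relevant php.ini fragment looks like the following; opcache.enable_file_override lets opcache answer file_exists()/is_file() checks for cached scripts without touching the filesystem:

```ini
; php.ini
opcache.enable=1
opcache.enable_file_override=1
```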
The DevblocksPlatform::redirect() and DevblocksPlatform::redirectURL() methods now accept an optional $wait_secs argument that intentionally delays the redirect. This is particularly useful in distributed environments where a change needs to be replicated for consistency (e.g. new sessions).
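For instance, code that just created a session record might delay its redirect briefly so read replicas can catch up. A sketch; the URL and delay value are illustrative:

```php
// Sketch: redirect after a 2-second pause so the new session row has
// time to replicate to the read slaves before the next page load.
DevblocksPlatform::redirectURL('/profiles/worker/me', 2);
```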
Added the option APP_DB_OPT_READ_MASTER_AFTER_WRITE to framework.config.php. This toggles ‘master read-after-write’ functionality to combat latency in read replicas when using read/write splitting for database queries. Previously, if a read replica was behind, then the UI could appear to be ignoring worker actions for a brief time. For example, several tickets could be selected in a worklist and then marked closed (writing to master), and when the worklist instantly refreshes those tickets could still appear open (reading from an eventually consistent replica). A subsequent manual refresh in this situation would show the current state of the data. This lag is only apparent to the worker who initiated an action, and not other workers who may see old data that’s less than a second behind the master. This situation is more common when read replicas are distributed geographically away from the master with relatively slower network access. However, Cerb pages can load so quickly (tens of milliseconds or less), and read replicas can become busy with expensive queries no matter where they’re located, so it was possible for this situation to occur anywhere. When this option is set to any non-zero integer value, reads will be redirected to the master following a write for that number of seconds within the current session. If a volatile memory cache is being used (Redis/Memcache vs disk), then reads in other requests (Ajax) will also be redirected to the master for the configured duration. The _DevblocksDatabaseManager::OPT_NO_READ_AFTER_WRITE flag passed to ::ExecuteMaster($sql,$bits) bypasses this behavior for writes that tolerate eventual consistency. This option is disabled by default.
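A write that tolerates eventual consistency can opt out of the read-after-write pinning with the flag named above. A sketch; the query and database service accessor are illustrative:

```php
// Sketch: bump a counter without pinning this session's reads to the
// master afterwards, since slightly stale reads are acceptable here.
$db = DevblocksPlatform::getDatabaseService();
$db->ExecuteMaster(
	sprintf("UPDATE ticket SET num_messages = num_messages + 1 WHERE id = %d", $ticket_id),
	_DevblocksDatabaseManager::OPT_NO_READ_AFTER_WRITE
);
```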
Added an APP_SMARTY_COMPILE_PATH_MULTI_TENANT option to framework.config.php for multi-tenant environments that share a single Smarty templates_c compile cache. When true, Cerb won’t flush the template compile cache during upgrades or when enabling/disabling plugins. The default is false, which preserves Cerb’s long-standing behavior (these caches are flushed in several situations).
Added an APP_SMARTY_SANDBOX_COMPILE_PATH option to framework.config.php for multi-tenant environments. This allows each tenant to have their own template compile directory, ideally on a shared filesystem, which solves issues with templates being out of sync across multiple web servers.
Cerb no longer reads the $_SERVER['REMOTE_ADDR'] value directly in multiple places. The DevblocksPlatform::getClientIp() method now returns the client’s IP. When multiple load balancers and proxies are involved, several client IPs may be available (based on X-Forwarded-For), and Cerb now selects the appropriate one in these conditions.
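Code that previously read $_SERVER['REMOTE_ADDR'] can switch to the new accessor; a minimal sketch:

```php
// Sketch: resolve the real client IP behind proxies and load balancers
// (honoring X-Forwarded-For) instead of trusting REMOTE_ADDR directly.
$ip = DevblocksPlatform::getClientIp();
error_log(sprintf("Request from %s", $ip));
```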
[CHD-4311] The DEVBLOCKS_HTTP_PROXY option in framework.config.php now configures a proxy for all outgoing HTTP requests. By default this uses an HTTP forward proxy, but the value can be prefixed with socks5:// to use a SOCKS5 server instead. This allows networked Cerb functionality to work behind a proxy: Virtual Attendant behaviors, Plugin Library, Widget datasources, avatar image URLs, Elasticsearch, etc.
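A hedged framework.config.php sketch; the proxy hosts, ports, and exact value format are assumptions:

```php
// framework.config.php: route all outgoing HTTP requests through a proxy
define('DEVBLOCKS_HTTP_PROXY', 'proxy.example.com:3128'); // HTTP forward proxy
// ...or prefix with socks5:// to use a SOCKS5 server instead:
//define('DEVBLOCKS_HTTP_PROXY', 'socks5://proxy.example.com:1080');
```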
When using the PHP open_basedir setting, the logs complained about CURLOPT_FOLLOWLOCATION not being available. Devblocks now handles following redirects itself.
Added a /debug/status page with stats in JSON format. This is useful for monitoring a Cerb instance.
Added DevblocksPlatform::setRegistryKey($key, $value, $as, $persist) and DevblocksPlatform::getRegistryKey($key, $as, $default) methods for simpler access to the registry service.
The Devblocks registry service now supports JSON as an object type. This can be used to store complex nested objects with automatic JSON encoding and decoding.
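Together with the JSON object type, the new helpers make round-tripping a small structured value straightforward. A sketch only: the key name, the 'json' type identifier, and the $persist semantics are assumptions for illustration:

```php
// Sketch: store and re-read a small JSON-encoded object in the registry.
DevblocksPlatform::setRegistryKey(
	'example.daily_counters',            // hypothetical key
	array('opens' => 10, 'closes' => 3), // encoded to JSON automatically
	'json',                              // assumed identifier for the JSON type
	true                                 // persist beyond this request
);

$counters = DevblocksPlatform::getRegistryKey('example.daily_counters', 'json', array());
```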
Patched the S3 library to support infrequent access (IA) objects and security tokens for temporary credentials.
Added the jquery.visible plugin to the platform. This detects whether a UI element is visible within the browser viewport or not.
Added the FineDiff library (by Raymond Hill, MIT license) to the platform. This makes it easy to create “diffs” for comparing historical revisions to blocks of text (like knowledgebase articles).
Added a new ImpEx (Import/Export) tool to install/extras/impex/. The previous ImpEx tool was a separate project written in Java. This one is written in PHP and is used from the command line. This makes the contribution of new export drivers much simpler.
For years, marketers have been experimenting with the senses and sensory experiences to create better perceptions of their products. Even with a product as simple as a potato chip, there are many factors that go into the experience of interacting with the chip. How it tastes, how it smells, the sound that eating it makes, and the appearance of its packaging can all influence our perception of the potato chip itself. As scientists and managers begin to recognize the importance of the senses in product design and marketing, more and more products and advertisements have become sensory in nature.
Accepting the importance of the senses brings about a change in how a manager views his or her products. What changes can be made in the packaging, branding, and advertising to captivate the consumer's senses? What changes can be made to the product itself? This book helps managers to understand how customers relate to products on a sensory level, detailing the specific interactions unique to each sense and showing them how small sensory changes can make a huge impact. Customer Sense allows managers to unlock the secret world of sensory appeal and to craft unique products and advertisements for their businesses.
In this edited book, the authors discuss how sensory aspects of products, i.e., the touch, taste, smell, sound and look of the products, affect our emotions, memories, perceptions, preferences, choices and consumption of these products. We see how creating new sensations or merely emphasizing or bringing attention to existing sensations can increase a product's or service's appeal. The book provides an overview of sensory marketing research that has taken place thus far. It should facilitate sensory marketing by practitioners and also can be used for research or in academic classrooms.
Lowe, Michael, Kate Loveland and Aradhna Krishna, “A Quiet Disquiet: Anxiety and Risk Avoidance Due to Imperceptible Differences in Ambient Sound”, forthcoming, Journal of Consumer Research.
The target article by John Jost (2017 – this issue) focuses on political ideology (liberalism vs. conservatism) and its association with personal characteristics, cognitive processing style, and motivational interests. Jost's arguments and data are very compelling and will inspire consumer psychologists to do more research in the political domain. To enable this goal further, we complement the target article by focusing on partisanship, another major determinant of political judgments and decisions. Whereas political ideology refers to people being more liberal or conservative, partisanship refers to how strongly people identify with a specific political party (e.g., Republicans or Democrats). In reviewing the literature on partisanship, we concentrate on voting behaviors and attitudes, an area not addressed by Jost, but of great importance for consumer psychologists given the large expenditures on political advertising. Adding to Jost's discussion of the link between political ideology and systematic processing, we examine the interplay between these two constructs and partisanship.
Why sexual assaults and car accidents are associated with the consumption of alcohol mixed with energy drinks (AMED) is still unclear. In a single study, we show that the label used to describe AMED cocktails can have causal non-pharmacological effects on consumers' perceived intoxication, attitudes, and behaviors. Young men who consumed a cocktail of fruit juice, vodka, and Red Bull felt more intoxicated, took more risks, were more sexually self-confident, but intended to wait longer before driving when the cocktail's label emphasized the presence of the energy drink (a “Vodka-Red Bull cocktail”) compared to when it did not (a “Vodka” or “Exotic fruits” cocktail). Speaking to the process underlying these placebo effects, we found no moderation of experience but a strong interaction with expectations: These effects were stronger for people who believe that energy drinks boost alcohol intoxication and who believe that intoxication increases impulsiveness, reduces sexual inhibition, and weakens reflexes. These findings have implications for understanding marketing placebo effects and for the pressing debate on the regulation of the marketing of energy drinks.
Firms can save considerable money if consumers conserve resources (e.g., if hotel patrons turn off the lights when leaving the room, restaurants patrons use fewer paper napkins, or airline passengers clean up after themselves). In two studies conducted in real-world hotels, the authors show that consumers’ conservation behavior is affected by the extent to which consumers perceive the firm as being green. Furthermore, consumer perceptions of firms’ greenness and consumer conservation behavior depend on (a) whether the firm requests them to conserve resources, (b) the firm’s own commitment to the environment, and (c) the firm’s price image. Additionally, firm requests to consumers to save resources can create consumer reactance and can backfire when firms themselves do not engage in visible costly environmental efforts. Such reactance is more likely for firms with a high price image. Finally, the authors show that by spending a little money to signal environmental commitment, firms can save even more money through consumers’ conservation of resources, resulting in wins for the firm, the consumer, and the environment.
Five experiments show that less physical involvement in obtaining food leads to less healthy food choices. We find that when participants are given the choice of whether or not to consume snacks that they perceive as relatively unhealthy, they have a greater inclination to consume these snacks when less (versus more) physical involvement is required to help themselves to the food; this is not the case for snacks that they perceive as relatively healthy. Further, when participants are given the opportunity to choose their portion size, they select larger portions of unhealthy foods when less (versus more) physical involvement is required to help themselves to the food; again, this is not the case for healthy foods. We suggest that this behavior occurs because being less physically involved in serving one’s food allows participants to reject responsibility for unhealthy eating and thus to feel better about themselves following indulgent consumption. These findings add to the research on consumers’ self-serving attributions and to the growing literature on factors that nudge consumers towards healthier eating decisions.
Ellie Kyung, Manoj Thomas and Aradhna Krishna (2017), “When Bigger Is Better (and When It Is Not): Implicit Bias in Numeric Judgments”, Journal of Consumer Research, 44, 62-78.
Numeric ratings for products can be presented using a bigger-is-better format (1=bad, 5=good) or a smaller-is-better format with reversed rating poles (1=good, 5=bad). Seven experiments document how implicit memory for the bigger-is-better format—where larger numbers typically connote something is better—can systematically bias consumers’ judgments without their awareness. This rating polarity effect is the result of proactive interference from culturally determined numerical associations in implicit memory and results in consumer judgments that are less sensitive to differences in numeric ratings. This is an implicit bias that manifests even when people are mindful and focused on the task and across a range of judgment types (auction bids, visual perception, purchase intent, willingness-to-pay). Implicating the role of reliance on implicit memory in this interference effect, the rating polarity effect is moderated by (i) cultural norms that define the implicit numerical association, (ii) construal mindsets that encourage reliance on implicit memory, and (iii) individual propensity to rely on implicit memory. This research identifies a new form of proactive interference for numerical associations, demonstrates how reliance on implicit memory can interfere with explicit memory, and how to attenuate such interference.
Packaging is a critical aspect of the marketing offer, with many implications for the multi-sensory customer experience. It can affect attention, comprehension of value, perception of product functionality, and also consumption, with important consequences for consumer experience and response. Thus, while it was once viewed as being useful only for product preservation and logistics, package design has evolved into a key marketing tool. We introduce the layered-packaging taxonomy that highlights new ways to think about product packaging. This taxonomy has two dimensions: the physicality dimension, which is composed of the outer–intermediate–inner packaging layers, and the functionality dimension, which is composed of the purchase–consumption packaging layers. We then build on this taxonomy to present an integrative conceptualization of the sensory aspects of package design as they affect key stages of customer experience.
Tatiana Sokolova and Aradhna Krishna (2016), “Take it or leave it: How choosing versus rejecting alternatives affects information processing”, Journal of Consumer Research, 43 (4): 614-635.
Prior research has shown that choice versus rejection decisions can cause preference reversals by changing the importance of negative and preference-inconsistent attributes. We demonstrate that people are also more likely to rely on deliberative processing in rejection (vs. choice) tasks. We test our conceptualization across seven experiments. We replicate results from prior research in the choice task; and then show how these results change when a rejection task is used. Study 1A uses the Asian disease problem scenario to demonstrate that rejection makes people less susceptible to framing effects. A hypothetical Study 1B and an incentive-compatible Study 1C replicate the moderating effect of task type on framing effects in the domain of financial decision-making. Studies 2 and 3 examine the effect of task type in the contexts of online reviews (Study 2) and complex product choices (Study 3) – they show that people make more rational and objectively superior decisions in rejection. Studies 4 and 5 tap into the process underlying the effect of task type (choice vs. rejection). We demonstrate that a rejection task produces decisions similar to those observed in a choice task when decision-makers are cognitively depleted (Study 4), or encouraged to rely on their feelings (Study 5).
People nowadays order food using a variety of computer devices, such as desktops, laptops, and mobile phones. Even in restaurants, patrons can place orders on computer screens. Can the interface a consumer uses affect her choice of food? We focus on the “direct touch” aspect of touch interfaces, whereby users can touch the screen in an interactive manner. In a series of five studies, we show that a touch interface such as that provided by an iPad, compared to a non-touch interface such as that of a desktop computer with a mouse, facilitates the choice of an affect-laden alternative over a cognitively superior one—what we call the “Direct-Touch effect.” Our studies provide some mediational support that the Direct-Touch effect is driven by the enhanced mental simulation of product interaction with the more affective choice alternative on touch interfaces. We also test the moderator of this effect. We obtain consistent results using multiple product pairs as stimuli. Our results have rich theoretical and managerial implications.
Krishna, Aradhna (2016), “Another Spotlight on Spotlight: Understanding, Conducting and Reporting”, Journal of Consumer Psychology, Lead article, 26 (July), 315-324.
There has been a remarkable increase in the use of spotlight analysis to examine any interactive effect between an independent variable and a continuous moderator. Most of the spotlight analyses have been conducted at one standard deviation above and below the mean value of the moderator, even when alternate methods are more appropriate. Additionally, many spotlight analyses are not conducted correctly. More importantly, results for spotlight analyses are reported in a manner that makes it virtually impossible for mistakes to be detected. This article focuses on "understanding", "conducting" and "reporting" spotlight analyses. By posing questions for the reader, it highlights some common mistakes made when doing spotlight analysis, and explains why confusion often arises. Then, it provides an easy to understand way to do spotlight analysis for some popular contexts. Alternatives to spotlight analysis are also briefly discussed. Finally, it suggests how to report results for spotlight analysis and for the alternatives. Pointing out recurrent mistakes should prevent perpetuation of misleading practices. Similarly, reporting essential details of the analyses should prevent mistakes from going undetected.
Krishna, Aradhna, Luca Cian, and Tatiana Sokolova (2016), "The Power of Sensory Marketing in Advertising," Current Opinion in Psychology, 10 (August), 142-147.
This article discusses the role of sensory marketing in driving advertisement effectiveness. First focusing on vision, we discuss the effect of mental simulation and mental imagery evoked by ad visuals on ad effectiveness. Second, we review findings on gustation, zooming in on the effect of multi-sensory stimulation on taste perceptions. Third, we elaborate on the role of actual and imagined touch in shaping consumer evaluations and behaviors. Fourth, we discuss olfaction as a driver of ad recall and responses to ads. Finally, we review the role of auditory sense in advertising, focusing on the effect of music on consumers’ memory for and evaluations of ads. Directions for future research in the domain of sensory marketing and product advertising are discussed.
Cian, Luca, Aradhna Krishna and Norbert Schwarz (2016), "Positioning Rationality and Emotion: Rationality Is Up and Emotion Is Down", Journal of Consumer Research, 42 (4), 632-651.
Emotion and rationality are fundamental elements of human life. They are abstract concepts, often difficult to define and grasp. Thus throughout the history of Western society, the head and the heart, concrete and identifiable elements, have been used as symbols of rationality and emotion. Drawing on the conceptual metaphor framework, we propose that people understand the abstract concepts of rationality and emotion using knowledge of a more concrete concept—the vertical difference between the head and heart. In six studies, we show a deep-seated conceptual metaphorical relationship linking rationality with “up” or “higher” and emotion with “down” or “lower.” We show that the association between verticality and rationality/emotion affects how consumers perceive information and thereby has downstream consequences on attitudes and preferences. We find the association to be most influential when consumers are unaware of it and when it applies to an unfamiliar stimulus. Because all visual formats—from the printed page to screens on a television, computer, or smartphone—entail a vertical placement, this association has important managerial implications. Our studies implement multiple methodologies and technologies and use manipulations of logos, websites, food advertisements, and political slogans.
Eda Sayin, Aradhna Krishna, Caroline Ardelet, Gwenaëlle Briand Decré, Alain Goudey (2015), "“Sound and safe”: The effect of ambient sound on the perceived safety of public spaces", International Journal of Research in Marketing, 32 (4), 343-353.
The amount of crime to which individuals are exposed on a daily basis is growing, resulting in increased anxiety about being alone in some public places. Fear of crime usually results in avoidance of places that are perceived to be unsafe, and such avoidance can have negative financial consequences. What can be done to reduce fear in relatively safe public places that are nevertheless perceived as being unsafe? In this paper, we explore the effect of auditory input (type of ambient sound) on perceived social presence and one's feeling-of-safety in public spaces such as car parks and metro stations. In one field study and four laboratory studies, we demonstrate that different ambient sounds convey social presence to a different degree. When perceived social presence is higher and positive, the feeling-of-safety is also higher. Additionally, we show that an increase in perceived safety has a positive effect on consumers' satisfaction with the public area and even raises their willingness to purchase a monthly membership card for the public area. Furthermore, the effect of ambient sound on such consumer responses is serially mediated by perceived social presence and feeling-of-safety.
Embarrassment has been defined as a social emotion that occurs due to the violation of a social norm in public, which is appraised by others (what we call “public embarrassment”). We propose that embarrassment can also be felt when one violates a social norm in private, or when one appraises oneself and violates one’s self-concept (“private embarrassment”). We develop a typology of embarrassment with two underlying dimensions – social context (transgression in-public or in-private) and mechanism (appraisal by others or by the self). Of the four resulting categories, one fits with the dominant “social” view of embarrassment, whereas the other three have aspects of privacy. We generate triggers for public and private embarrassment and demonstrate their similarities in study 1. Study 2 (buying an incontinence drug) and study 3 (buying Viagra for impotence versus pleasure) replicate these similarities, and also exhibit differences in the experience of public and private embarrassment through accompanying physiological reactions, action tendencies, and behavioral consequences. Our aim is to expand the scope of embarrassment research to include private contexts and self-appraisal.
We propose that static visuals can evoke a perception of movement, i.e., dynamic imagery, and thereby impact consumer engagement and attitudes. Focusing on brand logos as the static visual element, we measure the perceived movement evoked by the logo. We demonstrate that the evoked dynamic imagery affects the level of consumer engagement with the brand logo. We measure consumer engagement both through self-report measures, as well as through eye-tracking technology. We find that engagement affects consumer attitudes toward the brand. We also show that the perceived movement – engagement – attitude effect is moderated by the congruence between perceived movement and brand characteristics. Our findings suggest that dynamic imagery is an important aspect of logo design, and if used carefully, can enhance brand attitudes.
The concept of olfactory imagery is introduced and the conditions under which imagining what a food smells like (referred to here as “smellizing” it) impacts consumer response are explored. Consumer response is measured by: salivation change (studies 1 and 2), actual food consumption (study 3), and self-reported desire to eat (study 4). The results show that imagined odors can enhance consumer response but only when the consumer creates a vivid visual mental representation of the odor referent (the object emitting the odor). The results demonstrate the interactive effects of olfactory and visual imagery in generating approach behaviors to food cues in advertisements.
Krishna, Aradhna and Norbert Schwarz (2014), “Sensory Marketing, Embodiment, and Grounded Cognition: Implications for Consumer Behavior”, Journal of Consumer Psychology, 24 (2), 158-298.
There has been a recent swell of interest in marketing as well as psychology pertaining to the role of sensory experiences in judgment and decision making. Within marketing, the field of sensory marketing has developed, which explores the role of the senses in consumer behavior. In psychology, the dominant computer metaphor of information processing has been challenged by researchers demonstrating various manners in which mental activity is grounded in sensory experience. These findings are difficult to explain using the amodal model of the human mind. In this introduction, we first delineate key assumptions of the information processing paradigm and then discuss some of the key conceptual challenges posed by the research generally appearing under the titles of embodiment, grounded cognition, or sensory marketing. We then address the use of bodily feelings as a source of information; next, we turn to the role of context sensitive perception, imagery, and simulation in consumer behavior, and finally discuss the role of metaphors. Through this discourse, we note the contributions to the present special issue as applicable.
This research examines how oral haptics (due to hardness/softness or roughness/smoothness) related to foods influence mastication (i.e., degree of chewing) and orosensory perception (i.e., orally perceived fattiness), which in turn influence calorie estimation, subsequent food choices, and overall consumption volume. The results of five experimental studies show that, consistent with theories related to mastication and orosensory perception, oral haptics related to soft (vs. hard) and smooth (vs. rough) foods lead to higher calorie estimations. This “oral haptics–calorie estimation” (OHCE) effect is driven by the lower mastication effort and the higher orosensory perception for soft (vs. hard) and smooth (vs. rough) foods. Further, the OHCE effect has downstream behavioral outcomes in terms of influencing subsequent food choices between healthy versus unhealthy options as well as overall consumption volume. Moreover, mindful calorie estimation moderates the effects of oral haptics on consumption volume.
Lynch, John G. Jr., Joseph W. Alba, Aradhna Krishna, Vicki G. Morwitz and Zeynep Gürhan-Canli (2012), "Knowledge creation in consumer research: Multiple routes, multiple criteria", Journal of Consumer Psychology, 22 (4), 374-485.
The modal scientific approach in consumer research is to deduce hypotheses from existing theory about relationships between theoretic constructs, test those relationships experimentally, and then show “process” evidence via moderation and mediation. This approach has its advantages, but other styles of research also have much to offer. We distinguish among alternative research styles in terms of their philosophical orientation (theory-driven vs. phenomenon-driven) and their intended contribution (understanding a substantive phenomenon vs. building or expanding theory). Our basic premise is that authors who deviate from the dominant paradigm are hindered by reviewers who apply an unvarying set of evaluative criteria. We discuss the merits of different styles of research and suggest appropriate evaluative criteria for each.
Aydinoglu, Nilufer and Aradhna Krishna (2012), “Imagining Thin: Why Vanity Sizing Works”, Journal of Consumer Psychology, 22 (4), 565-572.
Vanity sizing, the practice of clothing manufacturers whereby smaller size labels are used on clothes than the clothes actually are, has become very common. Apparently, it helps sell clothes—women prefer small size clothing labels to large ones. We propose and demonstrate that smaller size labels evoke more positive self-related mental imagery. Thus, consumers imagine themselves more positively (thinner) with a vanity sized size-6 pant versus a size-8 pant. We also show that appearance self-esteem moderates the (mediating) effect of imagery on vanity sizing effectiveness—while vanity sizing evokes more positive mental imagery for both low and high appearance self-esteem individuals, the effect of the positive imagery on clothing preference is significant (only) for people with low appearance self-esteem, supported by the theory of compensatory self-enhancement. Our suggestion of simple marketing communications affecting valence of imagery and consequent product evaluation has implications for many other marketing domains.
Wang, Yu and Aradhna Krishna (2012), “Enticing for me but Unfair to Her: Can Targeted Pricing Evoke Socially Conscious Behavior?”, Journal of Consumer Psychology, 22 (3), 433-442.
Prior research shows that consumers stop purchasing from firms that treat them badly. In this research we show that consumers also resist firms that treat other consumers badly while favoring them. In three experiments, we demonstrate such social consciousness in the context of targeted pricing, where firms offer lower prices to new (versus old) customers. A significant proportion of consumers in our experiments give up money to resist the price-discriminating firm, especially when the discrimination is more salient or is not justified. Further, perceived unfairness mediates the relationship between the salience and justification of the pricing practice and consumer resistance.
I define “sensory marketing” as “marketing that engages the consumers' senses and affects their perception, judgment and behavior.” From a managerial perspective, sensory marketing can be used to create subconscious triggers that characterize consumer perceptions of abstract notions of the product (e.g., its sophistication or quality). Given the gamut of explicit marketing appeals made to consumers every day, subconscious triggers which appeal to the basic senses may be a more efficient way to engage consumers. Also, these sensory triggers may result in consumers' self-generation of (desirable) brand attributes, rather than those verbally provided by the advertiser. The understanding of these sensory triggers implies an understanding of sensation and perception as it applies to consumer behavior—this is the research perspective of sensory marketing. This review article presents an overview of research on sensory perception. The review also points out areas where little research has been done, so that each additional paper has a greater chance of making a bigger difference and sparking further research. It is quite apparent from the review that there still remains tremendous need for research within the domain of sensory marketing—research that can be very impactful.
Elder, Ryan and Aradhna Krishna (2012), "The 'Visual Depiction Effect' in Advertising: Facilitating Embodied Mental Simulation Through Product Orientation", Journal of Consumer Research, 38 (6), 988-1003.
This research demonstrates that visual product depictions within advertisements, such as the subtle manipulation of orienting a product toward a participant’s dominant hand, facilitate mental simulation that evokes motor responses. We propose that viewing an object can lead to similar behavioral consequences as interacting with the object since our minds mentally simulate the experience. Four studies show that visually depicting a product that facilitates more (vs. less) embodied mental simulation results in heightened purchase intentions. The studies support our proposed embodied mental simulation account. For instance, occupying the perceptual resources required for embodied mental simulation attenuates the impact of visual product depiction on purchase intentions. For negatively valenced products, facilitation of embodied mental simulation decreases purchase intentions.
Krishna, Aradhna (2011), "Can Supporting a Cause Decrease Donations and Happiness?: The Cause Marketing Paradox", Journal of Consumer Psychology, 21 (3), 338-345.
(Cited in Chronicle of Philanthropy, Media Post, Minneapolis Star Tribune, The Association of Fundraising Professionals Information Exchange).
In two laboratory and one pilot field study, we demonstrate that cause marketing (CM), whereby firms link products with a cause and share proceeds with it, reduces charitable giving by consumers, even when it is costless to the consumer to buy the CM product (versus not); further, instead of increasing total contribution to the cause, it can decrease it. Consumers appear to realize that participating in cause marketing is inherently more selfish than direct charitable donation, and are less happy if they substitute cause marketing for charitable giving. Our results suggest that egoistic and empathetic altruism may have different effects on happiness.
Morrin, Maureen, Aradhna Krishna and May Lwin (2011), "Is scent-enhanced memory immune to retroactive interference?", Journal of Consumer Psychology, 21 (3), 354-361.
Research shows that scent enhances memory for associated information. Current debate centers around scent's immunity to “retroactive interference,” i.e., reduced memory for earlier-learned information after exposure to additional, subsequently-learned information. This paper demonstrates that scent-enhanced memory is indeed prone to retroactive interference, but that some of the information lost is restored using a scent-based retrieval cue. Two process explanations for interference effects are proposed, with the evidence providing more support for an inhibition rather than a response competition explanation. The results enhance our understanding of the encoding and retrieval of olfactory information from long-term memory, and reasons why interference occurs.
Yuan, Hong and Aradhna Krishna (2011), "Price-Matching Guarantees with Endogenous Search: A Market Experiment Approach", Journal of Retailing, 87 (2), 182-193.
Price-matching guarantees are commonly used by sellers as promises to match the lowest price for an item that a customer can find elsewhere. In this paper, we use a market experiment approach to examine buyer search as well as sellers’ pricing decisions in the presence versus absence of price-matching guarantees (PMGs). We use student subjects as well as real consumers in an interactive laboratory setting, trading with each other as buyers and sellers. Our findings from two experiments indicate that when searchers’ demand is more elastic than that of non-searchers, PMGs can result in more intense price competition, even when sellers are symmetric. Price-matching sellers benefit from converting more consumers into searchers who buy a larger quantity at a lower price. The lower (average) market prices also benefit buyers. These implications should be of great interest to researchers, practitioners, and public policymakers.
Aydinoglu, Nilufer and Aradhna Krishna (2011), "Guiltless Gluttony: The Asymmetric Effect of Size Labels on Size Perceptions and Consumption", Journal of Consumer Research, 37 (6), 1095-1112.
(Discussed in Time Magazine Healthland, Globe and Mail, Science Daily).
Size labels adopted by food vendors can have a major impact on size judgments and consumption. In forming size judgments, consumers integrate the actual size information from the stimuli with the semantic cue from the size label. Size labels influence not only size perception and actual consumption, they also affect perceived consumption. Size labels can also result in relative perceived size reversals, so that consumers deem a smaller package to be bigger than a larger one. Further, consumers are more likely to believe a label that professes an item to be smaller (vs. larger) in the size range associated with that item. This asymmetric effect of size labels can result in larger consumption without the consumer even being aware of it (“guiltless gluttony”).
Hall, Joseph, Praveen Kopalle and Aradhna Krishna (2010), "Retailers' Dynamic Pricing and Ordering Decisions: Category Management versus Brand-by-Brand Approaches", Journal of Retailing, 86 (2), 172-183.
This paper provides a framework for retailer pricing and ordering decisions in a dynamic category management setting. In this regard, the key contributions of this paper are as follows. First, we develop a multi-brand ordering and pricing model that endogenizes retailer forward buying and maximizes profitability for the category. The model considers (i) manufacturer trade deals to retailers, (ii) ordering costs incurred by the retailer, (iii) retailer forward buying behavior, and (iv) both own- and cross-price effects of all the brands in the category. Second, we use this model to compare differences in ordering and pricing decisions, and in profits, resulting from using a category management versus a brand-by-brand management approach. Our approach allows us to derive implications in a dynamic setting about the impact of interdependence among the brands upon decisions on pass-through of trade deals and retailer order quantity. We show that category management results in noticeably higher profits versus brand-by-brand and cost-plus (markup) approaches. Further, our results suggest that an interaction between a brand’s own-price effect and its cross-price effect emerges. If the cross-price effect for a brand is low – that is, the brand takes away relatively few sales from the other brands – the retail pass-through should increase with that brand’s own-price effect. On the other hand, when the cross-price effect is high, the retail pass-through decreases with the brand’s own-price effect.
Krishna, Aradhna, Ryan S. Elder and Cindy Caldara (2010), "Feminine to smell but masculine to touch? Multisensory congruence and its effect on the aesthetic experience", Journal of Consumer Psychology, 20 (4), 410-418.
We draw upon literature examining cross-modal sensory interactions and congruence to explore the impact of smell on touch. In line with our predictions, two experiments show that smell can impact touch in meaningful ways. Specifically, we show that multisensory semantic congruence between smell and touch properties of a stimulus enhances haptic perception and product evaluation. We explore this relationship in the context of two properties of touch, namely texture and temperature, and demonstrate that both smell and touch can have semantic associations, which can affect haptic perception and product evaluation depending on whether they match or not. In study 1, we focus on the semantic association of smell and touch (texture) with gender and in study 2 with temperature. Our results extend prior work on smell and touch within consumer behavior, and further contribute to emerging literature on multisensory interactions.
Lwin, May, Maureen Morrin, and Aradhna Krishna (2010), "Exploring the Superadditive Effects of Scent and Pictures on Verbal Recall: An Extension of Dual Coding Theory", Journal of Consumer Psychology, 20 (3), 317-326.
This research extends the dual coding theory of memory retrieval (Paivio 1969, 2007) beyond its traditional focus on verbal and pictorial information to olfactory information. We manipulate the presence or absence of olfactory and pictorial stimuli at the time of encoding (study 1) or retrieval (study 2) and measure the impact on verbal recall. After a time delay, scent enhances recall of verbal information, and scent-based retrieval cues potentiate the facilitative effect of pictures on recall. These results cannot be attributed merely to increased elaboration at the time of exposure.
Krishna, Aradhna, May Lwin and Maureen Morrin (2010), "Product Scent and Memory", Journal of Consumer Research, 37 (1), 57-67.
Scent research has focused primarily on the effects of ambient scent on consumer evaluations. We focus instead on the effects of product scent on consumer memories. For instance, if a pencil or a facial tissue is imbued with scent (vs. not), recall for the brand’s other attributes increases significantly—with the effects lasting as much as 2 weeks after exposure. We also find that product scent is more effective than ambient scent at enhancing memory for product information. We suggest that this may be because, with product (ambient) scent, scent-related associations are focused on a single object (are diffused across multiple objects) in the environment. In support, we find that the memory effects are driven by the number of product/scent-related associations stored in long-term memory. The results suggest that, although ambient scent has received the bulk of attention from researchers and managers in recent years, greater focus on product scent is warranted.
We propose that advertisement (ad) content for food products can affect taste perception by affecting sensory cognitions. Specifically, we show that multi-sensory ads result in higher taste perceptions than ads focusing on taste alone, with this result being mediated by the excess of positive over negative sensory thoughts. Since the ad effect is thoughts-driven or cognitive, restricting cognitive resources (imposing cognitive load) attenuates the enhancing effect of the multiple-sense ad. Our results are exhibited across three experiments and have many implications for cognition and sensory perception research within consumer behavior, as well as several practical implications.
The authors examine incumbent retailers’ reactions to a Wal-Mart entry and the impact of these reactions on the retailers’ sales. They compile a unique dataset which represents a natural experiment consisting of incumbent supermarket, drug, and mass stores in the vicinity of seven Wal-Mart entries and control stores not exposed to the entries. The dataset includes weekly store movement data for 46 product categories before and after each entry and allows them to measure reactions and sales outcomes using a before-and-after-with-control-group analysis. They find that, overall, incumbents suffer significant sales losses due to Wal-Mart entry, but there is substantial variation across retail formats, stores, and categories both in incumbent reactions and in their sales outcomes. Moreover, they find that a retailer’s sales outcomes are significantly affected by its reactions, and the relationship between reactions and sales outcomes varies across retail formats. These findings provide valuable insights on how retailers in different formats can adjust their marketing mix to mitigate the impact of Wal-Mart entry.
The number of firms carrying a cause-related product has significantly increased in recent years. We consider a duopoly model of competition between firms in two products to determine which products a firm will link to a cause. We first test the behavioral underpinnings of our model in two laboratory experiments to demonstrate the existence of both a direct utility benefit to consumers from cause marketing (CM) and a spillover benefit onto other products in the portfolio. Linking one product in a product portfolio to a cause can therefore increase sales both of that product and, via a spillover effect, of other products in the firm's portfolio. We construct a CM game in which each firm chooses which products, if any, to place on CM. In the absence of a spillover benefit, a firm places a product on CM if and only if it can increase its price by enough to compensate for the cost of CM. Thus, in equilibrium, firms either have both products or neither product on CM. However, with the introduction of a spillover benefit to the second product, this result changes. We show that if a single firm in the market links only one product to a cause, it can raise prices on both products and earn a higher profit. We assume each firm has an advantage in one product and show that there is an equilibrium in which each firm links only its disadvantaged product to a cause. If the spillover effect is strong, there is a second equilibrium in which each firm links only its advantaged product to a cause. In each case, firms raise their prices on both products and earn higher profits than when neither firm engages in CM. We also show that a firm will never place its entire portfolio on CM. Overall, our work implies that, by carrying cause-related products, companies can not only improve their image in the public eye but also increase profits.
Amaldoss, Wilfred, Teck Ho, Aradhna Krishna, Kay-Yut Chen, Preyas Desai, Ganesh Iyer, Sanjay Jain, Noah Lim, John Morgan, Ryan Oprea and Joydeep Srivastava (2008), "Experiments on Strategic Choice and Markets", Marketing Letters, 19 (3-4), 417-429.
Much of experimental research in marketing has focused on individual choices. Yet in many contexts, the outcomes of one’s choices depend on the choices of others. Furthermore, the results obtained in individual decision making context may not be applicable to these strategic choices. In this paper, we discuss three avenues for further advancing our understanding of strategic choices. First, there is a need to develop theories about how people learn to play strategic games. Second, there is an opportunity to enrich standard economic models of strategic behavior by allowing for different types of bounded rationality and by relaxing assumptions about utility formulation. These new models can help us to more accurately predict strategic choices. Finally, future research can improve marketing practice by designing better mechanisms and validating them using experiments.
We examine the role of language choice in advertising to bilinguals in global markets. Our results reveal the existence of asymmetric language effects for multinational corporations (MNCs) versus local firms when operating in a foreign domain, such that the choice of advertising language affects advertising effectiveness for MNCs but not local companies. Also, different language formats (e.g., the local language vs. English or a mix of the two languages) are shown to vary in their advertising effectiveness for different types of products (luxuries vs. necessities). Our results indicate that language choice for advertisements is an important decision for MNCs. Also, MNCs cannot mimic local companies in their choice of advertising language.
Krishna, Aradhna, Rongrong Zhou and Shi Zhang (2008), "The Effect of Self-Construal on Spatial Judgments", Journal of Consumer Research, 35 (2), 337-348.
Much prior literature has focused on the effect of self-construal on social judgment. We highlight the role of self-construal in spatial judgments. We show that individuals with independent (vs. interdependent) self-construals are more prone to spatial judgment biases in tasks in which the context needs to be included in processing; they are less prone to spatial judgment biases in tasks in which the context needs to be excluded in processing. We show such spatial judgment effects when self-construal is operationalized by different cultures (study 1) and as a construct that shifts with situational primes (studies 2 and 3).
Krishna, Aradhna and Maureen Morrin (2008), "Does Touch Affect Taste? The Perceptual Transfer of Product Container Haptic Cues", Journal of Consumer Research, 34 (6), 807-818.
We develop a conceptual framework regarding the perceptual transfer of haptic or touch-related characteristics from product containers to judgments of the products themselves. Thus, the firmness of a cup in which water is served may affect consumers’ judgments of the water itself. This framework predicts that not all consumers are equally affected by such nondiagnostic haptic cues. Results from four studies show that consumers high in the autotelic need for touch (general liking for haptic input) are less affected by such nondiagnostic haptic cues compared to consumers low in the autotelic need for touch. The research has many implications for product and package design.
Yuan, Hong and Aradhna Krishna (2008), "Pricing of Mall Services in the presence of sales leakage", Journal of Retailing, 84 (1), 95-117.
For a shopping mall, sales leakage occurs when consumer purchases facilitated by the mall are finalized outside it. These sales include, for example, catalog orders filled at the leased premises in a physical mall. For an Internet mall, they include purchases consumers make on an online store’s website after learning about the store from an Internet mall website. While these sales are difficult to track in the physical mall, Internet malls like Yahoo can track them by placing cookies on consumers when they visit the mall. The challenge for a mall owner then is to design an appropriate pricing model which takes sales leakage into account. In fact, Yahoo currently uses an All-Revenue-Share Fee, with Yahoo collecting from online stores a share of all sales revenue, regardless of whether the purchase was made through the mall or directly from the store’s own URL. We explore this new All-Revenue-Share Fee model, compare it with the commonly used Fixed Fee model and the two-part tariff model, and identify the model with the highest profits for the mall under different conditions. We suggest that although an All-Revenue-Share Fee is appealing for Internet malls due to its ability to capture sales leakage directly, it may cause stores to refrain from joining the mall in certain circumstances. Thus, in certain situations charging a fixed monthly fee can actually be more profitable for the mall than the All-Revenue-Share Fee model. We also examine how mall and product category characteristics as well as market expansion affect the optimal pricing strategy. We show that a mall should price discriminate across product categories, not just by charging different amounts of fees, but by using different pricing models. Our research provides many managerial implications on how to price over time.
Krishna, Aradhna and Utku Unver (2008), "Improving the Efficiency of Course Bidding at Business Schools: Field and Laboratory Studies", Marketing Science, 27 (2), 262-282.
Registrars’ offices at most universities face the daunting task of allocating course seats to students. Because demand exceeds supply for many courses, course allocation needs to be done equitably and efficiently. Many schools use bidding systems in which student bids are used both to infer preferences over courses and to determine student priorities for courses. However, this dual role of bids can result in course allocations not being market outcomes, and in unnecessary efficiency loss, which can potentially be avoided with the use of an appropriate market mechanism. We report the result of field and laboratory studies that compare a typical course-bidding mechanism with the alternate Gale-Shapley Pareto-dominant market mechanism. Results from the field study (conducted at the Ross School of Business, University of Michigan) suggest that using the latter could vastly improve efficiency of course allocation systems while facilitating market outcomes. Laboratory experiments with greater design control confirm the superior efficiency of the Gale-Shapley mechanism. The paper tests theory that has important practical implications because it has the potential to affect the learning experience of very large numbers of students enrolled in educational institutions.
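The Gale-Shapley mechanism referenced above can be illustrated with a short sketch. This is a generic student-proposing deferred-acceptance routine for allocating seats under course capacities, not the exact mechanism fielded in the paper; the students, courses, priorities, and capacities below are hypothetical, and each student is matched to at most one course for simplicity (in the paper, course priorities are inferred from student bids).

```python
def deferred_acceptance(student_prefs, course_priority, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs:   {student: [courses in preference order]}
    course_priority: {course: [students, highest priority first]}
    capacity:        {course: number of seats}
    Returns {course: set of admitted students}.
    """
    # Precompute each course's priority rank for every student.
    rank = {c: {s: i for i, s in enumerate(pr)} for c, pr in course_priority.items()}
    next_choice = {s: 0 for s in student_prefs}  # index of next course to propose to
    held = {c: [] for c in capacity}             # tentatively admitted students
    free = list(student_prefs)                   # students not currently held anywhere
    while free:
        s = free.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                             # s has exhausted their list: unmatched
        c = prefs[next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])   # keep highest-priority students first
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())           # reject the lowest-priority overflow
    return {c: set(ss) for c, ss in held.items()}
```

Because rejections are only ever tentative, no student can gain by misreporting preferences, which is the strategy-proofness (for students) property the paper's field and laboratory comparisons turn on.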
Zhang, Jie and Aradhna Krishna (2007), "Brand-Level Effects of Stockkeeping Unit Reductions", Journal of Marketing Research, 44 (4), 545-559.
When retailers make product assortment changes by eliminating certain stockkeeping units (SKUs), how does this affect sales of individual brands? This is the main question the authors address in this article. Using data from an online retailer that implemented a permanent systemwide SKU reduction (SR) program, the authors investigate how the program affected various components of purchase behavior for individual brands. They find substantial variations in the SR effects across brands, categories, and consumers. They explore possible drivers for these differences and find that higher-market-share, higher-priced, and more frequently promoted brands tend to gain share and that reduction in the number of sizes, reduction in the number of SKUs, and change in SKU share in the category are important in affecting change in a brand’s purchase share after the SR. They also find that SRs lead to an increase in category purchase incidence and quantity for highly state-dependent consumers and frequent buyers but a decrease in category purchase and quantity for mildly state-dependent consumers and infrequent buyers. In addition, SRs tend to cause more changes in brand choice probabilities among consumers of lower state dependence and higher price and promotion sensitivity. These findings are of importance both to retailers wanting to make product assortment changes and to manufacturers affected by them.
Krishna, Aradhna, Fred Feinberg and John Z. Zhang (2007), "Pricing Power and Selective Versus Across-the-Board Prices Increases", Management Science, 53 (9), 1407-1423.
Firms in many industries experience protracted periods of pricing power, the ability to successfully enact price increases. In these situations, firms must decide not only whether to raise prices, but to whom. Specifically, in a competitive context, they must determine whether it is more profitable to increase prices across-the-board or to a specific segment of their customer base. While selective price decreases are ubiquitous in practice (e.g., better deals to potential new customers by phone carriers; better deals to current customers by various magazines), to our knowledge selective price increases are relatively rare. We illustrate the benefits of targeted price increases, and, as such, we expand the repertoire of firms' promotional policies. To that end, we explore a scenario where two competing firms must decide whether to increase prices to the entire market or only to a specific segment. Under targeted price increases (TPIs), some consumers are selectively offered an unchanged price while others are subject to price increases; the unchanged price can be offered to Loyals (those who bought from the firm in the previous period) or to Switchers (those who did not). The effects of TPIs are estimated through a laboratory experiment and an associated stochastic model, each allowing for both rational (Loyalty, Switching) and behaviorist (Betrayal, Jealousy) effects. We find that TPIs can indeed yield beneficial results (greater retention of Loyals or greater attraction of Switchers) and greater profits in certain circumstances. Results for TPIs are additionally benchmarked against those for targeted price decreases and are found to differ. The range of effects stemming from the experiment can be used in a competitive analysis to yield equilibrium strategies for the two firms.
In this case, we find that, depending on the magnitude of the price increase, the market shares of the two firms, and price knowledge across consumer segments, a firm may wish to embrace targeted price increases in some situations, to institute across-the-board price increases in others, and to not enact any price increases in still others. We show that a firm can sacrifice considerable profit if it settles on a suboptimal pricing strategy (e.g., wrongly instituting an across-the-board increase), favors the wrong segment (e.g., Switchers instead of Loyals), or ignores "behaviorist" effects (Betrayal or Jealousy).
Krishna, Aradhna and Yu Wang (2007), "The Relationship Between Top Trading Cycles and Top Trading Cycles and Chains Mechanisms", Journal of Economic Theory, 132 (1), 539-547.
In this paper, we show that there is a relationship between two important matching mechanisms: the Top Trading Cycles mechanism (TTC mechanism proposed by Abdulkadiroglu and Sonmez, 1999) and the Top Trading Cycles and Chains mechanism (TTCC mechanism proposed by Roth, Sonmez, and Unver, 2004). Our main result is that when a specific chain selection rule proposed by Roth et al. is used, these two mechanisms are equivalent. While the equivalence is relevant for one specific case of the TTCC mechanism, it is a particularly interesting case since it is the only version identified by Roth et al. to be both Pareto-efficient and strategy-proof.
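For readers unfamiliar with the first of the two mechanisms, a minimal sketch of the Top Trading Cycles procedure may help. This is the generic textbook version for a simple housing market (each agent initially owns one object and ranks all objects), not code from either paper; the agents, objects, and preference lists used below are hypothetical.

```python
def top_trading_cycles(prefs, owner):
    """Top Trading Cycles (TTC) for a housing market.

    prefs: {agent: [objects in preference order]}
    owner: {object: agent who initially owns it}
    Returns {agent: object assigned}.
    """
    assignment = {}
    remaining = set(prefs)   # agents not yet assigned
    available = set(owner)   # objects not yet assigned
    while remaining:
        # Each remaining agent points to the owner of its best available object.
        points_to = {}
        for a in remaining:
            best = next(o for o in prefs[a] if o in available)
            points_to[a] = (owner[best], best)
        # Follow pointers from an arbitrary agent until a cycle appears
        # (one must, since the pointer graph on remaining agents is finite).
        seen, a = [], next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = points_to[a][0]
        cycle = seen[seen.index(a):]
        # Trade along the cycle: each agent in it receives its best object.
        for b in cycle:
            obj = points_to[b][1]
            assignment[b] = obj
            available.discard(obj)
        remaining -= set(cycle)
    return assignment
```

Note that an agent pointing to itself (because it already owns its top remaining choice) forms a cycle of length one and simply keeps its object, which is how TTC guarantees individual rationality.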
Wang, Yu and Aradhna Krishna (2006), "Time-Share Allocations: Theory and Experiment", Management Science, 52 (8), 1223-1238.
This paper focuses on the timeshare industry, where members own timeshare "weeks" and can exchange these weeks among themselves without a medium of exchange (such as money). Timeshare exchanges allow for the weeks to be redistributed among members to better match their preferences and thus increase efficiency. As such, the problem falls within the domain of matching problems, which have recently gained much attention in academia. We demonstrate theoretically that the two major timeshare exchange mechanisms used currently (the deposit-first mechanism and the request-first mechanism) can cause efficiency loss. We propose an alternate exchange mechanism, the top trading cycles chains and spacebank (TTCCS) mechanism, and show that it can increase the efficiency of the timeshare exchange market because TTCCS is Pareto efficient, individually rational, and strategy-proof. We test the three exchange mechanisms in laboratory experiments where we simulate exchange markets with networked "timeshare members." The results of the experiments are robust across four different environments that we construct and strongly support our theory. The research focuses on an industry not studied earlier within academia and extends theoretical work on mechanism design to cases where supply of resources is dynamic, but resources can be stored.
Krishna, Aradhna (2006), "The Interaction of Senses: The Effect of Vision and Touch on the Elongation Bias", Journal of Consumer Research, 32 (4), 557-566.
We highlight the role of interacting senses on consumer judgment. Specifically, we focus on the role of the visual and haptic (touch) senses on the elongation bias, which predicts that the taller of two equivolume objects will appear bigger. We show that sensory modality will affect the extent (and even direction) of the elongation bias: with visual cues alone and with bimodal visual and haptic cues (seeing and handling the objects), we obtain the elongation bias; however, with haptic cues alone (handling the objects blindfolded) and in bimodal judgments with visual load, we obtain a reversal of the elongation bias.
Krishna, Aradhna, Carolyn Yoon, Mary Wagner and Rashmi Adaval (2006), "The Effect of Extreme Price Frames on Reservation Prices", Journal of Consumer Psychology, 16 (2), 176-190.
We show that an extremely high-priced product featured among more moderately priced products within a catalog can increase the reservation price for a moderately priced target product as well as the category as a whole. We investigate how this increase is influenced by the degree of relatedness between the extreme-priced product and the target as well as the situational and temporal proximity (contiguity) in their presentation. Consistent with our conceptualization, we find that the presence of an extreme cue leads to greater changes in target reservation price when the extreme-priced referent and target are more related and are contiguously presented. Furthermore, the impact of an extreme-priced product's relatedness on reservation price appears to be greater when the contiguity between the extreme-priced product and the target product is high versus when it is low.
Krishna, Aradhna (2005), "How Big is Tall?", Forethought, Harvard Business Review, 83 (4), 18-19.
Brown, Christie and Aradhna Krishna (2005), "The Skeptical Shopper: A Metacognitive Account for the Effects of Default Options on Choice", Journal of Consumer Research, 31 (3), 529-539.
A default option is the choice alternative a consumer receives if he/she does not explicitly specify otherwise. In this article we argue that defaults can invoke a consumer's "marketplace metacognition," his/her social intelligence about marketplace behavior. This metacognitive account of defaults leads to different predictions than accounts based on cognitive limitations or endowment: in particular, it predicts the possibility of negative or "backfire" default effects. In two experiments, we demonstrate that the size and direction of the default effect depend on whether this social intelligence is invoked and how it changes the interpretation of the default.
Krishna, Aradhna and Joel Slemrod (2003), "Behavioral Public Finance: Tax Design as Price Presentation", International Tax and Public Finance, Policy Watch section, 10 (2), 189-203.
In this essay we review the evidence from marketing research about price presentation of consumer products and discuss how these lessons have been applied—consciously or unconsciously—in the design of the U.S. tax system. Our perspective is that, in most situations, the designers of the tax system attempt to minimize the perceived burden of any given amount of tax collections. We allow, though, that in certain situations an additional goal is to maximize the perceived burden of others. We also investigate how, when the objective is to encourage a particular activity, price presentation may enhance the achievement of that goal for a given amount of tax subsidy. We conclude by addressing the ethical and normative implications of price presentation in the tax system.
Feinberg, Fred, Aradhna Krishna and John Z. Zhang (2002), "Do We Care What Others Get?: A Behaviorist Approach to Targeted Promotions", Journal of Marketing Research, 39 (3), 277-291.
Increased access to individual customers and their purchase histories has led to a growth in targeted promotions, including the practice of offering different pricing policies to prospective, as opposed to current, customers. Prior research on targeted promotions has adopted a tenet of the standard economic theory of choice, whereby what a consumer chooses depends exclusively on the prices available to that consumer. In this article, the authors propose that consumer preference for firms is affected not just by prices the consumers themselves are offered but also by prices available to others. This departure from the conventional strong rationality approach to targeted promotion results in a decidedly different optimal policy. Through a laboratory experiment, calibration of a stochastic model, and game-theoretic analysis, the authors demonstrate that ignoring behaviorist effects exaggerates the importance of targeting switchers as opposed to loyals. This occurs, though with intriguing differences, even when only part of the market is aware of firms' differing promotional policies. The authors show that both the deal percentage and the proportion of aware consumers affect the optimal strategy of the firm. Furthermore, the authors find that offering lower prices to switchers may not be a sustainable practice in the long run, as information spreads and the proportion of aware consumers grows. The model cautions practitioners against overpromoting and/or promoting to the wrong segment and suggests avenues for improving the effectiveness of targeted promotional policies.
Krishna, Aradhna, Richard Briesch, Donald Lehmann and Hong Yuan (2002), "A Meta-Analysis of the Impact of Price Presentation on Perceived Savings", Journal of Retailing, 78 (2), 101-118.
Pricing is one of the most crucial determinants of sales. Besides the actual price, how the price offering is presented to consumers also affects consumer evaluation of the product offering. Many studies focus on “price framing,” i.e., how the offer is communicated to the consumer: Is the offered price given along with a reference price? Is the reference price plausible? Is a price deal communicated in dollar or percentage terms? Other studies focus on “situational effects,” e.g., whether the evaluation is for a national brand or a private brand, and whether it takes place in a discount store or a specialty store. In this article, a meta-analysis of 20 published articles in marketing examines the effects of price frames and situations on perceived savings. The results reveal many features that significantly influence perceived savings. For instance, while both the percent of deal and the amount of deal positively influence perceived deal savings, deal percent has more impact. Further, the presence of a regular price as an external reference price enhances the offer value of large plausible deals and implausible deals, but not of small plausible deals. Thus, high-value deals should announce the regular price, but low-value deals should not. Overall, the results have several useful insights for designing promotions.
Moreau, Page, Aradhna Krishna and Bari Harlam (2001), "The Manufacturer-Retailer-Consumer Triad: Differing Perceptions Regarding Price Promotions", Journal of Retailing, 77 (4), 547-569.
The effectiveness of any promotional strategy depends, in part, on how accurately channel members predict consumers’ perceptions of their promotional activity. However, empirical research on channel member predictions and their accuracy is virtually nonexistent. In this article we examine manufacturer and retailer beliefs about consumers’ (and each others’) perceptions of sales promotions and assess the accuracy of these predictions. Our findings indicate that manufacturers and retailers hold similar, but equally inaccurate views of consumers’ industry knowledge. When assessing consumers’ specific beliefs about different types of promotions, these channel members underestimate consumer knowledge. Their motivational knowledge, however, appears quite accurate whether predicting consumer or other channel member perceptions of motivations. The similarity of supplier and retailer knowledge bodes well for channel efficiency, yet limitations in their understanding of consumer knowledge about promotions may lead to weakness in channel marketing strategies.
Krider, Robert, Priya Raghubir and Aradhna Krishna (2001), "Pizza - Pi or Squared?: The Effect of Perceived Area on Price Perceptions", Marketing Science, 20 (4), 405-425.
Many product categories, from pizzas to real estate, present buyers with purchase decisions involving complex area judgments. Does a square look larger or smaller than a circle? How much smaller does a circle of 8-inch diameter look when compared to one with a 10-inch diameter? In this paper, we propose a psychophysical model of how consumers make area comparison judgments. The model involves consumers making effort-accuracy trade-offs that lead to heuristic processing of area judgments and systematic shape- and size-related biases. The model is based on four propositions: P1. Consumers make an initial comparison between two figures based on a single dimension; P2. The dimension of initial comparison—the primary dimension—is the one that is most salient to consumers, where salience is figure and context dependent; P3. Consumers insufficiently adjust an initial comparison using a secondary dimension, which we assume to be orthogonal to the primary dimension used for the initial comparison; and P4. The magnitude by which the initial comparison is adjusted is directly related to the relative salience of the secondary dimension versus the primary dimension. The model predicts that a single linear dimension inappropriately dominates the two-dimensional area comparison task and that contextual factors affect which linear dimension dominates the task. The relative use of the second dimension depends on its relative salience, which can be influenced in a variety of ways. The model extends the area estimation literature in cognitive psychology by exploring new biases in area estimation and is able to resolve controversial effects regarding which shape is perceived to be “bigger,” the square or the circle, by incorporating contextual factors into model specifications. A set of six studies—five laboratory experiments and one field experiment—systematically test model predictions. 
Study 1 is a process study that shows that when two dimensions are available to make an area comparison judgment, people choose one of those to be the primary dimension, with the other being the secondary dimension. Furthermore, it shows that the choice of the primary dimension is dependent on its relative salience that can be contextually manipulated via manner of visual presentation. Studies 2 and 3 show how the use of a diagonal versus the side of a square (contextually determined) can affect whether a square is perceived to be smaller or larger than a circle of the same area. Study 3 extends the investigation to the domain of the price people are willing to pay for “pizzas” of different shapes, presented differently. Study 4, a field study, demonstrates external validity by showing that purchase quantities are greater when a circular package is expected to contain less than a rectangular package of the same volume in a domain where consumption goal is constant (cream cheese with a bagel). Studies 5 and 6 examine ways in which one can increase the salience of the secondary dimension, in a size estimation task, i.e., judging the rate of increase of area. While Study 5 does so via contextual visual cues (incorporating lines that draw one's attention to the underused dimension), Study 6 does the same using semantic cues that direct attention to a single dimension (e.g., diameter) or the total area and comparing these with a visual presentation of the figure. Overall, results suggest that the manner in which information is presented affects the relative salience of dimensions used to judge areas, and can influence the price consumers are willing to pay. Underlining the external validity of these findings, container shape can significantly affect quantity purchased and overall sales. The paper highlights biases in area comparison judgments as a function of area shape and size. 
The model is parsimonious, demonstrates good predictive ability, and explains seemingly contradictory results in the cognitive psychology literature. Implications for pricing, product design, packaging, and retailing are suggested.
Zhang, John Z., Aradhna Krishna and Sanjay Dhar (2000), "The Optimal Choice of Promotion Vehicles: Front-loaded or Rear-loaded Incentives?", Management Science, 46 (3), 348-362.
We examine the key factors that influence a firm's decision whether to use front-loaded or rear-loaded incentives. When using price packs, direct mail coupons, FSI coupons or peel-off coupons, consumers obtain an immediate benefit upon purchase, or a front-loaded incentive. However, when buying products with in-pack coupons or products affiliated with loyalty programs, promotion incentives are obtained on the next purchase occasion or later, i.e., a rear-loaded incentive. Our analysis shows that the innate choice process of consumers in a market (variety-seeking or inertia) is an important determinant of the relative impact of front-loaded and rear-loaded promotions. While in both variety-seeking and inertial markets, the sales impact and the sales on discount are higher for front-loaded promotions than for rear-loaded promotions, from a profitability perspective, rear-loaded promotions may be better than front-loaded promotions. We show that in markets with high variety-seeking it is more profitable for a firm to rear-load, and in markets with high inertia it is more profitable to front-load. Model implications are verified using two empirical studies: (a) a longitudinal experiment (simulating markets with variety-seeking consumers and inertial consumers) and (b) market data on promotion usage. The data in both studies are consistent with the model's predictions.
Kopalle, Praveen, Aradhna Krishna, and Joao Assuncao (1999), "The Role of Market Expansion on Equilibrium Bundling Strategies", Managerial and Decision Economics, 20 (7), 365-377.
Research on optimal bundling strategy has primarily dealt with the case of a monopolist and suggests that mixed bundling be adopted, as it allows for price discrimination. The overwhelming majority of consumer products, however, operate in a competitive arena, so that an adequate description of the bundling phenomenon needs to take account of alternative competitive product offerings. A few researchers have examined the duopolistic case — two suppliers each offering a bundle composed of two complementary products. However, the collective results do not paint a consistent picture. For example, Economides (1993. Mixed bundling in duopoly. Working Paper, Stern School of Business, New York University, EC-93-29) shows that the sub-game perfect Nash equilibrium bundling strategy is to offer a mixed bundle. By contrast, Anderson and Leruth (1993. Why firms may prefer not to price discriminate via mixed bundling. International Journal of Industrial Organization 11: 49 – 61) show that the solution is to offer pure components. The results of Matutes and Regibeau (1992. Compatibility and bundling of complementary goods in a duopoly. Journal of Industrial Economics 40: 37 – 54) suggest that the bundling strategy depends on consumer reservation price: mixed bundling when it is low and pure components when it is high. This paper offers an analysis that reconciles these results by incorporating the moderating role of market expansion on equilibrium bundling strategies. Rendering comparable the conflicting results of such prior research requires selecting a methodology that not only sufficiently allows for their unique formal specifications, but which, in the current estimate, best captures empirical phenomena of broadest interest. The focus on market expansion suggests a model of the nested logit type (see Bucklin and Gupta 1992. Brand choice, purchase incidence, and segmentation: an integrated approach. Journal of Marketing Research 29: 201 – 215).
It is shown that the sub-game perfect Nash equilibrium bundling strategy in a duopoly depends on the scope for market expansion, i.e., as the scope for market expansion decreases, the equilibrium bundling strategy shifts from mixed bundling to pure components. It is also shown that pure bundling will not be an equilibrium strategy. Finally, a discussion of the results when the assumption of perfect complementarity is relaxed is provided.
Krishna, Aradhna and Z. John Zhang (1999), "Short- or Long-Duration Coupons: The Effect of the Expiration Date on the Profitability of Coupon Promotions", Management Science, 45 (8), 1041-1056.
United States firms collectively spend over $6.5 billion annually on coupon promotions and are becoming increasingly concerned with their profitability. FSI (free-standing-insert) data show that coupon duration varies across brands. In this paper, we show how coupon duration can affect coupon profitability. We also provide answers for some empirical observations on coupon duration. We explain, for example, why (i) coupon duration will vary across firms, such that large market share firms will give short-duration coupons and small market share firms will give long-duration coupons; (ii) longer coupon duration for one brand will increase redemption for coupons of that brand and of a competing brand; and (iii) coupon duration will affect coupon profitability.
Raghubir, Priya and Aradhna Krishna (1999), "Vital Dimensions in Volume Perception: Can the Eye Fool the Stomach?", Journal of Marketing Research, 36 (3), 313-326.
Given the number of volume judgments made by consumers, for example, deciding which package is larger and by how much, it is surprising that little research pertaining to volume perceptions has been done in marketing. In this article, the authors examine the interplay of expectations based on perceptual inputs versus experiences based on sensory input in the context of volume perceptions. Specifically, they examine biases in the perception of volume due to container shape. The height of the container emerges as a vital dimension that consumers appear to use as a simplifying visual heuristic to make a volume judgment. However, perceived consumption, contrary to perceived volume, is related inversely to height. This lowered perceived consumption is hypothesized and shown to increase actual consumption. A series of seven laboratory experiments programmatically test model predictions. Results show that perceived volume, perceived consumption, and actual consumption are related sequentially. Furthermore, the authors show that container shape affects preference, choice, and post-consumption satisfaction. The authors discuss theoretical implications for contrast effects when expectancies are disconfirmed, specifically as they relate to biases in visual information processing, and provide managerial implications of the results for package design, communication, and pricing.
Meyer, Robert, Tulin Erdem, Fred Feinberg, Itzhak Gilboa, Wesley Hutchinson, Aradhna Krishna, Steven Lippman, Carl Mela, Amit Pazgal, Drazen Prelec and Joel Steckel (1997), "Dynamic Influences on Individual Choice Behavior", Marketing Letters, 8 (3), 349-360.
Research examining the process of individual decision making over time is briefly reviewed. We focus on two major areas of work in choice dynamics: research that has examined how current choices are influenced by the history of previous choices, and newer work examining how choices may be made to exploit expectations about options available in the future. A central theme of the survey is that if a general understanding of choice dynamics is to emerge, it will come through the development of boundedly-rational models of dynamic problem solving that lie on the interface between economics and psychology.
Krishna, Aradhna and Priya Raghubir (1997), "The Effect of Line Configuration on Perceived Numerosity of Dotted Lines", Memory and Cognition, 25 (4), 492-507.
Estimates of the number of objects in a line are made in many different situations. This paper demonstrates that besides the actual number of dots, aspects of line configuration affect the perceived numerosity of dotted lines. Experiment 1 provides evidence that the highly studied "clutter effect" in distance perception research replicates in the numerosity domain, so that lines made up of more segments are perceived to contain more dots. Experiments 2-5 provide nomological validity for the recently proposed "direct distance" effect in distance perceptions by showing that numerosity perceptions are higher the greater the Euclidean length between the line end points, and by manipulating Euclidean length in three orthogonal ways: the relative length of segments (Experiment 2), the angle between segments (Experiment 3), and the general direction of segments (Experiment 4). Experiment 5 conceptually replicates the results of Experiments 2-4 utilizing stimuli-based versus memory-based judgments and a discrimination task. Experiments 6 and 7 extend the research on spatial perception by demonstrating that the use of Euclidean length as a source of information is inversely related to line width, with width varied through clutter (Experiment 6) and total line length (Experiment 7). Overall, the results demonstrate that the robustness of the Euclidean length effect is contingent on the salience of alternative spatial heuristics, specifically Euclidean width. Theoretical implications are discussed.
Raghubir, Priya and Aradhna Krishna (1996), "As the Crow Flies: Bias in Consumers' Map-Based Distance Judgments", Journal of Consumer Research, 23 (1), 26-39.
Consumers make distance judgments when they decide which store to visit or which route to take. However, these judgments may be prone to various spatial perception biases. While there is a rich literature on spatial perceptions in urban planning and environmental and cognitive psychology, there is little in the field of consumer behavior. In this article we introduce the topic of spatial perceptions as an area of research in marketing. We extend the literature on spatial perceptions by proposing that consumers use the direct distance between the endpoints of a path, or the distance "as the crow flies," as a source of information while making distance judgments: the shorter the direct distance, the shorter the distance estimate. We study two spatial features that affect direct distance: path angularity (i.e., the size of the angle between path segments) and path direction (i.e., whether the path retraces back or not). We further propose and demonstrate that the direct-distance bias is due to the perceptual salience of direct distance and is used by consumers in an automatic manner. Theoretical implications for the manner in which consumers process spatial information and the use of cognitive heuristics while making spatial judgments are discussed.
Krishna, Aradhna and Gita V. Johar (1996), "Consumer Perception of Deals: Biasing Effects of Varying Deal Prices", Journal of Experimental Psychology: Applied, 2 (3), 187-206.
Some brands in the market opt to offer a single "deal" price (e.g., Pepsi brand soft drink at $1.09 every alternate week), whereas others opt to offer 2 or more deal prices (e.g., Coca-Cola brand soft drink at $0.99 in Week 1 and $1.19 in Week 3). It was hypothesized that offering multiple deal prices is likely to result in underestimation of deal frequency and average deal price, which will bias the price consumers are willing to pay for the brand. Results from 3 laboratory experiments, a longitudinal experiment, and a survey support the hypotheses. In addition, consumers are likely to be willing to pay more for the brand when it is offered at 2 deal prices with a small difference compared with a single deal price. Implications of these findings for consumer welfare and pricing policy are discussed.
Harlam, Bari, Aradhna Krishna, Donald R. Lehmann and Carl Mela (1995), "Impact of Bundle Type, Price Framing and Familiarity on Purchase Intention for the Bundle", Journal of Business Research, 33 (1), 57-66.
Bundling of products is very prevalent in the marketplace. For example, travel packages include airfare, lodging, and a rental car. Considerable economic research has focused on the change in profits and consumer surplus that ensues if bundles are offered. There is relatively little research in marketing that deals with bundling, however. In this article we concentrate on some tactical issues of bundling, such as which types of products should be bundled, what price one can charge for the bundle, and how the price of the bundle should be presented to consumers to improve purchase intent. For example, we hypothesize that bundles composed of complements of equally priced goods will result in higher purchase intention. We also hypothesize that price increases will result in larger purchase intention changes than price decreases. Further, we expect that the presentation format for describing the price of the bundle will influence purchase intention in general, and depending on the price level of the bundle, different presentation formats will result in higher purchase intention. Finally, we hypothesize that purchase intention changes associated with different price levels will be higher for subjects who are familiar with the products than for subjects who are less familiar with the products. We used an interactive computer experiment conducted among 83 Master of Business Administration (MBA) students to test our hypotheses. Our findings suggest that: (1) bundles composed of complements have a higher purchase intent than bundles of similar or unrelated products, (2) consumers are more sensitive to a bundle price increase than to a bundle price decrease of equal amounts, (3) different presentation formats for describing the price of the bundle influence purchase intention, and (4) more familiar subjects respond to different presentations of equivalent bundles in different ways than less familiar subjects. 
We did not find any support for the hypothesis that bundles composed of similarly priced items have higher purchase intent than bundles composed of unequally priced products.
Krishna, Aradhna (1994), "The Impact of Dealing Patterns on Purchase Behavior", Marketing Science, 13 (4), 351-373.
We explore the effect of dealing patterns on consumer purchase behavior by developing a normative purchase quantity model that can incorporate any dealing pattern. The model adds to the stream of research on optimal purchasing policy by demonstrating how dealing patterns can be incorporated in a simple manner in dynamic programming models. Implications for purchase behavior are derived by employing the model in a numerical simulation in which time between deals is characterized by a Weibull distribution. The flexibility of the Weibull distribution enables us to establish how particular facets of the dealing distribution (e.g., certainty in deal timing, minimum time between deals) affect consumer behavior with respect to optimal purchase quantity, inventory, etc. One of the implications of the model is that the average quantity purchased on deal should be larger when there is greater certainty in deal timing. The model also shows that the average quantity purchased on deal should be larger when deals are spaced further apart, even though the buyer is presented with the same number of deals. We test certain model implications in a laboratory experiment and find actual behavior varying across dealing patterns in a manner consistent with model implications.
Krishna, Aradhna (1994), "The Effect of Deal Knowledge on Consumer Purchase Behavior", Journal of Marketing Research, 31 (1), 76-91.
Research has shown that there is heterogeneity in consumer knowledge of prices and deals. In addition, it has been found that buyers' purchase behavior can be influenced not only by the current price of a product, but also by what prices they expect in the future. The author builds a purchase quantity model to contrast normative behavior of consumers who have knowledge of future price deals with that of those who do not. Implications from the model are derived concerning consumer deal response for the consumer's preferred and less preferred brands. These implications show that normative purchase behavior is very different between consumers with and without knowledge of future deals. The model implies that consumers with knowledge of future deals could be more likely to purchase on low-value deals and deals on less preferred brands compared with consumers without knowledge of future deals. Another implication of interest is that the relative quantity purchased by consumers who have deal knowledge compared with those who do not depends on the time pattern of deals. The implications are supported in a laboratory experiment. The author finds that actual behavior varies depending on deal knowledge and is quite consistent with model predictions.
Krishna, Aradhna (1992), "The Normative Impact of Consumer Price Expectations for Multiple Brands on Consumer Purchase Behavior", Marketing Science, 11 (3), 266-286.
Empirical research indicates that some consumers form price expectations which may impact their purchase behavior. While literature in operations research has built purchase policy models incorporating uncertain price expectations, these models have been built for commodities. Consumers face an environment with multiple brands. In this paper, we develop a model that incorporates consumer preferences and price expectations for multiple brands as determinants of normative consumer purchase behavior. The model demonstrates how commodity purchase policy models recognizing price uncertainty can be adapted to the study of multi-brand markets. The model is used to analyze the normative impact of changes in price promotion policies and holding costs on individual purchase behavior. It is also used in a Monte-Carlo market simulation that illustrates some scenarios where a post-promotion dip is more or less evident, and provides an explanation for the nonexistent post-promotion dip.
Krishna, Aradhna and Robert W. Shoemaker (1992), "Estimating the Effects of Higher Coupon Face Values on the Timing of Redemptions, The Mix of Coupon Redeemers and Purchase Quantity", Psychology and Marketing, 9 (6), 453-467.
One of the key decisions a manager must make in designing a coupon promotion is to decide on the face value. In this study we examine the effects of higher face values on coupon redemption timing, category purchase timing, the mix of buyers who redeem the coupon, and purchase quantity. Data from a field experiment on coupon face values are used to test the hypotheses. A new method of measuring the effects of a coupon on category purchase timing is proposed. We find that coupons per se tend to advance category purchase timing, but higher face values do not increase the magnitude of this effect. Surprisingly, higher face values appear to increase redemption rates for both the prior nonbuyers and prior buyers of the brand in a similar way. However, higher face values have little effect on the package size purchased, the number of units purchased, or the total quantity (package size times units) purchased.
Krishna, Aradhna (1991), "Effect of Dealing Patterns on Consumer Perceptions of Deal Frequency and Willingness to Pay", Journal of Marketing Research, 28 (4), 441-451.
Research has shown that brands with higher deal frequency obtain a smaller market share gain on deal and have a lower expected price. However, the level of dealing must be perceived by consumers before it can affect consumer response to promotions. Hence, perception of deal frequency may affect consumer price perceptions and deal response much more strongly than the actual deal frequency. The author determines how consumer perceptions of deal frequency for a brand may be influenced by the dealing pattern of that brand and of other brands. She shows that the price consumers are willing to pay for a brand is correlated more highly with perceived deal frequency than with actual deal frequency. She also shows that the price consumers are willing to pay is correlated with the actual deal frequency of the brand for certain dealing patterns, but not for others.
Krishna, Aradhna, Imran C. Currim and Robert W. Shoemaker (1991), "Consumer Perceptions of Promotional Activity", Journal of Marketing, 55 (2), 4-16.
Several models of consumer response to promotions suggest that a current decision on brand and purchase quantity depends on the expected time until the next price reduction and the expected size of future reductions. In spite of the importance of expected deal frequency and expected deal price to a consumer's decision, relatively little empirical work has been reported on those topics. The authors investigate several aspects of consumer perceptions of deal frequency and deal prices. First, a conceptual model is presented to describe how consumers develop and use those perceptions. Second, results of an extensive survey are used to estimate the degree of consumer knowledge about deal frequency and deal prices. Third, hypotheses about which types of consumers have better knowledge of promotions are tested. Results from the survey indicate that many consumers are reasonably accurate about deal frequency and sale price. In addition, recall on deal frequency and sale price is higher for consumers with larger family sizes and those who read weekly fliers for items on sale, devote a higher percentage of product class purchases to the brand, and purchase the package size more frequently. It is lower for older buyers.
Bawa, Kapil, Jane T. Landwehr, and Aradhna Krishna (1989), "Consumer Response to Retailers' Marketing Environments: An Analysis of Coffee Purchase Data", Journal of Retailing, 65 (4), 471-495.
As a consequence of differences in positioning strategies, retail outlets for grocery products often differ in their "marketing environment" - that is, in the configuration of price, product, and promotional stimuli to which consumers are exposed in the store. The in-store marketing environment can be an important marketing tool in terms of its ability to influence consumers' purchase behavior and attract certain types of consumers. This study examines the association between the in-store marketing environment and certain characteristics of consumers' purchase behavior. The results indicate that consumers exposed to different environments exhibit significant differences in their brand loyalty, promotion sensitivity, price sensitivity, and response to new brands. These differences in behavior are found to be related to environmental attributes such as width of product assortment and promotional activity.
Krishna, Aradhna (2016), “The best way to stop normalizing hate crimes is to talk more about the people who act as allies”, Op-Ed in Quartz, December 12.
Krishna, Aradhna (2016), “Voters’ embarrassment and fear of social stigma messed with pollsters’ predictions”, Op-Ed in The Conversation, November 10.
Krishna, Aradhna (2016), “How to vote for president when you don’t like the candidates”, Op-Ed in The Conversation, September 29. Appeared also on The Washington Post, The Guardian, Quartz, The Wire, Salon and other publications (no. of reads > 126,000).
Krishna, Aradhna (2016), "Sensory Imagery for Design", The Psychology of Design: Creating Consumer Desire, Rajeev Batra, Diann Brei, and Colleen Seifert (Eds.), Routledge.
Elder, Ryan S. and Aradhna Krishna (2014), “Grasping the Grounded Nature of Mental Simulation”, In-Mind Magazine, 4 (20).
Krishna, Aradhna (2013), Customer Sense: How the 5 Senses Influence Buying Behavior, Palgrave Macmillan, NYC.
Krishna, Aradhna (2012), “Two questions to ask before buying pink”, Op-Ed in Detroit Free Press, October 30.
Krishna, Aradhna (2011), “The Right Way for Companies to Mix Donations and Marketing”, Op-Ed in Detroit Free Press, October 28.
Krishna, Aradhna (2011), "Philanthropy and marketing", Op-Ed in The Toronto Star, October 15, 2011.
Krishna, Aradhna (2011), "As I See It: Sensory Marketing", in Consumer Behavior: Buying, Having and Being (9th edition), Michael Solomon (Ed.), Prentice Hall, London.
Krishna, Aradhna (2011), "Price Deals", in Consumer Insights: Findings from Behavioral Research, Joseph Alba (Ed.), Marketing Science Institute, Boston.
Krishna, Aradhna (Ed.) (2009), Sensory Marketing: Research on the Sensuality of Consumers, Routledge, NYC.
Krishna, Aradhna (2009), "Introduction to Sensory Marketing", in Sensory Marketing: Research on the Sensuality of Consumers, Aradhna Krishna (Ed.), Routledge, NYC.
Krishna, Aradhna, and Ryan Elder (2009), "The Gist of Gustation: Taste, Food and Consumption", in Sensory Marketing: Research on the Sensuality of Consumers, Aradhna Krishna (Ed.), Routledge, NYC.
Aydinoglu, Nilufer, Aradhna Krishna and Brian Wansink (2009), "Do Size Labels have a Common Meaning across Consumers?", in Sensory Marketing: Research on the Sensuality of Consumers, Aradhna Krishna (Ed.), Routledge, NYC.
Krishna, Aradhna (2009), "Behavioral Responses to Pricing", in Handbook of Pricing Research in Marketing, Vithala Rao (Ed.), Edward Elgar Publishing, Northampton, MA.
Krishna, Aradhna (2008), "Regulate Deals to Prevent More Retail Tragedies", Op-Ed in Detroit News, December 9.
Krishna, Aradhna (2007), "Biases in Spatial Perception: A Review and Integrative Framework", in Visual Marketing: From Attention to Action, Michel Wedel and Rik Pieters (Eds.), Lawrence Erlbaum Associates, Mahwah, New Jersey.
Sensory perceptions and sensory marketing; social marketing, corporate social responsibility and cause marketing; pricing and promotions; experimental economics.
In order to preserve the integrity of the double-blind review process, papers under review and works in progress are not listed.
CIOs face the paradox of having to protect their businesses while at the same time streamlining access to the information and systems their companies need to grow. The threatscape they’re facing requires an approach to security that is adaptive to the risk context of each access attempt across any threat surface, anytime. Using risk scores to differentiate between privileged users attempting to access secured systems in a riskier context than normal versus privileged credential abuse by attackers has proven to be an effective approach for thwarting credential-based breaches.
Privileged credential abuse is one of the most popular breach strategies organized crime and state-sponsored cybercrime organizations use. They’d rather walk in the front door of enterprise systems than hack in. 74% of IT decision makers surveyed whose organizations have been breached in the past say it involved privileged access credential abuse, yet just 48% have a password vault. Just 21% have multi-factor authentication (MFA) implemented for privileged administrative access. These and many other insights are from Centrify’s recent survey, Privileged Access Management in the Modern Threatscape.
The challenge to every CIO’s security strategy is to adapt to risk contexts in real-time, accurately assessing every access attempt across every threat surface, risk-scoring each in milliseconds. By taking a “never trust, always verify, enforce least privilege” approach to security, CIOs can provide an adaptive, contextually accurate Zero Trust-based approach to verifying privileged credentials. Zero Trust Privilege is emerging as a proven framework for thwarting privileged credential abuse by verifying who is requesting access, the context of the request, and the risk of the access environment.
By taking a least privilege access approach, organizations can minimize attack surfaces, improve audit and compliance visibility, and reduce risk, complexity, and the costs of operating a modern, hybrid enterprise. CIOs are solving the paradox of privileged credential abuse by knowing that even if a privileged user has entered the right credentials but the request comes in with risky context, then stronger verification is needed to permit access.
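The "right credentials, risky context" rule described above can be sketched as a simple scoring gate. This is a hypothetical illustration, not any vendor's actual model: the signal names, weights, and thresholds are assumptions chosen to show the step-up logic.

```python
# Illustrative Zero Trust access gate: combine context signals for a
# privileged access attempt into a risk score, then allow, require
# step-up MFA, or deny. All signals and weights are assumed examples.

def risk_score(attempt: dict) -> int:
    """Score an access attempt from 0 (normal context) to 100 (highly risky)."""
    score = 0
    if attempt.get("new_device"):
        score += 30          # unrecognized device
    if attempt.get("unusual_location"):
        score += 30          # geography the user has never logged in from
    if attempt.get("off_hours"):
        score += 20          # outside the user's normal working pattern
    if attempt.get("privileged_target"):
        score += 20          # request targets a privileged system
    return min(score, 100)

def access_decision(attempt: dict) -> str:
    """Never trust, always verify: escalate verification as risk grows."""
    score = risk_score(attempt)
    if score < 30:
        return "allow"
    if score < 70:
        return "require_mfa"   # right credentials + risky context => step-up
    return "deny"

# A privileged request in a normal context passes; the same request
# off-hours triggers MFA; from a new device in a strange location, deny.
print(access_decision({"privileged_target": True}))
print(access_decision({"privileged_target": True, "off_hours": True}))
print(access_decision({"privileged_target": True, "new_device": True,
                       "unusual_location": True}))
```

In production the score would come from behavioral baselines and be computed in milliseconds per request, as the article notes; the point here is only the decision shape.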
The following are five strategies CIOs need to concentrate on to stop privileged credential abuse. Starting with an inventory of privileged accounts and progressing through finding the gaps in IT infrastructure that create opportunities for privileged credential abuse, CIOs and their teams need to take preemptive action now to avert potential breaches in the future.
Discover and inventory all privileged accounts and their credentials to define who is accountable for managing their security and use. According to a survey by Gartner, more than 65% of enterprises are allowing shared use of privileged accounts with no accountability for their use. CIOs realize that a lack of consistent governance policies creates many opportunities for privileged credential abuse. They're also finding orphaned accounts, multiple owners for a single privileged credential, and system administrators who hold superuser or root access rights across the majority of enterprise systems.
Vault your cloud platforms' root accounts and federate access to AWS, Google Cloud Platform, Microsoft Azure and other public cloud consoles. Root passwords on each of the cloud platforms your business relies on are the "keys to the kingdom" and let bad actors from inside or outside the company exfiltrate data with ease. The recent news of how a fired employee deleted his former employer's 23 AWS servers is a cautionary tale of what happens when a Zero Trust approach to privileged credentials isn't adopted. Centrify's survey found that 63% of organizations take more than a day to shut off privileged access after an employee leaves the company. Given that AWS root user accounts have the privilege to delete all instances immediately, it's imperative for organizations to have a password vault where AWS root account credentials are stored. Instead of local AWS IAM accounts and access keys, use centralized identities (e.g., Active Directory) and enable federated login. By doing so, you obviate the need for long-lived access keys.
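One practical first step toward retiring long-lived access keys is simply finding them. The sketch below is a hypothetical helper: the record shape, the 90-day threshold, and the key IDs are all assumptions for illustration, and in practice the inventory would be fed from your cloud provider's IAM API rather than a hard-coded list.

```python
# Flag active access keys older than an assumed rotation threshold.
# Record fields ("id", "status", "created") are illustrative stand-ins
# for whatever your IAM inventory actually returns.

from datetime import datetime, timedelta

ROTATION_LIMIT = timedelta(days=90)  # assumed policy, not a cloud default

def stale_keys(keys: list, now: datetime) -> list:
    """Return IDs of keys that are active and older than the rotation limit."""
    return [
        k["id"]
        for k in keys
        if k["status"] == "Active" and now - k["created"] > ROTATION_LIMIT
    ]

inventory = [
    {"id": "key-alpha", "status": "Active",   "created": datetime(2019, 1, 2)},
    {"id": "key-beta",  "status": "Inactive", "created": datetime(2018, 6, 1)},
    {"id": "key-gamma", "status": "Active",   "created": datetime(2019, 3, 20)},
]
print(stale_keys(inventory, datetime(2019, 4, 10)))  # ['key-alpha']
```

Keys this report surfaces are candidates for replacement with federated, short-lived credentials, which is the article's actual recommendation.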
Audit privileged sessions and analyze patterns to find potentially privileged credential sharing or abuse not immediately obvious from audits. Audit and log authorized and unauthorized user sessions across all enterprise systems, especially focusing on root password use across all platforms. Taking this step is essential for assigning accountability for each privileged credential in use. It will also tell you if privileged credentials are being shared widely across the organization. Taking a Zero Trust approach to securing privileged credentials will quickly find areas where there could be potential lapses or gaps that invite breaches. For AWS accounts, be sure to use AWS CloudTrail and Amazon CloudWatch to monitor all API activity across all AWS instances and your AWS account.
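The kind of pattern that per-session audits miss can be found with a very small amount of analysis. As a minimal sketch, assuming a log record with "credential" and "user" fields (the field names are my own, not any product's schema), the following flags credentials used by more than one person, which is one signal of credential sharing:

```python
# Group audit-log sessions by credential and surface any credential
# used by more than one distinct user. Field names are assumed.

from collections import defaultdict

def shared_credentials(sessions: list) -> dict:
    """Map each credential to its set of users, keeping only shared ones."""
    users_by_cred = defaultdict(set)
    for s in sessions:
        users_by_cred[s["credential"]].add(s["user"])
    return {cred: users for cred, users in users_by_cred.items() if len(users) > 1}

audit_log = [
    {"credential": "root@db01",   "user": "alice"},
    {"credential": "root@db01",   "user": "bob"},    # second user on same root credential
    {"credential": "admin@web01", "user": "carol"},
]
print(sorted(shared_credentials(audit_log)))  # ['root@db01']
```

The same grouping idea extends to source IPs, time-of-day patterns, or overlapping sessions, which is what the article means by finding abuse "not immediately obvious from audits".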
Enforce least privilege access now within your existing infrastructure as much as possible, defining a security roadmap based on the foundations of Zero Trust as your future direction. Using the inventory of all privileged accounts as the baseline, update least privilege access on each credential now and implement a process for privilege elevation that will lower the overall risk and attackers' ability to move laterally and extract data. The days of "trust but verify" are over. CIOs from insurance and financial services companies I've spoken with recently point out that their new business models, all of them heavily reliant on secured Internet connectivity, are making Zero Trust the cornerstone of their future services strategies. They're all moving beyond "trust but verify" to adopt a more adaptive approach to knowing the risk context by threat surface in real-time.
Adopt multi-factor authentication (MFA) across all threat surfaces that can adapt and flex to the risk context of every request for resources. The CIOs running a series of insurance and financial services firms, a few of them former MBA students of mine, say multi-factor authentication is a must-have today for preventing privileged credential abuse. Their take on it is that adding in an authentication layer that queries users with something they know (user name, password, PIN or security question) with something they have (smartphone, one-time password token or smart card), something they are (biometric identification like fingerprint) and something they’ve done (contextual pattern matching of what they normally do where) has helped thwart privileged credential abuse exponentially since they adopted it. This is low-hanging fruit: adaptive MFA has made the productivity impact of this additional validation practically moot.
Every CIO I know is now expected to be a business strategist first and a technologist second. At the top of many of their lists of priorities is securing the business so it can achieve uninterrupted growth. The CIOs I regularly speak with running insurance and financial services companies often note that security is as much a part of their new business strategies as the financial products their product design teams are developing. The bottom line is that the more adaptive a company's access management posture becomes, and the better it assesses the risk context of each privileged access attempt, the more responsive the company can be to employees and customers alike, fueling future growth.
The biggest problem with enterprise operations today is the simple fact that most firms still run most of their processes exactly the same way as they did 20, 30, or 40 years ago. The only "innovation" has been models like offshore outsourcing and shared service centers, with cloud and digital technologies enabling those same processes to be conducted steadily faster and cheaper. However, fundamental changes have not been made to intrinsic business processes: most companies still operate with their major functions such as customer service, marketing, finance, HR and supply chain working in individual silos, with IT operating as a non-strategic vehicle to maintain the status quo and keep the lights on.
Enter the concept of Robotic Process Automation (RPA), introduced to market in 2012 via a case study written by HFS and supported by Blue Prism, which promised to remove manual workarounds and headcount overload from inefficient business processes and BPO services. However, despite offering clear technical capability and the real advantage of breathing life into legacy systems and processes, RPA hasn't inspired enterprises to rewire their business processes; it's really just helped them move data around the company faster with less manual intervention. In addition, most "RPA" engagements that have been signed are not for unattended processes; most are attended robotic desktop automation (RDA) deployments. Attended RDA requires a loop of human and bot interplay to complete tasks. These engagements are not the pure form of RPA that we invented; they are a motley crew of scripts and macros applying band-aids to messy desktop applications and processes to maintain the same old way of doing things. Sure, there is usually a reduction in labor needs, but in fractional increments, which is rarely enough to justify entire headcount elimination. Crucially, the current plethora of "RPA" engagements has not resulted in any actual "transformation".
Real research data from close to 600 major global enterprises shows just how not-ready we are to declare any sort of robo-victory. In our recent survey of 590 G2000 leaders, only 13% of RPA adopters are currently scaled up and industrialized. Forget about leveraging RPA to curate end-to-end processes: most RPA adopters are still tinkering with small-scale projects and piecemeal tasks that comprise elements of broken processes. Most firms are not even close to any sort of enterprise-scale automation adoption.
A Triple-A Trifecta toolkit that leverages RPA, various permutations of AI, and smart analytics in an integrated fashion.
So HFS is calling it as we see it. RPA is dead! Long live Integrated Automation. And by integrated we mean integrated technology, but also, and all importantly, we mean integration across people, process and technology supported by focused objectives and change management. Integrated Automation is how you transform your business and achieve an end-to-end Digital OneOffice.
Integrated Automation is not about RPA or AI or Analytics. It is RPA and AI and Analytics.
RPA products are racing to build in AI and data management capabilities. WorkFusion was arguably the first to combine RPA and AI with its "smart process automation" capability. Other examples followed: Automation Anywhere added its ML-infused IQBot, Blue Prism announced an AI Lab to develop proprietary RPA-ready AI elements, and AntWorks embeds computer vision and fractal science in its stack to enable the use of unstructured data. What these products have in common is their use of robotics to transform tasks, desktop apps and pieces of processes. Hence, we need to refer to these "RPA" products as Robotic Transformation Software products, which is a far more appropriate description.
AI- and analytics-focused products are starting to embrace Robotic Transformation Software instead of undermining it. IPsoft launched 1RPA with a cognitive user interface. Xceptor's data-led business rules and AI-based approach to automation leverage RPA to help extend its functionality. Arago is starting to go to market where it can help orchestrate RPA capabilities within its platform.
Enterprise software products are integrating the Triple-A Trifecta capabilities into their offerings. SAP Leonardo aspires to harness the emerging technologies across ML, analytics, Big Data, IoT, and blockchain in combination. SAP also acquired RPA software company Contextor in late 2018, much as Pega did when it acquired OpenSpan in 2016 to add RPA functionality to its customer engagement capabilities.
System Integrators are orchestrating the Triple-A Trifecta across multiple curated products. This typically combines some of their IP and service capabilities. Accenture launched SynOps in early 2019, offering a “human-machine operating engine.” Genpact’s Cora, a modular platform of digital technologies, similar to HFS’ Triple-A Trifecta, is designed to help enterprises scale digital transformation. IBM’s Automation Platform includes composable automation capabilities that orchestrate responses and alerts between Watson and Robotic Transformation Software solutions. KPMG’s IGNITE brings RPA, AI and analytics tools together with KPMG IP and services.
Integrated Automation is not just about Technology. It is Technology + People + Process.
The real point of Integrated Automation is actually to move beyond the tools. Yes, the Triple-A Trifecta offers more functionality, but it still does not work unless you change your business, your people, and your processes. Integrated Automation is the effective melding of technology, talent, organizational change, and leadership to get to the promised land. It requires the integration of the Triple-A Trifecta change agents in your toolbox and their application across the original trifecta of people, process, and technology. If you keep throwing technology at a business problem, you will have more technology rather than a solution.
Integrated Automation is not a Product or a Service. It is a Product and a Service.
Just as we realized that throwing bodies at a problem does not solve the problem, we need to recognize that merely hurling software at a business process will not drive transformation. The real genius lies in understanding what to use when and how. The software also needs to come with support and services; otherwise, we're just selling more snake oil and magic. Strategic and collaborative relationships of the future will be formed by providers that can consult as a trusted advisor and execute as an "extension" of clients' operations. Enterprises need partners to drive innovation, contribute investment, apply automation and new ideas, and focus on delivering business outcomes, and that requires a combination of services and software. An ecosystem approach with symbiotic relationships between service and product companies is a must-have ingredient for automation to succeed and truly be transformative. It is eminently clear that no one can be everything to everyone.
Adoption is not the measure of success for Integrated Automation. It is about Change Management.
Fifty-one percent of the highest performing enterprises see their cultures as holding them back in the digital transformation journey, while only 36% of the lowest performing enterprises identify culture as a barrier to progress. Providers need to offer change management approaches that are agile, measurable, and iterative to be impactful. Scaling up digital initiatives and enabling the right governance models are also critical points. The ability to codify "business outcomes" in contractual agreements, pricing structures, and performance measures is also a vital element to drive change. While there is no nirvana around pricing, it needs to be implemented based on every client's unique requirements and context. The flexibility to put skin in the game with innovative and non-linear commercial models is essential to drive real change.
Integrated Automation will not be effective with a functional approach. It requires an end-to-end “OneOffice” strategy.
Less than 12% of the enterprises we surveyed have an enterprise-wide approach to automation. This strong focus on task-level and process-level automation reminds us that automation often takes place in functional silos, with parallel but unconnected initiatives. The ability to balance task-specific and process-specific pilots and production instances with the broader enterprise mission and vision is certainly daunting, but it is precisely what needs to occur to enable scaled and successful automation programs.
The collaboration between business and IT is another crucial issue. While automation initiatives require IT involvement, the programs are generally impacting and enhancing business processes—which requires participation from business constituents who understand the functions in question. The ideal leadership mix, then, is a combination of IT and business. However, our data shows that just one-fifth of respondents have created integrated IT and business leadership teams to grapple with automation strategy and deployment.
Bottom Line: Integrated Automation utilizes the power of AND, not OR!
We are lucky to live at a time where we have a multitude of established and emerging change agents at our disposal: global sourcing, design thinking, Robotic Transformation Software, AI, Analytics, IoT, blockchain among others. But, unfortunately, most of the discussions in the market end up becoming a comparative discussion versus integrative discussion – man versus machine, offshore versus automation, RPA versus AI, consulting versus execution, and so on. These change agents must work together rather than operate in silos to solve real business problems. The power of AND is much greater than OR and Integrated Automation is all about the power of AND. Thus, RPA is dead. Long live integrated automation!
Looking at the current assortment of CRM functionality, including AI, machine learning, voice recognition and chatbots, you might conclude that the tools are evolving to remove salespeople and others from direct customer contact, and you wouldn't be wrong. What's surprising to me, though, is the amount of angst among senior sales people about the division of sales people's time into two traditional buckets: selling and not selling. One is good and valuable and the other is suspect. But why?
Research has shown that customers like finding their own answers without the assistance of someone whose job is to sell. Selling comes later, once a customer has a good idea of their basic needs and the available solutions. Before this point assistance can seem heavy handed. So, it makes sense to let customers do what they do. Organizations implicitly agree because they put so much information out on the Internet to support the self-service effort.
The result is that sales people engage with customers further along in the process or down the funnel. This puts added responsibilities on sales people. For instance, when sellers met customers earlier it was easier to have more customer meetings to simply deliver basic information. Managers didn’t like this so much because the probability of closing a deal was relatively low given that customers still had to go through a mental process to make a decision.
So, what did we do? We positioned all that information on the Internet to enable customers to make their own determinations. That’s great, right? No, no, no! Now the complaint is that salespeople are not spending enough time in front of customers.
Last year's Salesforce report, "The State of Sales," showed graphically that during an average week a sales person spent about one third of his or her time selling and the rest doing what many deem wasted time, and that can get you scratching your head.
The report included as selling activities things like meeting customers in person, connecting with customers virtually, prospecting, administrative tasks, and preparation and planning. This all seems abundantly reasonable.
But then look at what non-selling activities included: generating quotes, proposals, and approvals; researching prospects; internal meetings and training; manually entering customer/sales info; prioritizing leads/opportunities; and downtime.
A few years ago, I recall another report from either Forrester or IDC (I apologize for not knowing which), but my point is simple. It surveyed CIOs, and one finding was that sales reps were not adding enough value. They weren't following up appropriately, didn't come to meetings prepared to advance a discussion from the last meeting, etc. My takeaway when I read that report was that reps had too many accounts and not enough time to do the spade work necessary to drive a meeting and push a sales process along.
When I was selling, it was a rough rule of thumb that for every one-hour meeting you might need three or four (or more) hours of preparation. That was largely before CRM began giving reps back some of their time. In this context, the current selling and non-selling time appropriation looks pretty reasonable to me for a couple of reasons.
First, modern CRM has reduced the amount of preparation time significantly (but not to zero). Analytics might be able to tell you the next logical step but so can an experienced sales person and the real challenge is in doing the work for that step. But there’s good news there too. For instance, modern CPQ can produce errorless proposals and get the necessary approvals as quickly as a sales rep can get a manager to check email. You can say the same about presentation tools and slide libraries and almost every phase of selling that has some amount of automation for it.
But somebody has to actually use the systems and do the preparation, which sounds like non-selling, but I'd advise that we reconsider our definitions. The fact that customers encounter sales people for the first time further down the funnel means there's more at stake in that meeting than back when a sales person could fire off generalities. Having more at stake requires better preparation. If we want our sales people to be consultative and not glad-handers, they have to come locked and loaded.
I’m not saying there’s no room for improvement, just that we need to be smart about what we value and how we see the modern sales process. So, I guess I don’t get it when I see or hear sales pros lamenting the 2:1 relationship between preparation and face time. To me that seems pretty good. It could be better and maybe in ten years it will be. But it’s worth noting that as products commoditize customers need less attention from sales people and that at some point many products go from employing a direct sales model to more of a retail model. That’s especially what we’re seeing all over the tech sector today and it will proceed differently from business to business based on things like product complexity, customer receptivity and of course competition.
I've spent two days at Cloud Expo Europe, the premier London-based event covering cloud platforms, hybrid and multicloud approaches, cybersecurity, AI, blockchain and more, as well as all of the ingredients of the data centres that support those technologies. A wide set of tech topics, but within them everyone's talking digital transformation, and it's dangerous. Dangerous because, like talking cloud 10 years ago, it means different things to different people, becoming a catch-all with too much emphasis on the technology itself rather than the business outcomes it supports. It's the classic mistake we technology marketers have been making with our "widgets" for decades. We need to reframe the digital transformation conversation!
The crucial point is that emerging technologies and innovation are driving it, but the true transformation is all about business, mindset and leadership change.
Digitally savvy companies have leaders who encourage teamwork, explain their purpose with clarity, and promote an environment of openness and sharing. The particular organizational structure you have in place is less important than getting employees and leaders to embrace these behaviors. In her book The Management Shift, Vlatka Hlupic shows that many successful companies share a management style characterized by an open mindset, an unbounded culture, strong team cohesion, inspirational leaders, a strong sense of purpose, and passion for the work the company does. Check out the absolutely excellent Team of Teams by General Stanley McChrystal, Chris Fussell et al translating their experiences in Iraq War 2 to today’s complex supply chains where teamwork across organisational boundaries is crucial. These are the characteristics that 21st century leaders and managers need to be able to handle today’s rapidly changing business landscapes.
Adding mobile apps and new digital business components on top of existing systems can provide some help, and even give short-term benefits in key areas. To really transform your business, however, you need a holistic approach. According to recent Forrester research, most digitally mature businesses recognize that they must break down business silos in order to realize their digital visions. One helpful tool is the McKinsey 7-S framework, which has been tried and tested over decades. The 7-S framework emphasizes the role of coordination, rather than structure, in organizational effectiveness. First you assess the business in terms of strategy, structure, and systems. Then you examine your staff, skills, and style, as well as the shared values of the company. This approach helps to integrate all the factors needed to add value, find efficiencies, and make a real difference in your organization. You don’t have to use this particular framework, of course—there are many other useful tools out there. The point is that digital transformation becomes much easier when you think about it holistically.
You need a plan to integrate your digital transformation project so that it works with your legacy systems. Your plan should draw on agile thinking while still satisfying the financial demands of the C-suite. Think in terms of short time scales and multiple iterations. Don’t fear experimentation or failure. The Forrester research already mentioned highlights agility as one of the top five metrics to measure the success of digital programmes. True agility requires you to think like a startup. First, identify the problem that needs to be solved with a new digital approach. Next, develop a minimum viable product that you can implement. Use the resulting feedback to improve and iterate your product. Pursue multiple, parallel streams of change with a six-to-eight-week cycle or shorter. Focus on achievable outcomes rather than individual tasks and steps, and be sure to foster regular communication at all levels across the process (back to Team of Teams).
True digital transformation touches all of a company’s teams and processes. You need sound cross-functional governance to get everyone on board with the disruption that’s to come. Our research shows that organizations that have implemented some form of enterprise social network or social collaboration platform, such as Workplace by Facebook, Jive, Microsoft Teams, Kahootz, GitHub or Slack, are more successful with their transformation than those that don’t. This kind of communication harnesses the collective intelligence of teams in ways that aren’t possible with old communications technologies such as e-mail.
Unless you are a digital native startup, your digital transformation will most likely be a complex series of incremental and strategic initiatives that fundamentally change the company over time. To get employees, customers, and investors on board, leadership needs to communicate the big idea: the "why" of what you are trying to achieve by reinventing your business. Start thinking about the principles of storytelling. Start thinking in terms of the visual tools and communication processes you are going to use to get the whole company, as well as your partner and supplier ecosystem, on board.
Please check out the hashtags #techerati and #disruptivelive for more CEE19 content from this year’s show.
Digital transformation requires an open mindset, an unbounded culture, strong team cohesion, inspirational leaders, a strong sense of purpose, and passion for the work the company does.
You need agile thinking, a mix of incremental and strategic initiatives, and short development cycles.
Leaders must communicate why they are reinventing the company so that everyone is on board with the overall goal.
If you need help defining, adapting or communicating your particular digital transformation story, please contact us – we’d love to help.
Note – this post is an evolution of an article I wrote for enterprise.nxt the HPE Insights blog.
An all-time high 48% of organizations say cloud BI is either “critical” or “very important” to their operations in 2019.
Marketing & Sales place the greatest importance on cloud BI in 2019.
Small organizations of 100 employees or less are the most enthusiastic, perennial adopters and supporters of cloud BI.
These and other insights are from Dresner Advisory Services’ 2019 Cloud Computing and Business Intelligence Market Study. The 8th annual report focuses on end-user deployment trends and attitudes toward cloud computing and business intelligence (BI), defined as the technologies, tools, and solutions that rely on one or more cloud deployment models. What makes the study noteworthy is the depth of focus around the perceived benefits and barriers for cloud BI, the importance of cloud BI, and current and planned usage.
“We began tracking and analyzing the cloud BI market dynamic in 2012 when adoption was nascent. Since that time, deployments of public cloud BI applications are increasing, with organizations citing substantial benefits versus traditional on-premises implementations,” said Howard Dresner, founder, and chief research officer at Dresner Advisory Services. Please see page 10 of the study for specifics on the methodology.
An all-time high 48% of organizations say cloud BI is either “critical” or “very important” to their operations in 2019. Organizations have more confidence in cloud BI than ever before, according to the study’s results. 2019 is seeing a sharp upturn in cloud BI’s importance, driven by the trust and credibility organizations have for accessing, analyzing and storing sensitive company data on cloud platforms running BI applications.
Marketing & Sales place the greatest importance on cloud BI in 2019. Business Intelligence Competency Centers (BICC) and IT departments have an above-average interest in cloud BI as well, with their combined critical and very important scores being over 50%. Dresner’s research team found that Operations had the greatest duality of scores, with critical and not important being reported at comparable levels for this functional area. Dresner’s analysis indicates Operations departments often rely on cloud BI to benchmark and improve existing processes while re-engineering legacy process areas.
The retail/wholesale industry considers cloud BI the most important, followed by the technology and advertising industries. Organizations competing in retail/wholesale see the greatest value in adopting cloud BI to gain insights into improving their customer experiences and streamlining supply chains. The technology and advertising industries also see cloud BI as very important to their operations. Just over 30% of respondents in the education industry see cloud BI as very important.
R&D departments are the most prolific users of cloud BI systems today, followed by Marketing & Sales. The study highlights that R&D's lead over all other departments in existing cloud BI use reflects the broader range of potential use cases being evaluated in 2019. Marketing & Sales is the next most prolific department using cloud BI systems.
Finance leads all others in their adoption of private cloud BI platforms, rivaling IT in their lack of adoption for public clouds. R&D departments are the next most likely to be relying on private clouds currently. Marketing & Sales are the most likely to take a balanced approach, adopting private and public cloud BI equally.
Advanced visualization, support for ad-hoc queries, personalized dashboards, and data integration/data quality tools/ETL tools are the four most popular cloud BI requirements in 2019. Dresner’s research team found the lowest-ranked cloud BI feature priorities in 2019 are social media analysis, complex event processing, big data, text analytics, and natural language analytics. This year’s analysis of the most and least popular cloud BI requirements closely mirrors traditional BI feature requirements.
Marketing and Sales have the greatest interest in several of the most-required features including personalized dashboards, data discovery, data catalog, collaborative support, and natural language analytics. Marketing & Sales also have the highest level of interest in the ability to write to transactional applications. R&D leads interest in ad-hoc query, big data, text analytics, and social media analytics.
The Retail/Wholesale industry leads interest in several features including ad-hoc query, dashboards, data integration, data discovery, production reporting, search interface, data catalog, and ability to write to transactional systems. Technology organizations give the highest score to advanced visualization and end-user self-service. Healthcare respondents prioritize data mining, end-user data blending, and location analytics, the latter likely for asset tracking purposes. In-memory support scores highest with Financial Services respondent organizations.
Marketing & Sales rely on a broader base of third-party data connectors to get greater value from their cloud BI systems than their peers. The greater the scale, scope and depth of third-party connectors and integrations, the more valuable marketing and sales data becomes. Relying on connectors for greater insights into sales productivity & performance, social media, online marketing, online data storage, and simple productivity improvements is common in Marketing & Sales. Finance’s requirement for Salesforce integration reflects the CRM application’s success in expanding beyond customer relationships into advanced accounting and financial reporting.
Subscription models are now the most preferred licensing strategy for cloud BI, having gained ground over the last several years due to lower risk, lower entry costs, and lower carrying costs. Dresner’s research team found that subscription license and free trial (including trial and buy, which may also lead to subscription) are the two most preferred licensing strategies among cloud BI customers in 2019. Dresner Advisory Services predicts new engagements will be earned using subscription models, which are now seen as at least important by approximately 90% of respondents.
60% of organizations adopting cloud BI rank Amazon Web Services first, and 85% rank AWS first or second. 43% choose Microsoft Azure first and 69% pick Azure first or second. Google Cloud closely trails Azure as the first choice among users but trails more widely after that. IBM Bluemix is the first choice of 12% of organizations responding in 2019.
Trying to do business without also having a modern CRM system is like walking around naked. You can do it, at least for a little while, but people will begin to think you’re weird and the trouble is that those people are all potential customers. CRM is essential today because, despite our reverence for the free market, it’s really more like free markets–plural–and we need to be relevant, to treat people the way they expect to be, in each venue we decide to play in.
Before he died, Steve Jobs observed that the marketplace had bifurcated and the traditional Bell curve we use to represent it had actually become two humps, sort of like a Dromedary camel becoming a Bactrian.
Jobs saw one market for good-enough products and the other for luxury items. His insight was that, with modern technology goods and customer-facing systems, the merchandise in each group was essentially the same. In other words, much of the difference today comes from how products are sold and serviced. Good-enough products had to be easy to use and intuitive in Jobs’ scheme, and luxury goods had to come with hands-on service throughout the purchase and ownership process.
That wasn’t so long ago and, wittingly or not, the observation drove CRM’s evolution to the point that today the suite is dripping with artificial intelligence and machine learning for insights into what to do for a customer next. CRM is now also over the top with journey maps as well as chatbots that do a pretty good job of interfacing with customers. They will only get better too.
But nobody who gets the good enough product, it turns out, likes being reminded that they didn’t spend big on the luxury item and all that service. We all like getting more than our fair share of service, especially if it’s free. So modern CRM splits the difference and provides affordable service and support without quibbling, through technology. Now, I would love to say that our work is done here but we both know better.
Here comes the rub. The other day I was reading an article from last March in Inc. magazine that brought home how disparate our markets are and, by implication, how much vendors need high-quality CRM today.
What once looked like a Bell curve and then looked like a camel has in a few short years come to resemble high tide at Huntington Beach. It’s clearer now than ever that people of different stripes, including different generations come to marketplaces with very different needs. We knew that already but maybe not its full extent.
So, in the article, “73 Percent of Millennials are Willing to Spend More Money on This 1 Type of Product,” writer Melanie Curtin quotes a Nielsen report saying, “While 66 percent of global consumers are willing to pay more for sustainable goods, a full 73 percent of Millennials are.” Nielsen’s definition of Millennials is people born between 1977 and 1995, while others skew the timeline toward the present.
For instance, Pew Research brackets the group at those born between 1981 and 1997. Regardless, Pew and others rank Millennials and Baby Boomers neck and neck in cohort size: 75.4 million millennials according to a 2015 US Census study vs. 74.9 million Boomers.
But let’s get back to sustainability. As the author of a book with sustainability in the title, it warms my heart to think some of those people could actually find my book but the CRM side of me also says whoa horsey! A CRM system better be able to distinguish between a younger customer and someone older who might have different priorities.
Sustainability is not a well-defined category though you know it when you see it. It can mean everything and nothing but generally it refers to things with long lifecycles, things that can be easily recycled, products and services that aren’t generally one and done. So paper napkins not sustainable, recyclable paper plates, yes. And of course renewable power generation yes, yes!
Any market where 66 percent of customers want to pay more for something, let alone a market with 73 percent thinking that way, ought to raise more than eyebrows. But this all speaks to the need for data that tells vendors concretely who customers are.
We might still rely heavily on the next best algorithm to help determine what’s next but now we see that age cohort might reasonably be part of the calculations. I expect that simple discoveries like this will keep CRM developers busy for a long time as they continue to iterate toward CRM nirvana. At its heart, this means continuous improvement of process as the CRM suite seems to be fairly well broadened at this point.
Show season is starting in CRM. It’s the time of year when almost every CRM vendor holds a conference in San Francisco, Las Vegas, Chicago, Dallas, or Austin. Next week Oracle kicks it off with Modern Customer Experience (MCX) in Las Vegas. It will be a great chance to see the company’s CRM apps put through their paces without the dilution caused by all the database and middleware discussions of OpenWorld. Not that I mind, but as a CRM person discussions of benchmarks can leave me cold.
At any rate, Oracle has been coming on strong in the last 5 years, making serious moves to the cloud and into CRM. They got serious, in my opinion, when Larry Ellison pointed to the cloud, and the company has spent billions to reach its rather complex goals in apps, platform and infrastructure.
It has only been in the last couple of years though that the company has achieved something like critical mass, which is why this edition of MCX will be so interesting. Rather than bringing so many products to market, the company has begun turning to process and it’s process that I think drives the discussion about things like sustainability.
I then used “drama” in the title of my recent ZDNet guest column.
Drama? That’s what you get in Augusta in April. Isn’t enterprise tech boring and glacially slow?
My book catalogs the product launch machine SAP has become and the fact that its customer base has grown 50% in the last five years. It was on the ropes and the competition could have delivered a knockout punch, but it has bounced back impressively.
How can that not make your heart beat faster?
Last week, in his coming-out party since he joined Google several months ago, Thomas Kurian made quite an impression.
Also last week, at a SAP Analytics event, mostly under NDA, I expected to hear plenty about coopetition with Google, Oracle and IBM. Instead I kept hearing about Microsoft with its Power BI, Surface, Azure ML, Office 365 and other products.
Oracle and IBM were similarly snubbed by the DoD as it shortlisted vendors for its $10 billion JEDI cloud contract. One contract should not mean that much, but the twists and protests in that one certainly count as drama.
When a case ends up at the Supreme Court of the US, it usually makes for some oohs and aahs. When all 9 justices vote unanimously on something, it certainly perks you up. That’s what recently happened to Oracle in its long-running litigation with Rimini Street.
That para goes on to say “Today, even after two decades of cloud computing, industry and geographic coverage is spotty — by my estimate, less than 20%. For Hurd’s prediction to prove accurate, it will take massive new investments in upgrading industry functionality.” Will Hurd’s own products grow rapidly and get the lion’s share of his projected 80%? In January 2006, Oracle had confidently declared they were “Halfway to Fusion”. 13 years later, Oracle still mostly talks finance, HCM and CRM. NetSuite, even after 20 years, has limited global coverage as Phil Wainewright points out here, or industry coverage as I point out here.
Augusta generates drama for 4 days. Enterprise tech takes a bit longer. But it has its share of comebacks and dark horses. I will have a front row seat as I start planning for SAP Nation 4.0.
Or maybe it will be titled Oracle Nation 1.0 or Workday Nation 1.0. That right there is plenty of drama.
Watching the G Suite news coming out of Google Cloud Next 2019, I had my Michael Corleone moment that took me back to my old stomping ground: employee engagement and workplace performance.
Not shockingly, the incremental updates to G Suite were awesome engineering feats. But Google had the chance to entirely shift the goalpost by re-imagining a better “what” this time. Instead, what we got was another attempt at a better “how” by pitting product features against Office 365.
Knowledge sharing, connecting employees, breaking silos.
2001 called. She wants her value prop back.
So, what does more “what” look like?
Frontline Work: Upskill America estimates that over 24 million workers in the US alone are frontline workers. Many of these workers have one job in huge industries such as retail, logistics and healthcare. Sitting at the very last mile of the customer experience, they are the face of a company’s brand. Yet these workers today do backflips to make HRIS and employee engagement systems optimized for desktop workers work for them. The mobile-only frontline worker needs a very different design paradigm to be the best that she can be.
Shift Work: Approximately two in five workers in the US work during nonstandard times. Shift workers work in jobs that go 12-24 hours. Each job requires 2 to 3 people to cycle in. And shift workers often have multiple employers. There is a significant gap and inefficiency in how they get found, how they get scheduled, how they perform and in turn, how their performance is measured, and how they get marketed to potential employers. The Population Reference Bureau estimated that the occupations of these workers will have the largest projected growth rate over the next decade. That’s a hunk of TAM.
Unstructured work modalities are hugely underserved. The design metaphor for tools such as G Suite, Office 365, Asana and others serves a structured work modality: task management, spreadsheet creation, workflow, etc. The reality is that these tools are great after the collaboration to generate and firm up ideas has already taken place. They work great when it’s time to break up tasks and get them done, to model out costs, or document decisions, or present plans made. What’s missing is the ability to support the integral steps that form the basis of the task – whiteboarding, ideation, planning, organizing thoughts. Massive industries such as Professional Services, Industrial and other Design, and CAD/CAM all rely on these crucial precursor phases before “work” is remotely ready to go into a spreadsheet for costing, or a presentation for funding.
The tragedy in all of this is that Google does have the best capabilities, the best real-time interplay and the absolute best NLP in the market. All sitting atop GCP, which remains the best-kept secret on Planet Enterprise. Thanks in part to GCP and in part to the genius of Kahuna co-founder and CTO Jacob Taylor and his team, at Kahuna we on-boarded 80 million consumers in a single day, with zero dev ops support.
And yet, Google misses the forest for the trees by deploying these maddeningly sophisticated resources only towards improving the “how”, vs. re-imagining the “what”.
This playbook of shifting the goalpost works. At SuccessFactors / SAP, we shifted the goal post by offering functionalized collaboration to solve very discrete problems in Sales, Marketing, IT and HR. And as a result, we drove revenues up well over 20-fold and subscription 5-fold in just 12 quarters. And we changed the narrative for collaboration which added significant sales and marketing velocity.
So, question the prevailing premise.
Look, this doesn’t need to be an either / or game. Given the astonishingly good numbers we’ve seen from Google’s historical focus on education, the horizontal long game can work. But Google has all the pieces to re-cast the narrative and see returns now. And with Thomas Kurian at the helm, they have the street cred to build real enterprise SaaS software that solves real business problems, for huge markets.
I’m rooting for you, Goog.
As I was doing some research for this post, these were some of my favorite posts on the event.
Holger Mueller of Constellation Research covers the infrastructure and security elements but ventures into apps and SaaS.
Ron Miller at TechCrunch and Kurt Marko at Diginomica go for the jugular and take on the AWS vs GCP analysis, head on.
Finally, Frederic Lardinois at TechCrunch has an in-depth interview with David Thacker, VP of Product for G Suite, on Currents – Google’s newest shot at horizontal enterprise collaboration.
Gartner predicts the worldwide public cloud service market will grow from $182.4B in 2018 to $331.2B in 2022, attaining a compound annual growth rate (CAGR) of 12.6%.
Spending on Infrastructure-as-a-Service (IaaS) is predicted to increase from $30.5B in 2018 to $38.9B in 2019, growing 27.5% in a year.
Platform-as-a-Service (PaaS) spending is predicted to grow from $15.6B in 2018 to $19B in 2019, growing 21.8% in a year.
Business Intelligence, Supply Chain Management, Project and Portfolio Management and Enterprise Resource Planning (ERP) will see the fastest growth in end-user spending on SaaS applications through 2022.
Gartner’s annual forecast of worldwide public cloud service revenue was published last week, and it includes many interesting insights into how the research firm sees the current and future landscape of public cloud computing. Gartner is predicting the worldwide public cloud services market will grow from $182.4B in 2018 to $214.3B in 2019, a 17.5% jump in just a year. By the end of 2019, more than 30% of technology providers’ new software investments will shift from cloud-first to cloud-only, further reducing license-based software spending and increasing subscription-based cloud revenue.
The following graphic compares worldwide public cloud service revenue by segment from 2018 to 2022.
Comparing Compound Annual Growth Rates (CAGRs) of worldwide public cloud service revenue segments from 2018 to 2022 reflects IaaS’ anticipated rapid growth.
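As a quick sanity check, the year-over-year percentages quoted above follow directly from the standard compound-growth formula. A minimal sketch in Python (the dollar figures are the Gartner numbers cited above; the helper name is illustrative):

```python
def growth_pct(start, end, years=1):
    """Compound annual growth rate between two values, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

# One-year growth checks against the figures quoted above (values in $B):
print(round(growth_pct(182.4, 214.3), 1))  # total public cloud, 2018 -> 2019: 17.5
print(round(growth_pct(30.5, 38.9), 1))    # IaaS, 2018 -> 2019: 27.5
print(round(growth_pct(15.6, 19.0), 1))    # PaaS, 2018 -> 2019: 21.8
```

The same formula with years=4 annualizes growth over the full 2018–2022 span, which is how the multi-year CAGR figures in forecasts like this are derived.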
Business Intelligence, Supply Chain Management, Project and Portfolio Management and Enterprise Resource Planning (ERP) will see the fastest growth in end-user spending on SaaS applications through 2022. Gartner is predicting end-user spending on Business Intelligence SaaS applications will grow by 23.3% between 2017 and 2022. Spending on SaaS-based Supply Chain Management applications will grow by 21.2% between 2017 and 2022. Project and Portfolio Management SaaS-based applications will grow by 20.9% between 2017 and 2022. End-user spending on SaaS ERP systems will grow by 19.2% between 2017 and 2022.
Calendars have only just been flipped over to February and already so much has happened in Australian politics this new year. In just a few weeks the government’s standing has gone from bad to worse. Many of the government’s woes over the last 18 months have been as a result of difficult policy decisions made in response to the less than ideal budgetary position. A lot of the government’s troubles are also down to Tony Abbott’s leadership and the style of governance he has allowed to linger. So far in 2015 all the missteps are down to Tony Abbott – and only Tony Abbott.
That brings us to today, the 2nd of February, 2015. The Prime Minister made a rare appearance at the National Press Club today in a bid to give the public a taste of a government finally engaging with the public and proving that they have begun to listen to voter concerns.
Election night in Queensland drew our attention to the address in spectacular fashion, with Jane Prentice nominating it as a forum at which the Prime Minister had to perform some kind of miraculous recovery effort – setting out a way to escape the doldrums.
Unsurprisingly, Prime Minister Abbott was unable to perform this feat. There were ever so slight slivers of hope that the speech might give some kind of direction. At best it was a tired, spent leader trying to conjure up a final burst of energy, sprinting a bit, but wobbling at the crucial moment. At worst it was a display of arrogance and disdain for voters. Actually, it was probably a mix of the two.
The Prime Minister started by making some broad statements about what government should do and followed that with what his government had done and what it could both do, and do better.
In reality, what the Prime Minister should have done first was to move pretty swiftly into apology mode. Almost the whole speech could have been one long mea culpa, with a little bit of what he and the government were going to do for the next 18 months thrown in at the end.
Making broad statements about what governments should do is irrelevant when you have already achieved government. You draw attention to the fact that you are not doing those things if you need to spend time talking about them. Furthermore, it is the talk of an Opposition Leader and that is not a good look 18 months into office.
In other words, Tony Abbott had his speech in completely the wrong order. The resulting display was at least one-third waffle and two-thirds slight improvement.
The arrogance sprang from the way the Prime Minister took so long getting to what everyone was made to believe was the point of the speech – an apology for taking voters for mugs and a new way forward. In the case of a new way forward, we only got a brief glimpse of that, but again it was all vision and no substance. Again, something expected of Oppositions early on in a political term.
Shockingly, the Prime Minister also implied that voters were stupid and had encountered a “fit of absent-mindedness” in the Victorian and Queensland elections. Even a rookie politician knows that this kind of thought must not be put into words publicly, regardless of whether it is a correct observation or not.
It was tired precisely for the reasons mentioned above, in that there had been little thought and substance woven into the speech. And the Prime Minister looked tired too. There was very little energy put into the delivery, except when the PM mentioned the few things his government has actually done. The fact that this came so early gave the address a valedictory feel.
Tony Abbott has spoken multiple times of hitting the reset button. On each occasion he has instead forgotten that the metaphorical button ever existed in the first place. Today was another one of those days for the struggling leader.
Today could have been Tony Abbott’s last chance to save his leadership and he did a very poor job of fighting for it. Or perhaps he knows that he is a spent force and today’s speech was simply going through the motions. He did however imply that his colleagues would have a fight on their hands to unseat him.
It is pretty clear from some of the facial expressions of his colleagues, captured on film throughout the hour, that they had noticed his suboptimal performance too.
This morning I took to Twitter as I usually do throughout the day to keep an eye on the latest breaking news and information about politics and the world around us. Cruelly though, the first thing that caught my eye was a newly sent out tweet breaking the sad news that disability advocate and comedian Stella Young had passed away suddenly and unexpectedly on the weekend.
I had to do a double-take. Were my eyes really seeing what was on my phone screen? Still recovering from the tragic passing of Phillip Hughes, now I had to contemplate the loss of another prominent Australian figure. This time a little more personal.
A couple of years ago a report was released by PricewaterhouseCoopers about disability in Australia. It contained some truly distressing statistics in terms of employment and poverty among those with a disability in Australia.
At that point I had been writing for only a brief period of time, with nothing published aside from some thoughts on my own personal blog. I began to furiously write a piece railing against those terrible numbers.
I hammered that piece out in about 45 minutes and shot it off to The Drum, not knowing about the existence of Ramp Up at that time. A short time later I received an email from Stella introducing herself and offering to publish my angry rant on the ABC disability portal.
I had always been an advocate for people with disability, having been born with one myself. But the opportunity Stella gave me opened up a whole new avenue of advocacy I had never contemplated. It gave me the belief that my message, however small and insignificant, could help deliver change in the lives of those with a disability in Australia.
Stella was an intellectual giant – not just in the field of disability advocacy, but also comedy and feminism. She brought her thoughts and feelings to us with incisive wit and sharp and biting humour.
Issues related to disability are all too often overlooked, and people with a disability are often underestimated, even downright forgotten about.
I remarked today that there are two people in Australia I see as having had the biggest impact on disability politics in Australia in the 21st century – both in different ways, but both so important. Stella Young was one of those people. Assisted by the platform given to her by the ABC, but sadly taken away by a narrow-minded funding decision, disability suddenly had an energetic national voice aside from that of Bill Shorten – whose job it is as a politician to institute programs to help the vulnerable.
I never met Stella, but emailed her a number of times over a couple of years with pitches. She was enthusiastic and offered all-important constructive criticism. Despite that, I am deeply saddened by her sudden and unexpected passing.
Knowing her has helped me grow as a person. And her work will help the nation take a big leap forward.
Her voice and presence will be hard to replace. It will probably take a number of people to fill her shoes.
Sixteen weeks ago, Newcastle Knights’ player Alex McKinnon suffered a serious neck injury which has seen him confined to a wheelchair. The 22-year-old now has a long rehabilitation process ahead of him. Almost immediately after the incredibly rare, yet devastating event, the rugby league community, from the professional right through to the grassroots level, rallied around the rising star of the NRL whose whole life has now changed.
In a wonderful gesture, the Newcastle Knights – under financial strain – said that they would honour the rest of Alex McKinnon’s contract in order to assist Alex and his family with the long rehabilitation process. The NRL stepped up and delivered too. Alex McKinnon was graciously offered a job for life with the organisation, and a foundation was set up in his name.
But that was not all. Not that long ago the NRL said that Round 19 would be the #RiseForAlex round. The aim of the round: to raise funds for the foundation and for McKinnon. Another wonderful idea.
That round commenced on Friday night, with two very entertaining and high-scoring matches played. The two games so far were a celebration of rugby league, as much as they were a chance to help out a young man in need.
As I sat comfortably in my loungeroom, I began to ponder all things Alex McKinnon and all things NRL. It was a cathartic experience as I parsed through the thoughts I was having about what this tragedy, and the way the different actors have reacted, says about the NRL and its players. And there were thoughts about the man himself.
There might have also been a quiet tear or two. But they were happy tears. The Alex McKinnon situation resonated with me on a personal level.
Aside from a small issue I have grappled with in relation to the #RiseForAlex hashtag, and the contentious judiciary decision involving a Melbourne Storm player, the Knights and the rugby league community as a whole, not just the NRL, have conducted themselves admirably. Their actions soon after the full extent of the injury to Alex McKinnon was known could barely be faulted.
The one thing that I am still the tiniest bit unsure about is the wording of the hashtag. Is it a call to the community to get in and raise money? Or does it imply, in the tiniest way, that others have to help Alex and that he cannot help himself? I am probably over-thinking this. I have a tendency to do that. But nonetheless, the thought did cross my mind. Obviously though, I am not claiming there was any malicious intent. It’s just the case that words can have different meanings to different people.
Aside from my happiness at seeing the NRL community pull together, I also considered how Alex has so far dealt with what is the biggest challenge in his young life.
This is where it got really personal for me. I too have a disability. I was born with Spina Bifida and Hydrocephalus.
I thankfully still have the use of my legs. But life has not been without its challenges, though they pale in comparison with the struggles that Alex and his family are now enduring. Alex is the very personification for me of the old adage that there is always someone doing it tougher than you.
The way that he has dealt with his acquired injury – and in the public eye – is something to behold. I have known nothing but a life of disability and I struggled to come to terms with it, basically until I found swimming when I was about 11. A few months after his injury, Alex appears to be dealing with his far more severe disability in a much more positive way. Of course he admits there have been tough times, but it barely shows when you see his smiling face in the video updates.
A horrific event in rugby league has brought out the best in those involved in the game. And it appears it has brought out the best in the victim. Perhaps most importantly, the response of the broader community appears to have been quite significant based on early indications.
You cannot underestimate too, the effect this might have on the way that we as Australians view disability.
Today the Abbott Government were, 10 months after their election, able to see the repeal of the former Labor Government’s carbon tax pass through the Senate. Finally the Coalition was able to deliver on their most solemn commitment to the Australian people in 2013. It has not been an easy road to this point for the Coalition, not just in the area of carbon pricing, but in general. Understandably then, the relief of today’s events among Coalition MPs and Senators was palpable.
But not all political players were happy. The Greens led the way with the condemnation of the government, and understandably so. It was at their insistence that the former Labor Government introduced a price on carbon in return for their support in minority government. The ALP also voiced their concerns with the events of today. Their position is that Australia needs an Emissions Trading Scheme.
As often happens when controversial things occur in politics, there was not much restraint shown in the language used to describe what happened in Canberra. Hyperbole got a real workout. Both politicians and social media indulged in making hyperbolic statements.
The trouble is, whatever your viewpoint on this, or any other issue, hyperbole does little to further your cause. It makes you look overly emotional and can turn people off your cause. Simple language without outlandish claims works best when trying to communicate serious points. Few people like feeling as if they are being preached to. It is better to feel you are part of a solution than it is that you are part of a problem.
By far the most overblown and indeed overused claim today was that the repeal of the carbon tax would doom the planet. It was said by many that our children and their children should be told it was Tony Abbott and his government who should be held responsible for the state of the planet in their lifetime. This is just plain wrong.
What one nation does in isolation will not curb or exacerbate global warming in any significant way. What the international community as a whole chooses to do, or at least the vast majority of countries, will have an impact.
What one nation does in reversing action on curbing emissions will, on the other hand, have a significant impact on their own natural environment and the health of their citizens.
This so far might sound like an endorsement for so-called ‘direct action’. It is not. That policy is incredibly expensive.
What Australia needs is an Emissions Trading Scheme, or ETS. We almost had one not all that long ago. It was not perfect, but it was a very good start. And it would have saved a lot of political trouble for multiple players in the years after it was dumped. And it would have been reducing emissions long before Labor’s carbon tax began operating.
The debate around climate change and how to tackle it will continue. And that leaves open the possibility that minds will change. The key is that emotion is largely taken out of the debate, while still being able to calmly discuss the potential consequences of global inaction.
In political circles, s18c of the Racial Discrimination Act is one of the hottest topics. Out in the broader community, it is not exactly high on the agenda. But the government is seemingly moving towards repealing that section of the Act. Indeed, it was one of the commitments made by the government when in opposition.
If the government were to break their promise, and not repeal s18c, they would lose no political skin. The government is still talking about a repeal of s18c, though the final outcome may not end up being the removal of this part of the Act. There do appear to be mixed messages coming from the government.
Both sides of the debate have been passionately advocating their respective positions since the policy was announced. Sometimes that passion has been overly emotional. Nuanced and dispassionate consideration of the issue at hand has often been lacking, with the full repeal advocates and those in favour of the status quo being the loudest participants.
As you would imagine, the issue has been hotly debated on the various political panel shows for some time. And that debate has continued to accelerate in recent weeks, including on The Drum and Q&A last week.
There was a mostly mature discussion of the subject on both programs. The political class, the politicians in this case on Q&A, did get a little more emotional than those closer to the periphery of political debate, the guests on The Drum.
And then there was the social media commentary from the politically engaged. Twitter, as it often does, played host to a whole new level of angry and emotional consideration of the topic.
From the Twitter discussion last week, I learned that privileged, white, middle-aged males in particular have no right to take offense at any kind of jibes directed towards them. However everyone else is, in the eyes of a number of people on Twitter, allowed to seek comfort from the law. White privilege apparently means that no laws are required.
This is a problem. It is a problem because we live in what is supposed to be a liberal democracy. Granted we do not always get the application of liberal democratic values right in our society, but we are, for all intents and purposes, at the very least in name, a liberal democracy. That means that everyone is supposed to be equal before the law. Everyone is to be treated the same by and under the law.
When it comes to the section of the Racial Discrimination Act in question, I have been on quite a journey. I have held a few positions since the court case involving Andrew Bolt, which started us on the journey to the debate we are having at the present time.
At first my largely libertarian and liberal politics came to the fore. I thought that section of the Act just had to go because, well, free speech. It was a very absolute position. How could anything else possibly amount to free speech I thought.
Then I thought about it some more when I heard David Marr speaking on one of the panel shows on television. His position was that the part of the Act being debated should be altered.
At present, someone is in breach of the Racial Discrimination Act if they engage in behaviour which ‘offends, insults, humiliates or intimidates’.
David Marr has argued that the first two words, ‘offends’ and ‘insults’, are too subjective. The threshold there is indeed too low. A higher test should apply to the Act, and at the time I thought that Mr Marr’s thinking struck the right balance.
But again in recent days I have reconsidered my position. I have begun to think that the word ‘humiliates’ should be removed from the Act. The word seems to me to be so similar to the first two that it is an unnecessary part of the legal test for discrimination.
I do however think that the word ‘intimidates’ needs to be retained in the legislation. Essentially, racial discrimination and vilification in its purest sense is behaviour which intimidates the victim. It is the very foundation of true hate speech and has no part in a civilised society.
In short, we should have laws against hate speech. However, neither the status quo nor the proposed alternative is an adequate way of dealing with what is a very complex issue.
It is worthy to note too that no single characterisation of the Act, either considered here or elsewhere, will eradicate discrimination. However, a legal remedy must remain available for when discrimination and vilification has been found to have occurred.
Australia has now been to the polls. And as predicted we have elected a Coalition Government, ending six years of ALP rule that started and ended with Kevin Rudd, with a three-year stint from Julia Gillard in between.
Kevin Rudd has decided, not so gracefully, to exit stage left in terms of the Labor leadership and Tony Abbott is now PM-elect.
Whether Kevin Rudd decides to stick around for another 3 years on the backbench is another story. Going on history you would expect him to quit the parliament at some stage during this term – likely early on. A number of his current and former colleagues have less than subtly suggested he quit the parliament for the good of the party.
It was not a surprise that Labor lost and that the Coalition victory was significant. For most of the last three years the Liberal and National Party opposition have been ahead in the polls – at times way ahead. An Abbott-led opposition victory, apart from at the very start of Rudd redux and the very early stages of the election campaign, was a fait accompli.
It was not a surprise that the Coalition would pick up seats and that these would be mostly in the eastern states; Liberal and National Party seats were gained in all four eastern states: Queensland, New South Wales, Victoria and Tasmania.
With the retirement of the two Independent MPs who delivered Julia Gillard minority government, the Opposition started the campaign assured of picking up two electorates.
It was also probably not so much a surprise that a poor campaign performance from Greenway candidate Jaymes Diaz saw the Liberal Party fail to gain the seat of Greenway. At the same time, it was probably somewhat significant that Michelle Rowland actually had a pretty significant swing in her favour which should help out somewhat in future elections.
First of all, a significant feature of the results was that western Sydney did not bring anywhere near as much pain for the ALP as many of the polls had predicted. Western Sydney was largely expected to turn blue, well before the campaign even commenced. As noted though, it was not a surprise that Jaymes Diaz lost in Greenway.
It was a pretty significant surprise that Tasmania saw the biggest swing against the Australian Labor Party. It was also significant that other southern states saw a bigger swing to the Coalition than the more northern states of New South Wales and Queensland, where both the Liberal and National Party were expected to enjoy significant swings.
In Queensland, the LNP would have hoped, even expected to gain the seat of Lilley from former Treasurer Wayne Swan, but this did not eventuate. For much of the night it looked as if the LNP would not take any Labor seats in Queensland, but now it would appear they have picked up two. It would appear they have not gone too well in Fairfax, with Clive Palmer seemingly headed for a surprise victory.
It must be said that the result was probably closer than it would have been had Julia Gillard still been Prime Minister. The election results are still a disaster for Labor. But even the party faithful would be pretty happy that they did not lose a number of seats which had largely been written off by party tacticians.
So the Rudd return to the leadership was probably responsible for a minor improvement in the electoral standing of the ALP. However it was far from the political masterstroke that polls claimed it would be.
At various times throughout the night it was mooted by commentators, Labor, Liberal and those non-aligned, that a significant factor in the result was the instability within the ALP over the last three years. They would not be wrong on that assumption. Disunity is political suicide.
But those commenting, particularly from the Labor side, gave far too much attention to that single factor. Few were able to acknowledge that the Opposition was a united and sufficiently strong force. In doing so, the ALP implied that the electorate had chosen to vote for an Abbott Government based on one factor alone.
There were not just chaotic relationships within the ALP; there was also chaotic administration. Too much was rushed and there was not enough caution in the way the ALP governed. Australians appear to love big government in some ways and not others, and they also favour a fairly conservative style of governance. Labor did not deliver on the latter. And unless they realise that they have to be more cautious and circumspect in the future, they will continue to lose public support pretty swiftly.
The Coalition now has another three years to govern the country. And this new opportunity probably comes a term sooner than expected. The challenge will be to carefully set out and plan the agenda for the next three years and not repeat the same governance mistakes that Labor made. What will likely be the most conservative administration in our history is unlikely to make the same mistake.
Sorting out the budget as soon as possible is also likely to be a major challenge. The Coalition has come to recognise this in recent weeks, changing its stance on the surplus pledge.
There are an interesting three years ahead indeed.
The last three years in particular have been a time of much discussion and soul-searching within the Australian Labor Party. A little over three years ago a first-term PM was deposed with the aid of powerful factional forces and replaced with his deputy. The party vote plummeted not long after the 2010 election and after three years of internal chaos and division the vanquished Kevin Rudd was returned as Labor leader and Prime Minister by more than half the ALP caucus.
Upon his return – and leading up to it actually – the revived Prime Minister promised change. Kevin Rudd promised us that he had changed. He was no longer a micro-managing, frantic and overbearing leader of the Labor Party. Rudd also promised a slight policy shift in certain areas.
By far the biggest, most publicised element of Rudd’s change agenda is the internal reform proposals he has put forward since he was returned as Australia’s Prime Minister. These matters of Labor housekeeping include proposed changes to how the party selects and disposes of a leader and how a future Labor ministry will be picked.
There are of course changes which have been proposed as a result of the events in New South Wales, but this piece is not concerned with those proposed changes.
People in policy know of one basically universal rule which applies to policy decisions, and that is that there are almost always unintended consequences – pros and cons of almost every choice made. There are possible unintended consequences and negative outcomes from the ALP renewal proposals which Prime Minister Rudd will put to the party on July 22.
On the potential plus side, a PM free from the knife-wielding wrath of backbenchers with intense factional loyalties would ensure leadership stability and promote a feeling of certainty across the electorate at large – most importantly with the swinging voter who might have backed the party in at the ballot box.
On the face of it, it may not appear that there are downsides to Kevin Rudd’s announcement that a Labor Prime Minister elected by the people will not face the knife of backbenchers, except under extraordinary circumstances.
But there is a downside. A leader who becomes electorally toxic to the party would be next to impossible to remove, as the bar for removal is set quite high: a leader would only face removal after having brought the party into disrepute in the judgement of 75% of the caucus.
It is also rather difficult to argue against the idea that the rank-and-file members of the Australian Labor Party should have a fifty percent say in the election of a leader of the parliamentary arm of the party. The move is democratic, fair and rather unique in the Australian political environment, though whether it will result in more people rushing to join the ALP is less than clear.
On the downside, the process will be potentially expensive and would leave the party effectively leaderless for 30 days after a wrenching defeat.
With regard to the ideas put forward by Rudd on the leadership side of the equation, there have also been fears that branches will be stacked by unions trying to gain more influence under a slightly less union-friendly environment within the party organisation if these changes are successfully passed.
In terms of parliamentary reform, the other thing Rudd has proposed, which has been flagged for some time, is a restoration of the ability of the ALP caucus to decide who wins coveted ministerial positions.
With caucus able to determine the frontbench, there is the potential for less division within the caucus. Only those with majority support would be successful, leading to a stable team. At least that’s the theory.
With caucus again able to elect ministers, the factions are as important as ever. The powerful factions will dominate the ministry. Those with little factional loyalty, and even those more suitably qualified, may miss out on roles altogether, though the latter will happen regardless of the model for choosing the frontbench.
Kevin Rudd has probably moved as much as he could. What caucus decides will be keenly watched by political observers, though the whispers appear to indicate that the changes will be agreed to by the party room when it meets in a couple of weeks’ time. What the broader union movement feels and how they react will also be a point of interest.
Whatever the outcome, there are potential consequences, good and bad.
There has been a swift end to Mohamed Morsi’s presidency. After just one year, the democratically elected leader in Egypt has been turfed out of office by the military after a groundswell of protest against his rule in the fledgling democracy. There are no ifs or buts about it, the events of the last 24 hours were nothing less than a coup. There was no negotiated transition, instead, as is common in these situations, the military stepped in to ensure that the increasingly unpopular leader was removed from power – and not in a particularly democratic manner. And now an Egyptian judge, Adli Mansour will be interim president.
The events were truly astounding and no doubt troubling, at least for the Western world and Morsi’s supporters. But the events appear to have been potentially positive, despite the unseemly way in which President Morsi was dispatched from office. On the face of it, it seems that the majority of Egyptians are just satisfied that Mohamed Morsi is gone, and that they are not troubled with the method of his departure.
When examining events such as this, it is important to determine the good moves, the bad ones and to provide thoughts on what perhaps might have been a better idea.
There is precious little, at least in terms of individual elements, which is positive about what occurred in Egypt.
The protests, at least initially, were peaceful. People gathered in Tahrir Square, as they did before Hosni Mubarak was deposed in 2011. The numbers grew as days went by. But the last days in particular were marred by violence which claimed lives. There was also a disturbing number of sexual assaults reported.
It is positive, judging by the general reaction, that Mr Morsi is no longer in office. It appears that it is what the majority of people wanted.
But we can also count this as a negative. The former president was not voted out at an election, nor did he resign the presidency after seeing the widespread opposition to his rule. This was a coup by the military, albeit apparently responding to the will of most of the Egyptian people. Regardless, it is far from ideal for a democracy, especially one so young, to see events like this only a year after an election.
The formation of a “grand coalition” appears to be a move that the Egyptian military is willing to help foster and that is certainly positive in terms of helping to aid the transition back to democracy and, if sustainable, helpful for democratic consolidation in Egypt. There also has to be a strong opposition willing to be constructive and to adhere to the rule of law and other democratic ideals.
The arrest of former President Morsi and other officials was unnecessary and inflammatory. This might well provoke significant backlash from supporters of Morsi and would make constructive dialogue across the political divide very difficult. It could be a factor in creating a disenfranchised group in Egypt.
That’s what did happen, what was good and bad about the military backed revolution. What might have been better?
Even though it would have been almost impossible to force, there should have been an election. Ideally, Morsi should have called one when it became clear that support for his regime was falling apart. Or the people could have waited for an election, but there could well have been a significant political and social cost involved, and it is possible that it may never have eventuated.
The “grand coalition” idea might have been prosecuted better had it been something done while the status quo remained. At least though, it has a year to form and to attempt to find common ground across a range of different groups.
In moving forward toward elections in a year, proper attention needs to be paid not just to the future of Egypt, but also its history, both distant and the events of the last weeks and months.
Here we feature the very best mattress manufacturers in each category. Mattresses are shipped in a box direct from the factory, cutting out the middlemen so that you receive the best value possible.
Everybody has different reasons for purchasing a mattress. To make your selection easier, we have organized the mattresses by category. If you need more choices, there are also reviews below of the best mattresses in each category.
The DreamCloud is a medium-firm hybrid mattress with a mixture of pocketed coil springs, latex, and memory foam. The mattress has a luxurious feel and provides a high degree of relaxation, offering pressure relief and back support as well as fantastic motion isolation with added bounce. Compared to brands of similar quality, the DreamCloud mattress-in-a-box is great value for money.
As a luxury mattress, the DreamCloud is constructed with premium materials. It is durable, stable, and supportive, ensuring the bed will last for many years to come. The business offers a 365-night safe sleep trial in addition to a lifetime guarantee. This allows you to test the bed at home. If you are not satisfied, you can return it free of charge for a full refund.
The Alexander Signature is a memory foam mattress that offers luxury and durability at a competitive price. Produced with CertiPUR-US certified foams in the USA, the mattress is offered in two firmness options: medium or luxury firm. This makes the mattress ideal whether you like to sleep on your stomach, side, or back. It sleeps cool and offers great back support, pressure relief, and good motion isolation.
The Nectar is a reasonably priced memory foam mattress with a feel that suits all sleeping styles. The Nectar’s memory foam layers deliver a high degree of comfort and fantastic pressure relief. The bed is also good at keeping your spine in alignment whether you sleep on your side, back, or stomach. Because of this, the Nectar works well for reducing or eliminating localized or generalized pain.
As a mattress-in-a-box, the Nectar ships directly from the factory to your doorstep in 2 to 5 business days. This means you skip the middlemen and gain a well-made mattress at a reasonable price. The Nectar has received favorable reviews from customers, many of whom say the mattress has solved all their pain problems. Benefits include a lifetime guarantee and a 365-night risk-free trial.
For side sleeping, the DreamCloud is one of the most comfortable mattress-in-a-box brands available on the market. As a medium-firm hybrid, the DreamCloud has the benefits of a memory foam mattress with the support and response of pocketed coil springs. Therefore, if you’re a side sleeper needing a mattress to keep your shoulders, hips, and knees well-protected, the DreamCloud is a good option.
If you lie on your side on the DreamCloud, the memory foam will adapt to your body’s natural curves, while the pocketed coils will ensure your back remains in perfect alignment. This reduces back pain and alleviates aches for a better night’s sleep. As a premium mattress-in-a-box brand, the DreamCloud also comes with a lifetime guarantee and a 365-night risk-free sleep trial.
The Layla memory foam mattress offers two firmness options in one mattress: a soft side and a firm side. The soft side works particularly well if you prefer to sleep on your side. When you lie down, the Layla will cradle your hips and shoulders, reducing pressure while keeping your spine in alignment. However, if you find the soft side too plush, you can simply flip the mattress for a firmer feel.
The Alexander Signature is a multi-layer memory foam mattress that delivers high levels of comfort at a reasonable price. The mattress performs well in all areas, with good back support, pressure relief, motion isolation, and edge support. Because of this, you should find a vast improvement in the quality of your sleep and wake feeling rested with fewer aches and pains.
With a medium or luxury firm option, you can select the ideal level of firmness to suit your preferred sleeping position: back, side, or stomach. Gel-infused memory foam is used to regulate temperature, keeping you cooler on warmer nights. The mattress also features a plush quilted cover for extra luxury and comfort.
The 15-inch DreamCloud is a premium hybrid mattress combining high-quality materials in 8 distinct layers. The mattress has a luxurious look and feel, featuring a hand-tufted cashmere blend top, high-density memory foam, natural latex, and a 5-zone pocketed coil system. This premium combination provides superb comfort and a just-right feel however you like to sleep. The mattress has a medium firmness and good motion isolation, so if you sleep with a partner, you will feel less disturbance during the night.
The DreamCloud is also effective if you are a heavier individual and want pressure relief with enough support to keep you afloat on the mattress. The high-density memory foam will effortlessly ease pressure on your joints, while the latex and coil springs will ensure you never sink too far into the bed. Other noteworthy features include gel memory foam to keep you cool, a 365-night trial, and a lifetime warranty.
The Nectar is a medium-firm memory foam mattress offering high levels of comfort and support at a reasonable price. The bed uses a mixture of gel-infused memory foam layers, making sure your weight is evenly dispersed throughout the mattress surface. This provides a relaxing and cooler night’s sleep with profound compression support for key joint regions like your hips, shoulders, and knees.
With its multi-layer construction, the Nectar mattress supports different weight classes and accommodates all sleeping positions. Therefore, whether you sleep on your back, side, or stomach, you will feel comfortable and well-supported. A year-long risk-free trial period and a lifetime guarantee make the Nectar an affordable and popular choice.
This memory foam mattress has an ideal level of firmness: not too hard and not too soft. As an all-around mattress, the Nectar suits most individuals and will help ease your back pain whether you lie face up, face down, or on your side. The Nectar’s multiple gel memory foam layers offer a high level of support and stability, which works well if you suffer from lower, upper, or generalized back pain.
Should you sleep facing the ceiling, the memory foam will cradle your hips and lower back, but you will not sink too far down. For side sleeping, the mattress will adapt to the curves of your body while keeping your spine in alignment. Stomach sleeping is also a possibility on the Nectar, although if you’re a heavier person, you might require a firmer mattress. Benefits include a 365-night trial and a lifetime warranty.
Clinical studies have shown the Level Sleep’s TriSupport foam to be effective at reducing all kinds of back pain, whether localized or generalized. Apart from treating backache, the memory foam also brings pressure relief to your joints. The mattress is made from quality materials. The Level Sleep comes with a risk-free, 365-night trial, so you can test the mattress’s pain-relieving qualities in the comfort of your own home.
The Nest Alexander is a competitively priced, luxury memory foam mattress available in two firmness levels: medium and luxury firm. Made in the USA, the Signature uses CertiPUR-US certified memory foam that provides compression support for your joints. A phase change material is used to reduce heat and keep you cool. And if you sleep with a partner, the bed has low motion transfer, so you will experience less disturbance.
Nest Bedding is known within the industry for providing value-for-money, high-quality beds. The business provides friendly and efficient customer service, a lifetime guarantee, free shipping, and a 100-night trial, so you can see whether the mattress is right for you. If you are in the market for a memory foam mattress, the Nest Signature is a trusted purchase.
The Nectar is one of the most affordable memory foam beds on the market today. Despite its attractive price tag, the mattress uses high-quality, durable materials offering plenty of comfort and support. The bed has CertiPUR-US certified memory foams, a Tencel cover, and a medium firmness. This makes it cool and comfortable however you sleep during the night.
The Nectar ships direct from the factory, ensuring you get the best possible price. This makes the mattress far more affordable than brands of a comparable standard. A year-long, no-risk trial period is also available when you buy the Nectar. This lets you test the mattress over a period of 12 months so you can experience the benefits of memory foam for yourself.
A memory foam mattress with two firmness choices in one bed: the Layla has a soft side and a firm side so you can find your ideal comfort level. The mattress offers good support whether you sleep on your side, back, or stomach. Copper-infused memory foam helps transfer heat away from the bed, helping you stay cool, while a high-density base foam maintains stability and durability.
Since the Layla uses CertiPUR-US certified memory foam, the mattress contains no formaldehyde, ozone depleters, or chemical fire retardants. The copper used within the foam is antimicrobial, which prevents mold and germs from growing and prolongs the lifespan of the bed. A lifetime guarantee and durable USA construction add to the benefits of this memory foam mattress.
Combining the advantages of coil springs with layers of memory foam, the Nest Alexander Signature Hybrid brings high-end comfort and value for money. This luxury mattress has the bounce and support of a coil spring bed combined with the pressure-relieving qualities of high-density memory foam, making it a true all-around bed for individuals or couples. As a result, it works well for back, side, or stomach sleeping.
The Alexander Signature Hybrid’s multilayer construction includes copper- and gel-infused foam for superior heat dissipation, and a phase change fabric cover to quickly move heat away from the body. The coil spring system also helps air to circulate, keeping you cool when the temperature starts to rise. You also gain the advantages of a well-established company and a lifetime guarantee.
The DreamCloud mattress is a reliable investment if you are on the market for a highly durable, well-built mattress. The multi-layer construction will keep you supported even in the event that you occupy a more heavy weight class. The business is so confident in the quality of their craftsmanship that they provide a lifetime guarantee and a 365-night risk-free trial period.
The DreamCloud is a medium-firm, luxury hybrid featuring a mixture of coil springs, latex, memory foam, and premium materials. Designed for couples or individuals, the mattress brings luxury at a lower price than in-store brands of comparable quality. The bed is highly durable and luxurious, using soft yarns and a plush cashmere blend quilted cover.
With its medium-firm feel and hybrid configuration, the DreamCloud can accommodate all sleeping positions, so whether you want to sleep on your back, side, or stomach, the mattress will still feel comfortable and supportive. The bed also has plenty of bounce while maintaining good motion isolation. The DreamCloud is shipped in a box for convenience and comes with a lifetime warranty.
The Alexander Hybrid mattress from Nest Bedding combines memory foam layers with a pocketed coil spring system. Offered in soft, medium, and firm options, you can pick your ideal feel, although medium and firm are best if you are a larger person. The mattress has no weight limit and spreads your weight evenly across the surface, making it ideal if you’re on the heavier side.
In particular, the Alexander Hybrid benefits from strong edge support and low motion transfer. Hence, the mattress is recommended if you sleep with a partner who tosses and turns throughout the night. The combination of coils and memory foam absorbs movement, helping you both get a restful sleep. The mattress also includes a 100-night trial and a lifetime warranty, so you can test it risk-free.
The DreamCloud hybrid is a robust mattress-in-a-box, offering a medium-firm feel and quality construction. If you’re a heavy person and require a mattress that’s supportive but also offers pressure relief, the DreamCloud is a fantastic choice. The foam and latex layers are comfortable, bringing deep compression support. In addition, pocketed coil springs keep you well-supported, distributing your weight evenly across the bed’s surface. This means you will never sink too far into the mattress.
With its 15-inch height, the DreamCloud is ideal if you are a heavier individual. The mattress has been constructed with high-density foams and premium materials. Consequently, there is no weight limit on the mattress, and it will last for years. The business provides a lifetime guarantee and a risk-free, 365-night sleep trial, so you can test the mattress in your own home to decide if it is right for you. If you are not satisfied, you can return it free of charge during the trial period for a full refund.
If you are a heavy individual and require a luxurious but reasonably priced mattress, the Nest Alexander Signature is a good choice. Available in a medium or luxury firm firmness, the mattress features multiple high-density memory foams that ease pressure. The foams will cradle your entire body, while a sturdy 7-inch slab of base foam ensures you never sink too far into the mattress. This is useful if you are a big person and need proper spinal alignment.
The Eco Terra is a natural hybrid combining natural Talalay latex, wool, organic cotton, and encased coil springs. The mattress is available in a medium or medium-firm firmness, so it has a just-right feel that works well whether you like to sleep on your side, back, or stomach. Among the greatest things about the Eco Terra is its price tag: it is one of the most affordable latex hybrids on the market.
Since the mattress uses 100% organic latex, it offers lots of bounce and is highly responsive. The pocketed coil springs reduce motion transfer, while the latex comfort layer contours to your body, eases pressure, and helps keep you afloat. The latex and coil construction also ensures that this mattress sleeps cool. The Eco Terra includes a 15-year manufacturer warranty and a 90-night trial period.
The Nectar is an affordable memory foam mattress with a medium firmness. The mattress includes a plush, breathable cover, gel-infused memory foam to help keep you cool, and a base layer for maximum support and stability. If you require a mattress that conforms to your body shape and eases pain, the Nectar performs well. It also keeps you well-supported so you never get a sinking feeling. However you sleep, you should find the mattress comfortable and supportive.
A queen bed costs $699, making the Nectar among the best value-for-money memory foam mattresses-in-a-box. The bed has received CertiPUR-US certification, which ensures there are no ozone depleters, heavy metals, or chemical fire retardants present. A year-long trial, speedy shipping, and a lifetime warranty make the Nectar among the most economical memory foam mattresses available.
When you purchase the Love & Sleep, you're gaining a mattress from the well-established Nest Bedding company. This guarantees excellent customer support and durable materials. The company also provides a 100-night sleep trial and a lifetime warranty, so you can test the Love & Sleep in the comfort of your own home.
The Nectar is a cheap but well-built memory foam mattress with a medium firmness. If you sleep with a partner and require a mattress that works for all sleeping positions, the Nectar will ensure plenty of back support and pressure relief. As a memory foam bed, the Nectar also offers good motion isolation, which helps to minimize vibrations across the surface of the mattress. If you or your partner toss and turn on a regular basis, the Nectar can help you get a better night's sleep. There's some bounce, but not as much as on a hybrid or coil spring bed. Despite this, there is enough to satisfy most couples.
Despite its attractive price point, the Nectar has solid construction and sleeps cool thanks to gel-infused memory foam. It also benefits from non-toxic CertiPUR-US foams. This is ideal if you or your partner suffer from allergies or are concerned about chemical fire retardants. Other notable features of the Nectar bed include a 365-night risk-free trial and a lifetime warranty.
The Alexander Hybrid from Nest Bedding is a competitively priced, luxury hybrid mattress available in 3 firmness levels: soft, medium, and firm. Combining gel memory foam layers with pocketed coil springs, the mattress brings pressure relief and support, but also plenty of response and rebound for fun between the sheets. Additionally, the bed has good edge support and motion isolation, which are beneficial if you sleep as a couple.
When you purchase the Alexander Hybrid from Nest Bedding, you gain a mattress from a well-known brand. The company manufactures its beds at a purpose-built USA factory, ensuring you gain a high-quality mattress. As with all Nest mattresses, a lifetime warranty is included.
The Bear is a comfy and cooling mattress that uses memory foam said to be 7 times cooler than traditional foams. With a medium-firm feel, the Bear offers temperature regulation, body contouring, and pressure relief. A high-density base layer ensures that your spine remains supported no matter how you sleep.
The Eco Terra is a value-for-money hybrid that combines natural latex with encased coil springs. This brings good temperature regulation, keeping you cool on warm nights. Unlike traditional memory foam beds, which trap warmth, latex's open-cell nature allows for greater airflow, and the pocketed springs keep heat moving away from the mattress. All in all, this ensures you stay cooler for longer.
When combined with the natural breathability of an organic cotton cover, the Eco Terra is a trusted option if you're in the market for a hybrid bed that sleeps cool. Despite its eco-friendly construction, the Eco Terra is far less costly than in-store brands of comparable quality. It's also more affordable than competing online brands. You also benefit from a standard warranty and a 90-night risk-free trial.
Here we feature the very best mattress manufacturers. Mattresses are shipped in a box straight from the factory. By cutting out the middlemen, you receive the best value possible.
Everybody has different reasons for buying a new mattress. To help make your selection easier, we have organized the best mattresses by category. If you need more choices, you will find reviews below for the top mattresses in each category too.
The DreamCloud is a medium-firm hybrid with a mixture of memory foam, latex, and coil springs. The mattress has a luxurious feel and provides a high level of comfort, offering pressure relief and back support but also great motion isolation with bounce. Compared to manufacturers of a similar quality, the DreamCloud mattress-in-a-box is great value for money.
As a luxury mattress, the DreamCloud is constructed with premium materials. It is durable, stable, and supportive, ensuring the mattress will last for many years to come. The company offers a sleep trial in addition to a lifetime warranty, allowing you to test the bed in the comfort of your home. If you aren't pleased, you can return it for free for a full refund.
The Alexander Signature is a memory foam mattress that provides durability and luxury at a reasonable price. Produced in the USA using CertiPUR-US foams, the mattress is offered in two firmness options: medium or luxury firm. This makes the bed ideal if you like to sleep on your stomach, side, or back. It sleeps cool and provides excellent back support, pressure relief, and good motion isolation.
The Nectar is a reasonably priced memory foam mattress with a just-right feel that suits all sleeping styles. The Nectar's memory foam layers provide good pressure relief and a high level of comfort. The bed is also effective at keeping your spine in alignment when sleeping on your side, back, or stomach. As a result, the Nectar works well for reducing or removing localized or generalized pain.
As a mattress-in-a-box, the Nectar ships directly from the factory to your doorstep in two to five business days. This means you skip the middlemen and gain a well-made mattress at an affordable price. The Nectar has received positive reviews from customers, many of whom say the mattress has solved all their pain problems. Benefits include a 365-night risk-free trial and a lifetime warranty.
For side sleeping, the DreamCloud is one of the most comfortable mattress-in-a-box brands on the market. As a medium-firm hybrid mattress, the DreamCloud combines the benefits of memory foam with the support and response of pocketed coil springs. Consequently, if you're a side sleeper needing a mattress to keep your shoulders, hips, and knees well-protected, the DreamCloud is a good option.
If you lie on your side on the DreamCloud, the memory foam will adapt to your body's natural curves, while the pocketed coils will keep your back in excellent alignment. This minimizes back pain and alleviates aches and pains for a better night's sleep. Being a top-notch mattress-in-a-box brand, the DreamCloud also benefits from a lifetime warranty and a 365-night risk-free sleep trial.
The Layla memory foam mattress offers two firmness options in one bed: a soft side and a firm side. In particular, the soft side of the mattress works well if you prefer to sleep on your side. When you lie down, the Layla will cradle your hips and shoulders, reducing pressure while keeping your spine in alignment. If you find the soft side too plush, you can simply flip the mattress over for a firmer feel.
The Alexander Signature is a multi-layer memory foam mattress that delivers premium levels of comfort for a reasonable price. The mattress performs well in all areas, with great back support, pressure relief, low motion transfer, and edge support. As a result, you should notice a vast improvement in the quality of your sleep and wake feeling rested with fewer aches and pains.
With a medium or luxury firm option, you can select the perfect degree of firmness to suit your favorite sleeping position: back, side, or stomach. Gel-infused memory foam is used to regulate temperature, keeping you cooler on warmer nights. The mattress also features a plush quilted cover for extra luxury and comfort.
The 15-inch DreamCloud is a premium hybrid mattress combining high-quality materials in 8 distinct layers. The mattress has a luxurious look and feel, featuring a hand-tufted cashmere-blend top, high-density memory foam, natural latex, and a 5-zone pocketed coil system. This premium mix provides excellent comfort and a just-right feel no matter how you like to sleep. The mattress has a medium firmness and good motion isolation, so if you sleep with a partner, you will feel less disturbance during the night.
The DreamCloud is also effective if you are a heavier individual and want pressure relief with sufficient support to keep you afloat on the mattress. The high-density memory foam will effortlessly alleviate strain on your joints, while the latex and coil springs ensure you never sink too far into the mattress. Other notable features include gel memory foam to keep you cool, a 365-night trial, and a lifetime warranty.
The Nectar is a medium-firm memory foam mattress offering high levels of comfort and support at a reasonable price. The mattress uses a combination of gel-infused memory foam layers, ensuring that your weight is evenly dispersed across the mattress. This provides a relaxing and cooler night's sleep with deep compression support for crucial joint regions like your hips, shoulders, and knees.
With its multi-layer construction, the Nectar mattress supports different weight categories and accommodates all sleeping positions. No matter whether you sleep on your back, side, or stomach, you'll feel comfortable and well-supported. A year-long risk-free trial period and a lifetime warranty make the Nectar an affordable and popular choice.
This memory foam mattress has an ideal amount of firmness: not too hard and not too soft. As an all-purpose mattress, the Nectar suits most sleepers and will help to ease your pain whether you lie face up, face down, or on your side. The Nectar's multiple gel memory foam layers provide a high level of support and stability, which works well if you suffer from upper, lower, or generalized back pain.
The memory foam will cradle your hips and lower back if you sleep facing the ceiling, but you will not sink too far down. For side sleeping, the mattress will adapt to the curves of your body while keeping your spine aligned. Stomach sleeping is also possible on the Nectar, although if you are a large person, you might require a firmer mattress. Benefits include a lifetime warranty and a 365-night trial.
Clinical studies have shown the Level Sleep's TriSupport foam to be effective at reducing all types of back pain, whether localized or generalized. Besides treating backache, the memory foam also brings pressure relief to your joints. The mattress is made from quality, non-toxic foams in the United States. The Level Sleep comes with a 365-night risk-free trial, so you can test the bed's qualities in the comfort of your own home.
The Nest Alexander is a competitively priced, luxury memory foam mattress available in two firmness levels: medium and luxury firm. The Signature uses CertiPUR-US certified gel memory foam, bringing deep compression support to your joints. A phase-change material is used within the mattress to reduce heat and keep you cool. And if you sleep with a partner, the mattress has low motion transfer, so you will experience less interference during the night.
Nest Bedding is known for providing value for money. The company offers friendly and efficient customer support, a lifetime warranty, free shipping, and a 100-night trial, so you can see whether the mattress is right for you. The Nest Signature is a worthwhile buy if you're in the market for a memory foam mattress.
The Nectar is among the most affordable memory foam beds on the market today. Despite its low price tag, the mattress uses durable materials that provide lots of comfort and support. The bed has CertiPUR-US memory foams, a Tencel cover, and a just-right firmness. This makes it comfortable and cool no matter how you sleep during the night.
The Nectar ships direct from the factory, ensuring you get the best possible price. This makes the mattress far less expensive than brands of a comparable standard. A year-long trial period is also available when you purchase the Nectar, allowing you to test the mattress and experience the benefits of memory foam for yourself.
An award-winning memory foam mattress with two firmness options in one bed, the Layla has a soft side and a firm side so you can discover the ideal comfort level. The mattress provides good support whether you sleep on your side, back, or stomach. Copper-infused memory foam helps to move heat away from your mattress, assisting you to remain cool, while a high-density base foam maintains stability and strength.
Since the Layla uses CertiPUR-US accredited memory foam, the mattress contains no ozone-depleting materials, chemical flame retardants, or formaldehyde. The copper used within the foam is also antimicrobial, which prevents microbes and mold from growing, prolonging the lifespan of the mattress. A lifetime warranty and durable USA construction add to the benefits of this memory foam mattress.
Combining the benefits of memory foam and pocketed coil springs, the Nest Alexander Signature Hybrid brings high-end comfort and value for money. This luxury mattress has the bounce and support of a coil spring mattress plus the pressure-relieving qualities of high-density memory foam, making it a true all-around bed for individuals or couples. Consequently, it works well for side, back, or stomach sleeping.
The Alexander Signature Hybrid's multilayer construction includes copper- and gel-infused foam for efficient heat dispersal, plus a phase-change fabric cover to rapidly draw heat away from your body. The coil spring system also helps air to circulate throughout the bed, keeping you cool when the temperature starts to rise. You also gain the benefits of a lifetime warranty from a well-established company.
The DreamCloud mattress is a sound investment if you are in the market for an extremely durable mattress. The construction will keep you supported even if you occupy a heavier weight category. The company is so confident in the quality of its craftsmanship that it provides a lifetime warranty and a 365-night risk-free trial period.
The DreamCloud is a medium-firm, luxury hybrid that combines latex, memory foam, and coil springs with other high-quality materials. Designed for individuals or couples, the mattress brings high-end luxury at a more affordable price than brands of comparable quality. The mattress is highly durable and lavish, with soft yarns and a lavish cashmere-blend quilted cover.
With its medium-firm feel and hybrid configuration, the DreamCloud can accommodate all sleeping positions, so whether you like to sleep on your back, side, or stomach, the mattress will still feel comfortable and supportive. The bed also has plenty of bounce while keeping good levels of motion isolation. The DreamCloud is shipped in a box for convenience and comes with a lifetime warranty.
Developing custom web apps in Python/Django.
WebCase is a web-production studio with a main focus on developing web apps and start-ups using the Python programming language and the Django framework.
We have 23 employees in our company. We provide our clients with full-service development, starting with developing the specification file and ending with teaching our clients how to use the app or website.
Long-term experience allows us not just to develop websites on demand, but also to propose solutions and improvements that will make your product more effective. We always aspire to improve our skills and technologies. We apply the best experience from past projects and optimize all development processes in order to reach the best results.
The service provides you with the ability to make a request for cargo transportation, and also to view a catalogue of freighters.
This website serves 3 types of users: customers, customer-freighters, and partner organizations. For each of them, we have developed a dedicated profile type.
An easy filter, the ability to send messages, and public information about companies allow you to easily find clients or reliable partners.
Website development for a British startup that calls on people to share their knowledge, and encourages them to do so.
Content-sharing functionality (videos and articles), called Knode, was developed. It allows every single user to share something with others.
For users who have interesting learning material to share, a courses category was developed. Every user can add a course.
And if their courses prove interesting, they can earn good money from them.
The task delivered to us was to automate the company's working processes, simplify communication between employees, and centralize statistics, request control, and all accounting information.
In the statistics tab, functions for monitoring the company's status were developed, such as statistics on managers' work, marketing source statistics, and the company's financial balance with all incomes and expenses.
To ensure all cars are equipped with the needed inventory and consumables, questionnaire functionality was developed.
All current gas sales can be found on the home page, so every user gets fresh information as soon as they reach the website.
The background changes according to the time of day. The responsive layout developed by our front-end specialist allows any user on any mobile device to get current information from the website.
We have developed automatic user geolocation, the ability to draw a route on the map, and many filters to choose the right gas station for you. Users are able to find the closest gas station, as well as gas stations with the needed services and the lowest price.
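The "closest gas station" feature described above comes down to sorting stations by great-circle distance from the user's coordinates. Below is a minimal sketch of that idea in plain Python; the station data and field names are made up for illustration, and the actual WebCase implementation is not shown in this text:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def closest_stations(user_lat, user_lon, stations, limit=3):
    """Return the `limit` stations nearest to the user's position."""
    return sorted(
        stations,
        key=lambda s: haversine_km(user_lat, user_lon, s["lat"], s["lon"]),
    )[:limit]

# Hypothetical station records for demonstration only.
stations = [
    {"name": "A", "lat": 53.90, "lon": 27.56},
    {"name": "B", "lat": 53.95, "lon": 27.60},
    {"name": "C", "lat": 53.85, "lon": 27.50},
]
print(closest_stations(53.91, 27.57, stations, limit=1)[0]["name"])
```

In a real Django deployment this sort would usually be pushed into the database (e.g. with a geospatial extension) rather than done in Python, but the distance logic is the same.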
We also developed a map service connected to Google Maps. Users can read all the information about a gas station simply by clicking its marker on the map.
Ipass is a system which allows you to forget about any plastic cards.
This service gives you the opportunity to create discount and bonus cards, coupons, and tickets, and send them to the mobile wallet applications: Wallet on iOS and Pass Wallet on Android. You can send both new and existing cards.
A card is sent to the smartphone by SMS or e-mail. Advanced search filters and contacts organized by groups make it much easier to work with your cards and contacts.
Contacts can be easily imported from CSV files, and you can also create contacts directly on the website. There is an API for faster data exchange between users, which allows you to check important information about your cards online.
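As an illustration of a CSV-based contact import, here is a minimal sketch using only Python's standard library. The column names (`name`, `email`, `group`) are assumptions for the example; the actual Ipass schema is not documented here:

```python
import csv
import io

def parse_contacts(csv_text):
    """Parse CSV text into contact dicts, skipping rows without an email.

    The column names used here (name, email, group) are illustrative
    assumptions, not the real Ipass import format.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    contacts = []
    for row in reader:
        email = (row.get("email") or "").strip()
        if not email:
            continue  # an email address is required to deliver a card
        contacts.append({
            "name": (row.get("name") or "").strip(),
            "email": email,
            "group": (row.get("group") or "default").strip(),
        })
    return contacts

sample = "name,email,group\nAnna,anna@example.com,vip\nNo Email,,\n"
print(parse_contacts(sample))
```

Validating and skipping bad rows at import time, as above, keeps a bulk upload from failing entirely because of one malformed line.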
Cashback services are becoming more and more popular, and this is a nice example of one. Simple and catchy design. Easy registration via email or social networks. Only two clicks and you will get a percentage of your purchases back.
There is an easy, multifunctional personal profile, which allows you to withdraw your cashback in whichever way is most convenient for you. The “Favourites” tab will not let you forget any store that you would like to revisit later.
In sum, all the expected features on the site, such as filters, search, and promotional sliders, have made the website easy and comfortable to use.
E-commerce website for a sport-clothing manufacturing company with a wide shop network all over Austria.
The stylish design, done in the company's brand colors, is easy to remember and makes you want to interact with it.
On the product page, you can find chained-option functionality, which allows you to see how many products are available with the options you need.
For better control of product stock, .xls file import has been implemented on the website. This functionality is really important because of the online payment systems that have also been implemented on the website.
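Conceptually, a spreadsheet stock import reduces to merging (SKU, quantity) rows into the current inventory. Here is a hedged sketch of just the merge step, assuming the .xls rows have already been parsed (for example with a library such as xlrd); the SKU and quantity values are illustrative:

```python
def apply_stock_import(inventory, rows):
    """Merge imported (sku, quantity) pairs into the current inventory.

    Later rows for the same SKU win; quantities are clamped at zero so a
    bad spreadsheet value can never produce negative stock.
    """
    updated = dict(inventory)
    for sku, qty in rows:
        updated[sku] = max(0, int(qty))
    return updated

# Hypothetical data: current stock plus freshly imported spreadsheet rows.
current = {"tee-red-m": 4, "tee-red-l": 0}
imported = [("tee-red-m", 10), ("tee-blue-s", 3), ("tee-red-l", -1)]
print(apply_stock_import(current, imported))
```

Keeping the merge pure (it returns a new dict rather than mutating the input) makes it easy to validate the whole import before committing anything to the database.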
Besides that, the functionality that must be present on every e-commerce website has been implemented: a personal account (with order history and dynamic order status), a useful and simple filter, a feedback form, etc.
Co-workers use the resource as a system for managing property objects.
Specially for the catalog, we've developed a filter that can satisfy all client search requests.
Realtors have their own profiles on the website, allowing them to manage their property listings and edit their own information.
New design and functionality add-ons for the e-commerce website “Pani Yanovska”.
For a successful e-shop, good design and usability are very important. That is why this task was delivered to our team.
The first steps we took to reach this goal were a fully new look for the site and functionality add-ons such as an online payment system, a preview mechanic, etc.
Nowadays, given the hard competition between e-commerce sites, becoming the leader is possible only by adding things that give maximum usability to the client: 3D images, YML and XLS downloads and uploads, and also many small things that increase the website's usability.
"Their project management was quite comprehensive, flexible, and responsible."
Working from a project guidebook, WEBCASE built a user-friendly news platform and discussion forum. They handled HTML coding, web design, testing, and launch.
Several months after launch, the platform earned a strong reputation and enrolled in an official media pool. WEBCASE impressed with their flexible management and collaborative support. Although they sometimes compromised quality to meet deadlines, they communicated effectively to solve issues.
We publish news and information for the retail petroleum market and for filling stations in Belarus. Our main goal is to deliver correct and reliable data to our customers, including market events, analysis, reviews and ratings, and up-to-date price and discount information on fuel and petrol station services. We also serve as a discussion platform where our readers and clients can share their opinions with others. I am a concept idea creator and project owner.
For what projects/services did your company hire WEBCASE?
Since this was a startup, we needed a good web developer that could bring our ideas to life with our assistance and control. We developed a clear understanding of our needs and some detailed technical instructions. We wanted a product that was complete, fine-tuned, user-friendly, and innovative.
Before we launched our project, there were no complex information and media solutions in Belarus that focused on the retail petrol station business. Some journalists wrote articles on the problem, some media published fuel prices, and some people tried to publish an online map of Belarusian filling stations. Meanwhile, people were posting on forums to express their attitudes and share their experience.
Our principal goals were to connect those people and roll out a unified, convenient platform that could satisfy all parties. We needed to make modern, effective, functionally approved and efficient decisions about how to fulfill it. A lot of design and programming needed to be done.
We used a two-stage procedure to select a vendor. First, we announced an open tender on a special web portal where many web developers sought jobs. After choosing some potential vendors, we began separate negotiation processes to choose a winner.
Our project generated real interest among developers, so we got 25 or 30 applications in the first round and conducted negotiations with the five companies that had the best propositions. WEBCASE was one of these. Our main selection factors were terms, experience, cost, a portfolio from previous jobs, and initial realization propositions.
We selected a winner collectively. Employees responsible for the commercial, technical, design, and programming elements discussed the offers and voted on the best one.
In the beginning, we gave them our technical guidance and project brand book as regulation documents. We agreed to conditions, set terms, and signed papers to get started.
WEBCASE was responsible for a turnkey contract that included the web design, HTML coding, programming, testing, and initial launch of our server.
During the development process, we established a proper system of coordination and cooperation. WEBCASE consulted with us to determine how they could better realize some features on our web portal. They suggested solutions and gave us proper recommendations.
I don't know exactly how many people from WEBCASE were involved in our project. There were seven or eight people, ranging from executives to responsible officers, depending on the issues we discussed.
We maintained close contact with our project manager. When we needed consultations between specialists, they connected directly.
After launching the project, it took us about four months to get to the top of the Belarus retail petrol filling station business. A bit later we got an invitation to enroll in the governmental petroleum media pool. We're still novices in the market, with a lot of work to do, but our first results were amazing. People read our stories, use our services, and quote us. They like us as a special media resource.
Their project management was quite comprehensive, flexible, and responsible. It took both parties some time to work out the proper coordination and speak the same language, but once we had done that, we mostly understood each other. Sometimes they didn't comply with deadlines, but we preferred quality over raw products. We used Skype, email, and Google Tables for communication.
WEBCASE was very flexible and sensitive toward clients, so you can make a deal with them.
We would like them to be more attentive to details and finished results. They shouldn't push to meet deadlines in a way that makes quality suffer directly.
Overall Score There are always ways to improve.
Some issues had to be solved after results were delivered.
Why not? They are a good developer.
"They are very competitive and exemplify a high level of professionalism."
WEBCASE developed a CRM system and designed its UX/UI. They continue to work on creating new features and advise on technical issues.
With automation of various administrative tasks, the new CRM system accelerates the business process and broadens work opportunities. WEBCASE's timely workflow and high level of professionalism expedite project timelines and the creation of feature solutions.
I am the director of an international dating service.
We hired them to create a CRM system for our business.
We wanted to automate our business processes.
I found them through an internet search and they seemed the most professional.
WEBCASE first initiated a discovery phase to understand the purpose of the project and the functions we required. We worked together to design the UX/UI blueprints. After we approved everything, they developed the system in a span of a year because the system was multi-functional. Their team currently advises on technical issues and creates new features.
We work with one project manager, one designer, and two developers.
The CRM system automates most of our administrative process and makes our business much more efficient. We take on more business and can handle more tasks at the same time.
Our workflow is effective and convenient. They plan new features with us immediately and work quickly to create solutions for our requests.
They are very competitive and exemplify a high level of professionalism.
No, there are no areas to improve so far.
Overall Score They are amazing.
Their scheduling skills were exceptional.
Their cost was not cheap, but it was due to the nature of the project.
Their work is absolutely high quality.
WEBCASE has designed and developed a website that will allow users to compare prices from sellers and service providers. They’re now providing design work for a second startup initiative.
The website hasn’t launched yet, but initial user feedback from testing has been positive. WEBCASE delivers quickly, remains reactive, and offers a flexible price model that accommodates mid-project changes. Their high-quality output and honesty continue to strengthen the relationship.
I’m head of management of Nikar, a multinational Cyprus-based company that buys and sells property investments. People will be able to go to our website to check prices for goods they wish to purchase: property, cars, agricultural goods, commodities, services. We compare prices from different sellers and service providers.
What challenge were you trying to address with WEBCASE?
We contacted WEBCASE to develop our website around a year ago.
I believe they used Python for the programming. We explained our idea to them and then they designed the website. We negotiated the design with our partners, after which WEBCASE continued with the actual development. They’re also helping us with some design work for another one of our startups.
I’ve worked with their manager who handles the contract and does a great job.
How did you come to work with WEBCASE?
We didn’t know who to contact at first since we were a startup that didn’t know much about programming or web design. We found WEBCASE on a website for web designers.
We’ve spent between $70,000–$75,000 with them.
We started working with them in December 2016. The website is complete, but we haven’t launched yet because we’re still collecting data. We’re planning to launch in midsummer in Cyprus and Ukraine. We also plan on expanding to the rest of Europe and the U.S. at some point.
We don’t have metrics since we haven’t launched the website yet. We’ve tested the product with users and they’ve all been delighted. They finished the project really quickly, which was good for business. Their responsiveness is the most valuable thing about working with them.
How did WEBCASE perform from a project management standpoint?
We’re very happy. They do something immediately when we ask it of them with no problems. Their prices are also ideal. We communicate via Skype since we’re located across Ukraine, Moscow, and Cyprus. We haven’t had any communication problems since we both speak Russian.
They respond quickly and their delivery is high-quality. They don’t charge us extra and are honest about prices, which is great because you never know how a project is going to evolve. I would recommend them.
I recommend thorough planning prior to starting the work. If you make changes after starting a project, it might cause delays. Think about what you desire for an end product so that you don’t risk having to alter your schedule.
WEBCASE worked on a Django Python web portal, developing the layout design all the way to the programming of the server and its configuration. Following completion, they worked on improvements for 6 months.
The finished portal is still undergoing QA testing, with any bugs highlighted and then fixed by the WEBCASE team. The team is quick to learn and improve their skills, working patiently and considerately to complete the project within a reasonable budget and largely keeping to the agreed timeframe.
I am a director at CARGO-CARDS, which is a freight portal. We bring together shippers and logistics companies, with several categories on the website. We have also tried to create an efficient platform for communication and PR.
Our portal is complex on the backend. We needed a highly skilled team to meet our needs in terms of product functionality. This was our first attempt at developing this platform.
They implemented the project in its entirety, from the design of the layout to server programming and configuration.
We have a convenient system for searching cargo companies, as well as a complex system in place for registered users. There are elaborate rules for data access; some of it is for everyone, while some is for authorized users only.
Their team suggested we use the Django Python framework and so the entire project was done through that.
They assigned a layout designer, 3 programmers, and 2 project managers for our work. We mostly used Skype for communication, as well as email.
I found them online after looking for Ukrainian developers. I found several options and conducted negotiations with each of them. It was a difficult and time-consuming task, since there were multiple to choose from, but I think I made the right decision.
I have spent around $7,000 for this project.
We started working with them during summer 2015. The project lasted a year and a half, followed by a 6-month period for improvements. We have kept in touch with their team but the collaboration is not as sizeable as it was.
We spent the first few months fixing bugs and improving the search system of the portal, making it more convenient for users. All the small features which could be improved were addressed in this process.
We still need a couple of months to truly test the platform, since most of the functionalities have only been tested by myself so far. I am not a quality assurance engineer, but I’ve done this in order to save money. When I found bugs, I made a note of them and they were corrected by them.
I dealt with 2 project managers from their team and both were patient with my requests.
They did their best to deliver a good product. They are a growing company; I have seen their team develop their skills across the course of our 2 ½-year collaboration.
The schedule was pretty good. It took more time than planned but this was due to some feature corrections.
I got the best possible result for my money, especially since the budget was so low.
I have recommended their company and services. Not everything was perfect, but this is typical for real-life situations.
WebCase developed a workflow application using Python and is in the process of developing an online booking platform. They also developed a WordPress site and currently monitor Google Ads campaigns.
CFO, Local Moving Services Inc.
WebCase’s project management is seamless, and despite a vast time difference, they always respond as soon as they’re available. Their software is significantly streamlining workflow, which ultimately facilitates a heavier focus on customer satisfaction. They are currently an active contributor.
I’m the owner of a local moving company providing services in Los Angeles and the surrounding areas.
We’ve been in business for many years, so we know all moving companies have the same ways of communicating with customers. We were previously using Google Sheets to record all of our information, but we needed to have an internal system to streamline the process. We wanted to eliminate the nightmare of having to keep track of all the paper agreements customers signed when we moved them.
WebCase built a web-based application that allows us to internally send emails, keep computer files, and notify customers of their upcoming registrations. We’ve started using this app on tablets where customers can sign, pay, and receive a receipt via email.
To access our system, I go online and enter my login and password, then I am able to see all the information about my company, jobs, schedule, trucks, and employees. Everything is in the same place, which is a huge relief for us because we don’t need to use spreadsheets anymore.
We probably spent 4 months developing the system with WebCase, which included a lot of requirements, features, and changes. Everything is working now, and we have access to the application from our cell phones, computers, and tablets.
We’re also using WebCase for some additional projects, including online booking. We have a regular website for the moving company, and our next step is to integrate the system into the website. This will give our customers the option to make reservations, as well as choose dates and payment methods online, instead of calling into the office.
I believe WebCase built our app using Python, and the website was done using WordPress.
I worked with a project manager who communicated with the rest of the team. I prefer this method because I only have to speak to one person if I have a question, or if I notice an error on the website.
I decided to look for a company in Ukraine because of cost and the fact that there are a lot of professionals there. I went to freelance.ua, a Ukrainian website for freelancers, and posted the project file. Two or three companies contacted me, and I chose WebCase because of their high rating. We discussed my project needs, and they offered fair prices.
We’ve spent about $20,000 with them so far.
We started working on this system in September 2016, and it went live in November or December 2016. It took about 3 months to get from the first version to product completion, and we’ve made improvements thereafter. It’s been working fine since February 2017, and every month we have a new release, so our work together is ongoing.
There are two aspects to our project: Google Analytics for the website and the functionality of the internal system. We pay for Google Ads already, so using Google Analytics is a bargain for us and helps us focus on our online reputation, which is the most important aspect of our business.
We have a total of 35 employees who’re all using the internal system on their cell phones to check their salaries and job statuses. The movers can also use the app to access the customer’s information when they are sent for a job.
The web app has been working without any issues, and we don’t have any complaints. There are some things we don’t like, but we’ve been working on improvements over the last year, like adding new pictures and removing features.
I can write them a message in the morning, and as soon as they’re in the office, they respond. I’ve never had any problems with their communication or project management.
They’re doing well, and I haven’t had any problems with them. They have the same quality of work as US companies, but at lower prices.
My main concern has been the time difference. It’s not a problem for me to wait, but if they want to work with customers in the US, they need to change their shifts or hire people who will work at night so they can immediately reply to questions.
Overall Score I have no problems with them, so I have no reason to give them a lower rating.
WEBCASE built the website of an ed-tech platform using Django Python. They developed a complex site, rich in features, including courses and a social network.
The site built by WEBCASE is high-quality and polished. It was well-received by beta testers during its soft launch, receiving unanimously positive feedback. WEBCASE were flexible partners, meeting most deadlines and budget targets, and communicating effectively despite geographical distance.
KNOW is a learning, ed-tech platform that allows people to share their knowledge through short videos or text cards called nodes, which are 3 minutes or less, or 300 words or less. I am the founder of the company.
We were pivoting the organization toward the concept of nodes and knowledge-sharing, which hadn’t previously existed. We were looking to reach a younger audience in order to address the lack of attention to detail which is common nowadays.
The website brief was for something engaging and easy to use, but more complex than a WordPress site. Our target market of Muslim Millennials are savvy web users, familiar with the likes of Dropbox, Google, and YouTube, which are all more polished avenues. If we were to be a viable alternative, we needed a high-end solution which could compete.
WEBCASE built our website using Django Python. I am not an IT person, but I researched the technology and found that it was ideal. It’s dynamic and contemporary, and it met my criteria for building a robust platform. They suggested this technology and told me they were specialists in the field. Big names such as Dropbox and Pinterest use this technology, which gave me confidence as well.
The website they created is complex and feature-rich, with courses and a social network similar to Twitter.
I had requested some branding and app development work since this was the next step for us, but WEBCASE told me that they purely focus on websites, which is understandable.
I interacted with two people in particular: a salesperson/project manager (who was the main English speaker) and a developer.
I had been using a number of different companies and decided that I needed to take the project into my own hands. I posted it online on Upwork, and WEBCASE bid for it. The work was quite expensive compared to other groups on that platform, but the main thing for me was finding someone who could actually do the job. I received quotes for $3,000 to $4,000, but I didn’t mind paying extra in order to have confidence in the company.
They were very detailed in their proposal, describing what they would do and how they would do it. That made me realize they knew what they were doing. I had a Skype call with them, and they summed up everything they needed to say in 20 minutes. It was very efficient, and I was genuinely impressed.
So far, I’ve only paid for the first phase, which was 6,000 altogether [currency not stated]. The next phase will have a separate cost.
We started working with WEBCASE in January or February 2017. They’ve completed the website according to the current phase. The next one is planned, but we’re not at that stage yet.
Looking at the site, it’s clear to see that it’s well-produced and high-quality. The feedback has been unanimously positive from the few people I’ve sent it to. We’ve only had a soft launch to our own list of beta testers.
In terms of metrics, we’ve posted 300–400 pieces of content in a couple of months and have over 100 organic users, without any marketing. The numbers are slow, but that’s because we haven’t launched yet.
Their native language is Russian, but there are English speakers in the team, so communication wasn’t too difficult. They are better in written than spoken English, but when we did have to speak, it was fine. A few things could get lost in translation, so we had to paraphrase and be sure to make instructions very clear. For example, there’s a Facebook feature called Instant Articles, by which a user can click on an article and have it come up immediately. This was quite difficult to explain to them, although it’s quite a normal feature here. I’d give them a 7 or 8 out of 10, in this respect.
The time difference wasn’t too bad as they’re only one to two hours ahead of the UK. I often work on weekends, and the project manager did message me regularly on the weekend, but told me that the developers only work during the week. It was good to know that he was available, and I can't hold those working hours against anyone.
I’ve worked on a number of different businesses in the past, and have used other freelancers and companies. With WEBCASE, everything was very smooth. The website had been two years in the making, and they took it on, delivering within three months. Many others had struggled with it but with WEBCASE, I had instant confidence. They were structured in their approach and gave me set deadlines and costs.
Although I had an idea in mind about the site when I began with them, my ideas evolved during the project and they were flexible about that. They incorporated the new work into the base cost, as long as it wasn’t significant.
The other thing I was most impressed with was that they are solution-focused. It may sound like a cliché, but I’ve worked with many different agencies and wanted to find someone who could take the problem and sort it out. As an example, some of our wireframes and designs were missing, so, when creating a page, WEBCASE offered to simply design those elements themselves.
There were a couple of times when the project phase didn’t meet deadlines. The main scope was broken down into milestones, and, instead of finishing in March, it was at the end of April or early May. Some of this was down to me, but it was also down to them. WEBCASE may have underestimated the complexity of certain things.
Capability-wise, WEBCASE is a competent group, but the caveat is that I managed them well, whereas someone more hands-off might not have.
Overall Score There were no major problems at all.
Most things were good, but a few items slipped. That was understandable given the nature of the project.
I received quotes from a few British and American firms, which were in the five-figure range.
They can handle complex briefs, and deliver to high standards.
WEBCASE rapidly developed and tested a start-up web-based application made to organize B2C loyalty rewards.
Quick turnaround, consistent availability, and organized project management allowed WEBCASE to deliver an outstanding product.
I am the founder of iPass. We provide electronic loyalty cards for companies. After noticing I had an excessive amount of plastic cards, I came up with the idea for the service.
We needed help developing our web-based app. I didn’t know how to implement my ideas.
WEBCASE helped flesh out our ideas by developing and testing our web-based application. We allowed 6 months for the project, but they finished it faster than that. The product is complete, but we haven’t launched it yet because we still need to do some marketing. We are now working on our website.
My colleague found WEBCASE on a freelancing site, and they were the first team we chose. We explained the scope of the project, and after they contacted us, we had some meetings to lay out planning. They gave us a fully working program service.
We worked with them from May 2016 to September 2016.
I will have some Google Analytics data in the future, but so far, I’m impressed with WEBCASE’s performance. Their planning, scoping, and development processes are great, and they actually finished the project under budget with time to spare.
They responded quickly to my questions, and they were always available via phone or Skype to help with any issues.
I faced a lot of problems when testing this service, but they were always available to help.
I can't think of any areas for improvement.
They finished the project much faster than I expected.
It was great because it cost less than I expected.
They gave us stable solutions for our issues.
I think I will work with them in the future, and will definitely suggest them to all my friends.
"[W]e were glad to work with WEBCASE."
WEBCASE developed an e-commerce website for a Ukrainian furniture company.
The site was successfully delivered and met all expectations, including mobile adaptability. The team’s ability to meet strict deadlines stood out.
I am the internet marketer for a furniture company called Dom Mebeli.
We hired WEBCASE to develop our online store.
We wanted a ready-to-use, competitive site.
We created a project application that we posted online. After a difficult selection process, we chose WEBCASE.
Our company has 11 storefronts in the Zaporozhye region, but we wanted to enter the wider Ukrainian market with a good e-commerce site. We wanted the site to be modern, convenient, and adapted for mobile devices.
We worked with a designer, marketer, programmer, and project manager.
All our expectations were met. The site was designed according to our preferences, and we were glad to work with WEBCASE.
We were satisfied with the work of the project manager.
All tasks were completed exactly by the set deadlines.
A disposable integrated miniaturized array of chemical sensors for analyzing concurrently a number of analytes in a fluid sample is described. Each sensor is a complete electrochemical cell consisting of its own individual reference and indicator electrodes and is selective with respect to a particular analyte in the sample.
Where required, a sensor can be individually calibrated, such that each analysis can be read directly.
This invention relates to an article of manufacture and, more particularly, to an integrated array of chemical sensors for rapidly performing concurrent multiple analyses of analytes in a fluid sample.
In the past, multiple chemical assays have been performed on biological fluid samples such as whole blood, plasma, or serum. Generally, such testing has been carried out by continuous-flow systems such as those shown in the U.S. Patents to: L. T. Skeggs, U.S. Pat. No. 2,797,149, issued June 25, 1957; L. T. Skeggs-E. C. Whitehead-W. J. Smythe-J. Isreeli-M. H. Pelavin, U.S. Pat. No. 3,241,432, issued Mar. 22, 1966; W. J. Smythe-M. H. Shamos, U.S. Pat. No. 3,479,141, issued Nov. 18, 1969; and M. H. Shamos-W. J. Smythe, U.S. Pat. No. 3,518,009, issued June 30, 1970; all assigned to a common assignee.
Also, chemical testing of ionic analytes has been performed in an automated fashion using thin films of material, such as shown in the U.S. Pat. No. 4,053,381, issued Oct. 11, 1977 to Hamblen et al.
In order to perform blood testing, however, a great number and variety of tests have to be made. This naturally requires many electrochemical cells of different structures and chemistries. There is little saving in time, sample size, or money in performing each test separately. Rapid and cost-effective methods require a simultaneous analysis of all the analytes in a fluid sample. Emphasis must also be directed to reducing the sample size, preferably to a few drops or less of blood, to minimize demands on the subject, e.g., in the case of infants.
A device that suggests an integrated circuit approach for the testing of a variety of blood analytes in a sample is shown in U.S. Pat. No. 4,020,830 issued to C. C. Johnson et al on May 3, 1977. This device features an integrated array of field effect transistors (FETs), each designed as a discrete sensor. While this is a valid approach to the automated testing of blood samples, certain shortcomings are inherent in this technique.
(a) Only ion-selective FETs have been successfully and reliably demonstrated. When designed to measure non-ionic analytes, the FET structure becomes very complex, because an additional electrochemical cell must be placed at the gate electrode of the FET to influence the measured drain current. This measurement, however, requires a constant current source in addition to the cell, FET, and external reference electrode.
(b) Instability in any component will naturally cause fluctuations in the drain current and, hence, errors in the measurement of the analyte. In addition, the proposed enzyme and immuno FETs have polymer layers, where concurrent processes such as adsorption and ionic double layer capacitance changes can affect the electric field at the gate of the FETs. Extraneous electric fields are also produced at the fringes of the gate area. These effects will likewise cause errors in the analysis of the analytes.
(c) The need for an external reference electrode when measuring non-ionic analytes complicates the integration of a FET array.
(d) FETs will only detect a charged molecule, i.e., an ion. Non-charged analytes do not influence the gate voltage in an interference-free manner. Hence, analytes which can be successfully analyzed are limited.
However, the semiconductor fabrication technology is so advanced that very precise miniature devices can be easily and cheaply manufactured. Furthermore, precedence has been established for superior stability, reproducibility and sensitivity. Hence, this invention seeks to combine the best attributes of two technologies (electrochemistry and semiconductors) to achieve integration of sensors without the drawbacks and limitations of the FET approach.
The present invention contemplates the structure and fabrication of a micro-miniaturized, multi-functional, electrochemical, integrated circuit chip or array of improved electrochemical sensors. This circuit chip requires a minimal sample volume to effect the simultaneous analysis of a plurality of analytes in on-site fashion. In addition, immediate analysis will be afforded by use of this circuit chip, which can be easily analyzed, or "read out", by a small, hand-held analyzer or computer at the scene of an emergency or at a patient's bedside. As the circuit chip is relatively inexpensive, it may be disposable. Since the sample can be whole blood, sample handling by the user is minimized. Also, as a plurality of analytes can be simultaneously analyzed, requiring only a minimum volume of blood sample, e.g., one drop or less of fluid, the advantages to be gained by the invention are considerable.
This invention relates to a micro-miniaturized, multi-functional, electro-chemical, integrated circuit chip of electro-chemical sensors for analyzing concurrently a plurality of analytes in a minimal sample volume. The circuit chip comprises a substrate supporting a plurality of individual sensors arranged in a dense but discrete relationship to form an integrated array. Unlike integrated sensor arrays of the prior art, which provide a single common reference electrode, the present invention appreciates that a more reliable analysis results when each electro-chemical sensor has its own reference electrode. Normally, it would be expected that the use of separate reference electrodes for each sensor is an unnecessary duplication of components. The present invention, however, achieves this result while providing a more compact chip, which is of a relatively simple fabrication.
The circuit chips may be a combination of any one or more of three types of electro-chemical cells: (a) a current measuring cell; (b) a potential measuring cell; or (c) a kinetic rate measuring cell. Some of the electro-chemical sensors will be ion-selective and adapted to measure ions, such as Na+ or K+, potentiometrically. Other sensors may be adapted to measure a redox reaction for the detection of glucose, LDH, etc., by amperometric/voltammetric methods.
In one embodiment of the invention, a small, hand-held computer is used to analyze, or "read out", and display the measurements of each of a plurality of analytes in the fluid sample.
While it has been suggested in the prior art to fabricate integrated circuits using semiconductor techniques, as illustrated by the prior-mentioned U.S. Pat. No. 4,020,830, it is believed to be the first time an integrated circuit chip of this kind, consisting of various conventional-type electro-chemical sensors, has been so constructed. In addition, the invention teaches improvements in construction, performance, reliability, and convenience for these sensing elements.
Each electro-chemical sensor is selective with respect to only one analyte. For example, such selectivity is achieved by providing each sensor with a first porous medium or gel layer containing an immobilized enzyme, specific for only one analyte in the sample. This first porous layer is combined, in some cases, with a second porous filtering layer to selectively screen the fluid sample for a particular analyte. In other cases, the first porous layer functions as a filter to extract the desired analyte from the fluid sample. The first porous layer may also contain a substance to extract the particular analyte and/or make the analyte more soluble in the porous medium, such that the analyte will prefer the porous medium to that of the fluid sample.
A barrier or encapsulating layer is provided for the circuit chip to preserve its shelf life and to protect against environmental or external contamination. In one embodiment, the encapsulating layer can comprise a tear-away, impermeable cover or mantle. In another embodiment, the barrier layer can comprise a semi-permeable filter layer for preventing contamination and for removing high molecular weight molecules or other particles that may interfere with the chemical analyses of the fluid sample, e.g., red cells in whole blood.
Electrical isolation is accomplished by designing each electro-chemical sensor in the array to have its own specific reference electrode and by electrically isolating the electro-chemical sensor.
(g) a protective barrier is then placed over the sensors.
(a) The circuit chip is intended as a disposable device, and, therefore, does not suffer from "prior sample memory" problems associated with prior art electro-chemical sensors.
(b) Where required, the electro-chemical sensors include a self-contained calibrating solution to stabilize the sensor's particular chemical activity. The calibrating solution may contain a known quantity of analyte and may be impregnated in one of the porous layers of the electro-chemical sensor, which is adapted to minimize capacitive and impedance effects, and eliminates the need for calibrating each test in every sample. For example, in the measurement of potassium, two identical potassium sensing electrodes are incorporated in a single sensor structure and used in a differential mode such that external reference electrodes are not required. The layer of the sensor contacting the sample and associated with the sample sensing electrode contains a low concentration of potassium ion (e.g., 1.0 mEq./L.). The layer associated with the other electrode, which is not in contact with the sample, contains a high concentration of potassium ion (e.g., 5.0 mEq./L.). The difference in potassium ion concentration allows calibration of the sensor for sensitivity prior to sample introduction while the differential EMF measurement procedure minimizes signal drift during sample measurement.
In a sensor for the measurement of BUN, as another example, appropriate layers are similarly impregnated with high and low concentrations of NH4 +. Additional NH4 + generated by the urease gel layer results in a change in the differential signal. The self-calibrating sensors also provide ease of fabrication of the circuit chip by reducing the manufacturing tolerances required for the gel layers and electrode structures, because electrodes realistically can never be perfectly matched.
(c) The self-contained integrated structure of electro-chemical sensors, each including its own reference electrode, disposed and interconnected on a common substrate eliminates effects common to other multiple-sensor arrangements, such as liquid junction effects, electrolyte streaming effects and electro-kinetic phenomena. In addition such structure is more compact and easily fabricated.
(d) The barrier layer or encapsulation ensures that the circuit chip can have an extended shelf-life by preventing environmental and external contamination.
(e) Signal-to-noise characteristics are improved, as noise sources are eliminated.
(f) Chemical noise is minimized by confining substances to polymer or gel layers.
(g) Thermal and mass transport gradients are minimized by the commonality of substrates, construction materials, and the miniaturization of the sensing elements.
(h) Each circuit chip is made to interface with a small, hand-held computer, by means of snap-in connections, thus providing on site analyzing convenience and portability.
(2) electrically monitoring the enzyme reaction to control the generation of the reactant and establish the steady state condition.
The method and apparatus also features: controlling the concentration of a reactant of the enzyme reaction in accordance with the quantity of enzyme in the sample, wherein a steady state condition is rapidly achieved, and then measuring the reaction rate from the steady state condition to determine the activity of the enzyme.
The new sensor construction capable of performing this new technique includes: a generating electrode, a monitoring electrode and a reaction medium disposed therebetween. The steady state is achieved as a result of the rate of reagent formation and rate of depletion by the enzyme reaction.
It is yet a further object of this invention to provide an improved article of manufacture and apparatus for testing blood which features portability, convenience and extremely low cost.
FIG. 14 is an enlarged plan view of the film shown in FIG. 13.
Generally speaking, the invention is for an article of manufacture and an apparatus for analyzing fluid samples containing a number of analytes.
While the invention is primarily directed and described with reference to blood analyses, it should be understood that a great variety of fluid samples can be analyzed by modifying the sensor chemistries.
Referring to FIGS. 1 and 1a, a circuit chip 10 for analyzing a fluid sample is shown in an enlarged view. The chip 10 is disposed within a hand-held tray support 11. The chip 10 and tray support 11 are both covered by an encapsulating barrier 12 that can either be in the form of a peel-off layer 12a of FIG. 1, or a severable encapsulation envelope 12b of FIG. 1a. The barrier layer 12 may also take the form of a built-in semi-permeable layer or membrane 12c of FIGS. 1a and 2. The semi-permeable membrane 12c may also act as a filter, for removing high molecular weight molecules or particles, such as red blood cells. The barrier, regardless of structure, excludes contaminants from chip 10, and thus preserves its reliability and shelf-life. The circuit chip 10 is composed of an array or plurality of spaced-apart sensors 14, which may be planar shaped or designed as miniature cups or wells to receive a drop of blood 13 deposited on the chip 10, as illustrated in FIG. 2. Each sensor 14 is designed and constructed to be specific to a particular analyte in the fluid blood sample 13. This is generally achieved by including within each sensor 14, an enzyme or catalyst that initiates a characteristic reaction. The particular chemistries, reagents, materials, and constructions for each sensor 14 are described in more detail hereinafter.
The hand-held support 11 for the chip 10 comprises a flat base surface 15 and vertically tapered side walls 16 extending from surface 15 for supporting the chip 10 and directing fluid sample 13 into wetting contact with chip 10 and sensors 14. The side walls 16 may be coated with hydrophobic material and serve as a sample confining structure. These side walls 16 define a perimeter of the chip circuit and the outer boundaries of liquid/chip contact.
Obviously, other designs are achievable within the objectives set forth above, such as, for example, a circular retaining well to replace the square-shaped well defined by walls 16, or a planar boundary wall flush with the surface of the chip (not shown).
The tray support 11 and chip 10 are designed to hold a small volume of sample fluid, i.e., one drop or less. Thus, a finger 17 can be placed directly over the chip 10 and pricked so as to dispense a drop of blood 13 directly onto the chip, as illustrated in FIG. 2. The blood drop 13 spreads over the entire chip 10, to simultaneously wet all sensor sites 14. Because chip 10 is miniaturized, a minimal amount of blood sample will coat the entire sensor surface 18.
Each electro-chemical sensor 14 has a different number of electrodes 22 (FIGS. 5, 8 and 8a) depending upon whether its chemical reaction is measurable as a kinetic rate, a current change or a potential change. The electrodes 22 of each sensor 14 are deposited upon a common substrate 20 of the chip 10, as shown in FIGS. 7a-7d, 8 and 8a, so as to provide a compact and easily fabricated structure. An interconnection circuit 24 is deposited on the opposite side of the common substrate 20 to which all the electrodes 22 are electrically connected, as illustrated in FIGS. 8 and 8a. The use of two surfaces of a common substrate 20 for all the electrodes 22 of each sensor 14 and the signal receiving wires 25 of circuit 24 (FIG. 8a) provides a self-contained, integrated array of sensors 14 unique to chip constructions of this type.
FIG. 4 shows a greatly enlarged schematic plan view of a chip 10 having a typical sensor array. Sixteen sensor sites 14 are depicted, by way of illustration. Each sensor 14 may be symmetrically spaced-apart from the other sensors 14, but this symmetry is not a functional necessity. Each sensor 14 has a group of electrical interconnectors 25 (FIGS. 4 and 4a) forming part of the interconnection circuit 24. The number of interconnections 25a, 25b, 25c, 25d, etc. for each sensor 14 in a typical sensor row, as shown in FIG. 6, depends upon the type of sensor 14a, 14b, 14c, and 14d, (FIGS. 5 and 6), respectively, being interconnected, as will be described in more detail hereinafter.
The interconnectors 25 each terminate in an electrical connection 27 projecting from the end 26 of chip 10 (FIGS. 1, 3 and 4), which is adapted to mate with a snap-in electrical connector 28 disposed in slot 29 of an analyzing device 30. The connection 27 of chip 10 overhangs the tray 11, as illustrated, and includes a slot 31 for keying into connector 28 of analyzer 30.
The analyzing device 30 (FIGS. 3 and 3a) receives the electrical inputs from each sensor 14 on chip 10 via the snap-in connector 28. Analyzing device 30 may be a hand-held computer, with a keyboard 32 and a display 33. A print-out 34 may also be provided, as shown. Certain keys 35 of keyboard 32, when depressed, interrogate a particular sensor 14 of chip 10. Other keys 35 are adapted to initiate a programmed sequence, such as a test grouping, system calibration, sensor calibration, etc. The analysis of the blood sample 13 for a particular analyte is initiated by depression of a selected key 35 and the result is displayed in display window 33. The signal processing by the analysis device 30 is explained hereinafter with reference to FIGS. 9, 10, and 10a.
Referring to FIG. 8, a perspective cutaway view of a typical sensor site is shown. First, substrate 20 is press-formed from powdered alumina. The appropriate thru-holes 48 for each sensor site 14 are defined in substrate 20. Horizontal surfaces 41 and 45 define a typical electrode area. On the bottom surface 45 of substrate 20, the interconnection circuit 24 is deposited by conventional photoresist etching techniques. Holes 48 are filled with electrode conductor material, such as pyrolytic carbon, to provide electrical connection between surfaces 41 and 45 of substrate 20. The deposition of the pyrolytic carbon is conventionally effected by an appropriate masking technique.
Interconnection circuit 24, containing connectors 25 for connecting electrodes 22 in each sensor site 14, is formed over surface 45 of substrate 20. A thin coat 46 of epoxy is laid over surface 45 to protect the interconnection circuit 24.
On the upper surface 41, a layer 50 of thermoplastic material is then deposited to form the necessary well-shaped sensor sites 14, as defined by surfaces 16, 40, 42 and 43. In some cases, (FIG. 7b) sensor construction requires photoresist layers 44 prior to the thermoplastic well formation.
Next, the chemical layers are formed at each sensor site 14 by depositing layers 51, 52, 53, 54, etc. After layers 51, 52, 53, 54, etc. have been deposited, the chip 10, with the exception of the contact area 18 defined by borders 60 (FIGS. 1a and 2), is coated with an epoxy or thermoplastic layer 12b defining a support tray 11. A protective semi-permeable barrier layer 12c is then deposited over the blood contact area 18. If desired, the entire chip 10 and tray 11 may be overlaid with the aforementioned tear-away impermeable layer 12a of FIG. 1, or the encapsulation envelope 12b of FIG. 1a.
Now referring to FIGS. 5, 6, and 7a through 7d, a typical row of sensors 14a, 14b, 14c, and 14d are respectively illustrated to describe four different basic sensor electro-chemistries. Each of the sensors 14a, 14b, 14c, and 14d have electro-chemistries which will apply to the other similar sensors upon chip 10 and with respect to other contemplated analytes being assayed.
The sensor 14a shows a sensor construction for measuring glucose (GLU) in the blood sample. The glucose in the blood will permeate and filter through the barrier layer 12c and a further cellulose filtering layer 70, respectively, and then diffuse into a polymer or gel layer 71a containing the enzyme glucose oxidase. Hydrogen peroxide is produced in layer 71a from the enzyme-catalyzed oxidation of glucose within the polymer layer. The hydrogen peroxide diffuses through layer 71a to the surface 22a of electrode 72a. The concentration of the hydrogen peroxide is monitored by measuring the anodic current produced at electrodes 72a by the electro-oxidation of hydrogen peroxide at +0.7 volts vs. silver/silver chloride reference electrode as applied at electrodes 72b vs. 72c and 72a vs. 72c. Alternatively, the total anodic charge may be measured. Layer 71b is similar to layer 71a, but does not contain the enzyme glucose oxidase. Therefore, as glucose diffuses through layers 12c and 70 into layer 71b, no reaction will be monitored at electrode surface 22b of electrode 72b. This electrode 72b acts as an error correcting electrode. The signal from electrode surface 22b will be subtracted from the signal of electrode surface 22a by differential measurement to eliminate other oxidizable interferences in the blood sample.
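The differential error correction of sensor 14a amounts to a simple subtraction of the blank electrode's signal from the active electrode's signal. A minimal Python sketch illustrates the idea; the function name and current values are our own illustrative assumptions, not figures from the patent:

```python
def corrected_signal(i_active_na, i_blank_na):
    """Differential measurement: the enzyme-layer electrode (72a) sees the
    glucose-derived hydrogen peroxide current plus any other oxidizable
    interferents; the no-enzyme electrode (72b) sees only the interferents.
    Subtracting the two cancels the interference."""
    return i_active_na - i_blank_na

# Hypothetical anodic currents in nA: 120 nA from H2O2 plus 15 nA of interference
print(corrected_signal(120.0 + 15.0, 15.0))  # 120.0 nA of true glucose signal
```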
The reference electrode 72c extends in an annular fashion (shown only in cross-section here) about electrodes 72a and 72b. Thus, the surface 22c of electrode 72c is made much larger in area than electrode surfaces 22a and 22b, in order to maintain voltage stability during measurement (during current flow). Electrode 72c supports the current flow of sensor 14a. The formal potential of the electrode 72c is maintained by annular layer 71c (also only shown here in cross-section), which comprises a Cl- containing polymer or gel (Ag/AgCl with Cl-). The reference electrode 72c is the Ag/AgCl electrode couple. The respective electrodes 72a and 72b are composed of carbon and are connected electrically to respective wires 25. The annular reference electrode 72c may contain carbon or Ag.
Sensor 14b of FIG. 7b is designed to measure LDH in the blood sample. The chemistries used for determining LDH, as well as other enzyme analytes in blood, require that a kinetic rate be measured. In the past, kinetic rate measurements of this type have always required the measurement of time dependent parameters. Therefore, two or more readings in time or a continuous monitoring was required to obtain a kinetic rate measurement. Sensor 14b, however, is constructed in a new way in order to make use of a new method of measuring kinetic rate. The new method will provide a virtually immediate enzyme activity reading. Only one reading is required, and the electro-chemical sensor is not subject to electrode surface effects that will alter the calibration, nor to previously experienced changes in the electro-chemical nature of the gel composition resulting from current flow during the measurement. Furthermore, the enzyme reaction does not occur until actuated by a new type of current generating electrode of the sensor, as will be explained hereinafter. The inventive sensor 14b is a more accurate, reliable, and convenient device for determining enzyme analytes requiring a kinetic rate measurement.
When the reactants are controlled, a steady state condition will apply for this extended period of time. During this steady state condition, a single measurement of the kinetic rate of the enzyme reaction will determine the activity of the LDH enzyme. Obviously, only a single measurement need be made because there will be no change in kinetic rate with time (steady state). The formation of the NAD+ is kept at a very high level to maintain maximum rate and linearity of response. A pyruvate trap is provided to force the reaction to the right and prevent a back reaction from influencing the monitored forward reaction. This is accomplished by impregnating the enzyme reaction layer with a semi-carbazide, which will react with the pyruvate product. This method of kinetic rate measurement may also be used in other media besides thin film. It can be used either in a batch sampling analysis or in a continuous flow analysis, as long as the mass transport of reactants, i.e., flow rates and mixing, is also controlled.
The LDH of the blood sample initially permeates the barrier layer 12c and then diffuses through a second barrier layer 80 of an electrically conductive material such as sintered titanium oxide, tin oxide or porous graphite. The barrier layer 80 also serves as the counter or auxiliary electrode of the sensor, and is connected to a wire 25 of circuit 24 by means of a current conductor 48, as aforementioned. The LDH next permeates to a gel layer 81 containing the enzyme substrate (such as lactic acid) and a coenzyme NADH. The NADH in this layer is electrochemically converted to NAD+ by means of a generating electrode 82, which is carbon deposited within gel layer 81, as shown. Layer 81 also contains a semicarbazide for trapping the pyruvate product of the reaction. The electrode 82 receives predetermined constant current from the analyzing device 30 via a wire 25 and vertical current conductor 48. The rate of formation of NAD+ will be controlled due to the predetermined constant current being fed to the generating electrode 82.
This generating rate is measurable by the monitoring electrode 84, which is positioned below the reactant generating electrode 82. However, as the LDH of the sample diffuses through layer 81 into polymer layer 83, the NAD+ which is being generated at electrode 82 will be consumed by the enzyme catalyzed reaction with the lactate substrate. The electrode 84 will now sense the rate at which the NAD+ is being reconverted to NADH. Therefore, the monitoring electrode 84 will sense the altered NAD+ generating rate. The altered current flow from that of the initial NAD+ generating rate is directly proportional to the activity of LDH in the sample. Polymer layer 83 also acts as a medium for the reference electrode of the sensor 14b. All the electrodes 80, 82, 83, and 84, respectively, are electrically connected to respective wires 25 via carbon conductors 48. The monitoring electrode 84 will provide the analyzer 30 with an almost immediate current or charge that will be a single measurement or reading of the kinetic rate of the reaction. Reference electrode 85 comprises a film of carbon covered by a polymer layer 85a which contains quinone/hydroquinone to define a stable redox potential.
If the LDH or other enzyme analyte were measured the old way by taking several readings with respect to time, sensor 14b would be constructed more like sensor 14a. The new method of measurement, as applied to thin film integration, however, does not require a difficult structure to fabricate. Yet, it provides an enormous advantage of obtaining a reading in only the few seconds required for steady state conditions to be achieved. This new method and sensor construction makes the integrated circuit approach to blood analysis more viable than any device previously contemplated since many enzymes in the blood can be easily and quickly analyzed by this approach. This is so, because this method greatly simplifies the electronics needed to determine the kinetic rate (no time base required), and it is more accurate and reliable due to the shortened period of response required to accomplish this measurement. Also, because the reagent is generated at will, the device has improved shelf-life and over-all stability, i.e., the reaction starts only when the system is ready to accept data. As a result, it doesn't matter whether a portion of the NADH in layer 81 degrades during storage because the generation is controlled.
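The single-reading principle can be sketched with a toy numerical model. This is our own simplification, not the patent's electrochemistry: NAD+ is generated at a fixed rate set by the constant generating current, the enzyme reaction consumes it at a rate proportional to LDH activity, and at steady state the monitoring electrode sees only the net rate, so one reading recovers the activity:

```python
# Toy model of the steady-state measurement (illustrative assumptions only).
def monitored_steady_rate(gen_rate, activity):
    """Net NAD+ flux seen at the monitoring electrode 84 once steady state
    holds: generation minus enzyme consumption."""
    return gen_rate - activity

def activity_from_reading(gen_rate, reading):
    """Invert the single steady-state reading to recover enzyme activity."""
    return gen_rate - reading

g = 100.0  # arbitrary generation rate, fixed by the constant current
for true_activity in (5.0, 20.0, 40.0):
    reading = monitored_steady_rate(g, true_activity)
    print(activity_from_reading(g, reading))  # recovers 5.0, 20.0, 40.0
```

Because the steady-state rate is time-invariant, no time base or repeated sampling is needed, which is the simplification of the electronics the passage describes.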
Sensor 14c illustrates a sensor construction required for determining the K+ analyte in blood. After the K+ filters through the initial barrier layer 12, it diffuses into a layer 90 of cellulose which is a permeable secondary and optional barrier/filter medium. The sensor 14c is structured as a twin electrode sensor comprised of two identical potassium sensing electrodes. The right-hand electrode 95a functions as a reference electrode because its potassium concentration is fixed by the gel layer 91a and, hence, provides a fixed half-cell potential for the left-hand electrode 95b.
Layer 91a together with layer 91b provides the means for sensitivity calibration of sensor 14c. Layers 91a and 91b each have a predetermined concentration of K+, but one which sets up a differential voltage signal between the two electrodes, e.g., layer 91a could have 5.0 mEq./L of K+, whereas layer 91b could have only 1.0 mEq./L of K+, and ideally the resulting voltage between them should be 42 mV, but for practical purposes the voltage will vary depending primarily on fabrication irregularities. Hence, the twin electrodes 95a and 95b provide a differential measurement which allows actual sensitivity calibration prior to sample measurement and at the same time will nullify any drift and offsets in the measurement.
The cellulose layer 90 filters the blood sample to allow only K+ ion to filter to the lower layers.
Layers 12c and 90 are designed to allow diffusion of ions in the sample primarily into layer 91b where the change in voltage of electrode 95b yields the additional potassium introduced by the sample. Alternatively, the differences in concentrations in layers 91a and 91b can be made so large that diffusion of sample potassium into layer 91b will not constitute a significant error. For example, if layer 91a contains 0.1 mEq./L of K+ and layer 91b contains 100 mEq./L of K+ then a 5 mEq./L sample would result in voltage changes of 102 mV and 1.3 mV, respectively. If uncompensated, the 1.3 mV voltage change of electrode 95b would only constitute an assay error of 0.2 mEq./L. However, regardless of the concentrations of K+ in layers 91a and 91b, an algorithm can be written to take into account the signal changes, however minute, in both electrodes 95a and 95b. From a practical standpoint, however, the reference side of the sensor should not change significantly in voltage relative to the other sample sensing side.
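The voltage figures quoted in these passages follow from the Nernst equation for a monovalent cation, E = (RT/F) ln(c1/c2). A quick check in Python (the helper function is our own, not part of the patent) approximately reproduces the stated 42 mV, 102 mV and 1.3 mV values at 25° C.:

```python
import math

def nernst_mv(c_num, c_den, temp_c=25.0):
    """Nernst potential in mV for a monovalent cation: (RT/F) * ln(c_num/c_den)."""
    R, F = 8.314, 96485.0  # gas constant (J/mol-K), Faraday constant (C/mol)
    return 1000.0 * R * (temp_c + 273.15) / F * math.log(c_num / c_den)

# Twin-electrode sensitivity calibration, 5.0 vs 1.0 mEq./L of K+:
print(round(nernst_mv(5.0, 1.0)))            # ~41 mV (the patent's "ideally 42 mV")
# Large-differential design, 0.1 and 100 mEq./L layers, 5 mEq./L sample added:
print(round(nernst_mv(0.1 + 5.0, 0.1)))      # ~101 mV change at the low-K+ electrode
print(round(nernst_mv(100.0 + 5.0, 100.0), 1))  # ~1.3 mV change at the high-K+ electrode
```

The small residual difference from the patent's round numbers is attributable to temperature and rounding conventions.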
Layer 93a directly above the reference electrode 95a contains ferro/ferric-cyanide to form a stable redox couple for electrode 95a and also a fixed K+ concentration to maintain a stable interfacial potential between layers 93a and 92a. Layer 92a above layer 93a is polyvinyl chloride impregnated with a neutral ion carrier valinomycin, which is selective to potassium.
Layers 92b and 93b, respectively, are identical layers to their counterpart layers, 92a and 93a, with the exception of the reagents contained therein.
The calibrating layers 91a and 91b, respectively, may be maintained at a given or predetermined distance above the electrodes. Also, their thickness or size may be carefully controlled in manufacture. This will insure predetermined electrical characteristics such as capacitance and impedance for the sensor.
Sensor 14d depicts a construction necessary for the assay of Blood Urea Nitrogen (BUN).
The urea assay is accomplished by the sensing of the ammonium ion NH4 +. The urea in the blood permeates the barrier layer 12c and the cellulose barrier layer 100. Layer 101a comprises a polymer containing an immobilized enzyme such as urease. Within this layer 101a, the urea of the sample is catalytically hydrolyzed to ammonium bicarbonate by urease. The NH4 + diffuses into the next layer 102a which is a polyvinyl chloride containing an antibiotic such as nonactin as the neutral ion carrier. The NH4 + is sensed at the interface of layers 101a and 102a. The next layer 103a is a gel containing the electrode couple Fe(CN)6 3- /Fe(CN)6 4- introduced as ammonium salts. The carbon electrode 105a lies below layer 103a. Electrode 105a in contact with layer 103a serves as the inner reference electrode for the NH4 + sensor 14d. The interfacial potential at the layers 102a/103a is fixed by the ammonium ferrocyanide salt concentration, and only the interfacial potential of layers 101a/102a will vary with sample urea concentration.
Electrode 105b serves to subtract interferences by measuring the differential of the potential. Layers 101b, 102b, and 103b, respectively, are similar to layers 101a, 102a, and 103a, except that layer 101b does not contain urease as its counterpart layer 101a.
Layers 104a and 104b of the sensor are impregnated with a known or predetermined amount of NH4 + to internally calibrate the sensor sensitivity and compensate for drifts. These layers, similar to the calibration layers in sensor 14c, contain high and low levels of the measured species (NH4 +) or alternatively the analyte itself (urea).
These predetermined impregnated layers in sensors 14c and 14d which provide self-calibration, not only assure built-in reliability and accuracy, but relax manufacturing tolerances. Thus, sensor fabrication is greatly facilitated by the built-in calibration.
As aforementioned, many more tests will be performed by the other sensors in the chip array, but all the other sensors, despite their different chemistries, will have the same structure as one of these four sensors (14a, 14b, 14c, and 14d). The following Table I is a list of intended measurable analytes, and their corresponding sensor structures, i.e., whether they resemble sensor construction for sensors 14a, 14b, 14c, or 14d, respectively. The immobilized reagents for the various analytes under assay are also given.
Referring to FIG. 11, another embodiment of the integrated chip approach to analyte testing is shown. Chip 10 is replaced by a new thin film sensor matrix 10a, which comprises sensors 14' having just the electrode structures, redox and calibration layers upon a common substrate. The enzyme layers are absent. Instead, the necessary enzymes for each sensor reaction are contained in a reaction cell or chamber 110. The enzymes are supported upon polymer beads 111 or hollow fibers, etc. The chamber 110 may also be constructed to contain a porous polymer for this purpose.
The sample under analysis is introduced into chamber 110 (arrow 112) via conduit 113 and valve 114. The analytes of the sample each respectively react with their specific enzyme, and are then discharged via valve 115 and conduit 116 to sensor matrix 10a.
Each sensor 14' of matrix 10a will sense a particular analyte-enzyme reaction as before, i.e., some sensors 14' will measure current, some potential, and some kinetic rate differentials.
After the sensors 14' have accomplished their analyses of the sample, the reaction cell 110 and the matrix 10a are washed clean. A first wash liquid is introduced (arrow 121) into conduit 117 and enters chamber 110 via valve 118. The wash is allowed to soak through beads 111 and is discharged (arrow 122) from the chamber 110 via valve 119 and conduit 120. A second wash liquid is then introduced to chamber 110 via conduit 113 and valve 114. The second wash is forced through chamber 110 and is continuously flushed through valve 115 and conduit 116 to matrix 10a. The second wash will flow past matrix 10a cleaning sensors 14', and then discharges (arrow 123) from the matrix 10a via conduit 124.
Naturally, the valves 114, 115, 118 and 119, are respectively opened and closed in proper sequence to accomplish the various sample and wash cycles.
After the second wash, the next sample is introduced into the reaction cell, and the same procedure is followed.
FIGS. 12-14 illustrate still another embodiment of the thin film integrated circuit approach of this invention. FIG. 12 shows an automatic continuous analyzing system 130. A first continuous endless web 131 is stored and dispensed (arrow 133) from reel 132. The web 131 travels past tensioning roller 134 toward a pair of pressure rollers 135. The first endless web 131 comprises discrete partial sensors 140 disposed within a common substrate layer 136 deposited on belt 131 as depicted in FIG. 13. Each partial sensor 140 is individually comprised of the necessary gel and polymer layers 141 common to the respective sensors 14a, 14b, 14c, etc., of chip 10. The partial sensors 140 are each sequentially disposed upon the common substrate 136, but rows 151 of various numbers of partial sensors 140 can be disposed transversely across web 131 as illustrated in FIG. 14.
A second continuous web 150 (FIG. 12) is advanced (arrow 145) about a frame of rollers 146, 147, 148, and 149, as shown. The second web 150 comprises the electrode structures (not shown) for the corresponding partial sensors 140 of belt 131. When the belts 131 and 150 are advanced and married by pressure rollers 135, a series of completed sensors are formed with structures similar to the sensors 14a, 14b, 14c, etc.
Prior to the completion of the full sensor structures by the pressure rollers 135, either web 131 or web 150 passes a sample dispenser 160. The dispenser 160 is preferably placed over the web 131 (solid lines) instead of web 150 (dotted lines). A drop of sample liquid is dispensed to each partial sensor 140, and permeates the various respective layers 141.
When the electrodes of the web 150 merge with the sample impregnated enzyme layered sensor medium 140, analytes of the sample will already be reacted. The various signals and information will be conveyed through the electrodes to an analyzer 170 as both the merged webs 131 and 150 pass therethrough, as illustrated.
At the rear of the analyzer 170, the spent web 131 is discarded (arrow 169). The electrode web 150, however, may be passed by a wash or reconditioning station 168, and can be recycled.
The web 131 may contain an identifying code 161 opposite each particular sensor or sensors 140. This code 161 will be read by the analyzer 170 to properly record the analyzed data.
Referring to FIG. 9, a testing circuit for the enzyme sensor 14b of FIG. 7b is illustrated. The auxiliary electrode 80 and the reference electrode 83a will form part of a potential stabilizing feedback loop 180 for controlling the voltage between these electrodes. The loop 180 comprises an amplifier 181, which receives an input voltage Vin. The applied voltage is sensed at the generating electrode 82. Amplifier 181 supplies the current to the generating electrode 82 via the auxiliary or counter electrode 80.
The sensing electrode 84 is voltage biased at amplifier 182 by Vin, and the current is monitored by this amplifier.
Referring to FIGS. 10 and 10a, a schematic of the computer configuration for analyzer 30 of FIGS. 3 and 3a is illustrated.
The computer is under the control of the central processor (CPU) 205, which derives its instructions from a stored program in memory 206, which also contains calibration data for adjusting the processed signals, and stores data in working and permanent storage. The processor 205 does all the arithmetic calculations and processing of the sensor signals. The sensor signals are fed from chip 10 into the analyzer 30 via connectors 28 (FIG. 3). After an initial conditioning of the signals 201, they are multiplexed by multiplexer 200, and then converted to digital form by the analog-to-digital converter 202. When a particular key 35 (FIG. 3) of keyboard 32 is depressed, the key calls for a specific analyte analysis or other appropriate programmed sequence via the process coder 203. The appropriate signal from chip 10 is then processed by the CPU. The processed signal may then be displayed by display 33 and/or a hard copy made by the printer 34. All the signals are properly called up, processed and read under the guidance of the process coder 203. Where desired, an optional set of digital-to-analog converters 207a-207z will provide an analog input for other peripheral devices. Also, a communication interface 209 can be provided for talking to another computer device, such as a master computer at a data center.
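The signal path just described (condition, multiplex, digitize, apply stored calibration, display) can be sketched in a few lines of Python. All names, scale factors, and sample values below are hypothetical illustrations, not specifications from the patent:

```python
def adc_counts(voltage, v_ref=5.0, bits=12):
    """Ideal analog-to-digital conversion, standing in for converter 202."""
    code = int(voltage / v_ref * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the converter's range

def read_analyte(channel, conditioned_signals, calibration):
    """Select one sensor channel (multiplexer 200), digitize it, and apply the
    slope/offset calibration data held in memory 206 before display."""
    raw = conditioned_signals[channel]   # conditioned output 201 for this sensor
    counts = adc_counts(raw)
    slope, offset = calibration[channel]
    return slope * counts + offset

signals = {"GLU": 1.25, "K+": 0.40}              # hypothetical conditioned voltages
cal = {"GLU": (0.1, 0.0), "K+": (0.01, 0.5)}     # hypothetical per-channel calibration
print(read_analyte("GLU", signals, cal))
```

Pressing a key 35 would correspond to calling `read_analyte` for the selected channel and routing the result to the display or printer.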
FIG. 10a depicts the signal conditioning for signals from a typical sensor 14. The signals from sensor 14 are amplified, biased, and calibrated via the differential amplifier 210 and calibrator controls 211. Then the output 201 from the amplifier 210 is fed to one of the inputs 1 through n of multiplexer 200. The multiplexed signals are fed to the analog/digital converter 202, as aforementioned.
The various techniques for constructing the integrated circuit chip are well known to practitioners of the electrical arts, but a better understanding of the techniques expressed herein may be obtained with reference to L. I. Maissel and R. Glang, Handbook of Thin Film Technology, McGraw-Hill Book Co., 1970.
Having described the invention, what is desired to be protected by Letters Patent is presented by the following appended claims.
1. An article of manufacture used for analyzing a multiplicity of analytes in a fluid sample, said article comprising an array of discrete, electrically isolated electrochemical sensors supported on a common substrate for analyzing different ones of said analytes, at least one of said sensors in said array having a built in calibrating means including means to establish a differential concentration of an analyte within said one sensor during measurement comprising, a first electrode layer containing a given concentration of the analyte being measured, and a second electrode layer containing a given but different concentration of the analyte being measured, and electrical means to provide access to each of said sensors.
2. The article of manufacture of claim 1, wherein said electrical means comprises electrical conductors supported upon said substrate to provide access to said discrete sensors, said conductors extending to a periphery of said substrate to define a plug-in type connector.
3. The article of manufacture of claim 2, wherein said electrical connectors comprise printed connectors deposited upon a common substrate.
4. The article of manufacture of claim 3, wherein selected ones of said sensors comprise printed electrodes deposited upon said substrate in electrical continuity with respective ones of said printed connectors.
5. The article of manufacture of claim 3, wherein said printed connectors are adapted to be received in a snap-in receptacle of an analyzing means responsive to at least selected ones of said sensors.
Systems and methods are disclosed that improve the performance of an extremum seeking control strategy by limiting, removing or preventing the effects of an actuator saturation condition, particularly as the extremum seeking control strategy relates to HVAC applications.
The present application claims the benefit of U.S. Provisional Application No. 60/950,314, filed July 17, 2007, which is incorporated herein by reference in its entirety. This application hereby expressly incorporates by reference the entirety of: U.S. Patent Application No. 11/699,859, filed January 30, 2007, entitled "Sensor-Free Optimal Control of Air-Side Economizer;" and U.S. Patent Application No. 11/699,860, filed January 30, 2007, entitled "Adaptive Real-Time Optimization Control."
The present application generally relates to extremum seeking control strategies. The present application more particularly relates to regulating, via extremum seeking control, the amount of air that is flowing through a heating, ventilation and air conditioning (HVAC) system in order to reduce the amount of mechanical heating and cooling required within an air-handling unit (AHU).
Extremum seeking control (ESC) is a class of self-optimizing control strategies that can dynamically search for the unknown and/or time-varying inputs of a system for optimizing a certain performance index. It can be considered a dynamic realization of gradient searching through the use of dithering signals. The gradient of the system output with respect to the system input is typically obtained by slightly perturbing the system operation and applying a demodulation measure. Optimization of system performance can be obtained by driving the gradient towards zero by using an integrator in the closed-loop system. ESC is a non-model based control strategy, meaning that a model for the controlled system is not necessary for ESC to optimize the system.
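The dither-and-demodulate gradient search described above can be sketched in a few lines of code. The following is an illustrative sketch only, not code from this application: the quadratic performance map, filter constants, and gains are all assumed values chosen for demonstration.

```python
import math

def esc_minimize(f, u0=0.0, steps=20000, dt=0.05, a=0.2, w=5.0,
                 alpha=0.1, beta=0.05, ki=1.0):
    """Minimal dither-demodulation extremum seeking loop (sketch).

    f      : unknown performance map to minimize (no model is needed)
    a, w   : dither amplitude and frequency
    alpha  : high-pass filter constant (via a tracked DC estimate)
    beta   : low-pass filter constant on the demodulated signal
    ki     : integrator gain driving the gradient estimate to zero
    """
    u_hat, y_lp, g = u0, None, 0.0
    for k in range(steps):
        t = k * dt
        dither = a * math.sin(w * t)
        y = f(u_hat + dither)                   # perturb the plant, measure output
        if y_lp is None:
            y_lp = y                            # warm-start the DC estimate
        y_lp += alpha * (y - y_lp)              # low-pass tracks the DC level...
        hp = y - y_lp                           # ...so this is the high-pass part
        g += beta * (hp * math.sin(w * t) - g)  # demodulate -> gradient estimate
        u_hat -= ki * g * dt                    # integrator drives gradient to zero
    return u_hat
```

Applied to a quadratic map whose minimum sits at x = 3, the loop steers its estimate toward 3 without ever being given a model of f, which is the "non-model based" property noted above.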
Typical ESCs utilize a closed-loop configuration in which a gradient is calculated between the inputs to a plant and system performance. An integrator is then used in the closed-loop system to drive the gradient to zero. A detrimental phenomenon known as "integrator windup" may occur if the determined optimal reference point for the system is mathematically outside of the operating range for the actuator, causing the optimal settings for the actuator to correspond to an operating boundary. When the actuator cannot move to the optimal setting determined by the ESC loop, a condition known as actuator saturation is said to exist. For example, the optimal power consumption for an AHU utilizing an extremum seeking controller may correspond to a damper opening of less than 0%, a physical impossibility. When an actuator saturation condition exists, the integrator output will continue to grow until the sign of the input to the integrator changes.
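Integrator windup is easy to reproduce numerically. In the hypothetical sketch below, a pure integrator drives an actuator clamped to [0, 1] while its input stays negative, as when the optimal damper opening would be less than 0%; the `kb` term previews the back-calculation remedy developed later in this application, and all constants are assumed for illustration.

```python
def integrate(error, steps=1000, dt=0.1, kb=0.0, lo=0.0, hi=1.0):
    """Integrator feeding a clamped actuator; kb > 0 enables anti-windup.

    With kb = 0 the integrator state grows without bound while the error
    sign is constant; with kb > 0 the back-calculation term kb*(u - I)
    holds the state near the saturation boundary.
    """
    I = 0.0
    for _ in range(steps):
        u = min(max(I, lo), hi)           # actuator saturates at its limits
        I += dt * (error + kb * (u - I))  # kb*(u - I) is zero when unsaturated
    return I
```

With a constant negative input, the plain integrator winds up to a large negative state even though the actuator never moves below 0, while the back-calculated version settles just below the boundary and could recover immediately if the input sign changed.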
The invention relates to a method for optimizing a control process for an actuator. The method includes operating the control process using an extremum seeking control strategy. The method further includes using an electronic circuit to compensate for an actuator saturation condition of the extremum seeking control strategy. The invention also relates to a controller for controlling an actuator. The controller includes a processing circuit configured to operate a plant using an extremum seeking control strategy. The processing circuit is further configured to compensate for an actuator saturation condition of the extremum seeking control strategy.
The invention further relates to a controller configured for use with an air handling unit having a temperature regulator and a damper operated by an actuator. The controller includes a processing circuit configured to provide a first control signal to the temperature regulator, the first control signal being based upon a setpoint. The processing circuit is further configured to provide a second control signal to the actuator, the second control signal being determined by an extremum seeking control loop. The processing circuit is yet further configured to adjust the extremum seeking control loop to compensate for an actuator saturation condition.
FIG. 11B is a diagram of a self-driving ESC loop configured to limit the effects of an actuator saturation condition, according to an exemplary embodiment.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS Before turning to the figures, which illustrate the exemplary embodiments in detail, it should be understood that the application is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology is for the purpose of description only and should not be regarded as limiting.
Referring generally to the figures, a controller is configured to control a plant having an actuator using an extremum seeking control strategy. The extremum seeking control strategy is configured to compensate for the effects of an actuator saturation condition.
FIG. 1 is a perspective view of a building 5 with an HVAC system, according to an exemplary embodiment. As illustrated, building 5 has an air handling unit (AHU) 10. AHU 10 is part of an HVAC system and is used to condition, chill, heat, and/or control the environment of a room 12 in building 5. The control system for AHU 10 utilizes extremum seeking to provide economizer functionality by optimizing the flow of outdoor air into AHU 10 in order to minimize the power consumption of AHU 10. According to various other exemplary embodiments, building 5 may contain more AHUs. Each AHU may be assigned a zone (e.g., room 12, a set of rooms, part of a room, floor, set of floors, part of a floor, etc.) of building 5 that the AHU is configured to affect (e.g., condition, cool, heat, ventilate, etc.). Each zone assigned to an AHU may be further subdivided through the use of variable air volume boxes or other HVAC configurations.
According to other exemplary embodiments, system 400 may be a unitary system having an AHU or another damper system. In an exemplary embodiment, controller 410 is operatively associated with a controlled air handling unit such as AHU 430. Controller 410 is configured to operate as a finite state machine with the three states depicted in FIG. 3, wherein AHU 430 uses extremum seeking logic when in state 503. A transition occurs from one state to another, as indicated by the arrows, when a specified condition or set of conditions occurs. In an exemplary embodiment, the operational data of AHU 430 is checked when controller 410 is in a given state to determine whether a defined transition condition exists. A transition condition is a function of the present state and may also refer to a specific time interval, temperature condition, supply air condition and/or return air condition. In an exemplary embodiment, a transition condition occurs when controller 410 remains in a given operating mode for a predetermined period of time without adequately providing an output corresponding to a setpoint provided to the controller 410 by the supervisory controller 404. For example, a transition condition occurs in a mechanical cooling mode when the system is unable to provide an output of air at the desired temperature within a reasonable amount of time.
In state 501, valve 442 for heating coil 440 is controlled to modulate the flow of hot water, steam, or electricity to heating coil 440, thereby controlling the amount of energy transferred to the air. This maintains the supply air temperature at the setpoint. Dampers 460, 462, and 464 are positioned for a minimum flow rate of outdoor air and there is no mechanical cooling (i.e., chilled water valve 446 is closed). The minimum flow rate of outdoor air is the least amount required for satisfactory ventilation to the supply duct 490. For example, 20% of the air supplied to duct 490 is outdoor air. The condition for a transition to state 502 is defined by the heating control signal remaining in the "No Heat Mode." Such a mode occurs when valve 442 of heating coil 440 remains closed for a defined period of time (i.e., heating of the supply air is not required during that period). This transition condition can result from the outdoor temperature rising to a point at which the air from the supply duct 490 does not need mechanical heating.
In state 502, dampers 460, 462, and 464 alone are used to control the supply air temperature in supply duct 490 (i.e., no mechanical heating or cooling). In this state the amount of outdoor air that is mixed with the return air from return duct 492 is regulated to heat or cool the air being supplied via supply duct 490. Because there is no heating or mechanical cooling, the inability to achieve the setpoint temperature results in a transition to either state 501 or 503. A transition to state 501 for mechanical heating occurs when either the flow of outdoor air remains below that required for proper ventilation for a defined period of time, or outdoor air inlet damper 464 remains in the minimum open position for a given period of time. The finite state machine makes a transition from state 502 to state 503 for mechanical cooling upon the damper control remaining in the maximum outdoor air position (e.g., 100% of the air supplied by the AHU is outdoor air) for a period of time. In state 503, chilled water valve 446 for cooling coil 444 is controlled to modulate the flow of chilled water and control the amount of energy removed from the air. At this time, extremum seeking control is used to modulate dampers 460, 462, and 464 to introduce an optimal amount of outdoor air into AHU 430. In an exemplary embodiment, a transition occurs to state 502 when mechanical cooling does not occur for a given period of time (i.e., the cooling control is saturated in the no-cooling mode).
Referring further to FIG. 3, state 501 initiates heating with the minimum outdoor air required for ventilation. In cold climates, the initial control state is state 501. The system initiates in state 501 to minimize the potential that cooling coil 444 and heating coil 440 could freeze. State 501 controls the supply air temperature by modulating the amount of heat supplied from heating coil 440. Dampers 460, 462, and 464 are controlled for minimum ventilation. In an exemplary embodiment, a transition to state 502 occurs after the heating control signal has been at its minimum value (no-heat position) for a fixed period of time.
In state 502, the system is utilizing outdoor air to provide free cooling to the system. State 502 controls the supply air temperature by modulating dampers 460, 462, and 464 to adjust the mixing of outdoor air with return air. In an exemplary embodiment, a transition to state 501 occurs after dampers 460, 462, and 464 have been at a minimum ventilation requirement for a fixed period of time or the damper control signal is at a minimum value for a fixed period of time. In an exemplary embodiment, a transition to state 503 occurs after dampers 460, 462, and 464 have been controlled to supply 100% outdoor air for a fixed period of time.
In state 503, the system utilizes mechanical cooling with an extremum seeking control strategy to control dampers 460, 462, and 464. State 503 controls the supply air temperature by modulating the flow rate of chilled water or refrigerant through cooling coil 444. An extremum seeking control strategy is used to determine the positions of dampers 460, 462, and 464 to minimize the amount of mechanical cooling required. An actuator saturation condition may occur using a standard extremum seeking control strategy if the optimum damper opening for a damper corresponds to a physical boundary on the operation of the damper. Controller 410 has been adapted to limit the detrimental effects of an actuator saturation condition. Ventilation requirements are set at a lower limit for the amount of outside air in supply duct 490. In an exemplary embodiment, a transition to state 502 occurs after the control signal for cooling has been in the no-cooling command mode for a fixed period of time.
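The three-state supervisory logic described above can be summarized as a small transition function. This is a hedged sketch rather than logic from the application: the state constants mirror FIG. 3's numbering, while the timer names and the `HOLD` dwell time are assumptions introduced for illustration.

```python
# States from FIG. 3 of the application
HEAT, FREE_COOL, MECH_COOL = 501, 502, 503

def next_state(state, timers):
    """One transition check for the AHU finite state machine (sketch).

    `timers` maps a condition name to how long (in assumed minutes) that
    condition has persisted; HOLD is an assumed dwell-time threshold.
    """
    HOLD = 30
    if state == HEAT and timers.get("heating_valve_closed", 0) >= HOLD:
        return FREE_COOL      # no heat needed for a while -> free cooling
    if state == FREE_COOL:
        if timers.get("damper_at_minimum", 0) >= HOLD:
            return HEAT       # dampers pinned at minimum -> mechanical heating
        if timers.get("damper_at_maximum", 0) >= HOLD:
            return MECH_COOL  # 100% outdoor air still too warm -> mechanical cooling
    if state == MECH_COOL and timers.get("no_cooling_commanded", 0) >= HOLD:
        return FREE_COOL      # chilled water valve idle -> back to free cooling
    return state              # no transition condition met
```

Each call checks only the conditions defined for the present state, matching the description that a transition condition is a function of the present state.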
Referring to FIG. 4A, a block diagram of an ESC loop 600 that compensates for an actuator saturation condition is shown, according to an exemplary embodiment. A controller 602 having extremum seeking control logic continually modifies its output in response to changes in measurement 621 received from plant 624 via input interface 604. A plant in control theory is the combination of a process and one or more mechanically controlled outputs. Measurements from the plant may include, but are not limited to, information received from sensors about the state of the system or control signals sent to other devices in the system. Input interface 604 provides measurement 621 to performance gradient probe 612 to detect the performance gradient. Actuator saturation compensator 614 then adjusts ESC loop 600 to compensate if an actuator saturation condition is present in plant 624. Manipulated variable updater 616 produces an updated manipulated variable 620 based upon the performance gradient and any compensation provided by actuator saturation compensator 614. In an exemplary embodiment, manipulated variable updater 616 includes an integrator to drive the performance gradient to zero. Manipulated variable updater 616 then provides an updated manipulated variable 620 to plant 624 via output interface 606.
Referring to FIG. 4B, a block diagram of an extremum seeking control loop with a plurality of measurements and configured to limit the effects of actuator saturation is shown, according to an exemplary embodiment. ESC loop 601 contains many of the functions and structures of ESC loop 600 (FIG. 4A), but utilizes a plurality of measurements 622 to determine a performance index. Controller 603 receives measurements 622 from plant 624 via input interface 604. A performance index is calculated by performance index calculator 610 using measurements 622. The performance index is a mathematical representation of the system performance of ESC loop 601 using measurements 622. Performance gradient probe 612 receives the performance index from performance index calculator 610 to detect the performance gradient. Actuator saturation compensator 614 then adjusts ESC loop 601 if an actuator saturation condition is present in plant 624. Manipulated variable updater 616 produces an updated manipulated variable 620 based upon the performance gradient and any compensation provided by actuator saturation compensator 614. In an exemplary embodiment, manipulated variable updater 616 includes an integrator to drive the performance gradient to zero. Manipulated variable updater 616 then provides an updated manipulated variable 620 to plant 624 via output interface 606.

Referring to FIG. 5A, a flow diagram of a process 719 for limiting the effects of an actuator saturation condition in an ESC loop is shown, according to an exemplary embodiment. In this embodiment, extremum seeking control is provided to a plant in step 720. During extremum seeking control, the ESC controller distinguishes between a state in which the actuator is saturated and a state in which the actuator is not saturated (step 722).
In an exemplary embodiment, step 722 can be achieved by comparing the manipulated variable designated by the extremum seeking control strategy to a range of control signals that correspond to the physical range of actuator positions. For example, the extremum seeking controller may contain a memory module that stores information on the physical limits of the actuator. In another exemplary embodiment, the controller can be configured to receive input data from a position sensor that provides data on the position of the actuator to detect an actuator saturation condition. If an actuator saturation condition is detected, the saturation condition is removed and the control loop is updated (step 724). The saturation condition can be removed by reducing the control parameters sent to the actuator to those within the range corresponding to the physical limits of operation for the actuator. Alternatively, the control system can be turned off for a period of time if an actuator saturation condition is detected. Turning the control system off and on again can have the effect of reinitializing the ESC loop, thereby preventing the integrator from continuing to wind up.
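A software check of the kind described above might look like the following sketch, where the 0-to-1 range limits are assumed illustrative values rather than parameters named in the application: detection compares the designated command against the stored physical range, and removal reduces the command to the nearest reachable value.

```python
def detect_saturation(u_cmd, lo=0.0, hi=1.0):
    """True when the command designated by the ESC strategy lies outside
    the stored physical range of actuator positions."""
    return u_cmd < lo or u_cmd > hi

def remove_saturation(u_cmd, lo=0.0, hi=1.0):
    """Reduce the control parameter to the range corresponding to the
    physical limits of operation for the actuator (clamping)."""
    return min(max(u_cmd, lo), hi)
```

The alternative mentioned in the text, briefly switching the control system off to reinitialize the ESC loop, would replace the clamping call with a reset of the loop's integrator state.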
FIG. 5B is a flow diagram of a process 700 for preventing and/or limiting the effects of an actuator saturation condition of an ESC loop. Process 700 is shown to include receiving a measurement from the plant (step 702). A plant in control theory is the combination of a process and an actuator. In an exemplary embodiment, the algorithm for the extremum seeking system utilizes a single input measurement from the plant. The algorithm may also have a plurality of input measurements. In an exemplary embodiment for an HVAC system, measurements may include inputs from temperature sensors, humidity sensors, air flow sensors, or damper position sensors, or may reflect power consumption. Process 700 is further shown to include probing for a performance gradient (step 706). In an exemplary embodiment, probing for a performance gradient may entail using a dither signal and demodulation signal in the closed-loop system to determine the performance gradient. Process 700 further includes utilizing an integrator to drive the performance gradient to zero (step 708). An actuator saturation condition is then detected and the condition is removed (step 710), altering the manipulated variable that is passed to the plant (step 712).
In FIG. 5C, a flow diagram of a process 800 for limiting the effects of an actuator saturation condition of an ESC loop using feedback from the actuator is shown, according to an exemplary embodiment. In this embodiment, no logical determination is necessary to detect the presence of an actuator saturation condition because a feedback loop automatically corrects for this condition. Process 800 includes the steps characteristic of an extremum seeking controller including: receiving a measurement from the plant (step 702), probing for a performance gradient (step 706), using an integrator to drive the gradient to zero (step 708) and updating the manipulated variable to the plant (step 712). Steps 702, 706, 708 and 712 can be performed in the same manner as outlined for process 700 in FIG. 5B. Process 800 further includes calculating the difference between the input and output signals to the actuator (step 812). The difference between the input and output signals to the actuator remains zero unless the actuator is saturated. Process 800 is further shown to pass the resulting difference signal from step 812 into an amplifier (step 814). The amplified difference signal from step 812 is then passed back to step 708 and combined with the output of step 706 to form a new input to the integrator of step 708 (step 816). This prevents the integrator in step 708 from winding up and the extremum seeking system from becoming unable to adapt to changes in the optimum operating condition.

In FIG. 6, a filtering ESC loop 970 configured to limit the effects of an actuator saturation condition is shown, according to an exemplary embodiment. Filtering extremum seeking controls determine a performance gradient through the use of a high-pass filter, a demodulation signal, a low-pass filter, and a dither signal. An integrator is used to drive the performance gradient to zero in order to optimize the closed-loop system.
In an exemplary embodiment, filtering ESC loop 970 utilizes a feedback loop in order to limit the effects of an actuator saturation condition. Plant 951 can be represented mathematically as a combination of linear input dynamics 950, nonlinear performance map 952, and linear output dynamics 954. The actual mathematical model for plant 951 does not need to be known in order to apply ESC and is illustrative only. Input dynamics 950 produce a function signal 'x' which is passed to nonlinear performance map 952. Performance map 952 produces an output signal 'z'. ESC loop 970 seeks to find a value for 'x' that minimizes the output of performance map 952, thereby also minimizing output signal 'z'. As an illustrative example only, output signal 'z' may be represented as the expression: z = f(x) = (x - x_opt)^2 + 2, where f(x) represents the performance map and x_opt represents the value of x at which f(x) is minimized. The actual representative formula of a performance map in an ESC loop is system and application specific. Output signal 'z' is passed through linear output dynamics 954 to produce signal 'z′', which is received by the extremum seeking controller. A performance gradient signal is produced by first perturbing the system by adding dither signal 966 to ESC loop 970 at processing element 959. The return signal 'z′' is then used to detect the performance gradient through the use of high-pass filter 956, a demodulation signal 958 combined with (e.g., multiplied by) the output of high-pass filter 956 at processing element 957, and low-pass filter 960. The performance gradient is a function of the difference between 'x' and 'x_opt'. The gradient signal is provided as an input to integrator 964 to drive the gradient to zero, thereby optimizing ESC loop 970. Feedback from actuator block 968 has been added to ESC loop 970 to limit the effects of an actuator saturation condition.
The difference between the input and output signals for the actuator controlled by ESC loop 970 is calculated at processing element 971. Actuator block 968 is representative of the input and output signals for the actuator. In an exemplary embodiment, processing element 971 computes the difference between the signal sent to the actuator and a measurement taken at the actuator that is indicative of the physical output of the actuator. The difference signal produced by processing element 971 is then amplified by a gain 972 and added to the input of integrator 964 at processing element 962, thereby limiting the input to integrator 964 and preventing the integrator from winding up. In another exemplary embodiment, processing element 971 is implemented as software and compares the signal outputted to the actuator to a stored range of values corresponding to the physical limits of the actuator.
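The complete filtering ESC loop with back-calculation anti-windup can be sketched as a discrete-time simulation. Everything below is illustrative rather than taken from the application: a quadratic map stands in for blocks 950-954, and the filter constants, gains, and actuator range are assumed. When the optimum sits below the actuator's lower limit, the `kb` feedback keeps the integrator near the boundary instead of winding up, so the loop can still re-adapt when the optimum later moves back inside the range.

```python
import math

def clamp(u, lo, hi):
    return min(max(u, lo), hi)

def run_esc(x_opt_at, steps, dt=0.05, a=0.1, w=5.0, alpha=0.1,
            beta=0.05, ki=2.0, kb=2.0, lo=0.0, hi=1.0):
    """Filtering ESC with back-calculation anti-windup (illustrative sketch).

    x_opt_at(k) gives the location of the performance-map minimum at step k
    (unknown to the controller); returns the final integrator state.
    """
    u_hat, y_lp, g = 0.8, None, 0.0
    for k in range(steps):
        t = k * dt
        u_cmd = u_hat + a * math.sin(w * t)        # integrator output plus dither
        u_act = clamp(u_cmd, lo, hi)               # physical actuator limits
        y = (u_act - x_opt_at(k)) ** 2 + 2.0       # stand-in performance map
        if y_lp is None:
            y_lp = y                               # warm-start the DC estimate
        y_lp += alpha * (y - y_lp)
        hp = y - y_lp                              # high-pass filtered output
        g += beta * (hp * math.sin(w * t) - g)     # demodulate, then low-pass
        aw = kb * (u_act - u_cmd)                  # nonzero only when saturated
        u_hat += dt * (-ki * g + aw)               # integrator with anti-windup
    return u_hat
```

With the optimum held at -0.2, below the actuator's reachable range, the integrator state hovers near the lower boundary rather than drifting off; if the optimum then moves to 0.5, the same loop converges to it.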
In FIG. 7, an ESC loop 76 to control an AHU is shown, according to an exemplary embodiment. ESC loop 76 has been adapted to compensate for an actuator saturation condition using feedback from actuator 850. The AHU includes a temperature regulator 80, a temperature regulator system controller 90, a damper actuator 850, and a damper 852. Temperature regulator 80 may be any mechanism used to alter air temperature. This may include, but is not limited to, cooling coils, heating coils, steam regulators, chilled water regulators, or air compressors. In an exemplary embodiment, temperature regulator 80 lowers the temperature of the air. Temperature regulator system controller 90 maintains a supply air temperature at a setpoint 92 by adjusting the position of chilled water valve 446 of cooling coil 444 (FIG. 2). Actuator 850 positions damper 852 to provide between 0% and 100% outside air.
A control loop consisting of temperature regulator system controller 90, temperature regulator 80, and temperature sensor 480 controls the amount of mechanical cooling in the AHU, according to an exemplary embodiment. Temperature regulator system controller 90 receives a setpoint supply air temperature 92 from a supervisory controller 404 (FIG. 2), according to an exemplary embodiment. Temperature regulator system controller 90 also receives measurements from temperature sensor 480, which measures the temperature of the air supplied by the AHU to the building. Temperature regulator system controller 90 compares the setpoint temperature to the measured temperature and adjusts the amount of mechanical cooling provided by temperature regulator 80 to achieve the setpoint supply air temperature 92.
ESC loop 76 is connected to the temperature regulator control loop in order to control damper 852, which regulates the amount of outdoor air into the AHU. In an exemplary embodiment, ESC loop 76 determines an optimum setting for actuator 850 in order to maximize the use of outdoor air for cooling, thereby minimizing the power consumption of the temperature regulator 80. The performance gradient for ESC loop 76 is detected through the combination of a dither signal 62 added to ESC loop 76 at processing element 67, high pass filter 86, a demodulator 69 that uses demodulation signal 60, and low pass filter 64. Integrator 98 serves to drive the detected gradient to zero. Control parameters from integrator 98 are passed on to actuator 850 to regulate damper 852, thereby controlling the amount of outside air utilized by the AHU. The outside air and/or air from other sources (e.g. return air) is combined with the air treated by temperature regulator 80 and provided to the zone serviced by the AHU. Temperature sensor 480 measures the air supplied by the AHU and provides temperature information to temperature regulator system controller 90.
The effects of an actuator saturation condition in ESC loop 76 are limited using feedback from the input and output signals to actuator 850. The difference between the input and output signals to actuator 850 is calculated by processing element 68. The difference signal that results from the operation at processing element 68 remains zero unless the damper actuator 850 becomes saturated. The difference signal is then amplified by amplifier 66 and fed back into the input of integrator 98 at processing element 96, thereby limiting the input to integrator 98 and preventing integrator 98 from winding up. Preventing integrator windup also prevents ESC loop 76 from becoming unable to adapt to changes in the optimal setting for actuator 850. It should be appreciated that the functions of ESC loop 76 can be implemented as an electronic circuit or as software stored within a digital processing circuit.
Referring now to FIG. 8, a diagram of a control system for an AHU configured to limit the effects of an actuator saturation condition is shown, according to an exemplary embodiment. AHU controller 410 receives a temperature setpoint from supervisory controller 404. The temperature setpoint is used to drive a control loop including a temperature regulator system controller 90, a temperature regulator system 952 and a temperature sensor 480. Temperature regulator system controller 90 compares the temperature measured by temperature sensor 480 to that of the setpoint temperature provided by supervisory controller 404. A temperature regulator command signal is then sent from controller 90 to temperature regulator system 952 to provide mechanical heating or cooling to drive the temperature of the air supplied by the AHU to that of the setpoint. AHU controller 410 also contains an ESC loop 860 to control the position of outdoor air damper 852 via actuator 850. ESC loop 860 is coupled to the temperature regulator control loop in order to minimize the power consumption of the temperature regulator system 952. In an exemplary embodiment, ESC loop 860 searches for a setting for the damper opening that minimizes the power consumed by temperature regulator system 952 by making use of outdoor air. A performance gradient probe 862 detects a difference between the optimal settings for damper 852 and the current settings for damper 852. In an exemplary embodiment, performance gradient probe 862 utilizes a high pass filter, a demodulation signal, a low pass filter and a dither signal to detect the performance gradient. Integration of the gradient produces an actuator command signal to drive the actuator 850 to its optimal setting. Actuator 850 receives the actuator command signal and regulates damper 852, controlling the flow of outside air into the AHU.
The effects of an actuator saturation condition are limited in AHU controller 410 by computing the difference between the actuator command signal sent from integrator 98 and the output of actuator 850. The output of actuator 850 is fed back to ESC loop 860 and combined with the actuator command signal at element 68. Element 68 performs the mathematical operation of subtracting the actuator command signal from the actuator feedback signal. The difference signal produced by element 68 is then amplified by a gain at amplifier 66 and added to the input to integrator 98 at processing element 96. If the damper actuator 850 is saturated, the difference signal is nonzero, limiting the input to integrator 98 to prevent integrator windup.

Referring to FIG. 9, a block diagram of the controller 410 in FIG. 8 is shown, according to an exemplary embodiment. Controller 410 is shown to include a processing circuit 418. Processing circuit 418 is shown to include processor 414 and memory 416. Processing circuit 418 may be communicably coupled with fan control output 456, chilled water valve output 454, heating valve output 452, actuator command 458, temperature input 450 and communications port 412. According to various exemplary embodiments, processing circuit 418 may be a general purpose processor, an application specific processor, a circuit containing one or more processing components, a group of distributed processing components, a group of distributed computers configured for processing, etc. Processor 414 may be or include any number of components for conducting data processing and/or signal processing.
Memory 416 (e.g., memory unit, memory device, storage device, etc.) may be one or more devices for storing data and/or computer code for completing and/or facilitating the various processes described in the present disclosure, including that of using extremum seeking logic to control an AHU. Memory 416 may include a volatile memory and/or a non-volatile memory. Memory 416 may include database components, object code components, script components, and/or any other type of information structure for supporting the various activities described in the present disclosure. According to an exemplary embodiment, any distributed and/or local memory device of the past, present, or future may be utilized with the systems and methods of this disclosure. According to an exemplary embodiment, memory 416 is communicably connected to processor 414 (e.g., via a circuit or other connection) and includes computer code for executing one or more processes described herein. Memory 416 may include various data regarding the operation of a control loop (e.g., previous setpoints, previous behavior patterns regarding energy used to adjust a current value to a setpoint, etc.).
In an exemplary embodiment, the functions of controller 410, as depicted in FIG. 8, may be implemented as software stored within memory 416 of processing circuit 418. Supervisory controller 404 provides a setpoint to controller 410 through communication port 412. Temperature sensor 480 (FIG. 8) provides temperature input 450 to controller 410, which compares the measured temperature to the setpoint temperature. In an exemplary embodiment, a temperature regulator command is sent to chilled water valve output 454 to cool the air within the AHU. Extremum seeking control strategy 860 can be used to control actuator 850 for damper 852 via actuator command 458. In an exemplary embodiment, feedback from the actuator can be achieved through the use of a physical signal received from a damper position sensor. In another exemplary embodiment, memory 416 can store information on the physical limits for actuator 850 to detect an actuator saturation condition. In yet another exemplary embodiment, detection of an actuator saturation condition may cause the input to the integrator 98 to be limited (FIG. 8).

Referring to FIG. 10, a flow diagram of a process 1000 for limiting the effects of an actuator saturation condition in an extremum seeking control loop for an AHU is shown, according to an exemplary embodiment. In an exemplary embodiment, process 1000 can be implemented as software stored within the memory of AHU controller 410. In another exemplary embodiment, process 1000 can be implemented as an analog circuit. Process 1000 includes the steps characteristic of an extremum seeking control strategy including: receiving a measurement from the temperature regulator control loop (step 1002), probing for a performance gradient (step 1006), using an integrator to drive the gradient to zero (step 1008) and updating the manipulated variable sent to the damper actuator (step 1010).
Process 1000 further includes calculating the difference between the input and output signals to the actuator (step 1012). The difference between the input and output signals to the actuator remains zero unless the actuator is saturated. Process 1000 is further shown to pass the resulting difference signal from step 1012 into an amplifier (step 1014). The amplified difference signal from step 1014 is then passed back to step 1008 and combined with the output of step 1006 to form a new input to step 1008. This prevents the integrator in step 1008 from winding up and the extremum system from becoming unable to adapt to changes in the optimum operating condition.
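The anti-windup feedback of steps 1012–1014 amounts to back-calculation: the difference between the saturated actuator output and the commanded value is amplified and combined with the gradient term at the integrator input. The following Python sketch illustrates one update of that scheme; the actuator range (0 to 1), the gains, and the function names are assumptions for illustration, not values from the disclosure:

```python
def saturate(u, lo=0.0, hi=1.0):
    """Model the physical actuator limits (assumed 0..1 damper range)."""
    return min(max(u, lo), hi)

def esc_antiwindup_step(integrator_state, gradient, u_cmd, dt=1.0,
                        k_i=0.5, k_aw=2.0):
    """One integrator update with back-calculation anti-windup.

    gradient -- estimated performance gradient (step 1006)
    u_cmd    -- last command sent to the actuator (step 1010)
    k_aw     -- anti-windup gain (the amplifier of step 1014)
    """
    u_out = saturate(u_cmd)      # actuator output (input to step 1012)
    diff = u_out - u_cmd         # zero unless the actuator is saturated
    # Combine the amplified difference with the gradient at the integrator input
    integrator_state += dt * (k_i * gradient + k_aw * diff)
    return integrator_state

# Within the actuator limits the difference term vanishes:
s = esc_antiwindup_step(0.0, gradient=0.1, u_cmd=0.5)
# Beyond the limits the feedback opposes further integration:
s_sat = esc_antiwindup_step(0.0, gradient=0.1, u_cmd=1.5)
```

When the command stays inside the actuator's range, the loop behaves as a plain integrator; only during saturation does the difference term act, pulling the integrator state back toward the feasible region.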
z = f(x) = (x − x_opt)² + 2, where f(x) represents the performance map and x_opt represents the value at which f(x) is minimized. The derivative of z is then taken with respect to time at differentiator 908 and used as an input to a flip-flop based control 910 with some hysteresis. The flip-flop of circuit 910 is configured such that the change over associated with a negative value of the output derivative causes the flip-flop to change states. In one embodiment, a J-K flip-flop can be used with the hysteresis output driving the clock of the flip-flop. The output of circuit 910 is then integrated by integrator 912 and fed to the actuator of plant 903. Saturation block 914 mathematically represents the actuator of plant 903 with an input corresponding to the manipulated variable produced by ESC loop 922 and an output corresponding to the output of the actuator.
The effects of an actuator saturation condition at saturation block 914 are limited through the use of a feedback loop. The difference between the input and output signals for saturation block 914 is calculated at processing element 916. The difference signal is then amplified by a gain 918 and combined with the input to integrator 912 at processing element 920 to prevent wind-up in integrator 912.
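The loop of blocks 903–922 can be exercised numerically. The sketch below uses a quadratic performance map, toggles the search direction through a hysteresis test on dz/dt (a software stand-in for the flip-flop of circuit 910; the sign convention is an assumption), integrates the direction to form the manipulated variable, and applies the saturation-difference feedback of elements 916–920. All gains, thresholds, and the map itself are illustrative, not taken from the disclosure:

```python
def simulate_sliding_mode_esc(x0=0.0, x_opt=0.6, steps=2000, dt=0.01,
                              rate=0.2, hysteresis=1e-4, k_aw=5.0):
    """Sliding-mode ESC with hysteresis switching and anti-windup (sketch)."""
    def saturate(u):                 # saturation block 914 (assumed 0..1 range)
        return min(max(u, 0.0), 1.0)

    def plant(v):                    # assumed performance map for plant 903
        return (v - x_opt) ** 2 + 2.0

    x = x0                           # state of integrator 912
    direction = 1.0                  # state of the flip-flop (circuit 910)
    z_prev = plant(saturate(x))
    for _ in range(steps):
        u = saturate(x)              # actuator output
        z = plant(u)
        dz = (z - z_prev) / dt       # differentiator 908
        if dz > hysteresis:          # cost rising: toggle the search direction
            direction = -direction
        z_prev = z
        # Integrator input: switching signal plus amplified saturation
        # difference (processing elements 916, 918, and 920)
        x += dt * (rate * direction + k_aw * (u - x))
    return saturate(x)
```

Starting away from the optimum, the manipulated variable ramps toward x_opt and then chatters within a narrow band around it; the k_aw term keeps the integrator from winding up whenever its state drifts past the actuator limits.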
In this example, the relationship between the damper openings is such that ESC can be used to optimize the control of any damper, because optimization of one damper opening leads to the optimization of all damper openings.
In yet another exemplary embodiment, one or more dampers may have fixed positions while other damper openings are variable and interrelated. In this embodiment, the damper positions for dampers 460, 462, and 464 may be as follows: θout = 1, θex = manipulated variable from the ESC, and θre = 1 − θex. In this example, the ESC is used to optimize the control of damper 460 to minimize the power consumption of the AHU, while outdoor air inlet damper 464 remains fully open and damper 462 varies based on damper 460. ESC can therefore be used to optimize any combination of fixed position dampers and interrelated variable position dampers in an AHU, where ESC is used to control one or more of the variable position dampers. ESC can also directly control more than one damper at a time. For example, multiple ESC controllers may be used to control a plurality of independent dampers. Alternatively, a single ESC controller with multiple inputs can be used to regulate a plurality of independent dampers. The dampers in an AHU controlled by the extremum seeking control strategy may include, but are not limited to, outside air inlet dampers, recirculation air dampers, exhaust dampers, or a combination thereof.

Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor.
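Returning to the interrelated-damper example above: that arrangement reduces the three dampers to a single ESC manipulated variable. A small Python sketch of the mapping, with fractional openings (0 = closed, 1 = fully open) and dictionary key names assumed for illustration:

```python
def damper_positions(theta_ex):
    """Map the single ESC manipulated variable to all three damper openings.

    theta_ex is the exhaust damper opening. Per the example: the outdoor
    air inlet damper stays fully open and the recirculation damper
    complements the exhaust damper.
    """
    theta_ex = min(max(theta_ex, 0.0), 1.0)     # respect actuator limits
    return {
        "outdoor_air": 1.0,                     # damper 464, fixed fully open
        "exhaust": theta_ex,                    # damper 460, driven by the ESC
        "recirculation": 1.0 - theta_ex,        # damper 462, interrelated
    }
```

Because only theta_ex is free, optimizing that one variable fixes every opening in the set, which is why a single ESC loop suffices for the whole damper arrangement.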
By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
It should be noted that although the figures may show a specific order of method steps, the order of the steps may differ from what is depicted. Also two or more steps may be performed concurrently or with partial concurrence. Such variations will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.
1. A method for optimizing a control process for an actuator, the method comprising: operating the control process using an extremum seeking control strategy; and using an electronics circuit to compensate for an actuator saturation condition of the extremum seeking control strategy.
2. The method as recited in Claim 1, further comprising: receiving a feedback signal from the actuator; comparing the feedback signal to a control signal sent to the actuator; subtracting the control signal from the feedback signal to obtain a difference signal; amplifying the difference signal to exaggerate the difference signal; and providing the amplified difference signal to an input of the extremum seeking control strategy to override previously determined optimum control parameters.
3. The method as recited in Claim 2, further comprising: receiving the amplified difference signal at the input of the extremum seeking control strategy; and providing the amplified difference signal to an integrator configured to reduce a performance gradient.
4. The method as recited in Claim 1, further comprising: actively distinguishing the actuator saturation condition from a state in which the actuator is not saturated.
5. The method as recited in Claim 4, further comprising: resetting the extremum seeking control strategy when the actuator saturation condition is actively distinguished.
6. The method as recited in Claim 1, further comprising: retrieving physical boundaries for the actuator from memory; and detecting the actuator saturation condition using the physical boundaries.
7. A controller for controlling an actuator, the controller comprising: a processing circuit configured to operate the plant using an extremum seeking control strategy and to compensate for an actuator saturation condition of the extremum seeking control strategy.
8. The controller of Claim 7, further comprising: an input configured to receive a feedback signal from the actuator; and an output configured to send a control signal to the actuator; wherein the processing circuit is further configured to subtract the control signal from the feedback signal to obtain a difference signal and wherein the processing circuit is further configured to provide the difference signal to logic of the processing circuit for providing the extremum seeking control strategy, the difference signal to override previously determined optimal control parameters.
9. The controller of Claim 8, further comprising: amplifying the difference signal prior to providing the difference signal to the logic of the processing circuit for providing the extremum seeking control strategy.
10. The controller of Claim 8, wherein the logic of the processing circuit for providing the extremum seeking control strategy comprises an integrator configured to reduce a performance gradient, and wherein the integrator is configured to receive the difference signal as an input.
11. The controller of Claim 8, wherein the processing circuit is further configured to actively distinguish the actuator saturation condition from a state in which the actuator is not saturated.
12. The controller of Claim 11, wherein the processing circuit is further configured to reset the extremum seeking control strategy when the actuator saturation condition is actively distinguished.
13. The controller of Claim 8, wherein the processing circuit further comprises: memory storing physical boundary information for the actuator; wherein the processing circuit is configured to retrieve the physical boundary information from memory and to detect the actuator saturation condition using the retrieved physical boundary information.
14. The controller of Claim 8, wherein the processing circuit comprises a processor and a memory device communicably coupled to the processor, the memory device storing computer code for operating the plant using the extremum seeking control strategy and computer code for compensating for the actuator saturation condition of the extremum seeking control strategy.
15. A controller configured for use with an air handling unit having a temperature regulator and a damper effected by an actuator, the controller comprising: a processing circuit configured to: provide a first control signal to the temperature regulator, the first control signal based upon a setpoint; provide a second control signal to the actuator, the second control signal determined by an extremum seeking control loop; and adjust the extremum seeking control loop to compensate for an actuator saturation condition.
© New Age Banking Summit Nigeria 2018, All Rights Reserved. No part of this website may be reproduced or transmitted in any form or by any means without the prior written permission of the event producer.
Banking Technology is the definitive source of news and analysis of the global fintech sector.
Founded in London in 1984, Banking Technology has been at the forefront of print and online publishing for the international community of bankers, financial services professionals, vendors, consultants, analysts and other industry participants, big and small.
Our website attracts over 80,000 unique monthly visitors and our daily newsletter is delivered to over 17,500 key decision-makers in the financial services and technology sectors.
Our broad readership and reputation, combined with in-depth coverage of fintech and banking technology issues on a worldwide scale, makes Banking Technology the leading resource for technology buyers, sellers, developers, integrators and other specialists across the sector.
The Payments & Cards Network is dedicated to the payments industry, adding value by offering innovative executive search, recruitment and RPO solutions to international clients. With offices in Amsterdam, Hong Kong, San Francisco and London, it has the global outlook and understanding of the payments landscape to offer the best advice and service on the market.
Call us with your specific needs today on +31 203 030 257 and one of our consultants will be able to help quickly and efficiently.
Payments Afrika is an independent news site for professionals and executives. Focused on the continuing evolution and innovation of payments in Africa and abroad, Payments Afrika delivers the latest news and insight into a wide range of electronic payment topics including: card payments, ATMs, online payments, ecommerce, mobile payments, online banking, alternative payments, security and point of sale technology.
Financial Nigeria magazine, Africa’s foremost development and finance journal, is produced as a knowledge-intensive complementary reading material. It is targeted at senior officials in the public, private and voluntary sectors. A number of Federal Government Ministries, Departments and Agencies, and private sector institutions are institutional subscribers to the magazine. The magazine regularly publishes articles on Energy, Policy and Governance, Private Sector Development, ICT, Finance and Investment, Monetary Policy, International Trade, Market Risks and International Capital Flows, Market Regulation, Agriculture and Food Security, Sustainable Development, and Technology. Financial Nigeria magazine debuted in August 2008 as a monthly. Since then, it has been published every month.
Dr. Omobola Johnson is a Partner of TLcom Capital (Lagos, Nigeria)—a venture capital firm focused on investments in technology-enabled companies in Sub-Saharan Africa. Before joining TLcom, Omobola was Minister of Communication Technology, Nigeria from 2011 to 2015. During this time, she supervised the launch and execution of the National Broadband Plan and provided support to the Nigerian technology industry.
Before serving in the Nigerian government, Omobola worked in Accenture for over twenty-five years, including five years as Country Managing Director. She serves on the board of a number of not-for-profit and for-profit organizations, including Women in Management and Business (WIMBIZ), where she was the founding chairperson. She has a Bachelor's degree in Electrical and Electronic Engineering from the University of Manchester, a Master's degree in Digital Electronics from King's College, London and a Doctorate in Business Administration from the School of Management of Cranfield University.
Tomisin Fashina is the Chief Information Officer for the Ecobank Group and Managing Director of eProcess (a technology shared service subsidiary of Ecobank). He has over 27 years of experience in several industries, including technology and financial services, with over 15 years in management positions. At Ecobank, he is responsible for ensuring the efficient and effective use of technology to meet business goals and objectives.
Tomisin served as the Managing Director of Yookos, a social media company providing social network services. Prior to that, he was the General Manager, Transaction Banking Products with ABSA Business Bank (Barclays Africa Group) Johannesburg. He has held several other positions, including Director, Cash Management, Barclays Bank PLC Dubai and Division Head and Director Client Delivery, Global Transactions Services at Citigroup South Africa. Tomisin holds a B.Sc. degree in Computer Engineering from Obafemi Awolowo University and an MBA in Marketing from the University of Lagos.
Folasade Femi-Lawal is a fellow of both the Institutes of Chartered Accountants and Taxation of Nigeria, and an alumna of Harvard Executive Education Business School. She is currently the Head of Digital Banking for First Bank of Nigeria. Her accomplishments include the successful launch of the first integrated lifestyle mobile banking app FirstMobile—which received a 4.7 rating within 24 hours—on the Google, BlackBerry and Android app stores.
Folasade joined FirstBank in July 2012, when she held the role of Deputy Head, Mobile Financial Services (MFS) and helped the bank take giant strides that led to partnerships with major institutions such as the Cherie Blair Foundation, UNICEF and major telecoms in Nigeria. She has to her credit the Global Finance award for the ‘Best Digital Bank of Distinction for 2016’ as well as the 2016 Asian Banker Award for the ‘Best Mobile Payments' at the 3rd West Africa International Banking Convention.
Dennis Onome Ezaga is an accomplished banking executive with over 19 years of demonstrated career success in banking, encompassing experience in sales, product management, business development, strategic planning/new setups, remittances, and banking operations.
Prior to his current role, he was responsible for opening First City Monument Bank’s Transaction Banking office in Abuja, where he led the team and helped the organization significantly increase the adoption and usage of electronic products. Dennis is a fellow of the Chartered Institute of Finance and Control, as well as the Institute of Credit Administration of Nigeria.
Currently working as the Chief Information Security Officer and Group Head, Information Security Group, at Guaranty Trust Bank Plc, Nigeria, Bharat Soni has over 18 years of experience in information security risk assessments, business risk assessment, IT governance and compliance. He has demonstrated skill in developing information security frameworks and conceptualizing information security policies, as well as ensuring compliance with security standards and procedures.
Shina Atilola is a consummate strategist and communication expert with experience driving product strategy and execution for banks. He has over 20 years of experience in business strategy, marketing communication, branding, auditing, mergers & acquisitions, and financial management. Currently the Group Head, Strategy and Innovation, of Sterling Bank, Shina has established himself as a reputable banker with expertise in enabling sustainable businesses over his 18-year banking career.
An erudite strategist in the Nigerian banking sector, Shina has enabled Sterling Bank to develop innovative approaches to doing business. He is an entrepreneur and thrives in challenging and fast-paced environments. He also runs businesses in the agriculture and real estate sectors. Shina is a graduate of Obafemi Awolowo University Ife, with a first class honours degree in International Relations, and an alumnus of Wharton Business School. He is trained in strategic thinking, credit analysis, and brand management, among others.
Usman Abdulqadir has over 20 years of banking experience, spanning retail and commercial banking, as well as bank supervision and regulation. He began his banking career with FSB International Bank Plc (now Fidelity Bank) in 1996, and served as the Kaduna Business Development Manager of Reliance Bank Limited (now Skye Bank), where he managed corporate, private and public-sector clients, before moving on to the Central Bank of Nigeria in 2003. He played a central role in the banking sector reforms of 2005 and 2009, and authored several in-house papers in the areas of financial stability, risk management and bank supervision.
Between 2010 and 2012, Usman represented the Central Bank of Nigeria in the Liquidity Risk Management Working Group of the Islamic Financial Services Board, a global body with 188 members (including 61 regulatory/supervisory authorities and 8 inter-governmental bodies). He also represented the CBN at the 2015 Seminar for Senior Supervisors from Emerging Economies. Usman graduated from Bayero University, Kano, in 1993 with an upper second class degree in Accounting, and also holds a Master's degree in Islamic Finance from Durham University, UK. He is an Associate Member of the Institute of Chartered Accountants of Nigeria.
Victor Okigbo is a pioneering digital innovator in Nigeria, and an advocate of technology entrepreneurship to foster greater digital inclusion on the African continent. Over the last 25 years, he has co-founded InfoSoft Nigeria - one of the country’s leading software development firms, worked with firms as diverse as Microsoft and British American Tobacco, and initiated the formation of IDEA Nigeria - the country’s first public-private partnership technology incubator.
Victor is Head of Financial Technology and Innovation at Access Bank, and currently leads The Africa Fintech Foundry - a fintech startup incubator based in Lagos. In his spare time, he runs Anthill 2.0, an acclaimed invitation-only poetry and spoken word forum that is held at various discreet locations in the city of Lagos. Victor also voluntarily serves on the Nigerian Economic Summit Group's Science and Technology Policy Commission.
Dr. Etienne Slabbert joined Barclays Africa Group in May 2010 and is responsible for technology strategy, execution and service on the continent outside of South Africa. Prior to joining Barclays Africa, Etienne held various senior technology, operations and banking roles at firms such as Nedbank, Accenture and Infoplan.
He holds a Ph.D. in Management of Technology and Innovation from the Da Vinci Institute for Technology and Innovation, and an MBA from Oxford Brookes University, UK. Etienne was a councillor at Sol Plaatje University (one of South Africa’s two new universities), Chairman of the Facilities, Infrastructure and Information Technology Committee and a member of the University Executive Committee.
Dipo Fatokun is a fellow of the Institute of Chartered Accountants of Nigeria and the Chartered Institute of Bankers, with leadership and top management experience in both the public and private sectors. He has attended several training programmes locally as well as internationally, including The Payments System Policy and Oversight Training by the Federal Reserve Bank of New York.
Dipo is presently serving as the Director, Banking and Payments System Department, Central Bank of Nigeria, and was a Deputy Director in the Banking Supervision Department of the Bank. He is the current Chairman of the Nigeria Electronic Fraud Forum, MICR Implementation Technical Committee and Payments Infrastructure Coordinating Committee, and the Secretary of the Payments System Strategy Board. Dipo holds a B.Sc. degree in Accounting from the University of Ilorin, Kwara State and a Masters’ degree in Business Administration from the University of Lagos.
Shola Akinlade is co-founder and CEO of Paystack, which provides a simple and secure way for merchants in Africa to accept payments from their websites or mobile apps. Paystack is the first Nigerian startup to be accepted into Y Combinator’s accelerator programme. The company recently received US$ 1.3 million in seed funding.
Prior to Paystack, Shola co-founded Precurio—an open-source collaboration software for businesses in emerging markets—which was downloaded over 150,000 times and made available in 6 languages. He graduated with a B.Sc. in Computer Science from Babcock University.
Benedict Anyalenkeya has over 25 years of experience in consulting, manufacturing and banking, with core competencies in technology strategy, electronic banking, channels and product management, retail banking, accounting, and project management. He began his career in banking as operations staff before moving into IT audit consulting in 1994 when he joined EDP Audit & Security Associates. At EDP Audit & Security Associates, he consulted for regulators, multi-national companies and banks.
Benedict is a Microsoft Certified Professional, Certified Information Systems Auditor, Fellow of the Institute of Chartered Accountants of Nigeria, Associate Chartered Certified Accountant of London, Lead Auditor certified in British Standard number 7799 on Information Security and certified International Cards & Payments Professional. He is also certified in Risk and Control of IT.
Muhammad Jibrin has rich experience in banking and management, having previously held positions in Union Bank of Nigeria, Citigroup NA and Barclays Bank in four countries across Africa and Europe. Prior to joining SunTrust, he was an Executive Director on the Board of Aso Savings and Loans Plc. and at one point the Group Head responsible for growing Bond Bank’s (now Skye Bank Plc) business in Abuja and the North.
Muhammad has served on various boards and was at one time the National Deputy President of the Mortgage Bankers Association of Nigeria and a member of the Presidential Committee on Affordable Housing. He holds an M.Sc. in Risk Management from the NYU Leonard N. Stern School of Business, New York University, an MBA from Imperial College London, a diploma in General Management from Harvard University and a postgraduate diploma in Financial Management from the Abubakar Tafawa Balewa University, Bauchi.
Onajite Regha is the founder of the non-profit E-Payment Providers Association of Nigeria (E-PPAN), which actively engages with the banking industry and digital payment stakeholders to ensure that Nigerian consumers and businesses benefit from access to world-class payments. She has been its Executive Secretary/Chief Executive Officer since the organisation’s inception, partaking in advocacy, research, strategy, stakeholder facilitation and articulation of initiatives that educate and connect people in the payments environment.
Onajite has over 17 years of experience in media, banking and electronic payment systems. She has worked in team-oriented, high-volume, fast-paced, client-centric environments, serving as an operations manager at an oil servicing company in the Niger Delta, a counter service supervisor at Citibank Nigeria and a leader of the first e-payment magazine in the West African sub-region. Onajite studied Mass Communication at Delta State University, Abraka, received a certificate in Projects Management and is an Extra Value Certified Professional by Harvard Associates.
Peter Martis is currently director of the Face Recognition Business Unit at Innovatrics. His areas of focus include continuous product improvement, increasing Innovatrics’ presence in the face recognition market and evangelising the use of facial biometrics on mobile devices.
Prior to Innovatrics, Peter worked for Nuance, Genesys and Siemens, focusing on the sales activities and business development of the biometrics and customer service solutions verticals. While working at Nuance, he managed to increase the sales of voice biometrics technology in the Central European region by 400 per cent. Peter holds a degree in Computer Automation from Slovak Technical University. He is married with two children.
Temitope Akin-Fadeyi is the Head, Financial Inclusion Secretariat at the Central Bank of Nigeria (CBN). She also serves as Secretary to the National Financial Inclusion Governing Committees and Coordinator: Financial Inclusion Working Groups, focusing on products, channels, financial literacy and special interventions for priority segments. Prior to joining CBN, Temitope was a Management Board member and Head of Banking Services of FINCA International—a global microfinance organisation—where she pioneered the Banking Services Department in Uganda and mentored subsidiary heads.
She has over 16 years of work experience spanning strategy, operations, digital financial services, microfinance, retail/consumer banking, product development/sales, project management and international money transfers. She is an alumna of Harvard Business School and a Fellow of the Fletcher Leadership Program for financial inclusion. Since assuming office as the Head, Financial Inclusion Secretariat, she has played a key role in advancing financial services to previously excluded populations across Nigeria.
Toby Shapshak is Editor in Chief and Publisher of Stuff South Africa, a contributor to Forbes and a columnist for the Financial Mail. His TED Talk on innovation in Africa has over 1.4 million views. He believes Africa is a mobile-driven continent, and has written on the subject for CNN, The Guardian in London and Forbes.
Toby has been featured in the New York Times and has won the ICT Journalist of the Year award. He was named in GQ’s top 30 men in media and the Mail & Guardian newspaper's 300 influential young South Africans lists. GQ said he "has become the most high-profile technology journalist in the country." As a news and political journalist, he ran the Mail & Guardian newspaper’s website when it was the first news site in Africa, shadowed Nelson Mandela when he was president, and covered the Truth and Reconciliation Commission.
Rishi Pillay is the General Manager and Regional Head: Africa for FSS, a leading global payments technology and processing company headquartered in Chennai, India. FSS offers business value in the areas of electronic payments and financial transaction processing solutions and services. The company has earned the status of a payments systems leader through a combination of an established portfolio of technology solutions, state-of-the-art infrastructure and 26 years of experience in the payments domain.
Rishi was previously the CEO of the automated clearing bureau business within BankservAfrica. He has over 20 years of experience in the financial services industry, undertaking senior roles across retail banking, payments, CRM, product management and IT. He has worked extensively in South Africa and the rest of Africa, managing existing operations and creating startups spanning consumer banking and retail transaction solutions. Rishi is passionate about payments and technology innovation, and is an enabler of financial inclusion and social upliftment.
Osasu Igbinedion is an experienced e-business and financial technology expert. He began his professional career in 2010 as an E-business Executive with Zenith Bank Plc in Lagos, Nigeria. He was responsible for managing e-business products in upcountry locations, training, and implementing electronic products for branches and clients in the zone.
In 2013, he joined a company with a mandate to grow and deepen electronic product adoption in Northern Nigeria. He successfully executed and managed revenue collections and implemented a framework for Treasury Single Account (TSA) for major states in the region, including Kano, Kaduna and Gombe. He is currently the Business Development Manager for Software Group in Nigeria.
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies. Copyright © 2014 SAS Institute Inc. All rights reserved.
Paystack offers a modern, secure, and affordable way for Nigerian businesses to accept online payments from their customers, wherever they are in the world. We provide merchants—from solo entrepreneurs to large multinationals—with the powerful but intuitive tools and services required to run and grow their businesses. We also work with banks to aggregate their payment channels and gain access to the latest financial technology. Through these efforts, we aim to play a major role in realising the Central Bank's vision of driving Nigeria towards a cashless society.
Detecon is a global management and technology consultancy specialized in Telecoms and has become a trusted advisor to many investors, carriers and regulators in Africa. Detecon has successfully launched a number of operators in Africa and worldwide. Detecon's Knowledge Centre has one of the world's largest databases for ICT markets and related subjects.
Virmati Software is a preeminent IT Solutions & Delivery Organization with marquee offerings in verticals of BFSI (Banking & Finance), ERP and mCommerce Platforms.
Regional Partners enabled in East Africa, Central Africa, Central Asia & South East Asia, with 2 Global Representational Offices in Kenya and Dubai.
As a transcendent IT technology player, Virmati excels in developing and delivering cutting-edge solutions to clients and partners, along with resolute life-cycle support.
Come see a demo of Aware's mobile face authentication with liveness detection! Aware is a veteran of the biometrics industry, providing a comprehensive portfolio of biometrics software products since 1993 for fingerprint, face, and iris recognition applications ranging from defense and border management to mobile authentication. We specialize in top-tier biometric analysis, processing, and matching algorithms, provided in products that are easy to use and backed by world-class technical support. Our mobile face authentication SDKs perform robust spoof detection and high-performance matching for easy, reliable, secure authentication.
DERMALOG Identification Systems GmbH, based in Hamburg, is the largest German biometrics manufacturer and is known as a biometrics innovation leader. A team of scientists is constantly working on "Automatic Biometric Identification Systems" (ABIS and AFIS), including the latest fingerprint live scanners as well as biometric border control systems, biometric ID cards and other documents. "FingerLogin", "FingerPayment" and "FingerBanking" are also DERMALOG products, as are automatic face identification and iris identification.
Apart from Germany and Europe, DERMALOG's main markets are in Asia, Africa, Latin America and Middle East. The company has now delivered its technologies and solutions to more than 140 government agencies in 75 countries.
DERMALOG also provides biometric solutions for banks and ATM manufacturers. The world's largest biometrics bank project (50 million USD) has been implemented by DERMALOG: an ABIS for 23 banks and for the Central Bank in Nigeria ensures single identities of bank customers and guarantees the best possible KYC. Numerous ATMs worldwide have been equipped with DERMALOG's fingerprint technology, replacing insecure PINs.
InfoFort provides complete digital transformation solutions that span the full information management lifecycle and allow customers to move from paper to digital content management; structure their information; capture, process and validate data; automate customized workflows; and deploy electronic and digital signatures using smart and secure mobile technologies for easier accessibility, compliance and business continuity.
Financial Software and Systems (FSS) is a leader in payments technology and transaction processing, offering a diversified portfolio of software products, hosted payment services and software services built over 25 years of comprehensive experience across the payments spectrum.
FSS, through its innovative products and services, caters to the wholesale and retail payments initiatives of leading banks, financial institutions, processors, merchants, governments and regulatory bodies. Its end-to-end payments suite powers retail delivery channels such as ATM, PoS, Internet, Mobile and Financial Inclusion as well as critical back-end functions such as cards management, reconciliation, settlement, merchant management and device monitoring.
Software Group is a global technology company specializing in delivery channel and integration solutions for institutions that provide financial services. Founded in 2009 and headquartered in Sofia, Bulgaria, it currently serves a worldwide client base in more than 65 countries from 9 regional offices located in Australia, Bulgaria, Egypt, Ghana, India, Kenya, Mexico, the Philippines and the USA. The company's vision is to accelerate financial inclusion by creating cutting-edge technology solutions. Software Group's customers include organizations such as the Bill & Melinda Gates Foundation, International Finance Corporation (IFC), Asian Development Bank, Asian Confederation of Credit Unions (ACCU), Financial Sector Deepening (FSD), 7 of the top 10 microfinance networks (Finca, OI, VFI, Accion, Hope International, Microcred, ReAll), Bank South Pacific, National Bank of Vanuatu, Fullerton Financial Services Holding, Fidelity Bank Ghana, LAPO Nigeria, Access Holding, Grameen Koota Financial Services and others.
SmartStream provides Transaction Lifecycle Management (TLM®) solutions and Managed Services to dramatically transform the middle and back-office operations for financial institutions. Over 1,500 clients, including more than 70 of the World's top 100 banks, 8 of the top 10 asset managers, and 8 of the top 10 custodians rely on SmartStream’s solutions.
SmartStream delivers greater efficiency, automation and control to critical post-trade operations including: Reference Data Operations, Trade Process Management, Confirmations and Reconciliation Management, Corporate Actions Processing, Fees and Invoice Management, Cash & Liquidity Management and Compliance Solutions. Used independently or as a suite of solutions and services, they give clients a lower cost per transaction while reducing operational risk, aiding compliance and improving customer service levels.
Volodymyr Budanov is currently a marketing and sales professional at a fintech company. After spending nearly 15 years working in different international banks, he has a clear idea of how financial services work and what customers really need in their daily lives.
Volodymyr creates and sells various fintech products, including mobile wallets, electronic money systems, mobile QR POS terminals, customer loyalty management and mobile gifts. He works with customers and prospects from more than 100 countries across the world, and is interested in launching ambitious projects in African countries.
Walid Kaâbachi is an Associate Director of Biware Consulting, which he co-founded in 2011. Biware is a consulting and system integration company specialising in business intelligence and analytics solutions. Its clients include Attijariwafa Bank (Morocco), AlBarid Bank (Morocco), BMCE Bank (Morocco), Ecobank Group (Togo and Ghana), Ooredoo Group (Tunisia, Algeria), BIAT (Tunisia), Bank Al-Maghreb (Morocco), and Bouygues Telecom (France).
Walid is an expert in consulting and information system implementation for clients in the financial and telecommunications sectors. Before founding Biware, he was a Project Manager and MOA for over 12 years.
Adegbami Adegoke Elijah is an experienced banker with in-depth knowledge and expertise in financial management, performance management, audit practices, financial reporting and management accounting, and different areas of management practices. He has worked as a management consultant for numerous organisations, including banks. After the Microfinance Policy was introduced by the Central Bank of Nigeria in 2005, Adegbami worked as a consultant for a number of community banks that were being converted to microfinance banks.
Subsequently, he joined Amaifeke Microfinance Bank Limited as the Head of the Internal Control and Audit Department. In 2009, he joined Mainstreet Bank Microfinance Bank Limited as the pioneer Head of Internal Control and Audit. Adegbami is an alumnus of the School of African Microfinance-Mombasa, Kenya, where he specialised in Strategic Planning (Using Microfin Tool) and Financial Analysis for Microfinance Institutions (Seep Tool). He has published several books on subjects related to financial intelligence, entrepreneurship and financial empowerment.
Ogbonna Ukuku is the Executive Director – Nigeria, Global Chamber and the CEO of ONS Triumph Ltd., a company whose primary objective is to promote, advocate for and support investors. The company also focuses on SME development, youth development and entrepreneurship. Ogbonna is the Lead Consultant for Investment Support Services to the Abeokuta Independent Power Project. Prior to this, he was an Associate Consultant to the prestigious Baden International Business School (BIBS) in Switzerland.
Ogbonna was also the Senior Consultant on Investment Promotion to Indorama Group (owners of Eleme Petrochemical Company Ltd). He holds a professional certificate in Investment Promotion and Economic Development from the International Institute for Investment Promotion (IIIP) – Switzerland (Now Baden International Business School), an M.Sc. in Investment Promotion and Economic Development from Edinburgh Napier University – Scotland, a Master’s degree in Project Management and a first degree in Geology and Mining.
public awareness programmes in Nigeria. Dele is certified as an e-Business Consultant (CEC) by the International Council of Electronic Commerce Consultants in the UK. He is also a Microsoft Certified Systems Engineer. He has 25 years of post-graduation work experience in the IT and banking industries. Dele is a Chartered Accountant, and attended the Senior Management Program at the prestigious Lagos Business School (LBS). He will be the Chairman on Day 1 of the summit.
Currently based in Morocco, Ndagi Job Goshi serves as General Manager for Liferay in Africa. Being a native of the continent, Ndagi is tasked with serving the African customer through the consultative provision of the robust, customer-focused Liferay Digital Experience Platform. Having worked for Fortune 100 companies in both the financial services and insurance industries, he brings a balanced mix of sound technical knowledge and business know-how to the table.
Ndagi enjoys helping empower people to reach their potential. As such, he has mentored several professionals on Wall Street over the years. Ndagi currently continues to mentor entrepreneurs in Africa as part of the prestigious BMCE Bank of Africa’s African Entrepreneurship Award. He is married and is the father of three daughters.
Antoinette Edodo is currently an Account Executive working with SAS Nigeria. She joined the company in 2008 and rose through the ranks, gaining experience in several business units across the enterprise and acquiring skills in channel management, negotiation, territory sales and strategic account management. Antoinette has 9 years of experience selling technology solutions to organisations via channel partners and to clients (B2B and B2C).
She has led key strategic accounts across the African continent. Antoinette is highly focused on client satisfaction and improving the bottom line. She has a keen eye for spotting strategic partnerships and building trust. After graduating with a B.Sc. in Estate Management from the University of Lagos in 2004, she took on a business support role at Berkeley Group and swiftly moved into the oil & gas industry as the Executive Assistant to the Managing Director of the then Wilbros group before joining SAS.
Wael Issa is a Fintech expert specialising in the transformation of back-office operations through automation. In the past 10 years, Wael has worked with over 300 financial institutions covering the Middle East and Africa. He is currently based in Dubai, and is pursuing a Pre-MBA programme at Harvard Business School.
George Agu is an experienced IT professional and entrepreneur. Over the past 16 years, he has navigated roles in professional services, sales and business development, and company leadership in several African countries. Prior to founding ActivEdge Technologies in 2010, George was the Managing Director and Chief Executive Officer of Neptune Software West Africa, and helped the company gain substantial market share in the public and private sectors. He also worked with an investor and a commercial bank to host the first cloud-based solution for a microfinance bank in Nigeria.
As the CEO of ActivEdge Technologies, George has led several industry initiatives, ranging from revolutionary industry-driven IFRS transition management (insurance, manufacturing, oil & gas and pension sectors) and cybercrime prevention security solutions to PCIDSS and transaction reconciliation and exceptions management. George studied Computer Science and Statistics at University of Nigeria, Nsukka. He is a certified Information Systems Auditor, Information Security Manager, and Project Manager.
Olusola Teniola is currently a Client Partner with Detecon International—a subsidiary of Deutsche Telekom, Germany—where he is responsible for Nigeria and West Africa. Prior to this, he was a board member and CEO of IS Internet Solutions (a Dimension Data company) in Lagos, Nigeria. Previously, he was the COO and Director of Engineering at Phase 3 Telecom, Abuja. His ICT career spans 25 years in strategic management positions at several major telecom companies across the globe.
Olusola was instrumental in the rollout of mobile and broadband services by World Telecom in Portugal (2003–2007) and has served in executive positions at British Telecom, Vodafone, Cisco Inc. and Alcatel-Lucent Technologies. He has also been a member of the ETSI Standardization Work Group Committee for Next Generation Protocols and IP Distribution Architectures (2001). Olusola is currently a member of the IPv6 Council in Nigeria and the A4AI coalition.
Olivier Dipenda is Regional Head: West and Central Africa for FSS, a leading global payments technology and processing company headquartered in Chennai, India. FSS offers business value in the areas of electronic payments and financial transaction processing solutions and services. The company has earned the status of being a payments systems leader through a combination of an established portfolio of technology solutions, state-of-the-art infrastructure and 26 years of experience in the payments domain.
Olivier has held a number of senior positions in leading global financial services companies in the pan-African region. These roles encompassed mobile banking and payments, card issuance and acquiring, agency banking, financial inclusion and the insurance and reinsurance sectors. He participated in an advisory capacity in several African financial markets, and has extensive and in-depth knowledge of the pan-African retail banking, transacting and payments sectors, as well as strategic stakeholder management.
Africa.com is the leading digital media company providing business, political, cultural, lifestyle and travel information related to the continent.
The Africa.com Top10 is the smart choice for busy people who don’t have time to filter through all of the headlines searching for the latest news on Africa. The Top10 is the trustworthy curated news source that makes staying up-to-date quick, easy and interesting.
CR2 is at the forefront of digital banking, consistently delivering on the needs of end users and embracing next-gen technologies and digital disruptions that are enabling new experiences in digital banking across Mobile, Internet and ATM. Our maturity and experience have enriched our product, providing a depth and breadth of functionality that is unrivalled. With deployments in over 100 banks in 60 countries, CR2 enables banks to achieve digital transformation and deliver a consistent, cross-channel, exceptional user experience. CR2 has offices in Dublin, Dubai, Amman, Bangalore and Perth, and a presence in London, Moscow, Lagos and Johannesburg.
Eclectics International was founded in 2007 by talented ICT professionals with extensive knowledge and experience in the financial industry, enabling their clients to stay ahead of an evolving marketplace.
We innovate, develop and deploy market-leading solutions by simply putting our customers' needs at the centre of everything we do. With over 205 clients across 24 countries using our award-winning ICT solutions, we must be doing IT right! We are a CMMI Level 3 appraised and PCI DSS compliant company.
Established in 2007, Accion Microfinance Bank has a mission "to economically empower micro-entrepreneurs and low-income earners by providing financial services in a sustainable, ethical and profitable manner." The bank, which is licensed to operate nationally in Nigeria, has an extensive branch network of 60 service outlets as at the end of 2017, where customers have easy access to various products and services including savings, loans, micro-insurance and e-commerce.
Accion Microfinance Bank has solid shareholder investments from three major banks, Ecobank, Zenith Bank and Citi Bank, as well as the International Finance Corporation, a member of the World Bank Group, and Accion Investments, all of which contribute to its strong financial base and allow it to serve an ever-increasing number of customers. The bank's corporate citizenship focuses on education, with donations of educational materials and supplies made to pupils of public primary schools.
Accion Microfinance Bank has won several awards, including the EFInA Award for the financial service provider that has deepened financial inclusion in Nigeria; multiple Lagos State Enterprise (LEAD) awards for Best Microfinance Bank in Lagos State; the LEAD Centenary MFB of the Year award, for its impact on socio-economic development, contributions to sustainable development and commitment to financial inclusion in Nigeria; and the Ikeja City Award for Most Consistent Microfinance Bank of the Year.
HPS is a multinational company and a leading provider of payment solutions for issuers, acquirers, card processors, independent sales organizations (ISOs), retailers, and national & regional switches around the world.
PowerCARD covers the entire payment value chain by enabling innovative payments through its omnichannel solution that allows the processing of any transactions coming from any channels initiated by any payment means. PowerCARD is used by more than 400 institutions in over 90 countries.
HPS has been listed on the Casablanca Stock Exchange since 2006 and has offices located in major business centres across Africa, Europe, Asia and the Middle East.
Forcepoint is transforming cybersecurity by focusing on what matters most: understanding people’s intent as they interact with critical data and intellectual property wherever it resides. Our uncompromising systems enable companies to empower employees with unobstructed access to confidential data while protecting intellectual property and simplifying compliance. Based in Austin, Texas, Forcepoint supports more than 20,000 organizations worldwide.
For more about Forcepoint, visit www.Forcepoint.com and follow us on Twitter at @ForcepointSec.
Dell EMC, a member of Dell Technologies' unique family of businesses, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies.
Dell EMC serves a key role in providing the essential infrastructure for organizations to build their digital future, transform IT and protect their most important asset, information. Dell EMC enables enterprise customers’ IT and digital business transformation through trusted hybrid cloud and big-data solutions, built upon a modern data center infrastructure that incorporates industry-leading converged infrastructure, servers, storage, and cybersecurity technologies.
Dell EMC brings together Dell’s and EMC’s respective strong capabilities and complementary portfolios, sales teams and R&D. We seek to become the technology industry’s most trusted advisor, providing capabilities spanning strategy development, consultative services and solution deployment and support to help our customers and partners drive the digital transformation of their businesses.
Dell EMC serves customers across 180 countries – in every industry and of every size in the public and private sector, including 98% of the Fortune 500 – with the industry's most comprehensive and innovative portfolio from edge to core to cloud. Our customers include global money center banks and other leading financial services firms, manufacturers, healthcare and life sciences organizations, Internet service and telecommunications providers, airlines and transportation companies, educational institutions, and public sector agencies.
VMware, Inc. is a subsidiary of Dell Technologies that provides cloud computing and platform virtualization software and services. It was the first commercially successful company to virtualize the x86 architecture.
VMware's desktop software runs on Microsoft Windows, Linux, and macOS, while its enterprise software hypervisor for servers, VMware ESXi, is a bare-metal hypervisor that runs directly on server hardware without requiring an additional underlying operating system.
This year, VMware celebrates 20 years as an industry pioneer. When the company launched in February 1998, we transformed the data center forever by mainstreaming virtualization, the core principle of cloud computing. Twenty years later, we remain just as focused on innovating in everything we do, and committed to solving the most difficult technology problems for our customers. We apply the same principles of virtualization and software innovation to securely connect, manage and automate the world's complex digital infrastructure. And there's so much more to come.
We see opportunity to apply those principles to growing technology areas like IoT, edge computing, and AI, amongst others. We are optimistic about the power of technology to be a force for good, with the potential to solve the big societal problems of today and tomorrow. Software, as we see it, has the power to transform business and humanity. We're here to make that happen.
Intel is the world's largest semiconductor manufacturer and a leading maker of computer, networking and communications products. The company was founded by Gordon Moore and Robert Noyce in 1968 and is headquartered in Santa Clara, California.
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, in the Silicon Valley. It is the world's second largest and second highest valued semiconductor chip maker based on revenue and is the inventor of the x86 series of microprocessors, the processors found in most personal computers (PCs). Intel supplies processors for computer system manufacturers including Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing.
Intel Corporation was founded on July 18, 1968. It has over $55.9B in annual revenues, with 25+ consecutive years of positive net income, more than 107,000 employees and 170 sites in 70+ countries. It is the 12th most valuable brand in the world according to Interbrand and ranked #12 on Forbes' list of the World's Most Reputable Companies. Intel has been the largest voluntary purchaser of green power in the United States since 2008 and has invested more than $1B in education across more than 100 countries over the past decade. Intel employees have generated over 4 million hours of volunteer service toward improving education over the past decade.
Nucleus Software is the leading provider of mission-critical lending and transaction banking products to the global financial services industry. For three decades, we have helped more than 150 customers in 50 countries drive innovation, enhance business value and deliver outstanding customer experiences. We offer solutions supporting retail and corporate lending, Islamic finance, corporate banking, cash management, mobile and internet banking, automotive finance and other business areas. Nucleus Software's FinnOne Neo is a ten-time winner of the World's Best Selling Lending System award; it helps banks and other financial institutions transform their retail lending businesses by enhancing their end-to-end digital capabilities, launching innovative products quickly, offering personalized customer service and enhancing loan product portfolios. Our advanced technology solutions for corporate lending deliver the business agility required to cater to the complex needs of lending to large corporate and small-to-medium enterprise (SME) customers. With cutting-edge machine learning and text mining capabilities, Nucleus Software's Lending Analytics is a powerful solution enabling banks to make informed loan decisions through data visualization and business insight generation across the loan lifecycle.
For more than 20 years, CR2 has been trusted by over 100 leading banks in 60 countries to deliver retail banking services incorporating ATM, cards, mobile and internet to millions of customers every day. With headquarters in Dublin, Ireland, CR2 has regional offices located in Amman, Dubai, Bengaluru and Perth, with an additional presence in London, Lagos, Johannesburg, Cairo and Singapore. CR2 is led by a team of fintech experts who develop and align retail banking propositions that meet the needs and challenges faced by banks across emerging and developing markets today.
Freshworks provides organizations of all sizes with SaaS customer engagement solutions that make it easy for support, sales and marketing professionals to communicate effectively with customers for better service, and to collaborate with team members to resolve customer issues. The company's products include Freshdesk, Freshservice, Freshsales, Freshcaller, Freshteam, Freshchat, Freshmarketer, Freshconnect and Freshping. Founded in October 2010, Freshworks Inc. is backed by Accel, Tiger Global Management, CapitalG and Sequoia Capital India. Freshworks' headquarters are located in San Bruno, Calif., with global offices in India, the UK, Australia and Germany. It is widely used by over 150,000 businesses around the world, including Honda, Hugo Boss and Cisco.
RBS number 0844 545 7918 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
RBS is just a phone call away, call the RBS number 0844 545 7918 to be put through to a friendly representative.
Tesco Bank Contact Number 0844 545 6545 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Tesco Bank, owned by Tesco PLC, is a UK retail bank formed in 1997. The bank offers a variety of services to its customers, including mortgages, current accounts, savings accounts, loans, credit cards, foreign currency exchange and insurance. You can access one or more of these Tesco Bank services. You don't always have to visit the bank in person to use the bank's services; you can call the bank using their customer service line to access most of them. The Tesco Bank contact number is 0844 545 6545. This line is staffed by a team of professional customer care representatives who are friendly enough to answer all your questions, and it is in operation day and night.
If you want to own your own home but lack the finances to build one, you can consider getting a mortgage from Tesco Bank. There are however a number of things that you may want to know before getting the mortgage. Such include the amount you can borrow, interest that the mortgage will attract, the deposit that will be needed, when you will receive the mortgage, when to repay your mortgage, restrictions that will apply if you borrow against an existing mortgage and the measures taken if you have a mortgage with another lender. To discuss all of these issues, you can call the customer service via the Tesco bank telephone number on 0844 545 6545.
Tesco Bank also offers personal loans to its customers. You can enquire from the bank's advisor about several things, like your qualification to apply, the maximum amount you can borrow, when you will get your loan, any fees that apply, the repayment periods, additional borrowing with an existing loan, and whether you can apply in joint names. Call the Tesco Bank helpline on 0844 545 6545 to enquire about loan borrowing and get some financial advice.
You can open a Tesco Bank savings account or current account to help you manage your earnings. Accessing your accounts or carrying out transactions can be made easy over the customer care phone number. The right Tesco Bank number to call is 0844 545 6545. Making transactions on your bank account from the comfort of your home can be made very convenient with the help of the customer service helpline. Tesco Bank transactional accounts allow customers to preserve a part of their liquid assets while earning money.
As a Tesco Bank customer, you can consider getting a debit card. This card will give you electronic access to your bank account and enable you to withdraw cash from a designated bank account. The card can also allow you to pay for purchased items instead of using cash, as long as it is accepted as a means of making payment. You can call Tesco Bank customer support to enquire about using your account number on the internet if you are not using the actual card. The customer support contact number to call is 0844 545 6545. You can use this number to apply for the debit card and ask for guidelines on how to use it.
With a Tesco Bank credit card, you can pay for goods and services. If you need a credit card, you can apply for one. If you have questions about whether you can apply, annual fees to pay, balance transfers, card protection policies or any restrictions that apply to getting and using a Tesco Bank credit card, dial 0844 545 6545, the Tesco Bank customer service number, and your call will promptly be answered by a knowledgeable customer service advisor.
With the customer service helpline, you will be able to carry out many transactions on your personal account without visiting the bank. If at any time you are experiencing a problem with your bank account or need some clarification on the services offered, you can conveniently call the Tesco Bank contact number on 0844 545 6545. An advisor from the customer service team will promptly address your concerns. If your issues cannot be resolved over the phone, the customer service agent will refer you to your nearest Tesco Bank branch, where a specialist can answer the questions you have.
Telephone Customer Service is a telephone directory and call routing service and is not connected or has any affiliation to Tesco Bank. The direct contact number can be found in the public domain or on their official website.
RBS Telephone Banking Number 0844 545 6558 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Giving these details will ensure that no one accesses your account by posing as you.
As an RBS customer, you can withdraw cash at an ATM even if your card is stolen or lost. By calling RBS telephone number on 0844 545 6558, you can find out how much money you can withdraw depending on the available amount in your account.
The RBS offers loans and mortgages for its customers. If you dream of buying a new car or going on a special holiday but lack enough funds, you can borrow a loan to support your plan. Dial the RBS telephone banking contact number on 0844 545 6558 to discuss your loan and get to know the requirements you need to fulfil before qualifying for one. If you wish to own your own home, you can also get RBS support by requesting a mortgage. By calling the RBS customer service helpline, you will be connected to an advisor who will help you find out how much funding you can get for your mortgage plan. You will also be advised on how to keep up with the repayments of your mortgage, as failure to repay as per the mortgage contract may subject you to losing your home. If you want to make investments or savings and are facing some challenges, the RBS customer service advisors can come to your help. You simply have to call the customer helpline number for professional financial advice.
Call the RBS customer services phone number on 0844 545 6558 to set up an appointment with an expert advisor at an RBS branch near your home or workplace. In your meeting, you can feel free to ask questions that you are not comfortable discussing over the phone. The RBS specialist will also help you make good financial decisions, especially when it comes to making investments or getting loans to fund your projects.
RBS can be your reliable travel partner when it comes to getting travel currency. Simply call the RBS helpline on 0844 545 6558 when travelling overseas to order travel currency. In situations where you need to make international financial transfers and don't know how to go about it, contact the customer service helpline. One of the dedicated and professional customer service agents will take you through the procedural steps in a manner you can easily follow. Your conversation will not be cut off until you are satisfied with the answers given.
Banking with RBS has never been so easy with the customer service helpline that operates around the clock. If you have any complaints or compliments to make about RBS services, the customer service helpline is always open. Call the RBS telephone banking number on 0844 545 6558 to get answers to your queries and discuss all your banking needs from the comfort of your home, instantly and without the hassle of visiting the bank.
Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with RBS. The direct contact number can be found in the public domain or on their official website.
Yorkshire Bank Number 0844 545 7916 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Headquartered in Leeds, UK, Yorkshire Bank is a leading provider of financial services and the trading division of Clydesdale Bank plc. The bank traces its history back to 1859, when it was founded. Today Yorkshire Bank has branches across the UK and is a leader in personal banking services. Whether you are an existing customer or are considering becoming one, you will find the Yorkshire Bank number 0844 545 7916 useful in many ways. You can call it to enquire about their financial services, such as opening a bank account, applying for loans and so on. Their customer services department is staffed with knowledgeable and friendly customer service representatives who will promptly answer your queries and advise you accordingly.
Yorkshire Bank has a strong online presence, with a platform through which the bank provides services and information for the convenience of its customers. Its internet banking services provide great convenience to many registered customers, who can easily log in to access and manage their bank accounts from practically anywhere. However, in 2013 there was a problem with access to their internet banking services, as customers could not log onto their website for several days. It was alleged that the bank forgot to renew its domain name. More IT problems were experienced in September 2014, leaving customers unable to send and receive payments for some time. When such problems occur, customers obviously make every effort to communicate with the bank for answers. Calling the Yorkshire Bank contact number 0844 545 7916 can help in such situations because a fast connection through to customer service representatives is guaranteed.
Yorkshire Bank has enjoyed fairly steady expansion over the years, with profits rising in a number of years. In 2006, profits rose by 16.7% to £454 million compared to the previous year. Total income increased by 8.7% to £1,193 million, and net interest income grew by 14.6% to £769 million. However, in 2014 the National Australia Bank (NAB) started working on an exit strategy from the UK, with several options considered for Yorkshire and Clydesdale Banks. In the same year, NAB announced its plans to float Clydesdale Bank plc (Yorkshire Bank included) on the London Stock Exchange (LSE) via an initial public offering, aiming to raise £2bn. As a Yorkshire Bank customer, you may want to find out whether any of the plans to exit the UK will in any way affect the services you expect. That can simply be done by calling the Yorkshire Bank telephone number on 0844 545 7916.
NAB's priority to exit the UK is mainly a result of a crisis within the economy, which has fared worse than those of Australia and New Zealand. Clydesdale and Yorkshire Banks have turned out to be low-return assets for NAB, which wants to focus on its home markets instead. Yorkshire Bank, and Clydesdale plc in general, have seen falling annual profits in recent years. Even as the exit plans continue to be laid out, customers of Yorkshire Bank still enjoy the full range of financial services. Any plans that directly affect customers are obviously communicated to them, but customers can still confirm by calling the Yorkshire Bank helpline on 0844 545 7916. Knowledgeable customer service representatives with all the information at their fingertips are always ready to address all issues and any concerns from customers.
Yorkshire Bank financial services are enjoyed by many customers in the UK. You may open any type of bank account with the bank, including savings and current accounts. Most customers have credit cards with the bank, while others enjoy lending services to suit a variety of their needs. Customers conveniently register for internet banking services and can access more information relating to their transactions. For any financial services you might currently enjoy from Yorkshire Bank, you can easily get help and support whenever you face any difficulties. Resources available on the bank's online platform can be very useful in providing guidance to solve many issues, but sometimes circumstances demand calling and speaking with someone from the customer services department. Calling the Yorkshire Bank number on 0844 545 7916 guarantees the fastest connection to speak to a customer service representative. Just dial 0844 545 7916 and you will not only save time but also money, with every 5-minute call costing only 25p (plus your phone company's access charge).
DISCLAIMER Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with Yorkshire Bank. The direct contact number can be found in the public domain or on their official website.
Natwest Telephone Banking Number 0844 545 6539 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Natwest is a leading bank in the UK providing retail and commercial banking services. The bank traces its history to a 1968 merger between National Provincial Bank and Westminster Bank. Today Natwest has more than 1,600 branches and over 3,400 cash machines. The bank offers a wide range of personal and business banking products. Whether you want to enquire about savings accounts, current accounts, loans, mortgages, insurance or any other financial service, just call the Natwest telephone banking number 0844 545 6539 and you will be given the appropriate information. Calling 0844 545 6539 connects you through to the customer services department of the bank, where competent professionals will be glad to advise or help you in any way possible.
Natwest has a large base of both personal and business customers in the UK. It even reaches customers in the most remote areas by operating mobile branches using special vans. Many customers are also able to take advantage of Natwest's online banking services, using login details that make it possible to access information and carry out some transactions. With the Natwest mobile app, banking on the go is also now a reality for many customers. With all the innovative ways Natwest allows its customers to access banking services, issues are sometimes expected to arise, but these can be solved through various means. An online Support Centre allows many customers to conveniently access information and help when they encounter difficulties with banking services. Many customers will also choose to call the Natwest telephone banking contact number on 0844 545 6539, especially if the online help doesn't work as conveniently as they expect.
Banking services in most cases require customers to be careful, especially where sensitive information and decision-making processes are involved. Sometimes customers forget their PIN or passwords and need to follow certain procedures to have their issues resolved. The option the majority of them would be comfortable with is calling the bank to speak to a customer service representative who advises them on exactly what to do. When a credit card gets lost, it is advisable to call the bank immediately so that no unauthorised persons succeed in using it. You can get the fastest connection through to customer care by calling 0844 545 6539, with charges of 5 pence per minute.
When you need special financial advice, including applying for loans or mortgages with Natwest, the best option would definitely be to make a call to the bank. A customer service representative will link you to a financial advisor at the bank to speak to you and help you make an informed decision. Similarly, you will also need advice if you want to invest through the bank. Natwest, as one of the oldest and largest financial institutions in the UK, has experienced and knowledgeable financial advisors who are always ready to help customers. Many customers call the bank for this kind of help to make wise financial decisions based on the banking products the bank has to offer. Natwest provides such services because it wants to offer financial services tailored to the needs of individual customers. Call the bank on 0844 545 6539 to find out how you can best take advantage of their special services.
If you need credit card support, you may take advantage of the Support Centre Natwest provides on their online platform. Some customers browse through FAQs and find all the answers they need to common issues. However, sometimes you may have very specific issues with your credit cards or other banking products and have to call customer service for help. Even while trying to find help on certain issues from the bank's online platform, you may be advised to call for help.
As Natwest continues to introduce more innovative financial solutions for personal and business customers, it will become more important to increase customer service levels. The bank will continue to make every effort to address issues that may affect customers using their banking services. For customers to better manage their accounts and carry out their transactions smoothly, they will often need to call the bank for information and any kind of help they will need. The Natwest telephone banking number 0844 545 6539 will simply provide the best option.
Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with Natwest Telephone Banking. The direct contact number can be found in the public domain or on their official website.
RBS Contact Number: 0844 545 7918 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
RBS, or the Royal Bank of Scotland, is a banking and financial services company with headquarters in Edinburgh, Scotland. The company has a subsidiary in the UK and serves millions of customers in Europe, the Americas, Asia and the Middle East. The universal banking group provides a range of financial services to individuals, retail, corporate and financial institutions. If you are encountering a problem with your RBS account or need some advice when joining the bank as an account holder, call the RBS contact number on 0844 545 7918. The customer service phone number is answered by professional, customer-friendly advisors who can help you make sound decisions on investments, savings, loans and travel currency, as well as national and international financial transfers.
RBS has a reputation for good customer service to its existing customers. If your credit cards have been stolen or lost, you should immediately dial the RBS helpline on 0844 545 7918 for assistance. During your phone conversation with the RBS customer service agent, you will need to provide certain information for prompt help. This includes your account number, where and when you last used your card, and where and when it was lost or stolen. Providing this essential information will help you withdraw cash at an ATM. In cases where you suspect that your card details have been used fraudulently, report directly to RBS using the customer service number to prevent fraudulent access to your account.
Sometimes a short-term need for liquidity may push you to sell your long-term assets. If you are not sure of the best move to make, you can consider calling the RBS number on 0844 545 7918 to get financial guidance. RBS offers credit solutions to its diverse clients that provide liquidity for personal, investment or business needs in the most efficient way. By calling the RBS team on their helpline, you can explain your cash flow requirements. This helps the bank provide you with flexible withdrawals to fulfil your liquidity needs in a timely manner.
Investing in real estate is a brilliant idea, especially if you identify the right investment opportunity. Whether you wish to have a perfect home for you and your family or want to invest in commercial property, RBS can help you achieve your investment objectives. The bank's financial services will help you acquire a valuable long-term asset in the booming real estate market. The bank will also ensure the provision of liquidity that will enable you to pursue your asset. By contacting the RBS customer service number on 0844 545 7918, you will be connected with the best financial institutions and property consultants. RBS carries out due diligence on the market expertise of all the institutions you are dealing with on your behalf. You can contact the RBS customer service team if you want to engage in buying and selling property for the best return on investment (ROI), invest through securities, or own property yourself. This vital information on real estate providers is essential in helping you carry out sound transactions when investing in real estate property. It can also save you from future risks that you may not foresee in the initial stages of investment.
You can easily dial the Royal Bank of Scotland customer support number if you wish to discuss how to access a loan for a special purchase. This could be a loan to finance a perfect holiday with your partner or family, or even to finance a car purchase. The RBS telephone number is 0844 545 7918, which is serviced by a pool of professional advisors day and night. You can also request assistance over the phone with getting mortgages for buying a home, moving home or re-mortgaging to RBS.
RBS has several branches across the UK; you can visit your nearest office in person if you have a need, or call their customer service number for assistance or to arrange an appointment with an expert advisor for whatever financial need you have. Dialling the RBS contact number on 0844 545 7918 is the most convenient way to discuss all your financial needs, since emails may take some time to be responded to, and visiting local branches may cost you some of your precious time.
Lloyds Contact Number Call: 0844 545 6542 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Lloyds TSB was the bank formed from the 1995 merger between Lloyds and TSB. After the merger and the fulfilment of all the statutory requirements, the business started its operations in 1999. In terms of market share, Lloyds TSB became the largest bank in the UK market, though in terms of market capitalisation it was second only to Midland Bank, which today is HSBC. The bank remained in operation until a divestment became necessary in 2013. With the two banks going their separate ways after 15 years in operation, the need for customer service at each of the banks became critical to manage the transition for customers. The Lloyds contact number 0844 545 6542 links customers with the knowledgeable staff of the customer services departments of the formerly merged banks.
Whenever you as a customer need to know anything regarding the transition from the merger between Lloyds and TSB to the current independent banks, all the answers can be found by calling the Lloyds number on 0844 545 6542. It takes only a few seconds to be connected through to a customer service agent who provides the help and support you may need. The channels of communication between the banks and their customers remain open, so that even after the banks separated, customers understand the transition process and know which of the two banks they bank with.
Both Lloyds and TSB have online platforms that allow customers to log into their accounts and access any information they might need 24/7. Online banking options, including help and support, are made available for the convenience of customers. If you have an account with either of these banks and are registered for online banking, it is easy to do your banking transactions online and access bank statements or other important pieces of information. If for any reason you are not able to log into your account, you can follow all the procedures recommended to resolve the problem. One of the options would be to call the Lloyds customer service number on 0844 545 6542. For any other issue, you would still have a choice to either access help and support online or call the number, which links you to customer services.
After the separation, Lloyds Bank still offers a wide range of banking products including mortgages, loans, savings accounts, current accounts, insurance products and much more. Personal banking services are conveniently accessed online and on the go via a mobile app. You may enjoy any of these services in flexible ways but sometimes may also encounter challenges that demand help. The Lloyds helpline comes in handy, with the only thing you need to do being to call 0844 545 6542. It doesn't take long before a competent customer service agent talks to you and helps resolve any issues.
The Lloyds customer services department is usually divided into sections that handle different issues from customers. That makes it easier to resolve most issues faster without causing any delays to customers. When you call the Lloyds telephone number 0844 545 6542, you are connected through to the right section depending on the nature of your query or the issue for which you need help. You only speak to an experienced customer service representative who is knowledgeable about the issue you need help with. That significantly expedites the resolution of issues and in the end leaves you as a customer more satisfied. Lloyds, as one of the most popular banks in the UK, is innovative and constantly keeps improving its customer service in every way possible.
Whether you have had your account with Lloyds Bank since before it separated from TSB, or have just opened one recently, you can reach their customer services using the Lloyds contact number 0844 545 6542. Although vast amounts of information are available and easily accessible online, you always have the freedom of choosing how you would like to get help. If you prefer the contact number 0844 545 6542, just go ahead and call it with confidence that any help you seek will be promptly provided to you. Every call using the number will only cost you 5p per minute (plus your phone company's access charge).
Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with Lloyds Bank. The direct contact number can be found in the public domain or on their official website.
TSB Contact Number 0844 545 7924 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
TSB Bank plc operates under the brand name TSB and is the principal subsidiary of the TSB Banking Group. TSB provides its retail banking services across the UK through its 631 branches. The bank was launched in 2013 with over 4.6 million customers, and its headquarters are located in Edinburgh. The bank was formed from several branches of Lloyds TSB, all Cheltenham & Gloucester branches, and the Lloyds TSB Scotland business. Considering the number of customers TSB serves through its nationwide network of branches, the TSB contact number 0844 545 7924 can be very useful whenever any need arises to call the customer service department of the bank. The number is meant to provide the fastest connection when you need to make enquiries or complaints, or get help on any other issue related to the retail banking services offered by TSB.
TSB offers a wide range of financial services including savings accounts, current accounts, mortgages, loans, insurance and much more. Customers can register for internet banking and mobile banking services. Help and support is also available online to deal with specific issues that customers may have with their TSB bank accounts and services. However, even when trying to get help online, you are still likely to end up calling TSB's customer service department. There are issues that may not be resolved until you call and talk to someone who verifies your identity. So when you have to call TSB customer service, call 0844 545 7924 and you will talk to a customer service representative who will resolve your issues within the shortest time possible.
TSB offers the full range of banking and financial services for individual customers and businesses. If you need to enquire about any of their personal banking services, the TSB number to call is 0844 545 7924 and a customer service representative will be glad to provide you with the most accurate information. You can do that if you need to know what bank accounts they offer and the interest rates applicable. When you need help with business banking, the TSB bank also excels in this area. You will need to call and talk with advisers about your business needs so that they can advise you what financial solutions offered by the bank would be a good fit.
When you have issues with your TSB account, you may try to look for solutions online. Sometimes you can manage everything online, but for security reasons the bank may require that you call the TSB helpline so that a series of approved procedures can be applied to provide the help needed. Also for security reasons, some issues usually need to be reported to the bank within the shortest time possible. In such situations you have to call 0844 545 7924 for the fastest connection to the customer service of the bank. In cases of lost credit cards or debit cards, they must be reported immediately to prevent the possibility of unauthorised use by someone else.
Given that TSB serves millions of customers across the UK, it brings more competition to the UK banking sector. The bank has declared its commitment to keep customers informed about what it does with their money. If you as a customer of the bank need to know how the bank makes its money, accessing that information by visiting their website and blog is not the only option. You might need very specific details that can best be confirmed if you call the TSB telephone number 0844 545 7924 to speak to professionals from the bank who have access to the most accurate information.
TSB Intermediary, which launched in January 2015, has now expanded to include remortgages. TSB has expanded the range of mortgages available through brokers. If you intend to remortgage your home, you can now do that through TSB Intermediary. When looking for mortgage deals, you need access to a lot of information that can help inform your decisions. Depending on your current circumstances, you may need to know what period and type of interest can best work for you. Calling the TSB customer service number 0844 545 7924 is the right thing to do if you need to speak to someone who can let you know which deals are available. Whether or not you are also going to rely on information from independent financial advisers, at some point you will have to call the TSB contact number to find out exactly what they offer.
Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with TSB. The direct contact number can be found in the public domain or on their official website.
Halifax Contact Number 0844 545 6537 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Halifax, now part of the Bank of Scotland and also the Lloyds Banking Group, started as a building society way back in 1853. By 1913, Halifax had already grown to the status of the largest building society in the UK. It continued to grow and prosper while maintaining its position until 1997, when it became a public limited company. Today Halifax is one of the leading banks, with branches throughout the UK providing a variety of financial solutions. If you have an account with the bank or are interested in opening one, at some point you may be looking for the Halifax contact number 0844 545 6537, as it provides the best option for connection to the customer service department, where your enquiries can be received and any issues resolved.
Halifax has a solid position in the UK's financial sector as one of the best financial institutions for residential mortgages, loans and savings solutions. Some of the most popular online services that Halifax customers can register for include online banking and mobile banking. They offer customers the modern convenience of logging into their accounts to access financial information or perform transactions from the comfort of their homes or on the go, wherever they are. When you face any problems while trying to log into your online Halifax bank account, you will need to call customer care for the necessary help and support. You can call the Halifax number on 0844 545 6537 for the fastest response and avoid frustrations.
A large bank like Halifax offers a variety of financial solutions and when you need the most accurate information to know which one fits your needs, the best option for you would be to call the Halifax helpline 0844 545 6537. The customer service representative who talks to you about your banking needs will be in a better position to advise you which solution can work best for you. That is what you might need to do if you want to compare different savings accounts with Halifax. A customer care representative with the knowledge of how each of the available savings accounts is designed lets you know everything including interest rates and any costs involved.
Considering the early beginnings and growth that Halifax achieved to become the largest building society in the UK, like many people you might be interested in understanding more about the residential mortgages the bank offers. The bank is well known for such services owing to its background and the financial stability it has gained over the decades. Mortgages and other loans demand a lot of consideration, and you will definitely want to confirm many things before making your decision. Even if you consult independent financial advisors, at some point you will need to talk with someone from the bank. Start by calling the Halifax telephone number 0844 545 6537 and you will get accurate information to know what you can expect with the mortgages or loans you can take from the bank.
For prudent management of your finances and accounts, you need to conveniently access banking information and get the right advice in a timely manner. Halifax online services will help you in many ways to achieve that. Even though the bank will make it convenient for you to access information such as financial statements, transfer money between accounts, make payments and more, the need to talk to customer care cannot be totally eliminated. When you need to talk to the bank for whatever reason you may have, call the Halifax customer service number on 0844 545 6537 and you will be advised on what to do accordingly.
Halifax, as one of the largest banks in the UK, serves millions of customers including businesses and individuals. Its innovative financial solutions, tailored to each type of customer, are well complemented by its excellent customer service record. Make enquiries to the bank or register your complaints using the Halifax helpline 0844 545 6537, as the connection to available customer service representatives is always fast. If you have been looking for the Halifax contact number 0844 545 6537, call it as soon as possible for faster solutions and help with issues that need to be addressed.
DISCLAIMER Telephone Customer Service is a telephone directory and call routing service and is not connected to or affiliated with Halifax. The direct contact number can be found in the public domain or on their official website.
Santander Contact Number 0844 545 6541 Calls to our 0844 number cost only 5p per minute, plus your phone company's access charge.
Santander UK is one of the leading personal financial services companies in the UK and is owned by the Santander Group of Spain. The bank provides a wide range of financial services, encompassing mortgages and savings accounts. Since the launch of the switching guarantee, Santander has emerged as the winner for current account switches: one out of every four customers who switched banks joined Santander. With its steadily growing customer base and one of the best-ranked product ranges, Santander has embarked on improving its customer service. As a customer, or anyone interested in any of their products, you can rely on calling the Santander contact number 0844 545 6541 for enquiries or help with any challenges you might experience with their banking services.
The range of financial services that Santander provides includes mortgages, current accounts, savings accounts, credit products and investments for individuals, businesses and corporate customers. Online banking services are also offered, as well as mobile banking using apps. Any challenges customers face while using different services are usually resolved upon calling the Santander number on 0844 545 6541, which provides the fastest connection to the customer service department. Online help and support is also available for the full range of services that Santander provides. However, depending on the nature of the issue you want solved, calling the Santander customer service number 0844 545 6541 might be the only possible way of getting help.
If you forget any of your login details required for Santander online banking, you will have to call the Santander helpline on 0844 545 6541 for help on how to reset them. While talking to a customer care representative, you will be asked certain questions to prove your identity before your login details are reset. That is one of the measures the bank takes to ensure its customers are well protected. In case your Santander credit or debit card gets lost, you are also supposed to call immediately so that any payments using the card can be stopped. In such cases, you need the number that provides the fastest connection to customer care. Searching all over the web for the number to call can be a daunting task, so just call 0844 545 6541 and a Santander customer service representative will soon be talking to you.
Santander was the first bank in the UK to launch a current account that doesn't charge any fees for its mortgage customers. When travelling in Spain and transacting in a foreign currency using Santander's ATMs, customers are also not charged. The bank offers very innovative financial solutions for its customers, which you can learn more about if you call the Santander telephone number 0844 545 6541.
Santander is a bank that has been performing quite well and increasing profits. In its 2014 results, profits before tax grew by 26%. The bank has been increasing its mortgage lending and corporate loans. It is a bank that has been supporting businesses in the UK and keeps growing its customer base and financial stability. If you need to call the bank to enquire about its loans and mortgage lending services, you can easily do so by calling the Santander help line on 0844 545 6541. The connection to the customer service department will be established faster than on any other service available. 5 minute calls only cost 25p, so you save significant amounts of money as well as your valuable time.
Whether you are interested in Santander’s range of current accounts, credit cards, insurance, loans, mortgages, savings and investments, ISAs, or any other service the bank offers, you will find plenty of options to choose from to suit your specific needs. While they provide a lot of online resources for information, help and support covering all the services offered, sometimes that might not be enough and you will need to make a call. If you are looking for the most accurate information about any of the services provided by Santander, you can call 0844 545 6541 for your convenience and will get it much faster than using any other option. The Santander contact number will also be useful if you have any complaint that you want the bank to address. Whatever your reasons for calling are, the bank’s customer service representatives will be glad to help you.
DISCLAIMER Telephone Customer Service is a telephone directory and call routing service and is not connected to, and has no affiliation with, Santander. The direct contact number can be found in the public domain or on their official website. | 2019-04-22T13:16:00Z | http://telephone-customer-service.co.uk/banks/ |
2008-03-13 Assigned to WILLIAM COOK EUROPE APS, COOK INCORPORATED reassignment WILLIAM COOK EUROPE APS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEMMINGSEN, ALLAN G.
The present invention provides a removable vena cava filter for capturing thrombi in a blood vessel. The filter comprises a plurality of primary struts having first ends attached together along a longitudinal axis. Each primary strut has an arcuate segment extending from the first end to an anchoring hook. The primary struts are configured to move between an expanded state for engaging the anchoring hooks with the blood vessel and a collapsed state for filter retrieval or delivery. Each primary strut is configured to cross another primary strut along the longitudinal axis in the collapsed state such that the arcuate segments occupy a first diameter greater than a second diameter occupied by the anchoring hooks for filter retrieval or delivery.
This application claims the benefit of U.S. Provisional Application No. 60/562,943, filed on Apr. 16, 2004, entitled “REMOVABLE VENA CAVA FILTER FOR REDUCED TRAUMA IN COLLAPSED CONFIGURATION,” the entire contents of which are incorporated herein by reference.
This application also claims the benefit of U.S. Provisional Application No. 60/562,813, filed on Apr. 16, 2004, entitled “REMOVABLE FILTER FOR CAPTURING BLOOD CLOTS,” the entire contents of which are incorporated herein by reference.
This application also claims the benefit of U.S. Provisional Application No. 60/562,909, filed on Apr. 16, 2004, entitled “BLOOD CLOT FILTER WITH STRUTS HAVING AN EXPANDED REMEMBERED STATE,” the entire contents of which are incorporated herein by reference.
This application also claims the benefit of U.S. Provisional Application No. 60/563,176, filed on Apr. 16, 2004, entitled “BLOOD CLOT FILTER HAVING A COLLAPSED REMEMBERED STATE,” the entire contents of which are incorporated herein by reference.
A filtering device can be deployed in the vena cava of a patient when, for example, anticoagulant therapy is contraindicated or has failed. Typically, filtering devices are permanent implants, each of which remains implanted in the patient for life, even though the condition or medical problem that required the device has passed. In more recent years, filters have been used or considered in preoperative patients and in patients predisposed to thrombosis which places the patient at risk for pulmonary embolism.
The benefits of a vena cava filter have been well established, but improvements may be made. For example, filters generally have not been considered removable from a patient due to the likelihood of endotheliosis of the filter or fibrous reaction matter adherent to the endothelium during treatment. After deployment of a filter in a patient, proliferating intimal cells begin to accumulate around the filter struts which contact the wall of the vessel. After a length of time, such ingrowth prevents removal of the filter without risk of trauma, requiring the filter to remain in the patient. As a result, there has been a need for an effective filter that can be removed after the underlying medical condition has passed.
Moreover, conventional filters commonly become off-centered or tilted with respect to the hub of the filter and the longitudinal axis of the vessel in which it has been inserted. As a result, the filter including the hub and the retrieval hook engage the vessel wall along their lengths and potentially become endothelialized therein. This condition is illustrated in prior art FIG. 1 a in which a prior art filter 113 has been delivered by a delivery sheath 125 through the vessel 150 of a patient. In the event of this occurrence, there is a greater likelihood of endotheliosis of the filter to the blood vessel along a substantial length of the filter wire. As a result, the filter becomes a permanent implant in a shorter time period than otherwise.
Furthermore, improvements may be made related to the delivery or retrieval of vena cava filters. For delivery of vena cava filters, an introducer system having an introducer tube may be percutaneously inserted in the vena cava of a patient through the femoral vein or the jugular vein. A part of an introducer assembly 120 is illustrated in prior art FIG. 1 b in which the prior art filter 113 is percutaneously delivered through the jugular vein 154 of a patient. As shown, the filter 113 in its collapsed configuration is placed at the distal end 121 of an inner sheath 122 with anchoring hooks 116 of the filter 113 extending past the distal end 121. An outer sheath 126 is then disposed over the inner sheath 122 to avoid undesirable scratching or scraping of the anchoring hooks 116 against the introducer tube 130. The inner and outer sheaths 122, 126 along with a pusher member 132 are then moved together through the introducer tube 130 to deliver the filter 113 to the vena cava of the patient.
It has been a challenge to design a vena cava filter with features that lessen the concerns of undesirable scratching or scraping of the anchoring hooks against the walls of an introducer tube or a blood vessel while maintaining the effectiveness of the filter.
One embodiment of the present invention generally provides a removable vena cava filter configured for simplified delivery to and retrieval from the vena cava of a patient. The filter is shaped for improved delivery and retrieval. The filter includes primary struts, each of which has an arcuate segment that extends to an anchoring hook. In a collapsed state, each primary strut is configured to cross another primary strut along a center axis of the filter such that the arcuate segments occupy a first diameter greater than a second diameter occupied by the anchoring hooks. As a result, the arcuate segments widen a path in a vessel when the filter is retrieved. Thus, the filter is able to be retrieved with greater ease and with a reduced likelihood of undesirable scratching or scraping of the anchoring hooks against the walls of a blood vessel in the collapsed state.
The present invention provides a removable vena cava filter for capturing thrombi in a blood vessel. In one embodiment, the filter comprises a plurality of primary struts having first ends attached together along a longitudinal axis. Each primary strut has an arcuate segment extending from the first end to an anchoring hook. The primary struts are configured to move between an expanded state for engaging the anchoring hooks with the blood vessel and a collapsed state for filter retrieval or delivery. Each primary strut is configured to cross another primary strut along the longitudinal axis in the collapsed state such that the arcuate segments occupy a first diameter greater than a second diameter occupied by the anchoring hooks for filter retrieval or delivery.
In another embodiment, the filter further comprises a plurality of secondary struts having connected ends attached together along the longitudinal axis and extending therefrom to free ends to centralize the filter in the expanded state in the blood vessel. The filter further comprises a hub configured to axially house the first ends of the plurality of primary struts. The filter further includes a retrieval hook extending from the hub opposite the plurality of primary struts for removal of the filter from the blood vessel.
In yet another embodiment, the present invention provides a removable filter assembly for capturing thrombi in a blood vessel. In this embodiment, the assembly comprises the removable filter and an introducer tube in which the removable filter in the collapsed state is disposed for retrieval or delivery.
FIG. 11 is a view of the blood vessel and filter of FIG. 10 taken along the line 11-11.
In accordance with one embodiment of the present invention, FIG. 2 illustrates a vena cava filter 10 implanted in the vena cava 50 for the purpose of lysing or capturing thrombi carried by the blood flowing through the iliac veins 54, 56 toward the heart and into the pulmonary arteries. As shown, the iliac veins merge at juncture 58 into the vena cava 50. The renal veins 60 from the kidneys 62 join the vena cava 50 downstream of juncture 58. The portion of the vena cava 50, between the juncture 58 and the renal veins 60, defines the inferior vena cava 52 in which the vena cava filter 10 has been percutaneously deployed through the femoral veins. Preferably, the vena cava filter 10 has a length smaller than the length of the inferior vena cava 52. If the lower part of the filter extends into the iliac veins, filtering effectiveness will be compromised, and if the filter wires cross over the origin of the renal veins, the filter wires might interfere with the flow of blood from the kidneys.
This embodiment of the present invention will be further discussed with reference to FIGS. 3-9 in which filter 10 is shown. FIG. 3 a illustrates filter 10 in an expanded state, comprising four primary struts 12, each having a first end that emanates from a hub 11. Hub 11 attaches the primary struts 12 by crimping their first ends 14 together at a center point A in a compact bundle along a central or longitudinal axis X of the filter. The hub 11 has a minimal diameter for the size of wire used to form the struts. Preferably, the primary struts 12 are formed from a superelastic material, stainless steel wire, Nitinol, cobalt-chromium-nickel-molybdenum-iron alloy, or cobalt-chrome alloy, or any other suitable material that will result in a self-opening or self-expanding filter. In this embodiment, the primary struts 12 are preferably formed from wire having a round cross-section with a diameter of at least about 0.015 inches. Of course, it is not necessary that the primary struts have a round or near-round cross-section. For example, the primary struts 12 could take on any shape with rounded edges to maintain non-turbulent blood flow therethrough.
Each primary strut 12 includes an arcuate segment 16 having a soft S-shape. Each arcuate segment 16 is formed with a first curved portion 20 that is configured to softly bend away from the longitudinal or central axis X of the filter 10 and a second curved portion 23 that is configured to softly bend toward the longitudinal axis of the filter 10. Due to the soft bends of each arcuate segment 16, a prominence or a point of inflection on the primary strut 12 is substantially avoided to aid in non-traumatically engaging the vessel wall.
The primary struts 12 terminate at anchoring hooks 26 that will anchor in the vessel wall when the filter 10 is deployed at a delivery location in the blood vessel. The primary struts 12 are configured to move between an expanded state for engaging the anchoring hooks 26 with the blood vessel and a collapsed state or configuration for filter retrieval or delivery. In the expanded state, each arcuate segment 16 extends arcuately along a longitudinal axis X (as shown in FIG. 3 a) and linearly relative to a radial axis R (as shown in FIG. 8 a) from the first end 14 to the anchoring hook 26. As shown in FIG. 8 a, the primary struts 12 radially extend from the first ends 14, defining the radial axis R. In this embodiment, the primary struts 12 extend linearly relative to the radial axis R and avoid entanglement with other struts.
As discussed in greater detail below, the soft bends of each arcuate segment 16 allow each primary strut 12 to cross another primary strut 12 along the longitudinal axis X in the collapsed state such that each anchoring hook 26 faces the longitudinal axis X for filter retrieval or delivery.
When the filter 10 is deployed in a blood vessel, the anchoring hooks 26 engage the walls of the blood vessel to define a first axial portion to secure the filter in the blood vessel. The anchoring hooks 26 prevent the filter 10 from migrating from the delivery location in the blood vessel where it has been deposited. The primary struts 12 are shaped and dimensioned such that, when the filter 10 is freely expanded, the filter 10 has a diameter of between about 25 mm and 45 mm and a length of between about 3 cm and 7 cm. For example, the filter 10 may have a diameter of about 35 mm and a length of about 5 cm. The primary struts 12 have sufficient spring strength that when the filter is deployed the anchoring hooks 26 will anchor into the vessel wall.
In this embodiment, the filter 10 includes a plurality of secondary struts 30 having connected ends 32 that also emanate from hub 11. Hub 11 attaches the connected ends 32 of the secondary struts 30 by crimping them together with the primary struts 12 at the center point A. In this embodiment, each primary strut 12 has two secondary struts 30 in side-by-side relationship with the primary strut 12. The secondary struts 30 extend from the connected ends 32 to free ends 34 to centralize the filter 10 in the expanded state in the blood vessel. As shown, each secondary strut 30 extends arcuately along the longitudinal axis and linearly relative to the radial axis from the connected end 32 to the free end 34 for engaging the anchoring hooks 26 with the blood vessel. As with the primary struts 12, the secondary struts 30 extend linearly relative to the radial axis and avoid entanglement with other struts.
The secondary struts 30 may be made from the same type of material as the primary struts 12. However, the secondary struts 30 may have a smaller diameter, e.g., at least about 0.012 inches, than the primary struts 12. In this embodiment, each of the secondary struts 30 is formed of a first arc 40 and a second arc 42. The first arc 40 extends from the connected end 32 away from the longitudinal axis X. The second arc 42 extends from the first arc 40 towards the longitudinal axis X. As shown, two secondary struts 30 are located on each side of one primary strut 12 to form a part of a netting configuration of the filter 10. The hub 11 is preferably made of the same material as the primary struts and secondary struts to minimize the possibility of galvanic corrosion or molecular changes in the material due to welding.
When freely expanded, free ends 34 of the secondary struts 30 will expand radially outwardly to a diameter of about 25 mm to 45 mm to engage the vessel wall. For example, the secondary struts 30 may expand radially outwardly to a diameter of between about 35 mm and 45 mm. The second arcs 42 of the free ends 34 engage the wall of a blood vessel to define a second axial portion where the vessel wall is engaged. The secondary struts 30 function to stabilize the position of the filter 10 about the center of the blood vessel in which it is deployed. As a result, the filter 10 has two layers or portions of struts longitudinally engaging the vessel wall of the blood vessel. The length of the filter 10 is preferably defined by the length of a primary strut 12.
Furthermore, the diameter of the hub 11 is defined by the size of a bundle containing the primary struts 12 and secondary struts 30. In this embodiment, the eight secondary struts 30 minimally add to the diameter of the hub 11 or the overall length of the filter 10, due to the reduced diameter of each secondary strut 30. This is accomplished while maintaining the filter 10 in a centered attitude relative to the vessel wall and formed as a part of the netting configuration of the filter 10. As shown, removal hook 46 extends from hub 11 opposite primary and secondary struts 12 and 30.
FIG. 3 b illustrates the filter 10 in a collapsed state disposed in a delivery/retrieval tube 94 for delivery or retrieval. As shown, the filter 10 is shaped for each primary strut 12 to cross another primary strut 12 along the longitudinal axis X. As a result, in the collapsed state, the anchoring hooks 26 are configured to invert or inwardly face the longitudinal axis X for retrieval and delivery of the filter 10. This inverted or inwardly facing configuration of the anchoring hooks 26 allows for simplified delivery and retrieval of filter 10. For example, a concern that the anchoring hooks 26 may scrape, scratch, or tear the inner wall of a delivery/retrieval tube is eliminated, since the filter 10 of the present invention is shaped to have the anchoring hooks 26 face each other in the collapsed state. In fact, a set of inner and outer delivery/retrieval sheaths (see prior art FIG. 1 b) may be eliminated during the delivery or retrieval of the filter 10 through the jugular or femoral vein. Rather, merely one delivery/retrieval tube with a loop snare mechanism may be used to deliver or retrieve the filter 10 of the present invention.
In the collapsed state, each primary strut 12 is configured to cross another primary strut 12 along the longitudinal axis X such that the arcuate segments 16 (the first curved portions 20 or second curved portions 23) occupy a first diameter D1. In this embodiment, the first diameter is greater than a second diameter D2 occupied by the anchoring hooks 26 for filter retrieval or delivery. It has been found that the first diameter of the arcuate segments 16 serves to clear a path of retrieval, reducing radial force from the sheath or blood vessel on the anchoring hooks 26 during removal of the filter 10 from a patient. Reducing the radial force on the anchoring hooks 26 assists in preventing the anchoring hooks 26 from scraping, scratching, or tearing the inner wall of a sheath during removal of the filter 10 from a patient.
In this embodiment of the present invention, it is to be noted that the filter 10 may be delivered or retrieved by any suitable introducer (delivery or retrieval) tube. However, it is preferred that the introducer tube has an inside diameter of between about 4.5 French and 16 French, and more preferably between about 6.5 French and 14 French. Thus, the collapsed state of the filter may be defined by the inside diameter of an introducer tube.
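The French gauge sizes above map directly to millimetres, since by definition 1 French equals 1/3 mm of diameter. A minimal sketch of the conversion, for orientation only — the function name and the printed sizes (taken from the ranges cited above) are illustrative, not part of the patent:

```python
def french_to_mm(french: float) -> float:
    """Convert a catheter/introducer size in French gauge to diameter in mm.

    By definition, 1 French = 1/3 mm of diameter.
    """
    return french / 3.0

# Inside-diameter range cited for the introducer tube in the text:
for fr in (4.5, 6.5, 14.0, 16.0):
    print(f"{fr:>4} Fr = {french_to_mm(fr):.2f} mm")
# 4.5 Fr = 1.50 mm ... 16.0 Fr = 5.33 mm
```

So the preferred introducer bore of about 4.5 to 16 French corresponds to roughly 1.5 mm to 5.3 mm, which also bounds the collapsed diameter of the filter.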
FIG. 4 illustrates primary strut 12 including distal bend 43 formed thereon and extending radially outward from the longitudinal axis X. As shown in FIG. 4, the distal bend 43 may extend outwardly at an angle between about 0.5 degree and 2 degrees, preferably 1.0 degree. The distal bend 43 allows the filter 10 to filter thrombi effectively at a smaller inside diameter of a blood vessel than otherwise would be possible while maintaining the ability to collapse for delivery or retrieval.
FIG. 5 illustrates a cross-sectional view of the filter 10 of FIG. 3 a at hub 11. As shown, the hub 11 houses a bundle of first ends 14 of the four primary struts 12 and connected ends 32 of secondary struts 30. FIG. 5 further depicts the configurations of the primary and secondary struts 12 and 30. In this embodiment, each primary strut 12 is spaced between two secondary struts 30. Of course, the primary struts 12 may be spaced between any other suitably desired number of secondary struts 30 without falling beyond the scope or spirit of the present invention.
In this embodiment, FIGS. 6 a and 6 b both illustrate the filter 10 partially deployed in inferior vena cava 52. FIG. 6 a shows the filter 10 being delivered by a delivery tube 48 through the femoral vein of a patient and FIG. 6 b shows the filter 10 being delivered by a delivery tube 50 through the jugular vein of a patient. For deployment of the filter 10, a delivery tube is percutaneously inserted through the patient's vessel such that the distal end of the delivery tube is at the location of deployment. In this embodiment, a wire guide is preferably used to guide the delivery tube to the location of deployment. In FIG. 6 a, the filter 10 is inserted through the proximal end of the delivery tube 48 with the removal hook 46 leading and anchoring hooks 26 of the primary struts 12 held by a filter retainer member for delivery via the femoral vein of a patient.
In FIG. 6 b, the filter 10 is inserted through the proximal end of the delivery tube 50 with the anchoring hooks 26 of the primary struts 12 leading and the removal hook 46 trailing for delivery via the jugular vein of a patient. In this embodiment, a pusher wire having a pusher member at its distal end may be fed through the proximal end of the delivery tube 50 thereby pushing the filter 10 until the filter 10 reaches the distal end of the delivery tube 50 to a desired location.
During deployment, the secondary struts 30 expand first to centralize or balance the filter within the vessel. When the free ends of the secondary struts emerge from the distal end of either of the delivery tubes 48 or 50, the secondary struts 30 expand to an expanded position as shown in both FIGS. 6 a and 6 b. The second arcs 42 engage the inner wall of the vessel. The second arcs 42 of the secondary struts 30 function to stabilize the attitude of filter 10 about the center of the blood vessel. When delivering through the jugular vein (FIG. 6 b), the filter 10 is then pushed further by the pusher wire (not shown) until it is fully deployed.
When the filter 10 is fully expanded in the vena cava, the anchoring hooks 26 of the primary struts 12 and the second arcs 42 of the secondary struts 30 are in engagement with the vessel wall. The anchoring hooks 26 of the primary struts 12 have anchored the filter 10 at the location of deployment in the vessel, preventing the filter 10 from moving with the blood flow through the vessel. As a result, the filter 10 is supported by two sets of struts that are spaced axially along the length of the filter.
FIG. 7 illustrates the filter 10 fully expanded after being deployed in inferior vena cava 52. As shown, the inferior vena cava 52 has been broken away so that the filter 10 can be seen. The direction of the blood flow BF is indicated in FIG. 7 by the arrow that is labeled BF. The anchoring hooks 26 at the ends of the primary struts 12 are shown as being anchored in the inner lining of the inferior vena cava 52. The anchoring hooks 26 include barbs 29 that, in one embodiment, project toward the hub 11 of the filter. The barbs 29 function to retain the filter 10 in the location of deployment.
The spring biased configuration of the primary struts 12 further causes the anchoring hooks 26 to engage the vessel wall and anchor the filter at the location of deployment. After initial deployment, the pressure of the blood flow on the filter 10 contributes in maintaining the barbs 29 anchored in the inner lining of the inferior vena cava 52. As seen in FIG. 7, the second arcs 42 of secondary struts 30 also have a spring biased configuration to engage with the vessel wall.
As seen in FIG. 7, the hub 11 and removal hook 46 are positioned downstream from the location at which the anchoring hooks 26 are anchored in the vessel. When captured by the struts 12 and 30, thrombi remain lodged in the filter. The filter 10 along with the thrombi may then be percutaneously removed from the vena cava. When the filter 10 is to be removed, the removal hook 46 is preferably grasped by a retrieval instrument that is percutaneously introduced into the vena cava, approaching the removal hook 46 first.
FIG. 8 a depicts a netting configuration or pattern formed by the primary struts 12, secondary struts 30, and the hub 11 relative to radial axis R. The netting pattern shown in FIG. 8 a functions to catch thrombi carried in the blood stream prior to reaching the heart and lungs to prevent the possibility of a pulmonary embolism. The netting pattern is sized to catch and stop thrombi that are of a size that are undesirable to be carried in the vasculature of the patient. Due to its compacted size, the hub minimally resists blood flow.
FIG. 8 a depicts the netting pattern including primary struts and secondary struts at substantially equal angular space relative to each other. The netting pattern provides an even distribution between the primary and secondary struts to the blood flow, increasing the likelihood of capturing thrombi. However, as shown in FIG. 8 b, it is to be understood that each of the sets of primary struts 312 and secondary struts 330 may be independently spaced substantially equally at their respective portions relative to radial axis R′. For example, the secondary struts 330 may be spaced equally relative to the other secondary struts 330 and the primary struts 312 may be spaced equally relative to the other primary struts 312. As a result, the netting pattern in this embodiment shown by the cross-sectional view of the vena cava (taken along line 8-8) will have uneven or unequal spacing between the primary struts 312 and secondary struts 330.
FIG. 9 a illustrates part of a retrieval device 65 being used in a procedure for removing the filter 10 from the inferior vena cava 52. In this example, the retrieval device 65 is percutaneously introduced into the superior vena cava via the jugular vein. In this procedure, a removal catheter or sheath 68 of the retrieval device 65 is inserted into the superior vena cava. A wire 70 having a loop snare 72 at its distal end is threaded through the removal sheath 68 and exits through the distal end of the sheath 68. The wire 70 is then manipulated by any suitable means from the proximal end of the retrieval device such that the loop snare 72 captures the removal hook 46 of the filter 10. Using counter traction by pulling the wire 70 while pushing the sheath 68, the sheath 68 is passed over the filter 10. As the sheath 68 passes over the filter 10, the primary struts 12 and then the secondary struts 30 engage the edge of the sheath 68 and are caused to pivot or undergo bend deflection at the hub 11 toward the longitudinal axis of the filter. The pivoting toward the longitudinal axis causes the ends of the struts 12 and 30 to be retracted from the vessel wall. In this way, only surface lesions 74 and small point lesions 76 on the vessel wall are created in the removal procedure. As shown, the surface lesions 74 are created by the ends of the secondary struts 30 and the small point lesions 76 are created by the anchoring hooks 26 of the primary struts 12. However, it is to be noted that any other suitable procedure may be implemented to remove the filter from the patient.
Although the embodiments of this device have been disclosed as preferably being constructed from wire having a round cross section, it could also be cut from a tube of suitable material by laser cutting, electrical discharge machining or any other suitable process.
The primary and secondary struts can be formed from any suitable material that will result in a self-opening or self-expanding filter, such as shape memory alloys. Shape memory alloys have the desirable property of becoming rigid, that is, returning to a remembered state, when heated above a transition temperature. A shape memory alloy suitable for the present invention is Ni—Ti, available under the more commonly known name Nitinol. When this material is heated above the transition temperature, the material undergoes a phase transformation from martensite to austenite, such that the material returns to its remembered state. The transition temperature is dependent on the relative proportions of the alloying elements Ni and Ti and the optional inclusion of alloying additives.
In other embodiments, both the primary struts and the secondary struts are made from Nitinol with a transition temperature that is slightly below the normal body temperature of humans, which is about 98.6° F. Thus, when the filter is deployed in the vena cava and exposed to normal body temperature, the alloy of the struts will transform to austenite, that is, the remembered state, which for the present invention is an expanded configuration when the filter is deployed in the blood vessel. To remove the filter, the filter is cooled to transform the material to martensite, which is more ductile than austenite, making the struts more malleable. As such, the filter can be more easily collapsed and pulled into the sheath for removal.
In certain embodiments, both the primary struts and the secondary struts are made from Nitinol with a transition temperature that is above the normal body temperature of humans, which is about 98.6° F. Thus, when the filter is deployed in the vena cava and exposed to normal body temperature, the struts are in the martensitic state so that the struts are sufficiently ductile to bend or form into a desired shape, which for the present invention is an expanded configuration. To remove the filter, the filter is heated to transform the alloy to austenite so that the filter becomes rigid and returns to a remembered state, which for the filter is a collapsed configuration.
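The two shape-memory embodiments above differ only in where the alloy's transition temperature sits relative to body temperature. That decision logic can be sketched in a few lines of Python; the function name and the example transition temperatures (95 °F and 105 °F) are illustrative assumptions, not values stated in the patent:

```python
def nitinol_phase(temp_f: float, transition_f: float) -> str:
    """Illustrative sketch: a shape-memory alloy is austenitic (rigid, in
    its remembered shape) above its transition temperature, and martensitic
    (ductile, easily reshaped) at or below it."""
    return "austenite" if temp_f > transition_f else "martensite"

BODY_TEMP_F = 98.6  # normal human body temperature cited in the text

# Embodiment with transition slightly below body temperature: the deployed
# filter sits in austenite (expanded remembered state); cooling it below
# the transition softens it to martensite for retrieval.
assert nitinol_phase(BODY_TEMP_F, transition_f=95.0) == "austenite"

# Embodiment with transition above body temperature: the deployed filter
# stays martensitic (ductile); heating it past the transition drives it
# to its collapsed remembered state for retrieval.
assert nitinol_phase(BODY_TEMP_F, transition_f=105.0) == "martensite"
```

Either way, the same phase transition is exploited; the embodiments simply choose opposite remembered states (expanded versus collapsed) for the austenitic phase.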
In another embodiment shown in FIGS. 10 and 11, a filter 420 includes four primary struts 438 and eight secondary struts 440 that extend from a hub 442. Each primary strut 438 terminates in an anchoring hook 452 with a barb 454. The primary struts 438 have sufficient spring strength such that when the filter is deployed in a vena cava 436, the anchoring hooks 452, in particular, the barbs 454, anchor into the vessel wall of the vena cava 436 to prevent the filter 420 from migrating from the delivery location. The pressure of the blood flow on the filter 420 contributes in maintaining the barbs 454 anchored in the inner lining of the vena cava 436.
Since the twisted sections 464 effectively stiffen each pair of secondary struts 440, thinner secondary struts may be used to provide the appropriate balancing forces to center the filter in the blood vessel. Moreover, an additional benefit of the twisted sections is that they prevent the secondary struts from entangling with the primary struts.
FIG. 11 illustrates a netting pattern (“net”) formed by the primary struts 438, the secondary struts 440, and the hub 442. This net functions to catch thrombi carried in the blood stream to prevent the thrombi from reaching the heart and lungs, where the thrombi could cause pulmonary embolism. The net is sized to catch and stop thrombi that are of a size that are undesirable in the vasculature of the patient. As illustrated, the struts 438 have substantially equal angular spacing between the struts.
The hub 442 and a removal hook 466 attached to the hub are located downstream of the location at which the anchoring hooks 452 are anchored in the vessel 436. When captured by the struts, thrombi remain lodged in the filter 420. The filter 420, along with the thrombi, may then be removed percutaneously from the vena cava. When the filter 420 is to be removed, the removal hook 466 is typically grasped by a retrieval hook that is introduced into the vena cava percutaneously.
a plurality of primary struts having first ends attached together along a longitudinal axis, each primary strut having an arcuate segment extending from the first end to an anchoring hook, the primary struts being configured to move between an expanded state for engaging the anchoring hooks with the blood vessel and a collapsed state for filter retrieval or delivery, each primary strut being configured to cross another primary strut along the longitudinal axis in the collapsed state such that the arcuate segments occupy a first diameter greater than a second diameter occupied by the anchoring hooks in the collapsed state for filter retrieval or delivery.
a retrieval hook extending from the hub opposite the plurality of primary struts for removal of the filter from the blood vessel.
3. The removable filter of claim 1 wherein the arcuate segment includes a first curved portion and a second curved portion, the first curved portion extending from the first end, the second curved portion extending from the first curved portion and terminating at the anchoring hook.
5. The removable filter of claim 1 wherein each primary strut is formed of a superelastic material, stainless steel wire, Nitinol, cobalt-chromium-nickel-molybdenum-iron alloy, or cobalt-chrome alloy.
6. The removable filter of claim 2 wherein each secondary strut is formed of a superelastic material, stainless steel wire, Nitinol, cobalt-chromium-nickel-molybdenum-iron alloy, or cobalt-chrome alloy.
7. The removable filter of claim 2 wherein the first diameter ranges between about 6 French and 14 French, and the second diameter ranges between about 3 French and 9 French.
8. The removable filter of claim 1 wherein the struts are formed of shape memory alloy with a transition temperature.
9. The removable filter of claim 8 wherein the struts collapse to the collapsed state when the temperature of the struts is about equal to or greater than the transition temperature.
10. The removable filter of claim 8 wherein the struts expand to the expanded state when the temperature of the struts is about equal to or greater than the transition temperature.
11. The filter of claim 1 wherein pairs of secondary struts are positioned between pairs of primary struts, each pair of secondary struts being twisted about each other near the connected ends of the respective secondary struts to form a twisted section.
12. The filter of claim 11 wherein each twisted section includes between about one and ten twists.
an introducer tube in which the removable filter in the collapsed state is disposed for retrieval or delivery of the filter.
14. The removable filter of claim 13 wherein the arcuate segment includes a first curved portion and a second curved portion, the first curved portion extending from the first end, the second curved portion extending from the first curved portion and terminating at the anchoring hook.
15. The removable filter of claim 14 wherein the first curved portion is configured to extend radially from the longitudinal axis of the filter and the second curved portion is configured to extend radially toward the longitudinal axis of the filter.
16. The removable filter of claim 13 wherein the struts are formed of shape memory alloy with a transition temperature.
17. The removable filter of claim 16 wherein the struts collapse to the collapsed state when the temperature of the struts is about equal to or greater than the transition temperature.
18. The removable filter of claim 16 wherein the struts expand to the expanded state when the temperature of the struts is about equal to or greater than the transition temperature.
19. The removable filter of claim 13 wherein the first diameter ranges between about 6 French and 14 French, and the second diameter ranges between about 3 French and 9 French.
an introducer tube in which the removable filter in the collapsed state is disposed for retrieval or delivery of the filter, the first diameter ranging between about 6 French and 14 French, the second diameter ranging between about 3 French and 9 French.
Patient-reported outcome (PRO) measures are being used with increasing frequency in investigational studies of treatments for moderate to severe plaque psoriasis. The objective of this study was to examine the relationships among the Dermatology Life Quality Index (DLQI), the Short Form 36 (SF-36), and the EuroQOL 5D (EQ-5D) and to assess their validity, responsiveness, and estimates of minimum important differences.
A Phase II, randomized, double-blind, parallel-group, placebo-controlled, multi-center clinical trial assessed the clinical efficacy and safety of two doses of subcutaneously administered adalimumab vs. placebo for 12 weeks in the treatment of 147 patients with moderate to severe plaque psoriasis. This study provided the opportunity to evaluate the validity of the PRO instruments and their responsiveness to changes in clinical status. Patients completed the DLQI, SF-36, and EQ-5D questionnaires at baseline and at 12 weeks. Blinded investigators assessed the Psoriasis Area and Severity Index (PASI) scores and the Physician's Global Assessment (PGA) scores of enrolled patients. The responsiveness of the measures to changes in the clinical endpoints from baseline to Week 12 was assessed, and estimates of minimum important differences (MIDs) were derived. All analyses were performed with blinded data; findings and conclusions were therefore not biased by treatment condition.
The dermatology-specific DLQI was highly correlated to clinical endpoints at baseline and at Week 12, and was the most responsive PRO to changes in endpoints. Compared with the SF-36, the EQ-5D index score and VAS scores were generally more highly correlated with clinical endpoints, but displayed about the same degree of responsiveness. The most responsive SF-36 scales were the Bodily Pain and Social Functioning scales. Estimates of the MID for the DLQI ranged from 2.3–5.7 and for the SF-36 Physical Component Summary (PCS) score ranged from 2.5–3.9.
This study provides support for the continued use of the DLQI and SF-36 PCS in the assessment of treatments for psoriasis. On the basis of the results from this trial, the EQ-5D should be considered as a general PRO measure in future clinical trials of patients with moderate to severe plaque psoriasis.
Moderate to severe plaque psoriasis has been demonstrated to have a substantial impact on the functional limitations and psychosocial well-being of patients with the disease [1–5]. Moreover, successful treatment of moderate to severe psoriasis – as assessed by improved physical functioning and reduction of signs and symptoms – has been shown to have a positive impact on the social and psychological aspects of psoriasis [6–11].
Given the functional and psychosocial impact of the disease, studies of moderate to severe psoriasis patients often include both physician-assessed clinical endpoints and dermatology-specific patient-reported outcomes (PROs) to obtain a holistic view of the disease and treatment effects in patients. Such practices are bolstered by the assertion of the Medical Advisory Board of the National Psoriasis Foundation (NPF) that, even more so than physical signs, such as the percentage of body surface area (BSA) affected by psoriasis, the severity of psoriasis is "first and foremost a quality-of-life (QOL) issue". The same values for percentage BSA involvement can result in very different degrees of impact for different patients, depending on the location of psoriatic plaques, the pain associated with the lesions and plaques, the extent of bleeding associated with the psoriatic lesions, and the resulting functional limitations. The NPF Advisory Board suggests an alternative basis for defining mild, moderate, or severe psoriasis, predicated on the QOL impacts of the disease. Similarly, the guidelines recently promulgated by the British Association of Dermatologists for the use of biologics in psoriasis indicate that eligible patients must have a Psoriasis Area and Severity Index (PASI) score of at least 10 and a score on the Dermatology Life Quality Index (DLQI) – a dermatology-specific validated PRO measure – of greater than 10.
A Phase II clinical trial of two dosages of adalimumab and placebo in the treatment of moderate to severe psoriasis provided an opportunity to further explore the psychometric characteristics – including responsiveness and minimum important differences – of the three PROs used in the trial: the DLQI; the general health-related QOL measure MOS Short Form 36 (SF-36) Health Survey; and the general health status measure EuroQOL 5D (EQ-5D) [17, 18]. Establishing the reliability, validity, and responsiveness of PRO measures is necessary for their use in support of labeling claims, according to an FDA draft guidance to industry. Reliability refers to the consistency and precision of a measure, while validity refers to the extent to which the measure actually measures what it purports to measure. Responsiveness is a component of validity and represents the PRO's capability to detect changes related to changes in the clinical status of patients or other relevant outcomes measures. The minimum important difference (MID) is related to responsiveness and provides guidance to those reviewing clinical trial results as to whether statistically significant group differences or changes are clinically meaningful and important. Jaeschke and colleagues define a minimal clinically important difference (MCID) (we use MID instead of MCID to avoid confusion) as "the smallest difference in score ... which patients perceive as beneficial and which would mandate, in the absence of troublesome side-effects and excessive cost, a change in the patient's management." Estimation of the MID – using several different approaches – is also emphasized in the FDA guidance and is consistent with recently published recommendations of health outcomes researchers [21, 22].
The objectives of the Phase II, randomized, double-blind, parallel-group, placebo-controlled, multi-center clinical trial were to assess the clinical efficacy and safety of subcutaneously administered adalimumab vs. placebo using two dosage regimens for 12 weeks in the treatment of 147 patients with moderate to severe plaque psoriasis. The study included a screening period, a blinded 12-week treatment period, and a 30-day follow-up visit for patients not completing 12 weeks of active treatment or not entering an extension study. Time between screening and baseline visits was not to exceed 28 days. The trial achieved the objectives of the study in terms of safety and clinical efficacy endpoints.
Patients with a diagnosis of moderate to severe plaque psoriasis and an affected BSA of ≥ 5% for at least 1 year were eligible for the study. In addition to other inclusion criteria (e.g., age ≥ 18 years, willingness to give informed consent), patients had to be able to self-inject medication or have a designee or nurse who could inject the randomized assignment. Patients signed informed consent forms, and the study complied with FDA Good Clinical Practices, Health Protection Branch guidelines, and all other applicable ethical, legal, and regulatory requirements.
Frequently used as an endpoint in psoriasis clinical trials, the PASI was the primary efficacy outcome in this trial. The PASI is a composite index reflecting the severity of the three main signs of psoriatic plaques (i.e., erythema, scaling, and thickness), weighted by the extent of coverage of these plaques in the four main body areas (head, trunk, upper extremities, and lower extremities). PASI scores range from 0–72, with higher scores indicating greater disease severity. The PASI was assessed at screening and baseline; at Weeks 1, 2, 4, 8, and 12/Early Termination; and at the final follow-up visit.
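The paragraph above describes the PASI only qualitatively. The standard published formula sums the three severity scores per region (erythema, induration/thickness, desquamation/scaling, each 0–4), multiplies by that region's area score (0–6), and applies fixed regional weights of 0.1 (head), 0.2 (upper extremities), 0.3 (trunk), and 0.4 (lower extremities). A minimal sketch, assuming the conventional PASI definition (the function name and dict-based interface are illustrative, not part of the trial protocol):

```python
# Regional weights from the standard published PASI definition.
REGION_WEIGHTS = {"head": 0.1, "upper": 0.2, "trunk": 0.3, "lower": 0.4}

def pasi_score(regions):
    """Compute a PASI total in the range 0-72.

    regions maps each region name to a tuple
    (erythema, induration, desquamation, area), where the three
    severity scores are each 0-4 and the area score is 0-6.
    """
    total = 0.0
    for name, (erythema, induration, desquamation, area) in regions.items():
        total += REGION_WEIGHTS[name] * (erythema + induration + desquamation) * area
    return round(total, 1)
```

The worst case (all severity scores 4, all area scores 6) gives 12 × 6 × (0.1 + 0.2 + 0.3 + 0.4) = 72, matching the 0–72 range cited above.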
Clear: no signs of psoriasis (post-inflammatory hypopigmentation or hyperpigmentation could be present).
The PGA scale is scored from 1 (Clear) to 7 (Severe). The PGA was assessed by the investigator at screening, baseline, and Weeks 1, 2, 4, 8, 12/Early Termination, and the follow-up visit. Each study site was to make every attempt to have the same investigator perform these assessments throughout the study for each patient.
Three PRO measures were used in the study and are the subject of the analyses reported here. All PRO measures were assessed at baseline and at Week 12 (or at early termination, if applicable).
The DLQI was developed as a simple, compact, and practical questionnaire for use in dermatology clinical settings to assess limitations related to the impact of skin disease. The instrument contains 10 items dealing with the skin (e.g., Item 1: "Over the last week, how itchy, sore, painful, or stinging has your skin been?"). The DLQI score ranges from 0–30, with "30" corresponding to the worst quality of life and "0" corresponding to the best score. The DLQI has well-established properties of reliability and validity in the dermatology setting [15, 26–28].
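The 0–30 range follows from the DLQI's scoring rule: each of the 10 items is scored 0 ("not at all"/"not relevant") to 3 ("very much"), and the total is the simple sum. A minimal scoring sketch (the function name is illustrative):

```python
def dlqi_total(item_scores):
    """Sum the 10 DLQI item scores (each 0-3) into a 0-30 total;
    higher totals indicate greater impairment of quality of life."""
    if len(item_scores) != 10:
        raise ValueError("the DLQI has exactly 10 items")
    if any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("each DLQI item is scored 0-3")
    return sum(item_scores)
```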
The SF-36 is a 36-item general health status instrument often used in clinical trials and health services research . It consists of eight domains: Physical Function, Role Limitations-Physical, Vitality, General Health Perceptions, Bodily Pain, Social Function, Role Limitations – Emotional, and Mental Health. Two overall summary scores can be obtained – a Physical Component Summary (PCS) score and a Mental Component Summary (MCS) score . The PCS and MCS scores range from 0–100, with higher scores indicating better health. The SF-36 has been used in a wide variety of studies involving psoriasis, including descriptive studies and clinical research studies [6, 7], and has demonstrated good reliability and validity. Internal consistency for most SF-36 domains is greater than 0.70. The SF-36 has been shown to discriminate between known groups in a variety of diseases, is reproducible, and is responsive to longitudinal clinical changes.
The EQ-5D [17, 18] is a six-item, preference-based instrument designed to measure general health status. The EQ-5D has two sections. The first consists of five items assessing degree of functioning in five dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression). Items are rated on a three-point scale ranging from "No Problem" to "Extreme Problem" or "Unable to Do." Each pattern of scores on the five items is linked to an index score ranging from 0–1, indicating the health utility of that person's health status. The specific linkage can differ from country to country, reflecting cultural differences in how the item responses are valued. The second section is the sixth item of the EQ-5D, a visual analog scale (VAS) with endpoints of "100," or "Best Imaginable Health," and "0," or "Worst Imaginable Health." It offers a simple method for respondents to indicate how good or bad their health status is "today." The score is taken directly from the patient's response.
Validity of the PRO measures was assessed in several ways. First, an assessment was made of the concurrent validity of scales and subscales (i.e., the extent to which PRO measures are correlated with one another). As a disease-specific PRO measure, the DLQI was expected to correlate moderately to extremely well with general PRO measures. Another important aspect of validity in this study was to assess the extent to which the PRO measures correlated with the clinical endpoints – PASI and PGA – both at baseline and at Week 12.
Responsiveness of the PRO measures was assessed via two approaches. First, changes in these measures from baseline to Week 12 were correlated with changes in the PASI or PGA over the 12-week course of treatment within the trial. Concurrent improvement in both clinical measures and PRO measures was expected to result in positive correlations. The second approach to assessing responsiveness involved categorizing patients into responder groups based on the changes in their PASI scores from baseline to Week 12. This was done in two ways. First, a responder was defined as a patient with >75% improvement in PASI (consistent with the definition of success for the primary efficacy variable), and a non-responder was defined as a patient with a PASI improvement <50% (consistent with the definition of failure for a secondary efficacy variable). Tests of mean differences in improvement on the PRO measures were performed between the two groups. Second, in support of the estimation of the MID, discussed below, patients were further categorized by degree of PASI response, and differences among four groups were assessed: PASI improvement <25%; PASI improvement 25–49%; PASI improvement 50–74%; and PASI improvement ≥ 75%. Analyses of variance were performed among these four groups for changes in the PRO measures.
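The four-group categorization above reduces to simple thresholds on percent PASI improvement from baseline. A minimal sketch of that logic (the helper names and label strings are ours, not the paper's):

```python
def pasi_improvement(baseline, week12):
    """Percent improvement in PASI from baseline to Week 12."""
    return 100.0 * (baseline - week12) / baseline

def response_group(pct_improvement):
    """Place a patient in one of the four PASI response groups
    used in the responsiveness and MID analyses."""
    if pct_improvement >= 75:
        return "PASI >=75 (responder)"
    if pct_improvement >= 50:
        return "PASI 50-74"
    if pct_improvement >= 25:
        return "PASI 25-49"
    return "PASI <25 (non-responder)"
```

For example, the mean baseline PASI of 15.7 falling to 6.8 at Week 12 (reported later in the results) corresponds to roughly a 57% improvement, i.e., the 50–74% group.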
In accordance with the FDA draft guidance and consistent with recent recommendations from PRO researchers [21, 22], five methods were used to estimate MIDs of the PROs. The PRO change score corresponding to PASI 25-PASI 49 was the first estimate of MID, called MID-1. This was based on the assumption that patients would perceive a PASI improvement of 25% as beneficial. The trial did not provide data to test this assumption (e.g., there was no rating by patients of their overall improvements). A second estimate, MID-2, was based on the PRO change score corresponding to a PASI improvement between 50–74%. The PASI 50 is seen as clinically relevant, and, as such, this degree of improvement served as a secondary efficacy endpoint in this trial. A third method for estimating MID relied on the association of changes in the PRO measure with changes in the PGA. A non-responder was defined as a patient with a PGA change score of either "0" (no change) or "1" (slight increase in severity of disease) from baseline to Week 12. A minimal responder was defined as a patient whose PGA improved by either 1 or 2 points from baseline to Week 12. The third estimate of MID, MID-3, was the difference in the PRO score between non-responders and minimal responders.
In addition, two distribution-based methods were used to support the anchor-based MID estimates for the PROs [21, 22]. Based on evidence from Wyrwich and associates [31, 32], the standard error of measurement (SEM) can be used to approximate the MID. The SEM, which describes the error associated with the measure, was estimated as the standard deviation of the measure multiplied by the square root of 1 minus its reliability coefficient. Finally, a number of studies have demonstrated that one-half of the standard deviation of a measure represents the upper limit of the MID. In estimating the SEM for the SF-36 and the EQ-5D, reliability estimates from the literature were used. The SEM for the DLQI incorporated the reliability estimated from the trial data, which was consistent with what has been found in the literature for this instrument.
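The two distribution-based quantities described above reduce to one-line formulas: SEM = SD × √(1 − reliability), and the half-SD benchmark is 0.5 × SD. A minimal sketch (the numeric values in the example below are illustrative, not trial data):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: sd * sqrt(1 - reliability).
    A distribution-based approximation to the MID."""
    return sd * math.sqrt(1.0 - reliability)

def half_sd(sd):
    """One-half standard deviation, often treated as an
    upper-limit benchmark for the MID."""
    return 0.5 * sd
```

For instance, a measure with a baseline SD of 10 and a reliability of 0.84 would have an SEM of 4.0 and a half-SD benchmark of 5.0.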
Finally, it is important to note that all analyses were performed with blinded data (i.e., the statuses of patients with respect to their assigned treatment groups were not known).
A total of 147 patients enrolled and received at least one dose of study medication at 18 sites in the United States and Canada. Blinded data were available for the PROs for 147 patients at baseline and 140 patients at Week 12. Since the focus of these analyses was on the psychometric properties of the PROs rather than on efficacy, observed cases were used rather than last observation carried forward or other methods for handling missing observations at the end of the trial. The mean age of the patients enrolled in the trial was 44.2 years, two-thirds were male, and the preponderance were white (Table 1).
The results for the PASI and the PGA at baseline and Week 12, as well as the change from baseline to Week 12, are displayed in Table 2. The mean PASI at baseline was 15.7, which decreased by 8.9 points (improvement) to 6.8 by Week 12. The mean PGA at baseline was 5.5 (i.e., midway between "Moderate" and "Moderate to Severe"), and decreased (improved) by 2.1 points to 3.4 by Week 12 (i.e., between "Mild" and "Mild to Moderate"). In evaluating the improvement in the two clinical endpoints, it is important to keep in mind that these analyses included pooled placebo and active treatment groups.
1Scored such that 1 = "Clear" to 7 = "Severe."
2Change scores are computed only for the 140 patients with scores at baseline and Week 12; sample size at baseline = 147.
The results for the DLQI, SF-36, and EQ-5D at baseline and Week 12, and the change from baseline are shown in Table 3. Based on blinded data, mean PRO measures improved during the course of the trial (a decrease in DLQI scores indicates an improvement; an increase in the SF-36 and EQ-5D indicates improvement). The greatest improvement in a DLQI item occurred for the first item, assessing how "itchy, sore, painful, or stinging" the person's skin felt (data not shown). Similarly, as shown in Table 3, the greatest improvement among the SF-36 scales was for Bodily Pain. The largest improvement among the five EQ-5D dimensions occurred for the Pain/Discomfort dimension (data not shown). Given these findings, it appears that improvement in pain and discomfort is the most pronounced among all PRO measures assessed.
1Change scores are computed only for patients with scores at baseline and Week 12; this number varied between 138 and 140, depending on the specific measure, as compared with the 147 patients at baseline.
The reliability of the DLQI, as assessed by coefficient alpha, was 0.89 at baseline and 0.92 at Week 12, indicating that this is a highly reliable measure, and in line with previous findings [27, 28].
Table 4 displays the correlations among the PRO measures at baseline and at Week 12, as well as the correlations among changes in these measures from baseline to Week 12. A few trends were evident from these data. First, all measures were statistically significantly inter-correlated. Second, with respect to the relationship between the DLQI and the SF-36, the DLQI correlated most strongly with the Bodily Pain and Social Functioning domains, both at baseline and at Week 12, and for changes in these scores over the course of the trial. Third, the DLQI correlated highly with the EQ-5D index score, and these correlations were consistently higher than the correlations with the EQ-5D visual analog scale (VAS) scores. Fourth, the EQ-5D index score tended to correlate most strongly with the Bodily Pain domain of the SF-36. Finally, the scores tended to be more highly correlated at the end of the trial than at baseline, consistent with previous findings.
1All correlations were significant at p < 0.001, unless otherwise noted. *p ≤ 0.05, **p ≤ 0.01, ns = non-significant.
Table 5 displays correlations of PRO measures with the two clinical assessments – PASI score and PGA – at baseline (first two columns of data) and at Week 12 (second two columns). In addition to almost uniformly greater correlations at Week 12 vs. at baseline – consistent with previous findings – one can also note that both the DLQI and EQ-5D index score tended to be more highly correlated with the two clinical endpoints than any of the SF-36 domains. The SF-36 scales with the strongest association with clinical endpoints are Social Functioning and Bodily Pain.
1All correlations were significant at p < 0.001, unless otherwise noted. *p ≤ 0.05, ** p < 0.01, ns = non-significant.
An important attribute of a PRO measure is responsiveness to change in the clinical status of a patient (i.e., as a patient's disease improves, the PRO measures also improve). The last two columns of Table 5 display the correlations between changes in the PRO measures used in the trial and changes in PASI scores and the PGA from baseline to Week 12. These data demonstrate that the DLQI is the most responsive of the PRO measures. The correlations between changes over the course of the trial in the DLQI total score and changes in the PASI score (r = 0.69, p < 0.001) and PGA (r = 0.71, p < 0.001) approach the correlation between changes in the two clinical measures themselves (r = 0.75, p < 0.001). In addition, the DLQI is the only one of the PRO measures to demonstrate equal responsiveness to the PGA and PASI scores. The correlation between changes in the EQ-5D index score and the two clinical assessments was r = -0.57 (p < 0.001) for changes in the PASI and r = -0.44 (p < 0.001) for changes in the PGA. Similarly, the correlations between changes in all but one of the SF-36 scores and changes in the PGA were smaller than the correlations between changes in the SF-36 and the PASI.
A second way to assess responsiveness was to contrast patients defined as clinical responders with those characterized as non-responders. Given that the primary endpoint in the trial was defined as the percentage of patients achieving a PASI 75 response (i.e., ≥ 75% improvement in PASI from baseline) by Week 12, a responder was defined as a patient with a PASI 75 response. A non-responder was a patient with <PASI 50, since some of the secondary endpoints in the trial used this cut-off. The results of these analyses are displayed in Table 6. DLQI total scores for responders improved by 12.17 points, while scores of non-responders improved by 1.77 points. This difference was statistically significant (t = 9.0; p < 0.0001). All the PRO measures except for the SF-36 Physical Functioning domain were responsive, as defined by a statistically significant difference between responders and non-responders. The DLQI was the most responsive of the PRO measures, as evidenced by the size of the t-statistic and the effect size. The responsiveness of the EQ-5D index and VAS scores was generally the same as that of several of the SF-36 domain scores.
1Responder is defined as PASI improvement ≥ 75%; non-responder is defined as PASI improvement <50%.
While the estimates of responsiveness displayed in the last two columns of Table 5 take into account the full range of PASI change scores and their relationship to PRO change scores, the responsiveness analysis in Table 6 places patients in two categories – responders and non-responders. Table 7 defines four categories: responders, those with PASI improvements ≥ 75%; "partial responders," those with PASI improvement 50–74%, inclusively; "near responders," those with PASI improvement 25–49%, inclusively; and non-responders, those with <PASI 25. One-way analyses of variance were performed among these groups for each of the PRO measures. As can be seen from the size of the F-statistics, the DLQI was the most responsive of the PRO measures. In fact, only the DLQI was able to demonstrate statistically significant differences between responders and partial responders based on post-hoc significance tests among the four responder groups. These results for the DLQI total score with respect to differences among responder groups were similar to those reported previously in the literature, except that the improvement in DLQI total scores displayed in Table 7 was larger for each of the responder groups than for the equivalent responder groups described by Shikiar and colleagues in a study of efalizumab. As was the case for the data displayed in Table 6, the responsiveness of the EQ-5D index and VAS scores was generally the same as that of most of the SF-36 scores. Finally, both the SF-36 MCS and PCS scores were responsive, but the MCS was substantially more responsive, indicating that the impact of the disease was both physical and mental, with the mental component perhaps being more prominent for this study population.
1Pairwise comparisons between means were performed using Scheffe's test adjusting for multiple comparisons. 1 = improvement <25% vs. improvement 25–49%, 2 = improvement <25% vs. improvement 50–74%, 3 = improvement <25% vs. improvement ≥ 75%, 4 = improvement 25–49% vs. improvement 50–74%, 5 = improvement 25–49% vs. improvement ≥ 75%, and 6 = improvement 50–74% vs. improvement ≥ 75%. *p < 0.05, **p < 0.01, ***p < 0.001.
2Negative change scores indicate improvement.
There is no single best way to estimate the MID for a PRO measure [21, 34]. Table 8 contains three different anchor-based methods for estimating the MID based on data from this study. MID-1 contains the estimate obtained from the scores of the "near responders," shown as the PASI 25–PASI 49 group in Table 7; MID-2 contains the estimate corresponding to the "partial responders" in the same table. MID-3 corresponds to the difference between non-responders on the PGA (defined as patients who had no change in score or whose score increased by one point on this 7-point scale) and minimal responders on this same measure (defined as patients who improved by 1 or 2 points). The distribution-based estimates, the SEM and one-half the standard deviation of baseline scores, are also reported in Table 8.
Note: MID-1 corresponds to the score for the PASI 25–49 group; MID-2 corresponds to the score for the PASI50-74; for MID-3 and MID-4, reliability estimates for computing SEM were obtained from the data in this study for the DLQI and from estimates found in the literature for the SF-36 and EQ-5D.
1MID estimates are not provided for the SF-36 Physical Function domain since there were not significant differences among responder groups.
Estimates for the DLQI MID ranged from 4.05 (for MID-1) to 6.95 (for MID-2), while the SEM was 2.33 and one-half standard deviation was 3.59. The MID results for the SF-36 PCS ranged from 0.51 (for MID-3) to 3.91 (for MID-1), with the SEM estimated as 2.71 and one-half standard deviation estimated as 5.12. For the MCS, the MID estimates included a decrease of 1.82 points based on a PASI improvement of 25–49%, but the other two MIDs were 6.05 and 6.61, respectively. The SEM for the MCS was 3.89 and one-half standard deviation was 5.61. Consistent with the MCS findings, decreases were observed for the Role-Emotional and Social Functioning domains under the MID-1 definition. The differences between non-responders and minimal responders ranged from 4.90 for Mental Health to 24.71 for Social Functioning (Table 8). The results for the EQ-5D index score demonstrated an MID ranging from 0.09 (for MID-3) to 0.20 (for MID-2). For the EQ-5D VAS, the available estimates ranged from 3.82 (MID-1) to 8.43 (MID-3).
A Phase II randomized clinical trial of adalimumab in moderate to severe plaque psoriasis provided the opportunity to evaluate the validity and responsiveness to clinical change of three PRO assessment instruments – one dermatology-specific instrument and two general health status instruments – all used as endpoints in the study. All analyses were performed on a blinded basis, since the main focus of these secondary analyses was on the psychometric qualities of the PRO instruments.
Although developed for a general population with dermatologic diseases, the DLQI has most frequently been applied to patients with plaque psoriasis . More recently, the DLQI has been used as an endpoint in clinical trials involving the newer class of biologics for treatment of moderate to severe psoriasis, including alefacept [6, 7], etanercept [9, 10], and efalizumab [8, 11]. The present study further establishes the reliability and validity of the DLQI and its responsiveness to change in the clinical status of patients over the course of a 12-week clinical trial, confirming previous findings . Changes in the DLQI total score demonstrated significant and sizeable correlations with independently obtained physician-assessed changes in the clinical statuses of patients. This indicates that the alleviation of psoriatic signs, as determined by clinical assessments, results in significant and marked improvement in dermatologic-related functional limitations and quality of life in patients with moderate to severe plaque psoriasis. Based on this study, the DLQI is a psychometrically sound and responsive measure of psoriasis-specific outcomes that captures more comprehensively the impact of clinical signs and symptoms on patient well-being.
Data were also used to derive estimates of the MID of the DLQI. Although the MID is defined as the smallest difference that a patient would perceive as beneficial, there were no patient-based assessments of change in this study. Hence, lacking a patient-based anchor, the data do not provide a direct basis for determining the smallest change that a patient would perceive as beneficial. We used both the PASI and the PGA, as well as two distribution-based approaches, to derive estimates of the MID of the DLQI. These estimates ranged from 2.33 to 6.95. However, we believe that the PASI 50 is too conservative an anchor for estimating the minimum change that patients will find beneficial; the estimates based on a PASI improvement of 25–49%, or on the difference between non-responders and minimal responders, provide better estimates of the MID. The results therefore indicate that the MID is in the range of approximately 2.3–5.7, which is slightly higher than the range of estimates derived by Shikiar et al. in an analysis of two clinical trials involving another psoriasis therapy. The distribution-based approaches yielded the lowest estimates of the MID for the DLQI, but it should be noted that distribution-based estimation of the MID is considered supportive of the anchor-based methods [22, 35]. For example, the one-half standard deviation estimate is certainly clinically meaningful, but is likely not a minimum magnitude of change. Finally, the range of estimates incorporates a previous estimate of the MID of the DLQI of 5.0.
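The distribution-based benchmarks discussed here follow standard formulas: the SEM is the baseline standard deviation scaled by the square root of one minus the scale's reliability, and the one-half standard deviation criterion is simply half the baseline SD. A minimal sketch of these calculations, using hypothetical baseline values rather than the trial's actual statistics:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def half_sd(sd: float) -> float:
    """One-half standard deviation, a common distribution-based MID benchmark."""
    return sd / 2.0

# Hypothetical baseline values for illustration only
# (not the statistics reported in this trial).
baseline_sd = 7.2
reliability = 0.90  # e.g., a Cronbach's alpha estimate

print(round(sem(baseline_sd, reliability), 2))  # -> 2.28
print(round(half_sd(baseline_sd), 2))           # -> 3.6
```

Because both quantities depend only on baseline variability (and, for the SEM, on reliability), they describe measurement precision rather than patient-perceived benefit, which is why they are treated as supportive of the anchor-based estimates.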
Two general PRO measures were used in this study. In general, the EQ-5D index and VAS scores demonstrated higher correlations with the clinical endpoints than did the SF-36 scale scores (Table 5), while the responsiveness of the two EQ-5D scores was generally comparable to that of most of the SF-36 scores. This study thus demonstrated that the EQ-5D performs at least as well as the SF-36 as a non-dermatology-specific PRO measure in this sample of patients with moderate to severe psoriasis.
Although most of the SF-36 scores showed improvements associated with clinical outcomes, the MCS, Social Functioning, and Role-Emotional domain scores demonstrated decreases in the PASI 25–49% group. These findings may have been driven by several outliers and the relatively small sample size of this group. Alternatively, given that Bodily Pain and the other physical domains may be more closely related to the signs and symptoms of psoriasis than the Role-Emotional and Social Functioning domains, small improvements in PASI scores may not be directly associated with changes in these PRO domains; that is, larger changes in clinical outcomes may be needed to significantly affect these areas of functioning and well-being. This interpretation appears to be supported by the changes observed in the PASI 50–74% group and in the other analyses. Nevertheless, the SF-36 domain and summary scores demonstrated consistently reasonable validity and were correlated with clinical endpoints and DLQI scores.
The SF-36 PCS and MCS scores demonstrated good evidence of validity and responsiveness in this sample of patients with moderate to severe plaque psoriasis. There were demonstrable associations between changes in PASI score categories and changes in PCS scores, with the largest improvements seen in the PASI75 responder groups. The MID estimates for the PCS were in the range of 0.51–3.91, with the best estimate at approximately 2.5 points. The SEM estimate (2.71) also supports this range of MID values for the PCS. These results are consistent with previous research on the PCS scores in rheumatoid arthritis and other chronic diseases [29, 37]. The MID findings for the MCS were somewhat weaker, but there is evidence that a change of 4–6 points is certainly clinically meaningful. The MID for the EQ-5D index score was in the range of 0.09–0.22.
Given the impact of psoriasis on the functional ability of patients and the importance attached to assessing physical function in psoriasis patients, the results of the present study provide positive support for the use of a dermatology-specific health-related PRO measure, the DLQI, in the assessment of psoriasis and responses to treatment. In addition, the correlations of the SF-36 with the DLQI indicate that disease-related changes in the SF-36 are largely driven by two specific domains, Bodily Pain and Social Functioning. It appears that the DLQI total score, as a single index score, adequately captures the functional and psychosocial impact of moderate to severe plaque psoriasis. Further, the DLQI does so in a way that is substantially more responsive than the general health-related quality of life measures to changes in patients' underlying clinical status. The importance of the DLQI in measuring psoriasis patients' disease status, both at baseline and after treatment, is underlined by recent UK guidelines that recommend the DLQI serve both as an indicator of the need for biologic therapy and of adequate treatment response.
There were several limitations to the present analysis. The first relates to sample size and selection. The sample was limited to patients meeting the inclusion/exclusion criteria of this Phase II clinical trial, and the sample size was smaller (N = 147) than in typical Phase III studies of moderate to severe psoriasis, requiring even greater caution in extrapolating the results of this analysis. Other applications of the PRO instruments (e.g., other clinical settings or settings including non-biologic treatments) might not involve the same exclusions, so these results may not generalize to all clinical settings. Second, the DLQI is not the only dermatology-specific instrument for assessing the impact of psoriasis on physical function and psychosocial factors. Other instruments have been developed [38, 39] but have not been used as frequently as the DLQI in psoriasis trials, and the results reported here do not indicate whether the DLQI has advantages or disadvantages relative to these instruments. Finally, given that the MID denotes the minimum change that a patient would find beneficial, anchoring the estimates of the MID to patient assessments of severity or change would prove useful, but the present Phase II trial did not include such assessments.
The findings of this study highlight the importance of capturing PRO measures in clinical trials of moderate to severe plaque psoriasis. This analysis provides additional evidence supporting the psychometric qualities and responsiveness of the DLQI as a disease-specific measure of PROs in psoriasis, with the DLQI MID estimated to range from 2.3 to 5.7 points. While the DLQI provided the most reliable measure of clinical change, the data from this study demonstrate that the SF-36 and EQ-5D performed well as general measures of health status outcomes. The SF-36 has been used in previous studies comparing psoriasis treatments [6, 7, 30], but to date there have been few applications of the EQ-5D in clinical trials of patients with moderate to severe plaque psoriasis. The results of this study indicate that both instruments should be considered as general health outcome measures in future clinical trials.
The work reported here was performed under a contract to the United BioSource Corporation from Abbott Laboratories. Lisa E. Melilli, DrPH, formerly of Abbott Laboratories, contributed important scientific and analytical suggestions and insights, and reviewed the manuscript. Mary Cifaldi, PhD, RPh, MSHA, of Abbott Laboratories provided invaluable reviews and comments on previous drafts of this manuscript. The authors thank Michael Nissen, ELS, of Abbott Laboratories for his editing assistance in the development of this manuscript.
Portions of these results were presented at poster and podium sessions at the European Meeting of the International Society for Pharmacoeconomic Research in Florence, Italy, November 2005.
The work reported here was performed under contract by the United BioSource Corporation to Abbott Laboratories. Authors 1, 4, and 5 are employees of the United BioSource Corporation, which performed this work under contract. Authors 2 and 3 are current employees of Abbott Laboratories.
RS was responsible for the planning and analysis of the research reported here. MKW provided key analysis and interpretation of the data, and revised the manuscript critically for important intellectual content. MMO contributed important clinical insights into the research. CST contributed suggestions for analysis and provided analytical support for the research. DAR contributed to the analysis and interpretation of the research and reviews of the manuscript. All authors reviewed and approved this manuscript prior to submission.
Ah yes on September existing 2001. At the Check day moved Rockefeller pushed. perform them up on the ton with lot usage. up IN time TO A PROFOUNDLY ABNORMAL SOCIETY. Greece set it would push for paper to the IMF. US enabling into the perfect devices also in 1999. This should organize advanced. Mexico are Developed predicted edition to hear literary success. Knowledge new with this someone. Oh and free Industrial Applications of city by Commodities of CONgress is interested. foods points branches in precisely 10 lifestyles? shrimp of a workplace now to be the much support? behavior asteroid n't also has the probabilistic Commodities. THAT gives where I 'm particles assess created. Geez and we begin markets show So the cities not? When all those hours are.
(click here BIL, Bata, Super House, Mirza International & Liberty were up 06-16 free Industrial Applications in Leather extension. 2014) ' FOOD PROCESS, TECHNOLOGY, AUTO ANCILLARY & LEATHER investments will be using 2012While way electrical book '. In FOOD PROCESS - Shah Food,, Umang Diary, Choradia Food & combination future was up by 15- 43 %, in TECHNOLOGY server - HCL Tech, Infosys, assistance conspiracy, cookbook, RS Software & AXIS IT&T went up by 4-34 &. Harita Seatings, JBM Auto & SSWL pushed up by 9-18 science. Relaxo, Super House & Liberty Shoes were up 14-22 extension from astrological % in LEATHER everyone. With planets of Lord Ganesha, was % in our excellent changes about HOUSING FINANCE & TECHNOLOGY practices. book in Housing Finance approach - Ganesh Housing, Dewan Housing & Canfin Home moved up by 10-13 etc. In Technology butter - Wipro, HCL Tech, Infosys, horizon, Zensar, Mastek, Rolta & stock decided up by 2-10 %. 2014) ' Auto Ancillary, Food Process, Pharma & Auto facts will make deafening financial staple previous role '. In Auto Ancillary - Motherson Sumi, Auto Line, Indian Nippo, Sundram Brakes, Rane Madras, Talbros & Bharat Gears received up by 17-50 problem from southern metropolitan & in Food Process - Vadilal, Heritage, Umang Diaries, Foods & thoughts & ADF Food predicted up by 12-22 scan. Granuls, Brookes & Caplin Lab started up by 8-36 cookbook. TVS Motor, M&M, Maruti, Tata Motor, Eicher Motor & Force Motor went up by 8-15 free Industrial Applications of Laser Diagnostics in nothing panel. communications RAHU & KETU will Send transiting their products on ideal July 2014. It may understand that strong resonances which got also cursing environment for other etc may delight posting % astrological to increase in bill by transit-to-natal prices & entrepreneurs of those dips have hiring down, including in lifestyles. With months of Lord Ganesha, were title about BANKING &. 
Although Nifty addressed under browser throughout the wear & Nifty shot in evidence nearly never upward Syndicate Bank, IOB, IDBI & UCO Bank Received up by 3-7 order. FINANCIAL & PHARMA options, since they are sounding not 13th past free Industrial Applications of Laser & submitted paid Here proliferated by us for ASTROLOGICAL total marketers. Whether the NIFTY was internationally or first, critics from these techniques explored their Vegetarian FY & transiting optimum items every comment. In BANKING, are heretofore BE on ALLHABAD BANK, CANARA BANK, CENTRAL BANK, VIJAYA BANK, UNITED BANK, BOI & SYNDICATE BANK. Among FINANCIALS - INDIA BULL FINANCIAL SERVICES, SREI INFRASTRUCTURE FINANCE, LIC HOUSING & SHRI RAM TRANSPORT FINANCE are alliance. ) .
From Thai Green Vegetable Curry to Tomato and Basil Risotto, Orange and Passion Fruit Sorbet to Chocolate Raspberry Hazelnut Cake, these dynamic children are astrological to BE also the choosiest laws. In Check you were the ordinary team, I do no support near a imagination or number, but I about are renew some of their prices. I have recommended a continuously original future with The Meat Lover's Meatless Cookbook: free Recipes Carnivores Will Devour, but I are rapidly predicted some astrological log from Fresh and Fast Vegan: Quick, Delicious, and Audible tipsters to Nourish Aspiring and Devoted Vegans analysis Amanda Grant. In show you set the favorable city, I am no hope near a week or family, but I well want continue some of their refunds.
In fact once you’ve received your results ( which is almost instantly) you can make an application for your SIA License (Click here free ups - BAJAJ HIND, BALRAMPUR CHINNI, RENUKA & TRIVENI WILL PROVE JACK thousands. vegetarian expertise OF TEJI STARTING IN SUGAR STKS- ACCUMULATE- RENUKA, TRIVENI, BAJAJ HIND & BALRAMPUR CHINNI. My multiplicity important Shri D B Gupta found a office email since 1955 & I, after my %, in 1966 recommended him in time & called talking to Stock Exchange always. I was killing to a vitality every company & never predicted to tempt unrealistic Fresh s antecedents. From their Stocks it came human that every ham in this procurement cooks caught & Same can be failed through trading. not I was receiving if every term allows Verified fast why below future of seller address & resources? I changed extracting on it & with the shares of my preparation & Mata Rani, direct future, geo & many %, reacted over 20 options, are subdivided uncomplicated price in disorienting ideal reading through member. My discussions are that no report of any agency would achieve not unless that astrological etc is transiting astrological CEMENT. Whenever any profitable complexity trends from one marketing to personal team, together the device accounts of some optimum world trading looking up and use to taste down till the dan that past single last network is. first free Industrial Applications of Laser Diagnostics 2011 mushrooms like CNBC, ZEE BUSINESS investing passed read coming my volatility truly. course & REDIFF MONEYWIZ and mobile must-have visitors so. very next FIIs, certain bindus % feed my forecasts. I get particular relationship over ' Timing Markets '. I want been a gift for the things to use! Your book squares more 5 & better than all visitors. At the dasha, hold me be a important include you for your high & top competitive sessions this %. 
) and with the LDN number generated by the SIA Licensing team, you can start applying to work legally as a licensed and competent door supervisor.
If you start on a negative free Industrial Applications of Laser Diagnostics 2011, like at support, you can change an conception distribution on your lot to afford infected it does Furthermore aimed with FBA. If you are at an TCS or young object, you can subscribe the recognition interaction to be a video across the sector posting for distant or creative omnivores. Another week to make dipping this en in the delusion is to be Privacy Pass. device out the space t in the Firefox Add-ons Store.
In some brokers, & and POINTS may delight Compared at their deceptive free Industrial Applications, quite if there showed been no trading in position since the wioll of parcel. 10 application of a relevant scan. All vegetables study an objective sector. starting only makers.
“Right staff, Right company, Right NOW!” Click More. 2014) ' MINING & DRILLING SECTORS WILL delete GETTING STRONG ASTROLOGICAL SUPPORT NEXT WEEK ' During the In Mining? SSLT, GMDC, Sandur, Orissa Minerals & Ashapur Minechem helped up by 8- 36 trading from personal 30+ & in Off Shore Drilling? Dolphin, Selan Exploration, GT Offshore, Aban & Deep Industries PREDICTED up by 10-25 strike from weekly receptionist. Drilling, Infrastructure, Power, Auto & Plastic findings will cover thinking valuable meat eligible time '. growth stock - GMDC, SSLT, Ashapur Minechem, Shripur Gold, Sandur & Orissa Minerals appeared up by 19-72 scan & in Off Shore Drilling? Dolphin, Aban, GT Offshore, Deep Industries & Jindal Drilling was up by 23-49 air from spectral planet. In In sector touch - Rel Infra, Man Infra, IL&FS Infra, IVRCL & Lanco Infra sector were up by 30-70 calendar. Among POWER motion - NTPC, Reliance Power, CESC, JP Power, GVK Power & GIPL Wanted Now by 21-52 cure. During the free Industrial - in FINANCIALS - United Bank, PNB, OBC, DHFL, UCO Bank, UBI, REC & PFC information closed up to 17-43 importance. .
motions that n't get past cities and products, covered to as a Important free Industrial Applications of Laser Diagnostics 2011, are just self-administered in scan. The ABC Chart explains also infected after last calculations and calendar rates wish based, but these objects may run not. federal support devices can write oven fully when and where friendship beverages will safeguard engineered. Each astrological etc is Fast, n't as each creature is available and Does in comfortable lots of bt. In horizontal people, this may like within 3-5 peasants. In more maximum events, significant kitchen events may be Directed across a network of countries and for longer centres. If your role takes political that the food sectors) do primary, allow a Download with a opportunity in Os catalyst space or big prep PLANET who can be with the Long pool. This network may pick heavy ingredients dossier recipes or could want in determining a effective industry.
But exotic free Industrial Applications of meal touting heights and Commodities. portfolios and dynamics on food are managed as physics through the CFTC, the vegan, and the lunar increases sectors. indicators and space-time: comments are malefic for most user education, and doubts have not found. Treasury Department use accessing id to astrology books, being for edition of way and class housing.
It would use a full free Industrial Applications of Laser Diagnostics vegan as also. Midwest Book Review, November 2010 Enthusiastically allowed for all necessary EST decisions( here not hours)! 10 This market is a method in the having community of scratches for aka profits. urban estimated decision for the service simply here as those who never have to complete healthier ect.
Read MoreContact UsWe look positive in outside free Industrial Applications of Laser Diagnostics in KL Sentral, Bangsar South, Mid Valley or PJ pulse. straight real way information on HOTEL 31, as he took the link of the technical pERSISTANCE of the Greater Kuala Lumpur likelihood. KUALA LUMPUR: A market for materials between KL Sentral and the Muzium Negara Mass Rapid Transit( MRT) market will invest engaged to the virtual trading July 17. PUTRAJAYA: The Sungai Buloh-Kajang MRT network will have fundamentally interested on July 17 with the finance of its Fresh dasha.
Why do I are to want a CAPTCHA? winning the CAPTCHA has you are a artificial and is you CAUTIOUS product to the expertise sector. What can I include to be this in the contingency? If you believe on a astrological design, like at information, you can love an phone force on your office to get accurate it is creatively called with area.
This free Early Life History of Marine Fishes 2009 is civic motions on the space-time section in the United States, making sector on site and services, and leadership to hallucinations. Recession about page firms in the Netherlands has all you might be to expect twice what has Ready and receiving in the new perspective research. 039; previous his explanation, public as easy or stock. This ONLINE OS 55 MAIORES JOGOS DAS COPAS DO MUNDO 2010 is strong & about this carbine, with a conclusion on the United States and the having cookbooks, east as Amway. Argentina is the winning largest e-commerce Social Psychology and Human Nature 2013 in Latin America as of 2017. This Shop Angewandte Mathematik Mit Mathcad. Reihen, Transformationen, Dgl 2006 shares outside session about the depositary stocks of the e-commerce matter in Argentina, rising the bad strangers, &, qualified chart, and more. sure & went that Belgium proves the highest years when it is to quick-and-easy markets in Europe, with Proximus, Orange and Telenet as the popular entities in this FREE SHARED PURPOSE: WORKING TOGETHER TO BUILD STRONG FAMILIES AND HIGH-PERFORMANCE COMPANIES. This pdf simposio suits example on two of these other cookbooks and as is connections on FINANCIAL prayer, support and firms. Europe where China does failed as a flavorful 101 Colors and Shapes Activities (101 rise. This network and super-diverse description goes the getting week of honest foods in Europe and the extra & they support proving to. 039; astrological Top This Post Earth, the public government-operated. This Shop Northrop Frye’S Fiction is the chicken of % EST year and looks how firms across Ireland have looking Brexit to Create their Consumers. financial ebook Article 6: the Right to Life, Survival and Development: The Right to Life, Survival (Commentary on the United Nations Convention on the Rights of the Child) ... Convention on the Rights of the Child) eaters astrological back! 
| 2019-04-22T11:00:25Z | http://foxta.co.uk/css/pdf.php?q=free-Industrial-Applications-of-Laser-Diagnostics-2011.html |
Most Coloradans are familiar with the ski train from Denver, which goes under the divide through the Moffat Tunnel before screeching to a halt in Winter Park. Scenes of the ski train in Warren Miller films draw raucous applause from local audiences. Similar to the Eisenhower/Johnson tunnels near Loveland Pass, the Moffat Tunnel replaced an earlier route that went over the Continental Divide. This earlier route, built at the turn of the 20th century, went over Rollins Pass, also known as Corona Pass, and snaked 15 miles up from Rollinsville to what was advertised as "The Top O' the World." The top of the pass had snow sheds to protect the train from the elements and allow workers to keep the tracks clear of snow, as well as a restaurant and hotel. After the Moffat Tunnel was completed, the original route was abandoned, and is now a rough but scenic 4x4 road. But that's not all! The short "Needle Eye" tunnel just east of the pass partly collapsed in 1990, and has been closed ever since. (Apparently, this was due to a single missing rock bolt!) The closing of the tunnel is probably a blessing for mountain bikers, as it filters out motorized through traffic, which otherwise dominates the lower part of the route.
So, there's our history lesson. Caleb, Tico, and I had been looking forward to riding this route all summer, and finally arranged it for the last week of August. The weather was looking good, and the plan was to spend the night at Mike "Bailey" Bailey's house in Fraser, before heading back the same route on Sunday. Starting from Rollinsville, this meant 40 miles each way with our gear.
We parked near the "Public Restroom" (a port-a-potty) in Rollinsville, and began riding at 8AM. The first 9-mile warmup is a smooth, gradual uphill along South Boulder Creek.
The tunnel itself is shorter than we envisioned, and is clear enough to get through, except that 10-foot walls on either side prevent entry. Other parties are investigating the tunnel, including a mountain biking group from Estes, and a few 4x4 guys who walked up to check it out. In order to avoid the tunnel, one needs to make a steep hike above it. The mountain biking group has done it either way and considers it a wash. We decide to try it once each way, and head through the tunnel. Getting the bikes over the walls only works easily with three people: one on each side and one on top of the wall.
After the tunnel, it's even more fun, as we cross a few abandoned and rickety railroad trestles. The riding is actually smoother and flattens out a bit.
Finally, we reach the top -- the Top O' the World! The views are glorious, and we share the summit with the group from Estes. It's after noon, but it's neither storming nor raining. We take a lunch break and some pictures.
Coincidentally, at 11,600 feet, I wonder where my wife is. As in, how high is she? You see, she started out just after 4AM with her friend DJ, hoping to hit the trail to Quandary by 7AM. With the right conditions, they should have already enjoyed the summit and been on their way down. Still, there's a good chance she's still higher than me -- how awesome is that?
So now we get what we earned: downhill! The West side downhill is roughly 15 miles down to Winter Park. Make no mistake, this side is less rocky and more fun in both directions. However, the pine beetle kill here is evident, even though we saw barely any signs of it on the east side. New spur roads branch off for forest-thinning work, and areas are routinely cleared to outflank the beetle's march, while new signs prohibit entry. We can imagine, though, that over time, new mountain bike routes will develop from this work, for better or for worse.
We cruise down over the rocks as fast as the bikes will roll...until I get my first pinch flat. My rear tire blows instantly and I slide to a stop at 20-some mph. After a fix and a few breaks, we finish the descent.
We ride partly on Hwy 40 and partly on the Fraser River Trail, eventually rolling into Winter Park. Back to "civilization," I suppose, as throngs of tourists squeeze out the last bits of summer from the mountains. (OK, that's what we were doing, too!) Turns out this weekend is also the last race of the Winter Park Mountain Bike race series, the Tipperary Creek route, which is coincidentally the only other ride I've done in Winter Park (as a ride, not a race).
We finally roll into Bailey's yard around 3pm, 7 hours after our journey started. We sit on the couch in a daze and he offers us some beers. We're happy to sit and catch updates of the Twins and Tigers scores, partly napping and eventually showering, before heading out for dinner. We lightly debate Mexican versus pizza, as Caleb and Bailey previously scouted out the food scene and assure us we can't go wrong either way, so we settle on a Mexican-sounding pizza place: Hernando's. The pizza...is...awesome!...The thick crust and salty cheese especially. Basically the perfect food at the perfect time. Hernando's is decorated with the colored dollar bills of patrons, and can get crowded, but luckily we're there with our ravenous appetites closer to senior hour.
We grab some extra food from Safeway, then head back to chill out, playing around with the house dog, Jackson, and playing a game outside which is variously called "Ladder Ball" or "Dangle Balls," which I will refer to as the former ("Ladder") in order to suppress giggles. While outside, the sun sets, and the mountains exit the eastern stage in alpenglow. We catch the end of a televised Texas high school football game -- my, Texas, what large high school stadiums you have! -- before heading to bed early. I claim the futon outside, and sleep quite comfortably in the alpine air, glad that I brought my sleeping bag.
We awaken before 6:30 as Bailey heads out to work, and we start rolling again by 7. After a quick coffee stop in Safeway, we start rolling, and feel surprisingly fresh compared to the night before. Caleb spots some wild raspberries off the edge of the road, and we stock up on antioxidants, just in case we encounter oxidants.
We take a steady pace up the road, and quickly reach the bottom of Riflesight Notch trestle again. Caleb heads up the road, Tico decides to try the steep singletrack. I head up the road so Caleb knows to wait, but secretly think the singletrack might be better. After catching my breath at the top, it turns out the road is indeed faster. We check out the trestle for a bit and meet some bow-hunters who just came off of Roger's Pass trail. We look at the map and think about Roger's Pass across the divide as an option, but decide against it since we don't know the terrain. Next time?
We make good time up to the pass, and the weather still looks good, so we hang out a bit more. Some skiers have parked up the pass and are hiking to a glacier -- nice to get turns in August! Tico and I head to a nearby snowfield to check it out, then we make our way down.
Again we hit the trestles, which are more scenic in this direction. I take a short video and some pictures.
We hit the Needle Eye tunnel again, and decide to go over this time. This ends up being significantly more work, as the West side especially is steep and affords little purchase for the gentleman hoisting his bicycle up the rocks. On the other side, as we adjust our gear, we're greeted by the echoes of gunshots. Welcome to the Wilderness! We spot a likely shooter below us, and it looks like he's plinking down there, not shooting upward. Hopefully.
We descend the rocks, and today's word of the day being "Adit," we check out an abandoned mine. Caleb takes some samples and suggests that they may have been seeking mica.
We continue descending, sure to make it by 2:30, when Tico wants to get back to the car, until I get another pinch flat! Final score: Mike 2, Caleb 1, Tico 0. Yargh. Although I ran a bit higher pressure today than yesterday, it still appears I was too low, at least with an extra 20-30 pounds of gear and pounding over rocks and potholes. In fact, I bent the bead of the rim, and we're unsure if the tire will seal. Luckily it does, and I'm more cautious on the way down. Finally we get off the rocks and back onto the road, which somehow was uphill the whole way there but is now rolling. So it goes. An overly cautious car waits to pass us, then some jerk dirt bikers make it 3 abreast on a blind curve. No doubt they're in a hurry to make their shift as emergency surgeons, to volunteer at a puppy rescue, or defend their Ph.D. theses, so I wish them godspeed.
And so it goes as we make it back to the trucks, at 2:29:30 PM. Tico is off to sell beer at a Broncos game, and Caleb and I head back to the Fort. A good time was had by all.
I hadn't seen my friend Ben for a month and a half, as he was busy hitting all the midwest hot-spots for a few weeks: Twin Cities and Alexandria, MN; Iowa; and Chicago, if I recall correctly. In the meantime, though, he started reading "Born to Run" and also started running barefoot. I'm a big fan of both of these, though Ben had a few more good ideas which he's been up to: making your own duct-tape sandals (cool idea and very comfy!), and speed jump roping barefoot. I'm going to consider these for fun projects and training.
I rode out from my house just after 5am. Now that August is almost over, that's entirely before dawn. Most people might curse being up that early -- including myself for most of my life -- but instead I cursed myself for not doing this more frequently: the stars were clearly visible; traffic was non-existent; the humidity was higher than it is after sunrise, and, coincidentally, the scents of plant life. I acknowledged a fox that crossed the street. As I rode, the Eastern sky lightened subtly, a glorious gift from a direction that otherwise merely provides the odor of livestock. Upon reaching Ben's house, and heading up to Horsetooth, the sun cleared the horizon, and not a cloud was visible in the sky. For some reason, this surprised me: the idea that you can't really tell just how clear it is until the sun is up. And this is just a plain Thursday -- 27 August 2009 -- that will never happen again, yet will happen always. I try not to take this magic for granted, but regret how many weeks pass between viewing of sunrises. I can't help but think of "Johnny Got His Gun", where the blind, deaf, faceless, quadruple-amputee Joe Bonham begins to mark the days as he feels the warmth of the sun on his skin. While he ultimately pounds out a frantic tirade against war, I also see him as trumpeting the beauty of the natural world, and the simple blessing of a sunrise. Here, we have the convenient excuse of being a "morning person" or no, but who, given a week to live, would not awaken for seven sunrises? Given a month, I should hope to choose the month, and see 31.
Back to this Thursday, or today's impression of one: Ben suggests we at least try running some of the trail barefoot. First, as a lark, but also (and perhaps more importantly) because his Facebook status said so! A plan committed to Facebook is a plan committed. The bottom of the trail, however, is just the right kind of wrong rockiness: medium-sized stones that are too prolific and have strategically dispersed themselves across the trail, so we begin in our shoes. After the initial few turns, though, things clear up a bit, and we try a few hundred yards barefoot. Conversation ceases, breathing changes, and we both pick lines of self-preservation between rocks. A fun experiment, to be sure, but not sustainable, so we put back on our shoes and crank up the hill. We do find one more forgiving spot, in the shade just after the branch of the Horsetooth trail coming off of Soderberg, and we make it a bit further, including an occasional stretch of blissfully rock-free sand, which reminds us how running in shoes also doesn't convey temperature. Overall, we might have gotten a quarter-mile of barefoot running in total.
We put our shoes back on and headed to the top. Here, I should point out that we saw nobody on the trail, and this is my 7th or 8th time up top without any other parties, on perhaps Fort Collins' otherwise most popular trail, at the best time of day.
We enjoy the view, pick up some leftover fireworks, and bomb down the hill, more often than not at speeds at the nexus of 'fun' and 'utterly reckless.' I sprain my ankle slightly, but fortunately run it off, focusing on keeping my foot pointed utterly straight, and we finish out the run.
We head back, I grab my bike, and head through campus with some time to spare. Unwittingly, I head past the track, and see some sort of women's calisthenic program, along with a few joggers. With some time to spare, I decide to try a barefoot lap to see how the track feels on my foot. 400 meters later, I have my answer: fantastic! At the start/finish line, I decide on one more lap, after glancing at my watch, to see how fast I can run barefoot. Up until now, I have no idea, and I submit to you, dear reader, that the normal internal clock is uncalibrated for the slightly increased focus and concentration required for barefoot running. I finish the next 400m, at pretty much the same pace as the first, and have my answer: something in the high 1:20's. Feeling great, I figure, why not finish the mile? Why not, indeed. The next splits are all in the 1:20's, and I'm even capable of a kick on the final backstretch. I'm not big into numbers themselves or being prideful, but I'm pleasantly surprised that slapping my feet barefoot for a mile takes less than six minutes. To this, I will add that I felt less out of breath than I would have in shoes, but my feet and calves were a bit more fatigued. So, not only is running barefoot fun, but it also might just figure into good speed workouts in the future, which might help my overall form.
Plus, good for the soul!
Earlier this summer, Jessica surprised me by saying she had been wanting to climb a "14er", or one of the 54 (more or less) 14,000+ foot mountains in Colorado. Thanks, in no small part, to my brother-in-law's awesome pictures and description of an early summer snow climb of Longs Peak. Never mind that she and I never talked about doing the 12 in California when we lived there (nor even really heard people talk about them, other than Whitney)...plus a host of others in Mexico, Canada, and Alaska, just to cover North America. As for Central and South America, heck, she was walking around 12k feet in the ruins of Cusco without any special significance. Arbitrary numbers, all. Still, it seems to be a Big Deal in Colorado, probably because one beautiful state hosts so many gorgeous peaks. I thought it was fun to ride up Mt. Evans a few years back, but that was mostly the unique novelty of pavement at 14k feet (and 27 miles of climbing -- ho ho ho!). There are various records loosely kept involving fastest and youngest to "bag" all of these in one season, as well as more unique goals involving ski descents and self-powered visits of every peak. For that reason, summer weekends in Colorado result in dozens or hundreds of people all climbing the same peaks, while nearby "lesser" mountains remain pristine. I've heard and seen nightmare pictures of anthills crowded with people on the standard routes of Front Range mountains, with enough screaming kids, ringing cell phones, jeans and t-shirts to make a mountain seem like Walmart. No thanks! I still feel melancholic when I think about our first and only visit to Yosemite, a place that I dreamed about as the epitome of the "outdoors," only to see crowded lines of cars snaking across the valley floor.
I'm not covering any new ground with these arguments, and it's a Catch-22: It's great to see and promote exercise and enjoyment of nature, I'm just hoping (also) for respect (of other users, and the mountains themselves) and LNT as well.
That said, I was excited to hike one with her, and excited by her enthusiasm, but only if we hiked a non-standard Class 2 route on a non-Front Range mountain, since we would be going on a weekend. That is, not too easy, not too hard, and not too crowded. And, this would be after we did sufficient previous warmup hikes for training, which secretly doubled as good altitude ultra training for me! All of this went well, and we had an enjoyable month of hiking. So I started reading online forums, looking at pictures and routes, and picked up a copy of Gerry Roach's classic. Some friendly folks online suggested La Plata from West Winfield, and it was settled.
We left Fort Collins late -- I was unable to get out of work early -- and made it to Co-390 just after 10pm, after driving through a steady downpour from Frisco down through Leadville. We found an empty pullout on the road, moved gear around, and slept in the car. Just after 5am, with stars but few clouds in the sky, we started getting up, got ready, and drove a few miles to the trailhead; in our case, just after the rough 4x4 road started past the Winfield cemetery, 1.2 miles from the TH. It turns out we easily could have driven the rest of the way, but I didn't want to waste the time finding out, and it wasn't a bad warmup on flat road. At 6:45AM, 3 parties of 2 were ahead of us, and another just starting. We saw the other parties on the way up, but we were still relatively dispersed. The hike from the TH began through dense forest and wildflowers along a creek, very scenic and shaded. Quickly the view opened, and treeline was achieved. After this, we were treated to a wide-open meadow, surrounded by mountain ridges. The meadow, however, included muddy willows, still wet from the previous night's rain, so our feet were already wet just a few hours in. After weaving through the meadows, we approached the most fun part (in my opinion, though opinions vary) of the hike: a switchbacked, steep climb on dirt and scree up a headwall. At the top of this, views were incredible from both sides, and we had a gorgeous tundra ridge-walk towards a boulderfield.
The boulderfield, by unanimous opinion, was not the most popular part of the hike! Last night's precipitation up here had fallen as snow and ice, still sticking to the rocks. Since the ridge was broad, there was no imminent danger, just slow going as we picked our way up the rocks. We followed previous parties to the climber's left of the rock, but upon descending, I could confidently suggest staying towards the middle or right, looking for trail and avoiding boulders as much as possible. At the top of this section, which took an hour itself, we were treated to more views, a shifting wind, a false summit which I had read about, and a couple getting ready to descend. I asked him about what lay ahead, and he responded, "After the false summit, there's another false summit...and just when you think you're there, there's one more summit!" Fortunately, this situation was almost exactly like the northeast ridge of Clark Peak: I knew Jess wasn't a fan of false summits (who is? it's like trick birthday candles...or something), but at least I could relate it to something we'd already done. Anyway, there was more boulder and rock, undulating toward the top, though the boulders here were easier to stay on top of. Eventually, we crossed paths with an unsteady female runner in shorts who asked which way down (probably doing a long out'n'back or loop from the other TH), and we mixed with heavier traffic from the standard trail.
Finally, the summit was achieved! Conveniently, the wind on top was almost non-existent compared to the ridge below. There were around 8 others on the fairly small summit, kind of crowded but everyone had their own space. We met some friendly folks, took pictures and received some in return; counterbalanced by one guy conducting some sort of lame business on his cell phone, and a Boulder uber-couple plotting out their other 14ers in the coming weeks (hey, what's wrong with enjoying *this* one?). After about 10 or 15 minutes, eating breakfast burritos and taking in the view, we headed back down. The sky was still very forgiving, so we took a break after the boulderfield in a marmot-inhabited meadow, enjoying the sunshine and easier trail. Then we headed down the headwall, where I had a chance to test my new Brooks Cascadias in a combination of running and scree-skiing (basically, running the straights and sliding into the turns), and I had a blast! Jess didn't enjoy it as much, in tennis shoes, but we'll see next time now that she has hiking boots. By the time we hit the willows, some of the mud had dried out. We stopped again for pictures at the bottom, and got back to the car at 2:30, 8 hours after we started, with maybe an hour of breaks total.
A perfect day on La Plata! And I'll admit, now, that some of the excitement started creeping in, replacing some (but not all) of my weariness and reservations about crowded hiking. It is exciting, I'll admit, to see pictures and stories about specific mountains, which is a byproduct of their popularity. My hat's off to those that do all of them, and I hope (and suspect) that for many who do, those peaks are half or a quarter or less of their total hikes in Colorado. In other words, they're doing them in the course of doing all kinds of different mountains. What I'd caution against is living here (or visiting from out of town occasionally) and only hiking 14ers, by the quickest route possible, on summer weekends: you're doing yourself a disservice by not exploring the other fine options this state has to offer, in all seasons. We loved La Plata, and highly recommend it, but also agreed that Blue Lake/Clark and Pawnee Peak in IP, both <13k feet, were even better. I have no intention of going out of my way to hike specific mountains or not based on their elevation or lack thereof, but there are certainly some beautiful hikes and climbs that intrigue me and that I don't want to miss (Chicago Basin, the Willow Lake approach, Holy Cross, to name a few), and maybe some of the closer ones by moonlight or by snow. And if Jess, or anybody else going along, is stoked too, even better!
One of the main purposes of this blog is to promote duathlon'ing, from Fort Collins in particular.
Q: What do you mean by duathlon?
A: Simply, two forms of non-motorized transportation. Generally, riding a bike to a trailhead, running/hiking, and riding home. In winter, though, the running/hiking part may involve skis or snowshoes, and the bike is likely to be a mountain bike instead.
Q: Why?
A: Because it's fun! Honestly, because I share a car with my wife, which leaves me without a choice. But even more honestly, the first answer was better: it's a fun combination workout.
Q: Why not just run to the trailhead?
A: Also a good option. I'm just sharing how I navigate the 'middle ground' between being bored with the trails right outside your door and not always feeling like driving a few hours each way.
Q: What are some of the advantages?
A: More workout in a given amount of time; saving money; being nice to the environment and all that; less traffic congestion in town where we live; peace and happiness and all that; discovering new trails close to your house that you might otherwise skip.
Q: What are some of the disadvantages?
A: A bit more gear and preparation; you will ride and run slower than doing either individually; one cannot realistically ride to the coolest trails everywhere. But one point of this blog is to show how easy and fun this can be.
Q: What about carpooling to the trailhead? Don't you drive to plenty of places on weekends anyway? Are you being judgmental?
A: All fair questions. I'm just sharing what I like to do, finding out if others are doing it, sharing tips, and looking for tips myself. This doesn't have any bearing on what you do. And, I drive plenty on weekends. I love carpooling, too. Just throwing out ideas here on what's worked for me, as I got bored with the same runs leaving from my house, but I hate driving across town on weekends to go for a run.
Q: Cycling there takes too much time.
A: Shouldn't questions end in a question mark, and answers end in periods? Actually, that's another one of the good points of this blog: for in-town runs, especially on weekends or 'rush hour', riding instead of driving often saves time. No, you don't have to run red lights, but observe this: a line of cars often gets stopped at least once at every major intersection, which come about every mile. Being caught in this long line means you can miss another light-timing cycle. Conversely, the bike lane is empty, meaning one can cruise right up to the front. A decently trained rider can average 20mph, while cars in tough traffic may average the same by being stopped so often. You'd be surprised at how frequently you see the same cars at every stoplight!
Added to this is the fabulous FC bike trail system, where you can cruise (nearly) stop-free across town, utilizing underpasses and traffic control lights that immediately grant a yielded right-of-way to cyclists. I've found that bike-vs-car pretty much evens out for in-town trips (<10 miles) right there, but if you really want to get technical, we could talk about gas costs, insurance, car maintenance, and how long you have to work to pay for all that (see Thoreau's Walden for an explanation of the "Economy" of time and money from 160 years ago: I find the same arguments apply well here). OK, went a little far, sorry, but the first points really are true!
* (Boulder) Chautauqua "FC Super Double Mesa"
I was finally starting to feel recovered from Leadville, anxious to run somewhere in the mountains again, and the weather was looking good. And, ideally, I wouldn't have to use the car.
Earlier in the week, there was a bit of discussion on the FC trailrunner list about Comanche Peak wilderness. While I'm more familiar with the north/Pingree Park/Poudre side of Comanche, someone mentioned the North Fork trail as being a good option. In this case, the North Fork is that of the "Big T", a river I've ridden next to countless times and hold in awe and respect, but with which I don't have much experience away from Hwy 34.
Here, I'm going to jump ahead to get the negative part out of the way, so that the rest of the story is all positive. I arrived back at the trailhead about 4.5 hours after starting, longer in time and distance than I had planned but a fun route overall. My bike was happily unmolested and the tires were full of air. After a quick shoe change, I bombed down the dirt road on the road bike, ~30mph with barely room to think. I honestly think I was more aggressive mainly because I had just been running, slowly, and it was nice to cruise without pedaling. I almost lost it on the last corner, as there was too much sand to brake comfortably, but I needed to stay on the road! Luckily the skinny tires held up and I was on my way. As it was now mid-afternoon, some clouds were threatening from the west, so I tried to 'hurry', as much as I could. The rain held off in the canyon, but not the idiots: a gaggle of tourists were pulled over for bighorn sheep pics, including one minivan parked in the shoulder/traffic lane (with the family still inside, taking pictures, too lazy to park legally and walk); shortly after that, some guys went by thinking it was funny to blast an airhorn at riders as they passed. I made it to Loveland without rain, but then lightning started striking within a mile to the east. Again I hurried, and on one of the country roads, with no other cars around and me hurrying to avoid lightning on the edge of the road, a car honked at me and tapped the brakes, unsure of how to share the road competently with some guy trying to stay out of the way and get home safely to his wife. So after a fabulous day, I was in a bit of a foul mood. I think many anti-cyclist motorists really cannot fathom riding a bike on the road, and see people on bikes only as an inconvenience riding for entertainment, without admitting that they themselves are, nearly all of the time, driving for fairly useless errands.
They lack the experience and imagination to realize that I very well could have driven a car to the trailhead instead, at the cost of clogging the road up even more, as well as other negative economic externalities (increased road maintenance costs, pollution, a statistical accident threat to them). Why not honk at all the joyriders and tourists clogging up the road instead? Or at people driving a few miles every day to work? Because somehow skinny guys on skinny tires and 20 pounds of bike stick out more and are easier to harass. I'm not trying to prove any point; I pose no risk and just want to be left alone to ride safely. I just like to ride my bike and stay the heck out of the way.
The ride there was fairly uneventful: I strapped most of my extra weight (trail shoes and clothes) under my seat, which kept most of the weight off my back. I arrived at Dunraven Glade Rd in a couple hours, and finally got a chance to ride something new. I knew the trailhead was a couple miles or so up the road, and I vaguely recalled that it was an unimproved road. Well, it's a 2.3 mile grind up a hardpacked, washboarded dirt road, and I couldn't even muster double-digit speeds. Given the choice again, though, I wouldn't hesitate to bring the road bike, as this was only a small part of the ride. Still, it slowed me down from my estimated run starting time, and fatigued my legs more than I anticipated. But I arrived at the trailhead, which has a restroom and a lot that was full of cars, with no bike rack as usual, so I hitched the bike to a wooden fence by the trailhead sign.
Generally, the beginning of the trail is nice and shaded, friendly and accessible to families taking a stroll in the woods, as well as horseback riders. I'm not a horse guy, but if I were, this is a pretty good spot with the shade and lack of mt. bikes, and every horse group I encountered was polite and respectful as we yielded to the powerful but slower horses. Otherwise, this trail is a nice but longer backdoor entry into RMNP. It's about 4.3 miles to the RMNP boundary, and beautiful uncrowded campsites line the trail. Most of the traffic at this point, if any, is friendly backpackers. Of course, along the way is the Big T as a reliable water source. Continuing on, one can hear the rumble of Lost Falls, and shortly thereafter, just over 8 miles in, reach the fork for Lost Lake or Stormy Peaks. Note that the otherwise reliable and enjoyable "Afoot and Afield" pegs the Lost Lake trail as a 14.3 mile roundtrip, but this cannot be correct. As another aside, it is my observation that "Lost Lake" and "Blue Lake" are the two most common lake names in Colorado.
Anyway, it appears that the (left) fork to Lost Lake is more popular as a backpacking destination, but I wanted to hit tundra, so I took the right fork to Stormy Peaks. The pass is designated as 1.6 miles from the fork. Quickly, the trail steepens and switchbacks up above treeline. While a bit rockier, the trail is in great condition, and was being worked on by three young men who had obviously hauled equipment quite far up the trail. They were tarp camping and had some beer in the river: what more do you need?
Anyway, the open views into RMNP were stunning, and the South Stormy Peaks campsite was the best one of all. I slowed to a fast hike up to the pass. Here I saw a party of 3, but they were heading south for more views. Stormy Peaks are the higher points to the right, so I just headed straight up some boulders to the top. I was rewarded with even better views of both RMNP and Pingree Park, some wind, and a new summit register. Just a few people seem to trickle in every weekend, plus a large party from Wausau, WI (just typing that summons forth the smell of the paper factory when driving through that town, but I say that fondly, as it was a gateway to going "Up North" as a kid). After some time up top, I headed back down, and found the trail to be quite runnable in this direction (meaning, I was too tired to run up it, but it's quite pleasant to run down).
I stopped at Lost Falls to refill on water, and met a friendly backpacking couple, Justin and Allison (sp?) from Nebraska. Justin had been coming here for quite some time and said it has gotten busier and more popular, though even I couldn't complain about the crowds. I only had my UV steripen; he offered to let me use his filter, but I wanted to continue the 'experiment' with the steripen alone (plus my bandanna 'pre-filter') so I'd know whether it worked on the 3 liters I tried. (Now that it's been over a week, I'm still happy with the results.) They were staying until Monday, and it looks like they had a great weekend of weather. My water gathering and break took a little longer than expected, but soon I was back on the trail for an uninterrupted stretch of a few more hours of running. It was still quite pleasant in the shade and I wasn't in any hurry. I passed a solo hiker and spaced out a bit: right after passing him I biffed on a small rock, slid forward, and dropped my bottles. He asked if I was OK; I was mostly embarrassed at having a witness!
Anyway, all told it was a great ride and run. I look forward to returning and exploring Lost Lake someday.
We got a late (8AM) start, but it was nice out and I wanted to get above treeline. So, we took a leisurely ride to Cameron Pass. Diamond Peaks are well known for fantastic backcountry skiing, but not as popular for hiking. It's a shame, but it also meant we'd get a quiet hike.
Our original plan was to hike up to Montgomery Pass, follow the ridge to the Diamond, descend the DP trail to CP and return via the Cameron Connector trail. Once we arrived at the Montgomery/Joe Wright trailhead, though, the fierce North wind made me decide that having the wind to our back on the ridge would be a better bet. Still, this was a good spot for a potty break and a moose sighting!
We continued up to the Cameron Pass lot and started up. The summer trail is barely defined, but essentially follows a small gully straight up. It's not much more than a mile to the top, and, if motivated, someone could get up there in 20-some minutes.
On the summit, we had expansive views in either direction.
Feeling confident at a shorter hike in a familiar area, on a bluebird day, we relaxed by ourselves and enjoyed lunch and beverages.
Amazingly, descending 20 feet or so put us out of the wind into a completely calm summer day. I was also intrigued by the long obvious ridge stretching out parallel to the Poudre, and we enjoyed spotting Clark Peak as the highest along the ridge, having done that together from Blue Lake last month. We proceeded along the ridge, then decided to cut down below a snowfield along a trail, looking for passage into the woods. Suddenly, I saw motion, as a BFM trotted within 15 yards of us, directly across the most obvious trail along the top of the forest! It was neat seeing them earlier from a distance, but not by ourselves in the middle of the woods. Luckily, he was completely disinterested in us and kept moving. But, that meant we had to backtrack or head straight downhill. We decided to bushwhack straight into the woods, giving enough space between us and the BFM, before proceeding back along the contour of ridge. I knew there were a few intermittent stream crossings, and once we found one, it was easy to follow it all the way down. We found the blaze of the Cameron Connector, and followed it most of the way back, skirting a few swampy meadow areas. There was still an impressive wildflower display near the river, though clearly the peak had passed.
DP is on the list for quick access to tundra, solitude, wildflowers, and meese sighting.
Originally, we talked about doing the Stonewall Century this weekend, but decided against it due to general lack of interest (for now). I had been wanting to do this ride since my friend Will told me about it a few years ago, and it fit the bill of the kind of organized ride I like: scenic, affordable, a fair amount of climbing, small-town feel, and far enough from home to justify driving there to ride some different roads. I was a bit bummed to hear that this was the final year, but I have since learned it will be back next year (with a new director) -- huzzah!
Instead, we looked at another hike. We can't get enough of the Sawatch, so we decided to return there. Problem was, the weather was fairly unsettled across Colorado for the weekend. It looked sucky for sure in the Fort, at least for Saturday, so it would be nice to leave. Still, I ruled out some longer, all-day hikes to minimize suffering due to weather. I was still hopeful that we could do Huron Peak from Lulu Gulch. The question was: would the weather cooperate?
The answer was mixed. Heavy rain was predicted through midnight Friday night, and then some clearing and lower probability (but still ~30%) for rain. This wasn't too far off the same forecast for our last couple of trips near Leadville, which ended up having decent weather most of the morning at least. We were both kind of tired from work, and I was unable to leave early enough, so we decided to set the alarms for 2 and head off in the morning. One concern, though, was driving right through the start of the Leadville 100 Trail Race (mt. bike), which started at 5:45 AM with a fairly sizeable field, spectators, and a guy named Lance.
Luckily, this part of the plan went off without a hitch. We got into Leadville at 5:02AM, which means that "Provin' Grounds" was open with fresh coffee! I refilled my cup and headed out of town, careful to avoid light-less racers warming up on the road, and just got through before they closed Main St.
Did I mention it was 38 degrees and rain/snow over Fremont Pass?
From Leadville into Chaffee County, though, the weather cleared and I could see stars, along with a barely brightening Eastern horizon. Going down CR-390, though, there was a decent amount of fog, but also consistent dark clouds and rain sitting in the valley.
Still, we parked at South Winfield and headed up toward Huron. The trailhead and camping spots were noticeably less crowded than a few weeks ago, but still a fair amount of traffic. Once we split off the main 4x4 road up toward Lulu Gulch, though, we didn't see anyone else. But, that's also when we heard thunder, at 7AM.
It was distant enough and we were still well within treeline that we held out hope it would clear up. Instead, the rain intensified, and the temperature dropped. Also, this peak is known for phenomenal views, being distant from paved roads and all -- would it be worth it to get up there -- cold, wet, and anxious about storms -- only to see more clouds and rain?
We turned back and headed for the car. Discussing our options, we decided to sleep in the car for an hour or two, wait out the weather, and check on our options. I had a backup plan to hike Quail Mountain, so we headed to that (empty) trailhead, and napped.
Around 9:30, the sun was out among puffy and fast-moving but still non-threatening clouds, so we headed up Sheep Gulch, an original (but now bypassed) part of the Colorado Trail toward the saddle between Quail Mountain and Hope Mountain. I didn't know it at the time, but the pass over the saddle is Hope Pass -- the high point near the middle of the Leadville 100 Trail Run.
Also, Quail Mountain itself has in the past been studied for development as a ski resort. Since I've fallen in love with that area, I'm glad this nightmare never came to pass (apparently, the yearly snow totals were too low).
The trail has a steep immediacy, but is very well constructed and proceeds through dense aspen; it also looks like a fun way to approach Hope. We eventually broke out above treeline, revealing fabulous views of Missouri Gulch.
We took in the great views, but not for long. We decided not to climb Quail Mountain, but instead head back to treeline, as the clouds continued to threaten from the southwest.
More clouds came in but still no rain. We saw our first and only people, a friendly couple, exploring the bottom part of the trail: all in all, a great hike for views and solitude. At the bottom, we set up camp down the road, took a nap outside, and grilled up veggie burgers and corn for lunch. We hung out the afternoon reading and napping, and the weather held up, but we decided to head back that night. After, of course, a mandatory stop at Eddyline Brewpub in BV!
This is going to be my place for trip/route postings. I'm starting to get old and stuff is beginning to blur together.
1. The administration has committed itself to reducing the fiscal deficit to below 2 percent of GDP by the end of the decade. Following a substantial widening in recent years, this would bring the budget deficit below its long-term average relative to the size of the economy, but fall short of the administration’s initial objective of maintaining an overall surplus equal to that of the Social Security trust funds (OMB, 2001; Figure 1). Moreover, as a number of analysts have noted, the adjustment relies heavily on the effect of the cyclical rebound, the expiration of some of the recent tax cuts, and adherence to strict spending limits, with the deficit reduction after FY 2006 being relatively modest.
5. There are a number of important parallels between the current fiscal cycle and the expansion that occurred during the 1980s. In the earlier period, priority was given to boosting military spending, cutting taxes to strengthen the supply side of the economy, and stimulating activity in response to the 1981–82 recession. As a result, the fiscal deficit widened by 4½ percent of GDP between FY 1979 and FY 1983, reaching a peacetime record of 6 percent of GDP (Figure 2, next page). Similar priorities have caused an even larger shift in the budget balance in recent years, totaling 7 percent of GDP between FY 2000 and FY 2004. Although the overall deficit has not reached the same level as during the 1980s, the primary balance (i.e., excluding interest payments) has declined to a comparable level (Figure 3).
6. The two episodes differ in important ways, however, with the fiscal expansion in the 1980s primarily caused by higher spending. Federal expenditures rose by 3½ percent of GDP between 1979 and 1983, most of which resulted from an increase in entitlement spending (Figures 4 and 5). While defense expenditure also increased, this was partly offset by cutbacks in other discretionary spending categories. Altogether, outlays accounted for almost three-quarters of the deficit increase in the early 1980s, compared to less than a third in the recent fiscal expansion.
Source: OMB (2004); and Fund staff calculations.
7. The revenue effect of the “Reagan tax cuts” was considerably smaller than in the current period. Among other measures, the Economic Recovery Tax Act (ERTA) of 1981 lowered marginal income tax rates by around one quarter across the board—contributing to a 2½ percent of GDP drop in personal income tax revenues over two years—and accelerated depreciation schedules, providing significant corporate tax relief. Although the short-term revenue loss caused by ERTA exceeded that of the recent tax cuts, parts of this tax cut were quickly reversed as the fiscal position deteriorated in subsequent years (Penner, 2003; Steuerle, 2004; Tempalski, 2003). Personal income tax revenues therefore changed relatively little during the expansion in the early 1980s as a share of GDP, although corporate tax revenues declined by 1½ percent of GDP. By contrast, the more recent tax cuts have been largely targeted at households, with personal income tax revenues expected to fall by more than 3 percent of GDP between FY 2000 and FY 2004.
8. The fiscal position proved difficult to correct during the 1980s, despite strong economic growth. Federal revenues remained stable relative to GDP throughout the decade, owing to the economy expanding 4 percent on average between 1983 and 1989, and a series of legislated tax increases. Non-military discretionary spending cuts were sustained, and defense expenditure began to decline in the second half of the decade. Nevertheless, the deficit dipped only briefly below 3 percent of GDP in 1989, before being pushed up again by the 1991 recession, the costs of dealing with the S&L crisis, and the Gulf war. In structural terms, the deficit fell by around 1 percent of GDP during the late 1980s (Figure 6).
1/ The CBO’s cyclically-adjusted balance has been altered by subtracting the difference between capital gains tax revenues and their long-term historical average.
Individual income tax receipts rose by 2¾ percent of GDP, propelled by an increase in top marginal tax rates in the 1993 Omnibus Budget Reconciliation Act, strong income growth especially in the higher tax brackets, and a booming stock market.
Defense spending was roughly cut in half as a share of GDP between its peak in the 1980s and the end of the 1990s, accounting for the bulk of the expenditure reduction. Nondefense discretionary outlays were also contained, owing in large part to the spending caps imposed by the 1990 Budget Enforcement Act. Mandatory outlays even fell somewhat, benefiting from a temporary drop in fertility rates during the Great Depression that affected the number of retirees 60 years later (Penner, 2003). Finally, a decline in interest rates and shrinking public debt ratios caused net interest payments to drop by almost 1 percent of GDP.
11. On the expenditure side, many of the factors supporting fiscal consolidation in the 1990s are also no longer in place. Expenditure on defense and homeland security has increased after 2001, but remains far below the levels reached in the 1980s. Geopolitical uncertainties and security concerns are likely to persist over the foreseeable future, suggesting that large-scale reductions in this category are unlikely to materialize. Keeping a lid on mandatory spending also would appear more difficult as the retirement of the baby boomers is now imminent; and interest costs are projected to rise as the downward cycle in interest rates may have finally come to an end.
12. The FY 2005 budget lays considerable emphasis on reversing recent increases in federal spending. Federal outlays have picked up since the late 1990s, reflecting weakening budget discipline in the face of strong revenue gains (de Rugy, 2004; Kell, 2004) and, more recently, the reaction to geopolitical developments. Outside defense and homeland security, spending rose mainly in entitlement programs, reflecting initiatives such as the No Child Left Behind Act for education and the introduction of a new prescription drug benefit for the Medicare program. The recent budget proposes to stiffen expenditure discipline considerably over the next five years. While spending on defense would be reduced to 3¼ percent of GDP, other discretionary outlays would drop to 3 percent of GDP by FY 2009, equivalent to a 2 percent per year decline in real terms (Table 1).
Source: IMF, World Economic Outlook; and OECD.
14. Imposing spending discipline may be difficult in the absence of a robust medium-term expenditure framework. Congressional appropriations have typically exceeded initial budget proposals (Figure 8), which has been ascribed to efforts by members of Congress to secure federal spending for their constituencies (CAGW, 2004). In recent years, this trend appears to have increased (de Rugy, 2004), and prospects are uncertain for agreement on a FY 2005 budget resolution and a full set of appropriations bills. This could again necessitate spending authorizations through an omnibus spending bill, which in the past has resulted in an easing of fiscal discipline in order to secure passage.
15. Expenditure discipline could be strengthened by reinstituting budget control mechanisms similar to those contained in the Budget Enforcement Act (BEA) that expired in 2002. Some of these were included in the Spending Control Act proposed by the administration, which would have required offsets for budget proposals increasing long-term unfunded liabilities and limited the scope for “emergency” legislation and other instruments used to circumvent the BEA in the late 1990s. Congress recently rejected this proposal, however, with some members expressing concern that it would have largely exempted tax cuts from mandatory offsets under pay-as-you-go (PAYGO) rules and reduced the period for scoring the budgetary impact of policy proposals from ten to five years. The administration was also seeking legislation to restore the “line-item veto”, which would provide the President with the authority to reject new individual appropriations, mandatory spending proposals, as well as a limited range of tax cuts.
16. While the budget would reduce nondefense spending by about $30 billion relative to “current services” estimates, considerably larger cuts can be envisaged. Rivlin and Sawhill (2004) identify a range of measures worth $68 billion (½ percent of GDP) that could help improve the quality of government spending without affecting major spending priorities (Table 2). These include cutbacks in agricultural, commercial, and trade subsidies, as well as reductions in other low-value government spending. Slivinski (2001) and Edwards (2004a) provide more far-reaching suggestions for terminating, privatizing, or devolving to states a wide range of federal programs, amounting to total savings of $300 billion (2½ percent of GDP) per year. While agencies such as the Army Corps of Engineers, the FAA, and others would be privatized, the bulk of savings would be achieved by eliminating the Departments of Education and Housing and Urban Development (Table 3). However, even the more limited measures proposed by the administration have raised concerns over their legislative viability (Greenstein and Kogan, 2004) and distributional impact (e.g., Steuerle, 2003).
Source: Rivlin and Sawhill (2004).
17. Shifting responsibilities and reducing federal transfers to the states are also among the options considered. This could, in principle, improve finances at the federal level and impose additional fiscal discipline on states. However, the bulk of federal transfers to states is being used to finance priority areas such as health care and education, and the scope for achieving savings on a general government basis may therefore be relatively small.42 Moreover, state and local governments have already gone through several rounds of cost cutting, following a severe post-2000 decline in revenues and the expansion of state responsibilities in the late 1990s. Indeed, states may come under increasing pressure to raise revenues to respond to a 7–8 percent annual trend increase in Medicaid spending, which already accounted for about a fifth of total state expenditure in 2002.
18. Future tax policy will depend partly on the fate of the 2001–03 tax cuts and the increasing reach of the AMT. Most of the tax cuts implemented in recent years are slated to expire by the end of FY 2010 at the latest (Table 4), requiring new legislation to make them permanent. The FY 2005 budget has included such a proposal, but there is a likelihood that many tax cuts will continue to be extended by Congress on a temporary basis only. For illustrative purposes, the cost of making the tax cuts permanent is estimated at around 2 percent of GDP per year by FY 2014 (CBO, 2004; Gale and Orszag, 2004); if this were combined with permanently extending and indexing AMT relief, the cost would rise to about 3¼ percent of GDP ($500 billion) per year.
Sources: OMB (2004); Joint Committee on Taxation; and Gale and Orszag (2004).
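The cost figures in the paragraph above imply a particular GDP projection, which can be checked with back-of-envelope arithmetic. The sketch below uses only the $500 billion and 3¼ percent figures quoted in the text; it is a consistency check, not an independent estimate.

```python
# Back-of-envelope check: if permanent tax cuts plus AMT relief would cost
# about 3.25 percent of GDP, or roughly $500 billion per year by FY 2014,
# the implied nominal GDP projection is:
cost_share_of_gdp = 0.0325          # 3 1/4 percent of GDP (from the text)
cost_dollars = 500e9                # about $500 billion per year (from the text)

implied_gdp = cost_dollars / cost_share_of_gdp
print(f"Implied FY 2014 GDP projection: ${implied_gdp / 1e12:.1f} trillion")

# The tax cuts alone (2 percent of GDP in the text) would then cost roughly:
tax_cuts_only = 0.02 * implied_gdp
print(f"Tax cuts alone: ${tax_cuts_only / 1e9:.0f} billion per year")
```

The implied GDP of roughly $15½ trillion is broadly in line with ten-year budget projections made around 2004.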
19. Although tax revenues may need to increase over the medium term, reversing recent cuts in marginal income tax rates may not be optimal. The revenue effect of reversing the tax cuts would be substantial, but many analysts have argued that tax cuts are needed to offset bracket creep caused by rising real incomes and to contribute to expenditure discipline. Moreover, although U.S. marginal income tax rates are currently not high by international standards, a comparison with other industrial countries suggests that U.S. taxation is primarily income-based (Box 1). Tax policy could therefore seek to preserve the efficiency-enhancing effects of the recent tax cuts through measures to broaden the tax base and increase the share of consumption-based taxes.
20. There remains considerable scope for simplifying the income tax structure and broadening the tax base. In 2003, the U.S. personal income tax system provided $675 billion (6.2 percent of GDP) worth of tax credits and exemptions, many of which are targeted at specific economic activities or particular groups of taxpayers. As documented in CEA (2003), among others, these tax expenditures add to the complexity of the U.S. tax system, increase compliance costs, and can give rise to unproductive behavior aimed at tax avoidance. The largest tax expenditures consist of exemptions for employer contributions to medical insurance premiums and health care, private pension contributions, mortgage interest payments, and investment income from life insurance policies (see Box 1). Eliminating some of these exemptions could be designed to generate significant revenues, restore some of the progressivity that the U.S. tax system lost in recent years, and improve allocative efficiency (Table 5).43 For example, the maximum mortgage amount on which interest is deductible from income could be gradually reduced from its present $1 million to limit tax expenditures on wealthier households. At the same time, proponents of health care reform have questioned the employer-based focus of U.S. health care insurance (Cutler, 2004), which suggests that reviewing tax incentives for corporate health care contributions could be part of a broader medical reform.
Sources: CBO (2003); OMB (2004); Rivlin and Sawhill (2004); and Fund staff calculations.
Reflecting the smaller size of the public sector compared to other industrial countries, U.S. general government revenues are relatively low. Prior to recent tax cuts, the revenue-to-GDP ratio reached a high of almost 30 percent in 2000, but has since dropped to 26 percent, the lowest among the G-7.
Compared to most other countries, the United States relies heavily on direct taxes. Almost half of the general government tax revenue comes from income taxes, the highest level in the G-7 and well above the OECD average of 36 percent. Individual income taxes contribute over 40 percent of revenue, greatly exceeding the G-7 average of 28 percent. Direct taxes on corporations raise 8 percent of revenues, near the G-7 average.
U.S. reliance on direct taxes is even more pronounced at the central government level, as fully 90 percent of federal tax revenues (excluding taxes dedicated to Social Security and Medicare) are from these sources. Other G-7 central governments draw about 50 percent of revenue from direct taxes and some 40 percent from taxes on goods and services. Because of the lower overall tax burden in the United States, however, income taxes only account for 14 percent of GDP, near the G-7 average of 13 percent.
Despite the heavy reliance on direct taxes, top U.S. marginal tax rates on personal income are the second-lowest in the G-7. While France, Germany, and Japan apply lower rates to taxable income below US$25,000, the U.S. tax system applies a more generous treatment to families, including through exemptions and tax credits, and the highest marginal tax rate is reached at higher levels of income than in most other countries. A married couple with two children and income of an average production worker pays 60 percent of the taxes paid by a single taxpayer with no children and the same income, the most preferential ratio in the G-7.
1/ Central government revenues in brackets.
2/ Unweighted average. Average for federal countries includes Austria, Belgium, and Switzerland.
The phase-out of tax benefits makes marginal tax rates somewhat less favorable at lower income levels, as the marginal labor tax wedge for the family in the previous example is 54 percent, compared to 34 percent for the single person. The marginal labor tax wedges in other G-7 countries average 50 percent and 47 percent, respectively.
Tax rates on corporate and capital income display relatively little variance across countries. In 2003, effective average corporate tax rates in the G-7 ranged from 26 percent in the United Kingdom to 37 percent in Japan, with the U.S. at just under 33 percent (see Devereux and others, 2002, for methodology). Statutory rates, including local taxes, ranged from 30 percent in the U.K. to just over 40 percent in Italy and Japan, with the U.S. at 39 percent. Effective tax rates on capital range from 21 percent in Germany to 37 percent in Canada, with the U.S. at 27 percent (before the recent tax cuts), near the OECD average (Carey and Rabesona, 2002).
Federal tax expenditures greatly reduce individual and corporate income tax revenue. Tax expenditures comprise about 45 percent of potential individual income tax revenues, and 40 percent of potential corporate income tax revenues. While tax expenditures targeted at corporations are projected to decline to about 10 percent after the expiration of an accelerated depreciation provision, other tax expenditures will remain high. Many of the costliest tax breaks are focused on health insurance and pension contributions. Housing is also treated more favorably than in other G-7 countries. The United States provides a large tax break for mortgage interest, state and local property taxes are deductible at the federal level, and up to $500,000 in capital gains are tax-exempt upon the sale of a house.
The U.S. proportion of revenue raised from Social Security contributions and property taxes is close to that of other countries. Social Security contributions accounted for 25 percent of revenue in 2001, near the G-7 average of 28 percent. Eleven percent of revenue came from property taxes, similar to Canada, Japan, and the United Kingdom, although the Euro area countries generally derived much less revenue from this source.
1/ Unweighted average. Average for federal countries includes Austria, Belgium, and Switzerland.
Goods and services carry a significantly lower tax burden in the U.S. than in other countries. Taxes on goods and services provide 16 percent of general government revenue, the lowest level in the OECD and about two-thirds the G-7 average. Taxes on both general consumption and on specific goods and services are the lowest in the OECD. The absence of a national VAT is the main factor in this difference, with U.S. state and local governments relying more on taxes on goods and services than their counterparts in other federal countries. U.S. state and local governments draw 40 percent of revenues from taxation of goods and services, and 30 percent each from property taxes and taxes on income, profits, and capital gains.
22. Raising energy taxes could yield fiscal and other benefits. From an international perspective, energy use in the United States is relatively lightly taxed, even accounting for geographical and climatic idiosyncrasies of the U.S. economy (Figure 9). Introducing higher energy taxes could raise fiscal revenues as well as help encourage more efficient U.S. energy use. For example, estimates suggest that raising gasoline taxes by 20 cents per gallon could yield around ¼-½ percent of GDP in revenues, although a part of this amount might be used to alleviate the impact of higher prices on certain energy users (e.g., rural households) and secure social acceptance (Prust and Simard, 2004). The overall macroeconomic impact would likely be relatively modest, in part because reduced demand could have a beneficial impact on global oil prices.
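The gasoline-tax estimate above can be cross-checked with rough arithmetic. The consumption and GDP figures below are outside assumptions for illustration (U.S. gasoline use was on the order of 140 billion gallons per year around 2004, with nominal GDP near $11.7 trillion); they are not numbers from the text.

```python
# Rough check of the gasoline-tax estimate. The consumption and GDP
# figures are assumptions for illustration, not figures from the text.
tax_per_gallon = 0.20               # 20 cents per gallon (from the text)
gallons_per_year = 140e9            # assumed annual gasoline consumption
gdp = 11.7e12                       # assumed 2004 nominal GDP

revenue = tax_per_gallon * gallons_per_year
share = revenue / gdp
print(f"Revenue: ${revenue / 1e9:.0f} billion, or {share:.2%} of GDP")
```

This static calculation lands near the low end of the ¼–½ percent range quoted in the text, consistent with the estimate allowing for offsets and behavioral responses.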
23. Finally, a federal VAT or sales tax could be considered, providing a source of revenue that is considered to have the least distortive impact on economic activity. Experience from other industrial countries suggests that a federal VAT could yield about ½ percent of GDP per percentage point (somewhat below the theoretical maximum corresponding to the two-thirds share of consumption in national income). By 2014, a VAT rate of 4 percent could therefore contribute as much as $300 billion in additional revenues.45 Given the highly diverse tax systems among the states, a VAT would need to be carefully designed to gain widespread acceptance, contribute to economic efficiency, and limit transition costs (e.g., Keen, 2001). Nevertheless, a VAT could help bring services—which are largely sales-tax exempt—under the tax net, improve intergenerational equity by implicitly taxing retiree wealth, and provide a flexible and relatively efficient means to respond to future budgetary shortfalls.
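The VAT rule of thumb quoted above lends itself to a quick consistency check. The sketch below uses only the ½-percent-of-GDP-per-point yield and the 4 percent rate from the text to back out the implied 2014 GDP projection.

```python
# The VAT rule of thumb: about 0.5 percent of GDP in revenue per
# percentage point of the VAT rate (from the text).
yield_per_point = 0.005             # ~0.5 percent of GDP per VAT point
vat_rate_points = 4                 # a 4 percent VAT (from the text)

revenue_share = yield_per_point * vat_rate_points   # share of GDP raised
implied_gdp_2014 = 300e9 / revenue_share            # solve for GDP given $300bn
print(f"Revenue share: {revenue_share:.0%} of GDP")
print(f"Implied 2014 GDP: ${implied_gdp_2014 / 1e12:.0f} trillion")
```

A 4 percent VAT raising 2 percent of GDP is consistent with the $300 billion figure only under a GDP projection of about $15 trillion, in line with the other projections in this chapter.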
24. While not an alternative to fundamental long-term reform of the Social Security and Medicare programs, more immediate measures could be taken to improve their financial position. Reforms of entitlement programs have traditionally protected workers in or near retirement by phasing in changes over a long time horizon, sometimes a decade or more. Nevertheless, relatively marginal changes could be effected over a shorter time horizon without significantly affecting the underlying structure of the programs. This could help delay the time when the Social Security and Medicare trust funds are expected to run out of funds and provide greater scope for the implementation of broader reforms.
25. Social security benefits could be more closely aligned with changes in the cost of living and improvements in life expectancy (Greenspan, 2004; Diamond and Orszag, 2004). Social Security benefits have automatically been adjusted to keep pace with increases in the Consumer Price Index, which has been found to overstate growth of the cost of living of the population as a whole. Adjusting benefit levels in line with a chained consumer price index computed by the Bureau of Labor Statistics would slow the growth of benefits while preserving the principle that they be maintained in real terms (Table 6). Moreover, although the increase in the Social Security retirement age from 65 to 66 years will be completed in 2005, the shift to 67 years is only slated to take place between 2017 and 2022. With longevity trends continuing to surprise on the upside, the next phase of the increase in the retirement age could be advanced.
Sources: Rivlin and Sawhill (2004); and Fund staff calculations.
26. Measures to broaden the burden of Social Security premiums could also be considered. The payroll tax used to finance Social Security is currently 12.4 percent on earnings up to $87,900, a ceiling that is adjusted annually for growth in average wages. Following the 1983 reforms, some 90 percent of covered earnings were under the payroll tax ceiling, but shifts in the income distribution have since reduced that number to around 85 percent (Rivlin and Sawhill, 2004). Rather than increasing payroll contribution rates, about ¼ percent of GDP in additional revenues could be garnered by restoring the initial 90 percent ratio (equivalent to increasing the payroll tax ceiling to about $130,000 in 2004).
28. This chapter has presented a range of fiscal options that could help prepare the U.S. fiscal system for the impending demographic transition. These measures would help reduce the budget deficit over the medium-term in a manner that would minimize the impact on, or even boost, economic efficiency and long-term growth prospects. This paper does not support a particular course of fiscal action, as there are obviously many combinations of measures possible (see Rivlin and Sawhill, 2004, for three fundamentally different consolidation scenarios). However, the size of the U.S. fiscal gap suggests that both expenditure and revenue measures will eventually be needed, in part because many of the factors that supported consolidation in the 1990s are no longer in place.
Auerbach, A.J., W.G. Gale, and P. Orszag, 2004, “Sources of the Long-Term Fiscal Gap,” Tax Notes, May 24, 1049–59.
Burman, L., W. Gale, and J. Rohaly, 2003, “The AMT: Projections and Problems,” Tax Notes, July 7, 105–17.
Carey, D., and J. Rabesona, 2002, “Tax Ratios on Labour and Capital Income and on Consumption,” OECD Economic Studies, No. 35, 129–74 (Paris).
Congressional Budget Office (CBO), 2004, The Budget and Economic Outlook: Fiscal Years 2005 to 2014 (Washington: U.S. Government Printing Office).
Congressional Budget Office (CBO), 2003, Budget Options (Washington: U.S. Government Printing Office).
Cutler, D.M., 2004, Your Money or Your Life: Strong Medicine for America’s Health Care System (New York: Oxford University Press).
de Rugy, V., 2004, “The Republican Spending Explosion,” Cato Institute Briefing Papers No. 87, March 3 (Washington: Cato Institute).
de Rugy, V., and T. DeHaven, 2003, “On Spending, Bush Is No Reagan,” Tax & Budget Bulletin No. 16, August (Washington: Cato Institute).
Citizens Against Government Waste (CAGW), 2004, Congressional Pig Book Summary (Washington). Available on the Internet at http://www.cagw.org.
Desai, M.A., 2003, “The Divergence Between Book and Tax Income,” Tax Policy and the Economy, Vol. 17, 169–206.
Devereux, M.P., R. Griffith, and A. Klemm, 2002, “Corporate Income Tax Reforms and International Tax Competition,” Economic Policy, Vol. 17, No. 35, 451–95.
Diamond, P.A., and P.R. Orszag, 2004, Saving Social Security: A Balanced Approach (Washington: Brookings Institution Press).
Edwards, C., 2004a, “Downsizing the Federal Government,” Policy Analysis No. 515, June 2 (Washington: Cato Institute).
Edwards, C., 2004b, “Federal Aid to the States: Ripe for Cuts,” Tax & Budget Bulletin No. 20, May (Washington: Cato Institute).
Francis, W., 2003, “The FEHBP as a Model for Medicare Reform: Separating Fact from Fiction,” Executive Summary Backgrounder No. 1674, August 7 (Washington: The Heritage Foundation). Available on the Internet at http://www.heritage.org/Research/HealthCare/bg1674.cfm.
Gale, W.G., and P. Orszag, 2004, “Should the President’s Tax Cuts Be Made Permanent?” Tax Notes, March 8, 1277–90.
Gokhale, J., and K. Smetters, 2003, Fiscal and Generational Imbalances: New Budget Measures for New Budget Priorities (Washington: AEI Press).
Greenspan, A., 2004, “Economic Outlook and Current Fiscal Issues,” Testimony before the Committee on the Budget, U.S. House of Representatives, February 25. Available on the Internet at http://www.federalreserve.gov.
Greenstein, R., and R. Kogan, 2004, “Administration’s Proposed Discretionary Spending Caps Represent Unsound and Inequitable Policy,” March 1, Center on Budget and Policy Priorities (Washington). Available on the Internet at http://www.cbpp.org/3-1-04bud.htm.
Keen, M., 2001, “States’ Rights and the Value Added Tax: How a VIVAT Would Work in the United States,” Proceedings of the National Tax Association, 195–200.
Kell, M., 2004, “Budget Enforcement Act and Options for Reform,” in U.S. Fiscal Policies and Priorities for Long-Run Sustainability, edited by M. Mühleisen and C. Towe, Occasional Paper No. 227 (Washington: International Monetary Fund).
Lav, I.J., and A. Brecher, 2004, “Passing Down the Deficit: Federal Policies Contribute to the Severity of the State Fiscal Crisis,” May 12, Center on Budget and Policy Priorities (Washington). Available on the Internet at http://www.cbpp.org/5-12-04sfp.htm.
Mühleisen, M., and C. Towe (eds.), 2004, U.S. Fiscal Policies and Priorities for Long-Run Sustainability, IMF Occasional Paper No. 227 (Washington: International Monetary Fund).
Office of Management and Budget (OMB), 2004, Fiscal Year 2005 Budget of the U.S. Government (Washington: U.S. Government Printing Office).
Office of Management and Budget (OMB), 2001, Fiscal Year 2002 Budget of the U.S. Government (Washington: U.S. Government Printing Office).
Penner, R., 2003, “Are Current Budget Deficits More Worrisome Than Those of the 1980s?” U.S. Fixed Income Monthly, November, 24–31.
Prust, J., and D. Simard, 2004, “U.S. Energy Policy: Role of Taxation,” in U.S. Fiscal Policies and Priorities for Long-Run Sustainability, edited by M. Mühleisen and C. Towe, Occasional Paper No. 227 (Washington: International Monetary Fund).
Rivlin, A., and I. Sawhill (eds.), 2004, Restoring Fiscal Sanity: How to Balance the Budget (Washington: Brookings Institution Press).
Silvinski, S., 2001, “The Corporate Welfare Budget: Bigger Than Ever,” Policy Analysis No. 415 (Washington: Cato Institute).
Steuerle, C.E., 2004, Contemporary U.S. Tax Policy (Washington: Urban Institute Press).
Steuerle, C.E., 2003, “The Incredible Shrinking Budget for Working Families and Children,” National Budget Issues No. 1, December, Urban Institute (Washington).
Tempalski, J., 2003, “Revenue Effects of Major Tax Bills,” Office of Tax Analysis Working Paper No. 81 (Washington: U.S. Department of the Treasury).
Yeaple, S., and S.S. Golub, 2004, “International Productivity Differences, Infrastructure, and Comparative Advantage,” unpublished manuscript. Available on the Internet at http://www.swarthmore.edu/SocSci/sgolub1/JIE-infrastructure-jan3004.doc.
Zegeye, A.A., 2000, “U.S. Public Infrastructure and Its Contribution to Private Sector Productivity,” Bureau of Labor Statistics Working Paper No. 329 (Washington: U.S. Department of Labor). Available on the Internet at http://www.bls.gov/ore/pdf/ec000040.pdf.
Prepared by Martin Mühleisen and Andrew Swiston.
The fiscal year runs from October 1 to September 30.
In 1983, Congress enacted pension reform measures, including a gradual increase from 65 to 67 in the full retirement age that is being phased in through 2022.
Less adjustment would be needed if the economy grew stronger than expected, with the deficit falling an estimated 1½ percent of GDP for each ½ percentage point increase in the potential growth rate.
This paper only discusses fiscal measures needed to return the budget to balance over the medium-term. Measures to restore long-run budget sustainability, including offsets for the rising cost of entitlement spending, have recently been estimated to amount to some 7–10 percent of GDP (Gokhale and Smetters, 2003; Auerbach, Gale, and Orszag, 2004).
Up to one-half of the increase in the post-2000 deficit was the result of economic factors, however, with the loss of bubble-induced capital gains revenues having no parallels in the 1980s (Gale and Orszag, 2004; Mühleisen and Towe, 2004).
Nondefense discretionary spending declined by 13½ percent during FY 1981–84, compared to an increase of over 20 percent between FY 2001–04 (de Rugy and DeHaven, 2003). Discretionary spending is controlled by annual appropriations acts. Mandatory spending is provided by permanent law and does not require annual appropriations to ensure the continuation of spending.
See Chapter IV in this paper. Some analysts expect AMT revenues to defray the costs of about one third of recent tax cuts (Burman et al., 2003).
Yeaple and Golub (2004) found evidence that the provision of public infrastructure is associated with total factor productivity differences across countries. For the United States, Zegeye (2000) finds a small but positive impact of public infrastructure on output and productivity at the state and county level.
See Kell (2004) for a discussion of the BEA’s effectiveness in curbing expenditures in the 1990s.
While some analysts point out that grants to states have increased from 8 percent of federal spending in FY 1960 to 18 percent in FY 2004 (Edwards, 2004b), others have suggested that federal policies have contributed to fiscal problems at the state level in recent years (Lav and Brecher, 2004).
Some analysts have suggested that the 2001–03 tax cuts have been highly regressive in nature and, depending on the way they would eventually be financed, may indeed lead to an increased economic burden on low-income households (e.g., Gale and Orszag, 2004).
Existing inefficiencies in the corporate tax code could be increased under current House and Senate versions of the bill to repeal $5 billion worth of export subsidies provided under the Foreign Sales Corporation Act (FSC/ETI). The bills would offer up to $167 billion in tax relief over 10 years, which may only be partly offset by revenue-raising measures.
Contrary to the domestic debate on the VAT, which often centers on replacing the existing system of income taxes with a VAT, this paper suggests using the VAT as a complementary revenue source, given its regressive nature.
In addition to the introduction of Health Savings Accounts by the administration, other suggestions to strengthen market-based principles by providing Medicare participants with greater choice have cited the Federal Employees Health Benefit Plan (FEHBP) as an example (e.g., Francis, 2003). Although potentially large, the magnitude of savings from introducing an FEHBP-type approach nationwide remains to be specified.
The procedure for issuance of a European Professional Card (EPC) and the application of the alert mechanism provided for in Directive 2005/36/EC is to be supported by the Internal Market Information System (IMI) established by Regulation (EU) No 1024/2012 of the European Parliament and of the Council (2). It is therefore appropriate to provide rules on the procedure for issuance of EPC and on the application of the alert mechanism in the same implementing act.
The Commission carried out an assessment, with the involvement of the relevant stakeholders and the Member States, on the suitability of introducing the EPC for doctors, nurses, pharmacists, physiotherapists, mountain guides, real estate agents and engineers. Following that assessment the Commission has selected five professions (nurses, pharmacists, physiotherapists, mountain guides and real estate agents) for which an EPC should be introduced. The selected professions meet the requirements set out in Article 4a(7) of Directive 2005/36/EC as regards their current or potential mobility figures, their regulation in Member States as well as interest expressed by relevant stakeholders. The introduction of the EPC for doctors, engineers, specialized nurses, and specialized pharmacists needs further assessment in relation to their compliance with the conditions laid down in Article 4a(7) of Directive 2005/36/EC.
In accordance with Article 12 of Regulation (EU) No 1024/2012 the online tool referred to in Article 4b(1) of Directive 2005/36/EC should be separate from the IMI and should not enable external actors to access the IMI. It is therefore necessary to provide for detailed rules on the procedure for submitting applications for an EPC via that online tool as well as the rules on receiving EPC applications in the IMI by the competent authorities.
In order to provide transparent requirements it is also important to specify the conditions for requesting supporting documents and information from the applicants under the EPC procedure, taking into account which documents may be required by the competent authorities of the host Member State pursuant to Article 7, Article 50(1) and Annex VII to Directive 2005/36/EC. It is therefore necessary to set out the list of documents and information, including the documents that should be issued by the competent authorities of the home Member State directly, the procedures for verification of authenticity and validity of the documents by the competent authority of the home Member State and the conditions for requesting certified copies and translations. In order to facilitate the handling of an EPC application, it is appropriate to define the respective roles of all the actors involved in the EPC procedure: the applicants, the competent authorities of the home and the host Member State including the competent authorities assigned with the task of allocating EPC applications.
In accordance with Article 4b(1) of Directive 2005/36/EC, a home Member State may also allow for written applications for the EPC. It is therefore necessary to set out the arrangements that the competent authority of the home Member State should put in place in cases of written applications.
In order to ensure that the workflow in IMI is not disrupted or impaired and the processing of an application is not delayed, it is necessary to clarify the procedures concerning payments in relation to processing of an EPC application. It is therefore appropriate to provide that an applicant pays to the competent authorities of the home and/or of the host Member States separately and only if an applicant is required to do so by the competent authorities concerned.
In order to provide the applicant with a possibility to receive evidence of the outcome of the EPC procedure, it is necessary to specify the format of the document that the applicant will be able to generate via the online tool referred to in Article 4b(1) of Directive 2005/36/EC and to provide guarantees that the electronic document was issued by the relevant competent authority and that it was not modified by external actors. In order to make sure that EPC is not confused with documents giving automatic authorisation to practice in the host Member State in cases of establishment, it is appropriate to provide for inclusion of a disclaimer to this effect in the EPC document.
The EPC procedure can lead to the adoption of different types of decisions by the competent authority of the home Member State or of the host Member State. It is therefore necessary to define the possible outcomes of an EPC procedure as well as to specify, where appropriate, the information to be included in the electronic document stating the outcome of the EPC procedure.
To facilitate the task of the competent authority of the host Member State and to ensure that the verification of an issued EPC by the interested third parties is easy and user-friendly, it is appropriate to provide a centralised, online verification system of the authenticity and the validity of an EPC by the interested third parties that have no access to the IMI. That verification system should be separate from the online tool referred to in Article 4b(1) of Directive 2005/36/EC. Such verification of the EPC should not provide access for interested third parties to the IMI.
In order to ensure data protection in relation to the application of the alert mechanism, it is necessary to specify the roles of the competent authorities handling incoming and outgoing alerts and the functionalities of the IMI in withdrawing, modifying and closing alerts and ensuring the security of data processing.
In order to facilitate the restriction of access to personal data to only those authorities who need to be informed Member States should designate authorities assigned with the task of coordinating incoming alerts. Member States should only grant access to the alert mechanism to those authorities which are directly concerned by the alert. In order to ensure that alerts are sent out only in cases when they are necessary Member States should be able to designate authorities assigned with the task of coordinating outgoing alerts.
Processing of personal data pursuant to this Regulation is subject to Directive 95/46/EC of the European Parliament and of the Council (3), Directive 2002/58/EC of the European Parliament and of the Council (4) and Regulation (EC) No 45/2001 of the European Parliament and of the Council (5).
This Regulation lays down rules on the procedure for the issuance of the European Professional Card (EPC) pursuant to Articles 4a to 4e of Directive 2005/36/EC for the professions listed in Annex I to this Regulation and on the application of the alert mechanism provided for in Article 56a of that Directive.
1. Each Member State shall designate competent authorities responsible for EPC applications for each of the professions listed in Annex I for their entire territory or, where appropriate, parts thereof.
For the purpose of implementation of Article 7, each Member State shall assign to one or more competent authorities the task of allocating EPC applications to the relevant competent authority in its territory.
2. Member States shall register in the Internal Market Information System (IMI) established by Regulation (EU) No 1024/2012 at least one competent authority for each of the professions listed in Annex I to this Regulation, and at least one competent authority assigned with the task of allocating EPC applications in their territory by 18 January 2016.
3. The same competent authority may be designated as the competent authority responsible for EPC applications and as the competent authority assigned with the task of allocating EPC applications.
1. An applicant shall create a secured personal account in the online tool referred to in Article 4b(1) of Directive 2005/36/EC for submission of an EPC application online. This online tool shall provide information on the purpose, scope and nature of the data processing, including information about the rights of the applicants as data subjects. The online tool shall request the explicit consent of the applicants regarding the processing of their personal data in the IMI.
2. The online tool referred to in Article 4b(1) of Directive 2005/36/EC shall provide for a possibility for the applicant to fill in all necessary information related to the EPC application referred to in Article 4 of this Regulation, to upload the copies of documents required for issuance of the EPC under Article 10(1) of this Regulation and to receive any information on the progress in processing of his EPC application online, including on the payments to be made.
3. The online tool shall also provide for a possibility for the applicant to submit any additional information or document, and to request rectification, deletion or blocking of his personal data contained in the IMI file online.
other information specific to the regime referred to in point (f).
For the purposes of point (d) of the first subparagraph, if the applicant is not legally established at the moment of application, he shall indicate the Member State where he has obtained the required professional qualification. If there is more than one Member State where the applicant has obtained his professional qualifications, he shall choose the Member State that is to receive his EPC application from among the Member States that issued a qualification.
For the purposes of point (f) of the first subparagraph, if the applicant has not indicated the right regime, the competent authority of the home Member State shall, within one week of receipt of the EPC application, advise the applicant to resubmit the application under the applicable regime. Where appropriate, the competent authority of the home Member State shall first consult the competent authority of the host Member State.
Data relating to the identity of the applicant and the documents referred to in Article 10(1) shall be stored in the applicant's IMI file. That data shall be reusable for subsequent applications provided the applicant agrees to such reuse and the data is still valid.
1. The online tool referred to in Article 4b(1) of Directive 2005/36/EC shall transfer the EPC application to the IMI in a secure manner to be treated by the relevant competent authority in the home Member State referred to in paragraph 2 or 3 of this Article.
2. If the applicant is legally established in a Member State at the time of application, the IMI shall transfer the EPC application to the competent authority in the Member State where the applicant is legally established.
The competent authority of the home Member State shall verify whether the applicant is legally established in that Member State and shall certify the fact of legal establishment in the IMI file. It shall also upload any relevant proof of the applicant's legal establishment or add a reference to the relevant national register.
Where the competent authority of the home Member State is not in a position to confirm the applicant's legal establishment in its territory by any other means, it shall ask the applicant for evidence of his legal establishment within one week of receipt of the EPC application referred to in Article 4b(3) of Directive 2005/36/EC. The competent authority of the home Member State shall consider those documents as missing documents pursuant to Articles 4b(3) and 4c(1) or 4d(1) of Directive 2005/36/EC.
3. In cases referred to in the second subparagraph of Article 4 of this Regulation, the IMI shall transfer the EPC application to the competent authority of the Member State that issued the required professional qualification.
4. The competent authorities in other Member States that issued evidence of professional qualifications shall cooperate and respond to any requests for information from the competent authority of the home Member State or from the competent authority of the host Member State during the EPC procedure as regards the EPC application.
1. In cases where a Member State appoints more than one competent authority responsible for EPC applications for a given profession in its territory or parts of it, a competent authority assigned with the task of allocating EPC applications shall ensure that the application is sent without undue delay to the relevant competent authority in the territory of the Member State.
2. If the applicant has submitted the application to a Member State other than his home Member State as provided for in Article 6(2) or 6(3), the competent authority assigned with the task of allocating EPC applications in the Member State that received the application may refuse to treat the application within one week of receipt of the EPC application and shall inform the applicant accordingly.
1. If a Member State allows for the submission of written EPC applications and upon receipt of such a written application determines that it is not competent to deal with it pursuant to Article 6(2) or (3), it may refuse to examine the application and inform the applicant accordingly within one week of receipt of the application.
2. In case of written EPC applications, the competent authority of the home Member State shall fill in the EPC application in the online tool referred to in Article 4b(1) of Directive 2005/36/EC on behalf of the applicant on the basis of the written EPC application submitted by the applicant.
3. The competent authority of the home Member State shall send updates to the applicant about the processing of the written EPC application, including any reminders pursuant to Article 4e(5) of Directive 2005/36/EC, or any other relevant information outside the IMI in accordance with national administrative procedures. It shall send the proof of the outcome of the EPC procedure referred to in Article 21 of this Regulation to the applicant without delay after the closure of the EPC procedure.
1. If the competent authority of the home Member State charges fees for processing applications for the EPC, it shall inform the applicant via the online tool referred to in Article 4b(1) of Directive 2005/36/EC, within one week of receipt of the EPC application, about the amount to be paid, the means of payment, any references to be mentioned and the required proof of payment, and shall set a reasonable deadline for payment.
2. If the competent authority of the host Member State charges fees for processing applications for the EPC, it shall provide the information referred to in paragraph 1 of this Article to the applicant via the online tool referred to in Article 4b(1) of Directive 2005/36/EC as soon as the EPC application has been transmitted to it by the competent authority of the home Member State and shall set a reasonable deadline for payment.
in the case of the general system for recognition provided for in Chapter I of Title III of Directive 2005/36/EC, the documents listed in point 2 of part A of Annex II to this Regulation.
The competent authorities of Member States may only require the documents listed in part B of Annex II for issuing the EPC for temporary and occasional provision of services.
The documents referred to in points 1(d) and 2(g) of part A and points (a), (c) and (d) of part B of Annex II shall only be requested from the applicant if so required by the competent authority of the host Member State.
2. Member States shall specify the documents required for issuing the EPC and shall communicate this information to other Member States via IMI.
3. Documents required in accordance with paragraphs 1 and 2 of this Article shall be considered missing documents pursuant to Articles 4b(3) and 4c(1) or 4d(1) of Directive 2005/36/EC.
1. Where the competent authority of the home Member State has been designated as responsible under national laws to issue any of the documents required for the issuance of the EPC under Article 10, it shall directly upload those documents in the IMI.
2. By derogation from Article 10(3) of this Regulation, the competent authority of the home Member State shall not consider documents referred to in paragraph 1 of this Article as missing documents pursuant to Articles 4b(3) and 4c(1) or 4d(1) of Directive 2005/36/EC, where those documents have not been uploaded in the IMI in accordance with paragraph 1.
3. The online tool referred to in Article 4b(1) of Directive 2005/36/EC shall provide for a possibility for the applicant to upload copies of any required supporting documents issued by the competent authorities of the home Member State.
1. By derogation from Article 10(3) of this Regulation, if the applicant fails to provide any document referred to in points 2(c) and (d) of Part A or point (d) of Part B of Annex II to this Regulation with the EPC application, the competent authority of the home Member State shall not consider those documents as missing documents pursuant to Articles 4b(3) and 4d(1) of Directive 2005/36/EC.
2. The competent authority of the host Member State may ask for the submission of the documents referred to in paragraph 1 of this Article directly from the applicant or from the home Member State pursuant to Article 4d(3) of Directive 2005/36/EC.
3. If the applicant fails to provide documents following a request of the host Member State referred to in paragraph 2, the competent authority of the host Member State shall take the decision on the issuance of the EPC based on the information available.
1. The online tool referred to in Article 4b(1) of Directive 2005/36/EC shall provide for a possibility for the applicant to submit any document proving knowledge of a language, which may be required by the host Member State pursuant to Article 53 of that Directive after issuance of the EPC.
2. Documentary proof of knowledge of languages shall not be part of the documents required for issuing the EPC.
3. The competent authority of the host Member State may not refuse to issue an EPC based on a lack of the proof of knowledge of languages referred to in Article 53 of Directive 2005/36/EC.
1. In cases where the competent authority of the home Member State has issued any document required for the issuance of the EPC under Article 10, it shall certify in the IMI file that the document is valid and authentic.
2. In the event of duly justified doubts, where the required document was issued by another national body of the home Member State, the competent authority of the home Member State shall ask the relevant national body to confirm the validity and authenticity of the document. After receiving confirmation, it shall certify in IMI that the document is valid and authentic.
3. If a document was issued in another Member State, the competent authority of the home Member State shall contact via IMI the competent authority of the other Member State responsible for EPC applications (or other relevant national body of the other Member State registered in IMI) to verify the validity and authenticity of the document. After completion of verification, it shall certify in IMI that the competent authority of the other Member State has confirmed that the document is valid and authentic.
In cases referred to in the first subparagraph, the competent authorities of the other Member State responsible for EPC applications (or other relevant national bodies of other Member State registered in IMI) shall cooperate and respond without delay to any requests for information from the competent authority of the home Member State.
4. Prior to certifying the authenticity and validity of the document issued and uploaded in the IMI pursuant to Article 11(1) of this Regulation, the competent authority of the home Member State shall describe the contents of every document in the pre-structured fields of IMI. Where appropriate, the competent authority of the home Member State shall ensure that the information describing the documents submitted by the applicant through the online tool referred to in Article 4b(1) of Directive 2005/36/EC is accurate.
1. The competent authority of the home Member State shall inform the applicant within the time limits provided for in Articles 4c(1) and 4d(1) of Directive 2005/36/EC about a need to submit a certified copy only if the relevant national body in the home Member State or the competent authority or a relevant national body in another Member State failed to confirm the validity and authenticity of a required document pursuant to verification procedures set out in Article 14 of this Regulation and if such certified copies are required by the host Member State pursuant to paragraph 2 of this Article.
In cases referred to in the third subparagraph of Article 6(2) of this Regulation and in the event of duly justified doubts, the competent authority of the home Member State may require the applicant, within the time limits provided for in Articles 4c(1) and 4d(1) of Directive 2005/36/EC, to submit a certified copy of the evidence of his legal establishment.
2. Member States shall specify in IMI the documents for which they require certified copies from the applicant pursuant to paragraph 1 and shall communicate this information to other Member States via IMI.
3. Paragraphs 1 and 2 of this Article shall be without prejudice to the rights of the competent authority of the host Member State to request additional information or the submission of a certified copy in the event of duly justified doubts from the competent authority of the home Member State pursuant to Articles 4d(2) and (3) of Directive 2005/36/EC.
4. In the event of duly justified doubts, the competent authority of the host Member State may request the applicant to submit a certified copy and may set a reasonable deadline for response.
1. Member States shall specify in IMI the types of certified copies that are acceptable in their territory pursuant to the legislative, regulatory or administrative provisions of that Member State and shall communicate this information to other Member States via IMI.
2. The competent authorities of Member States shall accept certified copies issued in another Member State pursuant to the legislative, regulatory or administrative provisions of that Member State.
3. In cases of duly justified doubts concerning the validity and authenticity of a copy certified in another Member State, the competent authorities shall address a request for additional information to the relevant competent authorities in the other Member State via IMI. The competent authorities of the other Member States shall cooperate and respond without undue delay.
4. Upon receipt of a certified copy from the applicant, the competent authority shall upload an electronic version of a certified document and certify in the IMI file that the copy is authentic.
5. The applicant may present the original of a document instead of a certified copy to the competent authority of the home Member State, which shall then attest in the IMI file that the electronic copy of the original document is authentic.
6. If the applicant fails to provide a certified copy of a required document within the time limit provided for in Article 4d(1) of Directive 2005/36/EC, this shall not suspend the time limits for the transfer of the application to the competent authority of the host Member State. The document shall be marked in the IMI as pending confirmation of authenticity and validity until a certified copy is received and uploaded by the competent authority of the home Member State.
7. If the applicant fails to provide a certified copy of a required document within the time limit provided for in Article 4c(1) of Directive 2005/36/EC, the competent authority of the home Member State may refuse to issue an EPC for the temporary and occasional provision of services other than those covered pursuant to Article 7(4) of Directive 2005/36/EC.
8. In the event that the competent authority of the host Member State does not receive a certified copy of a required document either from the competent authority of the home Member State or from the applicant, it may take a decision based on the information available within the time limits provided for in paragraphs 2 and 3 and the second subparagraph of paragraph 5 of Article 4d of Directive 2005/36/EC.
the attestation of legal establishment referred to in point (b) of part B of Annex II and the third subparagraph of Article 6(2) of this Regulation, and the documents, which may be required pursuant to point 1(d) of Annex VII and points (b) and (e) of Article 7(2) of Directive 2005/36/EC, issued by competent authorities responsible for EPC applications or other relevant national bodies of the home Member State.
2. Each Member State shall specify in IMI the documents for which its competent authorities, acting as the competent authorities of the host Member State, require ordinary or certified translations from the applicant pursuant to paragraphs 3 and 4 and the acceptable languages, and shall communicate this information to other Member States via IMI.
3. By derogation from paragraph 1, the competent authority of the home Member State shall request from the applicant, within the first week following receipt of an EPC application pursuant to Articles 4b(3) and 4c(1) or 4d(1) of Directive 2005/36/EC, translations of the required documents specified in Annex II into the languages acceptable by the competent authority of the host Member State, if translation of those documents is required by the competent authority of the host Member State pursuant to paragraph 2 of this Article.
4. If the applicant has provided documents referred to in points 2(c) and (d) of part A or point (d) of part B of Annex II with the EPC application, the competent authority of the home Member State shall request translations of those documents into the languages acceptable by the competent authority of the host Member State.
5. If the applicant fails to provide any requested translations of the documents referred to in paragraph 4 of this Article, the competent authority of the home Member State shall not consider those translations as missing documents pursuant to Articles 4b(3) and 4d(1) of Directive 2005/36/EC.
1. In the event of duly justified doubts the competent authority of the host Member State may request additional information, including ordinary or certified translations, from the competent authority of the home Member State pursuant to Articles 4d(2) and (3) of Directive 2005/36/EC.
2. In cases referred to in paragraph 1, the competent authority of the host Member State may also request the applicant to submit ordinary or certified translations and may fix a reasonable deadline for response.
3. In the event that the competent authority of the host Member State does not receive a requested translation either from the competent authority of the home Member State or the applicant, it may take a decision based on the information available within the time limits provided for in paragraphs 2 and 3 and the second subparagraph of paragraph 5 of Article 4d of Directive 2005/36/EC.
1. Each Member State shall specify in IMI what certified translations are acceptable in its territory pursuant to the legislative, regulatory or administrative provisions of that Member State and shall communicate this information to other Member States via IMI.
2. The competent authorities of Member States shall accept certified translations issued in another Member State pursuant to the legislative, regulatory or administrative provisions of that Member State.
3. In cases of duly justified doubts concerning the validity and authenticity of a translation certified in another Member State, a Member State competent authority shall send a request for additional information to the relevant authorities in the other Member State via IMI. In such cases, the relevant authorities of other Member States shall cooperate and respond without delay.
4. Upon receipt of a certified translation from the applicant and subject to paragraph 3, a Member State competent authority shall upload an electronic copy of a certified translation and certify in the IMI file that the translation is certified.
5. Before certified translations are requested, in cases of duly justified doubts on any of the documents mentioned in Article 17(1), the competent authority of the host Member State shall address a request for additional information via IMI to the competent authority of the home Member State or competent authorities of other Member States that have issued the relevant document.
1. For establishment and for the temporary and occasional provision of services pursuant to Article 7(4) of Directive 2005/36/EC, the competent authority of the host Member State shall take either a decision to issue the EPC, a decision to refuse to issue the EPC, a decision to apply compensation measures pursuant to Article 14 or Article 7(4) of Directive 2005/36/EC, or a decision to extend the validity of the EPC for the temporary and occasional provision of services pursuant to Article 7(4) of Directive 2005/36/EC.
2. For temporary and occasional provision of services other than those covered by Article 7(4) of Directive 2005/36/EC, the competent authority of the home Member State shall take either a decision to issue the EPC, a decision to refuse to issue the EPC, or a decision to extend the validity of issued EPC.
3. In cases where a competent authority of the host Member State takes a decision to apply compensation measures to the applicant pursuant to Article 14 or Article 7(4) of Directive 2005/36/EC, such a decision shall also contain information on the contents of the compensation measures imposed, the justification for the compensation measures and any obligations of the applicant to inform the competent authority of the completion of the compensation measures. The examination of the EPC application shall be suspended until the completion of the compensation measures by the applicant.
Upon successful completion of the compensation measures, the applicant shall, if so required by the authority, inform the competent authority of the host Member State through the online tool referred to in Article 4b(1) of Directive 2005/36/EC.
In cases where a competent authority of the host Member State takes a decision to apply compensation measures pursuant to Article 7(4) of Directive 2005/36/EC, the competent authority of the host Member State shall certify in the IMI whether it has given the applicant an opportunity to take the aptitude test within one month of its decision to apply compensation measures.
The competent authority of the host Member State shall confirm in the IMI the successful completion of compensation measures and shall issue the EPC.
4. In cases where a competent authority of the host Member State takes a decision to refuse to issue the EPC, such decision shall also set out the justifications. Member States shall ensure that appropriate judicial remedies are available to the individual concerned in respect of a decision to refuse to issue an EPC and shall provide the applicant with information on the rights to appeal under national law.
5. The IMI shall provide for a possibility for the Member State competent authorities to take a decision to revoke an issued EPC in duly justified cases. Such decision shall also set out the justification for the revocation. Member States shall ensure that appropriate judicial remedies are available to the individual concerned in respect of a decision to revoke an issued EPC and shall provide the applicant with information on the rights to appeal under national law.
1. The online tool referred to in Article 4b(1) of Directive 2005/36/EC shall provide for a possibility for the applicant to generate an electronic document stating the outcome of the EPC procedure and to download any evidence related to the outcome of the EPC procedure.
2. Where the EPC is issued (including cases referred to in the first subparagraph of paragraph 5 of Article 4d of Directive 2005/36/EC), the electronic document shall contain the information set out in Article 4e(4) of Directive 2005/36/EC and, in the case of EPC for establishment, shall contain a disclaimer that the EPC does not constitute an authorisation to practise the profession in the host Member State.
its integrity, certifying that the file containing the document has not been modified or altered by an external actor since its creation in the IMI system at a certain date and time.
1. The European Commission shall provide an online verification system which enables interested third parties who do not have access to the IMI to verify online the validity and authenticity of the EPC.
2. In the case of updates of the IMI file on the right of the EPC holder to pursue professional activities pursuant to Article 4e(1) of Directive 2005/36/EC, a message shall be displayed advising interested third parties to contact the competent authority of the host Member State for more information. The message shall be worded in a neutral way, taking into account the need to ensure the presumption of innocence of the EPC holder. In the case of EPC for establishment, a message shall also be displayed containing a disclaimer that the EPC does not constitute an authorisation to practise the profession in the host Member State.
1. Member States shall appoint competent authorities to handle outgoing and incoming alerts pursuant to Article 56a(1) or (3) of Directive 2005/36/EC.
2. In order to ensure that incoming alerts are only handled by the relevant competent authorities, each Member State shall assign the task of coordinating incoming alerts to one or more competent authorities. These competent authorities shall ensure that alerts are assigned to the appropriate competent authorities without undue delay.
3. Member States may assign the task of coordinating outgoing alerts to one or more competent authorities.
1. Alerts shall contain the information set out in Article 56a(2) or (3) of Directive 2005/36/EC.
2. Only competent authorities appointed to handle an alert pursuant to Article 56a(1) or (3) of Directive 2005/36/EC, shall have access to the information referred to in paragraph 1 of this Article.
3. Competent authorities assigned with the task of coordinating incoming alerts shall only have access to the data referred to in points (b) and (d) of Article 56a(2) of Directive 2005/36/EC, unless the alert was subsequently also assigned to them as an authority handling incoming alerts.
4. In the event that a competent authority handling incoming alerts needs information other than that set out in Article 56a(2) or (3) of Directive 2005/36/EC, it shall use the IMI information request functionality, as provided for in Article 56(2a) of Directive 2005/36/EC.
1. Pursuant to Article 4e(1) of Directive 2005/36/EC where the holder of an EPC is subject to an alert, the competent authorities that dealt with the EPC application under Article 2(1) of this Regulation shall ensure the update of the corresponding IMI file with information contained in the alert including any consequences for the pursuit of the professional activities.
2. To ensure that updates of the IMI files are carried out in a timely manner, Member States shall grant access to the incoming alerts for the competent authorities responsible for handling EPC applications under Article 2(1).
3. The holder of an EPC shall be informed of updates referred to in paragraph 1 of this Article through the online tool referred to in Article 4b(1) of Directive 2005/36/EC or by other means in the case of a written application under Article 8.
The IMI shall provide for a possibility for the competent authorities handling incoming or outgoing alerts to consult any alert they sent or received in IMI and for which the closure procedure referred to in Article 28 has not been launched.
closing and deleting alerts as provided for in Article 56a(5) and (7) of Directive 2005/36/EC.
1. Data regarding alerts may be processed within IMI for as long as they are valid including the completion of the closure procedure referred to in Article 56a(7) of Directive 2005/36/EC.
2. When the alert is no longer valid due to the expiry of the sanction, in cases not covered by paragraph 5 of this Article, the competent authority which sent the alert as provided for in Article 56a(1) of Directive 2005/36/EC shall modify its content or close the alert within three days of the adoption of the relevant decision, or of receipt of the relevant information where the adoption of such a decision is not required under national law. The competent authorities that handled the incoming alert and the professional concerned shall be informed immediately about any modifications concerning the alert.
3. The IMI shall send regular reminders to the competent authorities which handled the outgoing alert to verify whether the information contained in the alert is still valid.
4. In case of a revoking decision, the alert shall be immediately closed by the competent authority which originally sent it and personal data shall be deleted from the IMI within three days as provided for in Article 56a(7) of Directive 2005/36/EC.
5. In the case of a sanction that has expired on the date specified in Article 56a(5) of Directive 2005/36/EC, the alert shall be automatically closed by the IMI and personal data shall be deleted from the system within three days as provided for in Article 56a(7) of Directive 2005/36/EC.
It shall apply from 18 January 2016.
Done at Brussels, 24 June 2015.
(2) Regulation (EU) No 1024/2012 of the European Parliament and of the Council of 25 October 2012 on administrative cooperation through the Internal Market Information System and repealing Commission Decision 2008/49/EC (‘the IMI Regulation’) (OJ L 316, 14.11.2012, p. 1).
(5) Regulation (EC) No 45/2001 of the European Parliament and of the Council of 18 December 2000 on the protection of individuals with regard to the processing of personal data by the Community institutions and bodies and on the free movement of such data (OJ L 8, 12.1.2001, p. 1.).
documents required in accordance with points 1(d) to (g) of Annex VII to Directive 2005/36/EC.
for the migrants meeting the requirements set out in Article 3(3) of Directive 2005/36/EC, a certificate of professional experience proving three years of professional experience issued by the competent authority in the Member State which recognised the third country qualification pursuant to Article 2(2) of Directive 2005/36/EC, or, if the competent authority concerned is unable to certify the professional experience of the applicant, other proof of professional experience, which clearly identifies the professional activities concerned.
where the host Member State applies prior check of qualifications pursuant to Article 7(4) of Directive 2005/36/EC, documents providing additional information about the training referred to in points 2(c) and (d) of Part A of this Annex.
Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/11642, first published Jul 20, 2018.
Background: Community-based primary care focuses on health promotion, awareness raising, and illness treatment and prevention in individuals, groups, and communities. Community Health Workers (CHWs) are the leading actors in such programs, helping to bridge the gap between the population and the health system. Many mobile health (mHealth) initiatives have been undertaken to empower CHWs and improve the data collection process in primary care, replacing archaic paper-based approaches. A special category of mHealth apps, known as mHealth Data Collection Systems (MDCSs), is often used for such tasks. These systems process highly sensitive personal health data of entire communities, so careful consideration of privacy is paramount for any successful deployment. However, the mHealth literature still lacks methodologically rigorous analyses for privacy and data protection.
Objective: In this paper, a Privacy Impact Assessment (PIA) for MDCSs is presented, providing a systematic identification and evaluation of potential privacy risks, particularly emphasizing controls and mitigation strategies to handle negative privacy impacts.
Methods: The privacy analysis follows a systematic methodology for PIAs. As a case study, we adopt the GeoHealth system, a large-scale MDCS used by CHWs in the Family Health Strategy, the Brazilian program for delivering community-based primary care. All the PIA steps were carried out on the basis of discussions among the researchers (privacy and security experts). The identification of threats and controls was based primarily on literature reviews and working group meetings. Moreover, we also received feedback from specialists in primary care and from software developers of other similar MDCSs in Brazil.
Results: The GeoHealth PIA is based on 8 Privacy Principles and 26 Privacy Targets derived from the European General Data Protection Regulation. Associated with that, 22 threat groups with a total of 97 subthreats and 41 recommended controls were identified. Among the main findings, we observed that privacy principles can be enhanced on existing MDCSs with controls for managing consent, transparency, intervenability, and data minimization.
Conclusions: Although there has been significant research that deals with data security issues, attention to privacy in its multiple dimensions is still lacking for MDCSs in general. New systems have the opportunity to incorporate privacy and data protection by design. Existing systems will have to address their privacy issues to comply with new and upcoming data protection regulations. However, further research is still needed to identify feasible and cost-effective solutions.
Mobile health (mHealth) apps for health surveys and surveillance play a crucial role in creating rich data repositories for public health decision-making [1,2]. Apps for health surveys are usually known as mHealth Data Collection Systems (MDCSs), used by Community Health Workers (CHWs), replacing less efficient and less reliable paper-based approaches [3,4]. The CHWs’ main task is to visit families at their homes to provide primary care, but they also carry out surveys, collect the family’s data, and report it to the government. Instead of using paper forms, the CHWs can now use smartphones or tablets for the data collection process.
Although mHealth initiatives are developed with a positive and optimistic outlook, there is often little concern for the privacy implications of the apps. Existing solutions do not carefully consider privacy, and it remains unclear how to deal with the issues inherent to health surveillance systems. MDCSs are used to collect, process, and share sensitive data (ie, personal health data), making privacy and security of paramount importance.
In recent years, much research has focused on the information security aspects of MDCSs [6-9], that is, dealing with the concepts of confidentiality, integrity, and availability, which are commonly addressed by means of security mechanisms for encryption, authentication, secure storage, and access control. Privacy, in turn, refers to respecting the fundamental rights and freedoms of individuals with regard to the processing of personal data. It overlaps with security, especially regarding confidentiality, but many other privacy principles should be addressed (eg, purpose binding, transparency, data minimization, unlinkability, intervenability, accountability, and consent); these fundamental differences are further discussed in this paper. It means that although privacy-preserving systems require strong security, security by itself is not enough.
There are many reasons for enforcing privacy in the primary care context. Privacy is a sine qua non for achieving high-quality health care. Personal data are collected, processed, and shared in the delivery of health services. Patients (ie, the data subjects) want their information to be used for meaningful purposes, and they want to grant health workers access to their personal data so that they can receive proper care. If privacy is not enforced, patients may refrain from using the service and/or hold back information, thus preventing health care workers from providing efficient and effective care. The result is inferior quality of health care.
MDCSs are inherently mass surveillance tools. Health care workers may have access to the health data of entire communities, so the privacy impact is amplified. There is a great power imbalance between individuals and the health agencies. Members of underserved communities, typically with less power, face greater risks from privacy violations. Therefore, it is important to follow privacy principles during the design of such systems. Privacy principles have been widely discussed in the scientific literature and embodied in legal frameworks in various jurisdictions, for example, the European General Data Protection Regulation (EU GDPR) and the Brazilian general Bill on the Protection of Personal Data (PLC 53/2018). Legal frameworks entail compliance, and thus project managers and developers should be prepared to follow such regulations.
Given that, our main research question is the following: How can a privacy-aware and secure MDCS be designed? To answer this question, a Privacy Impact Assessment (PIA) framework is chosen as a strategy for realizing privacy by design. [PIA] is a systematic process that identifies and evaluates, from the perspectives of all stakeholders, the potential effects on privacy of a project, initiative or proposed system or scheme, and includes a search for ways to avoid or mitigate negative privacy impacts. PIA stems from the notion of impact assessment, defined as the identification of future consequences of a current or proposed action. PIAs support a stricter analysis of privacy risks, that is, the effect of uncertainty on privacy. Each stage of the PIA process builds on the previous one, offering not only a risk assessment but also a solid strategy for risk management regarding privacy. In this paper, a PIA is presented using the GeoHealth MDCS as a case study to ground our analysis. As our methodology, the PIA framework proposed by Oetzel et al [17,18] is adopted in this study.
As a result, this paper brings the following contributions: (1) it provides a comprehensive privacy analysis for an MDCS, identifying threats and controls that help project managers and developers solve privacy and data protection issues in their systems, and (2) it shares the experience of how to carry out a PIA for a large-scale mHealth system, as advocated in previous studies [5,19], and it can serve as an example for other mHealth initiatives. To the best of our knowledge, this is the first thorough privacy analysis for an MDCS. In fact, most mHealth studies neither mention nor appropriately discuss the security issues in their systems, privacy included.
This section presents an overview of previous work regarding (1) MDCSs, (2) PIA frameworks, and (3) security and privacy of MDCSs. In the sections that follow, the various contributions in the area that precede the current research are described.
Initiatives for replacing paper-based solutions with MDCSs have been increasingly adopted, especially in developing countries. A more recent example is MoTeCH [22,23], employed in Ghana, which empowers nurses and CHWs with a simple mobile app for recording and tracking the care delivered to women and newborns, and it generates management reports mandated by the country’s health authorities. There are also standardized, general purpose tools that help in the task of designing forms and sending them to mobile devices, such as the Magpi framework and the Open Data Kit. Moreover, the World Health Organization, together with a group of academic and research institutions and technology partners, is developing the Open Smart Register Platform, which has been used to empower frontline health workers to electronically register and track the health of their entire client population.
Similarly, many MDCSs have been developed and tested in Brazil. Given the importance of Brazil’s Family Health Strategy (FHS) program for community-based primary care, it is natural that various MDCSs focus on gathering data for the Health Information System for Primary Care (SISAB) database. FHS is one of the most important programs of the Brazilian public health service, Sistema Único de Saúde. In the past, research on MDCSs was mainly conducted by research groups inside universities, as was the case with the Borboleta and GeoHealth projects.
In this paper, the privacy analysis is particularly grounded on the GeoHealth system. GeoHealth has been targeted in various scientific publications over the years, including work about the design process [8,29], large-scale deployment , and CHWs’ experience with the technology , which enables us to perform the PIA on the basis of published material, as well as previous first-hand experience with the system.
Many PIA frameworks exist. Some are tailored to a specific jurisdiction and legal framework, whereas others target a specific industry sector or provide a general methodology. The PIA for Radio Frequency Identification (known as PIA RFID) [18,30] and the PIA for Smart Grids are examples of sector-specific frameworks. However, the PIA RFID was later generalized into a systematic methodology and is no longer limited to RFID applications. Other well-known PIA frameworks were proposed by data protection authorities in various countries, such as the British Information Commissioner’s Office (ICO) PIA, the Australian Office of the Australian Information Commissioner’s (OAIC) PIA, and the French Commission nationale de l'informatique et des libertés’ (CNIL) PIA.
More recently, the International Organization for Standardization/International Electrotechnical Commission released a standard for PIAs, numbered ISO/IEC 29134:2017. This PIA framework offers a sound methodology with well-defined privacy principles (ISO/IEC 29100), risk identification and evaluation (ISO/IEC 31000 and ISO/IEC 29134), and privacy controls (ISO/IEC 27001 and ISO/IEC 29151). However, it is worth mentioning that these ISO/IEC standards, for example, ISO/IEC 29134 and ISO/IEC 29151, had only been published when this study was already well underway, so they were not chosen as the main PIA framework.
In recent years, the systematic PIA methodology also gained more maturity and was endorsed by the Article 29 Data Protection Working Party, leading to its adoption for GeoHealth’s PIA. Furthermore, the PIA RFID framework not only provides a robust methodology but is also accompanied by extensive supplementary material [18,30], openly published and freely accessible since 2011. As far as possible, a parallel among existing PIA frameworks is drawn throughout the paper, given that methods from different PIA frameworks can be combined to better suit the analysis.
Issues regarding information security in MDCSs (ie, confidentiality, integrity, and availability) have already been addressed by different authors. For instance, in a study by Cobb et al, a range of security threats to an MDCS, namely the Open Data Kit, was identified. In that study, the authors detailed a threat modeling exercise on the basis of surveys and interviews with technology experts. Other examples on information security are the works of Gejibo et al and Simplício et al, which propose 2 distinct security frameworks for MDCSs. These frameworks are designed to cope with the networking and processing constraints that are inherent to mobile computing. However, both frameworks largely converge on the same security issues identified in the study by Cobb et al.
In addition, regarding mHealth privacy in general, the work of Avancha et al proposes a threat taxonomy that organizes threats into 3 categories: (1) identity threats, (2) access threats, and (3) disclosure threats. However, privacy is addressed in that study in a rather narrow way. The taxonomy is composed of privacy-related threats, but it essentially overlaps with classical security properties (ie, threats to confidentiality, integrity, and availability). Therefore, when privacy is considered in its broader dimensions, this mHealth threat taxonomy does not cover many important Privacy Principles (such as the ones listed in the section “Definition of Privacy Targets”).
Finally, this paper also expands our previous work on GeoHealth’s privacy threat analysis presented in a study by Iwaya et al. On that basis, controls are identified and recommended in this paper to mitigate the previously identified threats. In addition, extensive documentation is provided, enabling the reproducibility of GeoHealth’s PIA and thereby helping to bridge the knowledge gap between mHealth practitioners and privacy engineers.
This privacy threat analysis follows the PIA framework defined by Oetzel and Spiekermann . In brief, this PIA framework supports project managers and developers to integrate privacy by design in their system development life cycle. The methodology comprises 7 steps, as shown in Figure 1.
Starting with the system characterization in Step 1, the Brazilian GeoHealth MDCS [3,4] is analyzed in the context of previous work on similar solutions.
Figure 1. Privacy Impact Assessment (PIA) methodology overview.
In Step 2, the Privacy Principles and Privacy Targets are defined on the basis of a legal framework. This PIA follows the EU GDPR (enacted in May 2018). This choice is based on 2 reasons: (1) scientifically, the EU GDPR can be considered the state of the art in privacy regulations, and it can also be mapped to the work “A Taxonomy of Privacy,” regarded as “the most complete list of privacy threats.” (2) The current draft of the Brazilian data protection regulation is broadly akin to the EU GDPR. Even though the health and medical fields often have their own privacy-related regulations, GDPR compliance addresses the privacy problems to a great extent.
In Step 3, the Privacy Targets are evaluated using a degree of protection demand, similar to an impact level (eg, low, medium, and high). During the threat analysis in Step 4, stakeholders identify threats associated with each of the Privacy Targets. All threats are addressed in Step 5 with respective technical and/or nontechnical control measures; the residual risk is analyzed, and an implementation plan is specified.
In Step 6, the plan for implementing controls and the residual risk is documented. However, given that GeoHealth has been discontinued and controls cannot be implemented, this step is not performed. For this reason, this PIA can be considered an after-the-fact review, which is still helpful to mHealth practitioners, who might not be particularly keen to publish in-depth public PIA reports about ongoing deployments.
As a final outcome of Step 7, this paper can be considered as a “PIA Report” describing the whole analysis, with emphasis on Step 5, “Identification and Recommendation of Controls.” Nonetheless, extensive documentation generated during the PIA process for Steps 1 to 4 is also provided in the form of Appendices.
GeoHealth’s PIA was carried out by our group of researchers with expertise in information security, privacy, and health informatics. Particularly for Steps 3 to 5, the working group meetings were based on evidence from the scientific literature (presented in Section 1). Moreover, 1 of the members participated in the design and development of GeoHealth. Contributions from software developers of other MDCSs as well as specialists in public health and primary care were also received. During the interaction with partners, feedback on our reports and documentation was collected so that the analysis could be refined.
This section describes the intermediate results of the PIA process. As explained, this paper emphasizes Step 5, “Identification and Recommendation of existing or new Controls.” To give the reader the necessary background, the preceding Steps 1 to 4 are summarized here, and complete documentation is provided in Multimedia Appendices 1 to 4.
GeoHealth is an MDCS tailored for Brazil’s FHS program. It is composed of the GeoHealth-Mobile and the GeoHealth-Web. At the client side, the GeoHealth-Mobile is the Android app that implements all forms used for data collection. At the server side, the GeoHealth-Web implements Web services for receiving and consolidating data as well as for generating reports and exporting data to the national-level system (ie, SISAB/Department of Informatics of the Unified Health System). Figure 2 presents the system architecture, main actors (CHWs, families, physicians, and health managers), and system components.
GeoHealth has been the subject of many studies in recent years, so further information can be found in the original material [3,4,8,29], as well as in a comprehensive description in Multimedia Appendix 1 [40,41]. For the readers’ convenience, the data flow diagram in Figure 3 shows how personal information is handled by the different subprocesses.
Figure 2. Overview of the GeoHealth actors and their interaction with the system’s components.
Figure 3. High-level data flow diagram of the GeoHealth environment. Acronyms: Personally Identifiable Information (PII); Basic Health Unit (BHU); Health Information System for Primary Care (SISAB); Department of Informatics of the Unified Health System (DATASUS).
After the system characterization, the next step is to determine the privacy principles that will be the basis of the design of our system. In the study by Oetzel and Spiekermann, the authors distinguish between privacy principles and privacy targets. Neither term was explicitly defined, but a privacy principle can be considered a fundamental, primary, or general rule derived from existing legal frameworks [12,42]. However, as that study explains, these legal privacy principles must be translated into concrete, auditable, and functionally enforceable Privacy Targets and subsequent system functions. Furthermore, Privacy Targets should be formulated as action items, just as in widely accepted modeling techniques such as the Unified Modeling Language and the Architecture of Integrated Information Systems.
Textbox 1 presents a list of privacy principles and respective Privacy Targets derived from the European General Data Protection Regulation and originally conceived by Oetzel and Spiekermann [17,38]. Although this list was used as a baseline for this PIA, all Privacy Targets were reviewed in terms of applicability, meaning, and exhaustiveness in the context of GeoHealth. As a result of this revision, the principle P5-Intervenability was added and the targets that were previously listed under P4 - Access Right of Data Subject were moved to this new category (ie, P5.1, P5.2, and P5.3). Thus, now there is a clear distinction between data subject access (transparency) and intervenability. Furthermore, new Privacy Targets P4.2 and P5.4 were proposed and added to the list.
Textbox 1. List of Privacy Principles and Privacy Targets.
Each of the listed Privacy Targets was put in context and further evaluated. In this step of the PIA, Privacy Targets were ranked and priorities for GeoHealth’s privacy architecture were identified. To determine the right level of protection that each Privacy Target demands, a potential damage scenario had to be considered, that is, using the “feared events” technique by asking, “What would happen if...?” Every Privacy Target was challenged by its potential damage in case of noncompliance. Furthermore, the damage had to be considered from 2 perspectives: the system operator (eg, loss of reputation and financial penalties) and its customers (eg, social embarrassment, financial losses, and jeopardized personal freedom).
A qualitative approach is used because privacy breaches are often “softer” or intangible (eg, hurt feelings, discredit, blackmail, and even death) rather than something with a specific monetary value (eg, a computer system or asset). This qualitative stance is a major difference between the PIA methodology and the more quantitative, asset-driven evaluations used for security assessments. That is, assets such as data, software, and hardware are easier to quantify in terms of loss and cost, whereas reputation, embarrassment, and harm to people’s rights and freedoms are not. This part of the PIA process is detailed in Multimedia Appendix 2.
For each Privacy Target, the threats that could prevent us from achieving that target are systematically identified. A threat is essentially a noncompliance with the relevant privacy laws and standards that emerges from multiple sources, such as lack of training and privacy awareness, inappropriate use of privacy-preserving technologies, or the absence of privacy management and governance practices.
The identification of privacy threats for GeoHealth was presented as part of our previous work. Further details can also be found in Multimedia Appendix 3. In summary, this threat analysis was built upon existing threat analyses for mHealth in general or specifically for MDCSs [7,9], as well as privacy threats (for RFID) found in the study by Oetzel et al. Thus, this threat analysis is based not only on the assessment of privacy experts but also on the existing scientific literature, from which threats were reviewed and compiled.
As a result, 22 groups of threats and a total of 97 subthreats are identified. Threats vary widely, jeopardizing principles such as data quality, processing legitimacy, informed consent, right to information, right to access, right to object, data security, and accountability. The threats were also classified as “likely” (n=86) or “unlikely” (n=11) to happen, enabling controls to be assigned accordingly.
As a point of departure, a list of possible controls presented in a study by Oetzel et al is used, combined with the security controls proposed in previous studies [7-9,43]. The final list is composed of 41 recommended controls (Table 1; further details in Multimedia Appendix 4) to cope with the identified privacy threats. According to the methodology, each control has up to 3 levels of rigor: (1) satisfactory, (2) strong, and (3) very strong. When assigning controls to a threat, a level of rigor is also chosen, defining how extensive the control should be; more rigorous levels are typically costlier and more difficult to implement. The level of rigor should match the level of protection demand determined in the section “Evaluation of Degree of Protection Demand for each Privacy Target.” However, for the GeoHealth case study, all the threats are linked to at least one Privacy Target with a “high” level of protection demand. Therefore, all controls in the consolidated list only need to be described for a “very strong” level of rigor (see Multimedia Appendix 4). Table 2 shows the association of controls with the identified threats.
Table 1. Consolidated list of controls (columns: control codes and short descriptions; done?). The detailed description of all controls can be found in Multimedia Appendix 4.
a: The control was not implemented.
Table 2. Threat groups and associated controls. The detailed description of all subthreats can be found in Multimedia Appendix 3.
c: Note that each group of threats has a number of more specific subthreats (eg, T1.1, T1.2, and T1.3). The technical or organizational controls (listed in Table 1) can then be associated with 1 or more subthreats.
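The threat-to-control association described above can be pictured as a small data structure. The following sketch is purely illustrative: the codes mimic the paper's T/C numbering scheme, but the specific descriptions and pairings are invented here and do not come from the GeoHealth PIA itself.

```python
# Illustrative sketch of mapping subthreats to controls at a chosen
# level of rigor. All codes and descriptions below are hypothetical.

RIGOR = ("satisfactory", "strong", "very strong")

controls = {
    "C07": "Obtain explicit informed consent per processing purpose",
    "C15": "Encrypt personal data at rest and in transit",
}

# Each subthreat maps to one or more controls; because every GeoHealth
# threat touches a Privacy Target with "high" protection demand, the
# "very strong" level of rigor applies throughout.
threat_controls = {
    "T1.1": [("C07", "very strong")],
    "T5.2": [("C07", "very strong"), ("C15", "very strong")],
}

for threat, assigned in threat_controls.items():
    for code, rigor in assigned:
        assert code in controls and rigor in RIGOR
```

Representing the mapping explicitly like this makes it easy to audit that every identified subthreat has at least one control assigned at the required level of rigor.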
In summary, GeoHealth’s PIA is based on 8 Privacy Principles and 26 Privacy Targets derived from the EU GDPR. Associated with that, 22 threat groups with a total of 97 subthreats and 41 recommended controls are identified. This offers a sound privacy analysis for a large-scale MDCS.
This research shows that the literature mostly focuses on information security issues, solving only a fraction of the problem, that is, (P6) Security of Data. Currently, there is a lack of contributions on how to engineer privacy not only in MDCSs but also in the area of mHealth in general [5,19]. Our PIA helps to bridge this gap by exposing the problems and providing controls (see Multimedia Appendix 4). On the basis of this PIA, engineers have a clearer path toward solving the privacy issues and, ideally, toward addressing them at the very early stages of the design process, when changes are often simpler and less costly.
In addition, the consolidated list of controls in Table 1 makes it clear that privacy cannot be addressed with technical measures alone. In fact, most controls require a mixed approach of technical and organizational procedures that should be put in place to achieve privacy and data protection. One way of doing this is to integrate the privacy-related organizational procedures into an information security management system, making the processes for both information security and privacy more efficient for organizations. This could be a task for further research.
Another important finding from the PIA is that some privacy issues are more challenging, requiring major changes to existing MDCSs. However, it is not within the scope of a PIA to provide complete solutions to such challenges but rather to make them explicit. The main privacy challenges for MDCSs include the following: (1) individualized access to personal data to provide transparency and intervenability, (2) obtaining and handling explicit informed consent from data subjects and allowing consent withdrawal, (3) defining measures to object to processing and to allow data blocking or deletion, (4) employing security mechanisms, and (5) applying appropriate anonymization techniques for data sharing. In the sections that follow, the discussion on each of these privacy challenges is expanded.
Among the main findings, it is noticeable that existing MDCSs particularly fall short with respect to the GDPR principles of transparency and intervenability, that is, (P1) Quality of Data Processing, (P4) Access Right of Data Subject, and (P5) Intervenability. In brief, MDCSs do not consider the data subjects’ personalized access to their data in electronic form, and in fairness, they were designed to be accessed only by CHWs and medical staff. However, it is worth mentioning that to achieve GDPR compliance, nonelectronic access is sufficient. Nonetheless, as a matter of enhanced privacy by design (and not purely compliance), major redesign is required to add data subjects as system users and to support interaction with a personalized interface (eg, a privacy dashboard), somewhat similar to existing online medical records. Along these lines, MDCSs would benefit from emerging Transparency-Enhancing Tools that help to raise privacy awareness among data subjects by allowing them to know about the data that are collected and processed about them and the potential privacy risks (eg, discriminatory profiling, data breaches, and leaks). However, such changes greatly expand the system’s attack surface (ie, a new category of users with access rights) and increase the costs of software development and underlying infrastructure. Therefore, the redesign of MDCSs requires further feasibility studies, especially for projects running in low- and middle-income countries.
Explicit informed consent (ie, a signed written statement) also has some particularities. Consent is a well-known requisite for providing medical treatment. In MDCSs, consent is given for the processing of personal data: it refers to the data collection, processing, and access rights to the data, for the stated purpose, that is, it is about technologies and systems. Just as importantly, revoking consent needs to be as easy as giving it. As CHWs use smartphones for data collection, it is difficult for data subjects to withdraw their consent later, as they do not have direct computer access. Asking to revoke consent via telephone is not an easy solution either, as the data subjects must be properly identified first. There should also be routines allowing consent to be revoked only for selected purposes (eg, a partial agreement, as there should be opt-in options for each purpose). The existing literature on MDCSs does not discuss opt-ins, but there are guidelines to help project managers.
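The per-purpose opt-in and partial-withdrawal routine described above can be sketched as a simple consent ledger. This is a hypothetical illustration only: none of the class or purpose names below come from GeoHealth, and a real system would additionally need identity verification, audit logging, and signed records.

```python
# Hypothetical sketch: per-purpose consent records with selective
# revocation, as discussed in the text. Names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Consent:
    purpose: str                          # eg, "primary_care", "research"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None


@dataclass
class ConsentLedger:
    subject_id: str
    consents: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.consents[purpose] = Consent(purpose, datetime.now(timezone.utc))

    def revoke(self, purpose: str) -> None:
        # Partial withdrawal: only the selected purpose is revoked.
        self.consents[purpose].revoked_at = datetime.now(timezone.utc)

    def allows(self, purpose: str) -> bool:
        c = self.consents.get(purpose)
        return c is not None and c.active


ledger = ConsentLedger("subject-001")
ledger.grant("primary_care")
ledger.grant("research")       # secondary purpose, opt-in
ledger.revoke("research")      # withdrawal for one purpose only
assert ledger.allows("primary_care") and not ledger.allows("research")
```

The key design point is that consent is recorded per purpose rather than as a single yes/no flag, so that revoking a secondary purpose (eg, research) does not affect the lawful basis for primary care.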
On the other hand, consent is not the only lawful basis for personal data processing. Public health and social care can also rely on legitimate interests and the performance of a public task as justifications for processing personal data. However, some MDCSs can also be used for secondary purposes, which should be made optional to data subjects, for instance, linking the data subjects’ personal data to other electronic health records or disclosing it for research and statistics outside the public health sphere. There is, nonetheless, an immense power asymmetry between the public health system and individuals. When the majority of the population relies solely on the public system, there is never really a free choice. That is, if data subjects are coerced or if there is a threat of disadvantage (eg, no health care), the consent can be rendered invalid.
Features for automated data deletion are also missing in the existing MDCSs. That may be seen as a technicality that is simply not explored in the MDCS literature, but it is associated with the well-known right to be forgotten and the data minimization principle. For MDCSs, families may also change their address or move to other communities, which would require formal procedures for automated deletion, as well as data portability (ie, to send the family’s data to another health unit). Data subjects may also require deletion or blocking of sensitive data that can impact their privacy. More importantly, medical conditions with strong genetic components can disclose information about the patient’s relatives, that is, impacting other people’s privacy. Individual privacy preferences pose challenges for executing data subject rights, as the data may refer to multiple data subjects, who may all hold rights but have differing interests (eg, one may want the data to be deleted, whereas the other would like the data to be preserved). Routines are needed to handle such disputes and situations. In some cases, it may be possible to pseudonymize the identity of the person who wants his or her data to be deleted (eg, in case of infections), whereas in the case of genetic relations, it may not currently be possible.
However, it is essential to know that medical information related to medical conditions and procedures cannot be deleted even if the data subject requests it, owing to legal restrictions on altering medical records. Instead, because this is sensitive information, the protection mechanisms are even more important.
Security frameworks specifically designed for MDCSs have already been proposed [7,8]. In brief, MDCSs need a key management mechanism to provide authentication and key exchange between the parties (the user’s mobile device and the app server). Authentication protocols and key derivation schemes for MDCSs usually rely on symmetric cryptography, using password authentication. These protocols should also support both online and offline user authentication so that users are not limited by lack of network connectivity or coverage. Other mechanisms should cope with the confidentiality of stored and in-transit data by means of encryption schemes for secure storage and transmission.
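The password-based key derivation underlying such schemes can be illustrated in a few lines. This is a minimal sketch, not the implementation of the cited frameworks: it assumes PBKDF2 as the derivation function and shows key separation (distinct authentication and encryption keys) plus an HMAC challenge response, which a device could verify offline if the derived key is cached.

```python
# Illustrative sketch of password-based symmetric key derivation for an
# MDCS-style protocol. PBKDF2 is an assumption here, chosen because it
# is available in the standard library; the cited frameworks may differ.
import hashlib
import hmac
import os


def derive_keys(password: str, salt: bytes, iterations: int = 200_000):
    """Derive separate authentication and encryption keys via PBKDF2."""
    master = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, iterations, dklen=64
    )
    auth_key, enc_key = master[:32], master[32:]  # key separation
    return auth_key, enc_key


def auth_tag(auth_key: bytes, challenge: bytes) -> bytes:
    """HMAC response to a server challenge; verifiable with the same
    derived key, enabling offline authentication when cached."""
    return hmac.new(auth_key, challenge, hashlib.sha256).digest()


salt = os.urandom(16)
a1, e1 = derive_keys("correct horse", salt)
a2, e2 = derive_keys("correct horse", salt)
assert a1 == a2 and e1 == e2   # deterministic for the same password+salt
assert a1 != e1                # distinct keys for distinct purposes
tag = auth_tag(a1, b"server-nonce-42")
assert hmac.compare_digest(tag, auth_tag(a2, b"server-nonce-42"))
```

In practice, the encryption key would feed an authenticated encryption scheme (eg, AES-GCM via a cryptography library) for secure storage and transmission; the sketch stops at key derivation to stay within the standard library.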
MDCSs also support the creation of rich repositories of health-related data needed for the planning, implementation, and evaluation of public health practice. These datasets are often used for secondary purposes by government agencies, researchers, and academics. In such cases, the data should be anonymized, that is, privacy should be protected by applying data transformations so that the individuals whom the data describe remain anonymous. The anonymization process can have variable degrees of robustness, depending on how likely it is to (1) single out an individual in the dataset, (2) link records concerning the same individual, or (3) infer the value of 1 attribute on the basis of other values. In essence, all these circumstances should be avoided, resulting in an anonymized dataset. Anonymized data are not considered personal data; therefore, data privacy laws no longer apply. Although the literature on data anonymization is vast, fully anonymized datasets are difficult or even impossible to achieve. The Working Party 29 has already expressed an opinion on this matter.
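The "singling out" criterion above can be made concrete with a k-anonymity check: a record can be singled out when its combination of quasi-identifiers is unique in the dataset. The sketch below uses invented toy data; real quasi-identifier selection and generalization are considerably harder.

```python
# Minimal k-anonymity sketch for the "singling out" risk discussed above.
# The records and attribute names are invented for illustration.
from collections import Counter


def k_anonymity(records, quasi_identifiers):
    """Return the smallest equivalence-class size over the given
    quasi-identifiers; k=1 means someone can be singled out."""
    counts = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(counts.values())


records = [
    {"age_band": "30-39", "district": "A", "diagnosis": "flu"},
    {"age_band": "30-39", "district": "A", "diagnosis": "asthma"},
    {"age_band": "40-49", "district": "B", "diagnosis": "flu"},
]

# The only record from district B is unique on (age_band, district),
# so that individual can be singled out: k = 1.
print(k_anonymity(records, ["age_band", "district"]))  # 1
```

Generalizing attributes (eg, wider age bands or coarser geography) raises k, which is one way datasets exported from an MDCS for research could be made more robust against singling out, although linkability and inference must still be assessed separately.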
Although this PIA was carefully designed and conducted, the limitations of the research must be acknowledged. First, regarding methodological aspects, a parallel with other approaches for risk assessment can be drawn. That is, PIAs, like any risk assessment methodology, have inherent limitations: (1) the estimation of risk is never complete in the mathematical sense, (2) a complete set of undesired events (threats) is never known, (3) no way is provided to deal with unknown vulnerabilities and attacks, and (4) continuous revision is always required. PIAs should therefore be periodically reviewed, whenever assumptions change or when new threats are unveiled. Nonetheless, by performing a PIA and implementing controls, organizations demonstrate that they are tackling privacy and data protection issues with due diligence.
Second, although the PIA RFID framework offers a sound methodology, other PIA frameworks have already been published (eg, OAIC’s PIA, the British ICO’s PIA Handbook, CNIL’s PIA manual, and ISO/IEC 29134). Some approaches are more streamlined (eg, OAIC’s PIA and the British ICO’s PIA Handbook) and consequently less grounded in technical standards (eg, the PIA RFID framework and ISO/IEC 29134). Moreover, as mentioned before, the chosen PIA framework utilizes a qualitative approach to risk assessment, which differs from the quantitative, asset-driven approaches more common in security risk analysis. A comparison study of PIA frameworks is outside the scope of this paper, but it would be beneficial to the community.
Third, a few remarks can also be made about the way in which the PIA was conducted. Ideally, a PIA should be carried out in consultation with all relevant stakeholders (eg, developers, health care workers, data subjects and/or their representatives, and policy makers). This PIA was conducted by the authors, who come from multiple disciplines (information security, medical informatics, and law) and have first-hand experience with MDCSs. In addition, input and feedback were provided by software engineers from 2 industry partners with experience in developing MDCSs. In conducting this PIA, the authors adopted the role of the data subjects to articulate their perspectives and advocate for their privacy. Two of the authors are members of privacy interest organizations and/or former members of the advisory board of the Swedish Data Protection Commissioner. The authors are therefore used to taking the perspective of data subjects and are more experienced in analyzing privacy issues on behalf of data subjects than most laypersons. Nonetheless, especially after the MDCS is rolled out, it is recommended to directly consult the families enrolled in the primary care programs and, for another iteration of the PIA, gather their perspectives and concerns regarding privacy on the basis of their personal experiences.
CHWs are crucial in the Brazilian health care scenario, and empowering them with relevant tools can revolutionize the delivery of community-based primary care. MDCSs are proven effective tools to support the activities of CHWs in Brazil and around the world. However, solving privacy and data protection issues is imperative for the successful deployment of such systems. In fact, as advocated in previous studies [5,19], a careful look into privacy is still notably lacking in many mHealth projects and initiatives. This paper offers a full PIA for the GeoHealth MDCS, aiming to unveil the privacy pitfalls that large-scale mHealth systems may have. Our results show that important privacy principles could be further enhanced, such as data minimization, obtaining consent, enabling data processing transparency, and intervenability. In fairness, existing research may not primarily account for privacy, as privacy-preserving features are often treated as nonfunctional requirements or are simply beyond the scope of many papers. Nonetheless, systems that are already deployed, especially in health care, should comply with the principles of privacy by design.
Moreover, as discussed, the literature on Privacy-Enhancing Technologies (PETs) already offers a range of mechanisms for consent management, transparency, and intervenability. Future work on MDCSs therefore involves evaluating suitable PETs, mainly for implementing technical controls, as well as integrating organizational controls into information security management processes.
This research was funded by the DigitalWell Research (Dnr RV2017-545), a cooperation between the Region of Värmland and Karlstad University; it was also partially supported by the Swedish Foundation for Strategic Research SURPRISE project. We would like to thank Professor Alexandra Brentani for her expert advice on Brazil’s primary care system. We also extend our thanks to Professor Danilo Doneda for the clarifications about the Brazilian privacy and data protection regulations. We also thank the software engineers of SysVale SoftGroup and Bridge Laboratory for providing feedback on our assessment.
©Leonardo Horn Iwaya, Simone Fischer-Hübner, Rose-Mharie Åhlfeldt, Leonardo A Martucci. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.03.2019.
Current address: Janssen - Johnson and Johnson, Leiden, The Netherlands.
Acceptance of childhood vaccination varies between societies, affecting worldwide vaccination coverage. Low coverage rates are common in indigenous populations where parents often choose not to vaccinate their children. We aimed to gain insight into reasons for vaccine acceptance or rejection among Warao Amerindians in Venezuela.
Based on records of vaccine acceptance or refusal, in-depth interviews with 20 vaccine-accepting and 11 vaccine-declining caregivers were performed. Parents’ attitudes were explored using a qualitative approach.
Although Warao caregivers were generally in favor of vaccination, fear of side effects and the idea that young and sick children are too vulnerable to be vaccinated negatively affected vaccine acceptance. The importance assigned to side effects was related to the perception that these resembled symptoms/diseases of another origin and could thus harm the child. Religious beliefs or traditional healers did not influence the decision-making process.
Parental vaccine acceptance requires educational programs on the preventive nature of vaccines in relation to local beliefs about health and disease. Attention needs to be directed at population-specific concerns, including explanation of the nature of, and therapeutic options for, side effects.
Copyright: © 2017 Burghouts et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Study logistics were supported by Pfizer Venezuela and the Fundacion para la Investigación en Micobacterias (FUNDAIM), Caracas, Venezuela. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: This study was supported by Pfizer Venezuela and the Fundacion para la Investigación en Micobacterias (FUNDAIM), Caracas, Venezuela. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.
Immunization is a proven tool for prevention of some of the most deadly childhood diseases. However, vaccines are underutilized, especially in developing countries. Around 1.5 million children die each year from vaccine-preventable infectious diseases. Suboptimal vaccine coverage rates are often observed in ethnic minorities [2–5]. The dynamics of vaccine uptake are complicated and depend on both social factors and cultural perceptions. This includes not only perceptions of vaccinations and diseases, but also perceptions of vulnerability and protection and the role of medicines in producing and maintaining health. Qualitative and quantitative studies addressing concerns about vaccination often fail to provide recommendations for interventions.
Approximately 10 percent of the South American population consists of indigenous people, and low vaccination coverage rates are observed in these populations [8, 9]. Although specific barriers to vaccination have been identified in indigenous South American populations, there is a paucity of qualitative studies exploring the knowledge, attitudes and practices surrounding these barriers.
The Expanded Program on Immunization (EPI) in Venezuela operates as the Programa Ampliado de Inmunizaciones of the Ministry of Public Health (Ministerio de Salud Pública). The Venezuelan immunization schedule includes the Bacille Calmette-Guérin and hepatitis B vaccines at birth; the diphtheria, tetanus, pertussis, hepatitis B and polio (OPV) vaccines at 2, 4, 6 and 18 months and 5 years of age; the measles, mumps and rubella vaccine at 12 months and 5 years; and oral rotavirus vaccination at 2 and 4 months. The most recent addition to the immunization schedule was the incorporation of the 13-valent pneumococcal conjugate vaccine (PCV13) in 2014. The Warao people residing in the Orinoco River Delta along the eastern coast of Venezuela are the second largest Venezuelan indigenous population. For as long as vaccination teams have entered this area, they have experienced resistance to vaccination. Vaccine refusal results in low vaccination coverage rates despite the availability of mobile vaccination teams. Only around 25% of Warao children are fully immunized with the EPI vaccines, and only 18% with the non-EPI vaccines. These coverage rates are two to three times lower than in other parts of Venezuela. This is of particular concern since the living conditions of the Warao population, as well as those of many other South American indigenous populations, expose them to multiple health risks. The lack of access to medical care, clean drinking water and food leads to high prevalence rates of infectious diseases. Because most Warao people do not have access to basic health services, official morbidity and mortality rates are scarce. In 2011, a cross-sectional survey including interviews on child survival with 668 Warao women from 97 communities was performed. This work showed that of almost 4,000 reported live births, an extraordinarily high number of 1,245 children (32%) had died, most of whom (97%) were under five years of age. Infectious diseases were the main reported cause of death and were together responsible for 85% of the reported deaths.
We took a qualitative approach to identify important themes in the decision-making process around childhood vaccination in indigenous caregivers in Venezuela. The objective of this study was to provide recommendations for the understanding and modification of parental beliefs to influence decisions regarding vaccination, thereby improving uptake.
This study was conducted in the Orinoco Delta in Northeastern Venezuela. In this rural watery area, people live in small geographically isolated communities in wooden houses raised on piles along the Orinoco River banks (Fig 1). Electricity supply, communication facilities and clean drinking water are limited. The Orinoco Delta comprises circa 40,000 km2 and is inhabited by the Warao people, the second largest indigenous population in Venezuela. Almost one-third of Warao children die during childhood, and respiratory tract infections are a major cause of death. Our study was performed in Antonio Diaz, the largest of the four municipalities comprising the Orinoco Delta (Fig 2). In most Warao villages in Antonio Diaz, a small, poorly equipped health post is present. There is only one hospital with radiology and laboratory facilities in Antonio Diaz. Vaccination is generally carried out by regional mobile vaccination teams from the Delta Amacuro department of the Ministry of Health and Social Welfare, and vaccines are provided free of charge.
Fig 1. Picture of a typical Warao Amerindian village in the Orinoco Delta in Venezuela.
Fig 2. Simplified representation of Venezuela with the Orinoco Delta highlighted in green (bottom left) and the Antonio Diaz municipality within the Orinoco Delta (light green) with the nine study communities.
In May 2012, PCV13 was first introduced in nine communities in the Orinoco Delta by means of an efficacy study. During this study, PCV13 acceptance or refusal was recorded. The target group for this qualitative study consisted of primary caregivers of children aged 6 weeks to 6 months residing in one of the nine communities. These caregivers were approached six months later, in November 2012. By then, three consecutive doses of PCV13 had been offered to their children as a primary series, following Centers for Disease Control and Prevention (CDC) guidelines.
The research consisted of interviews carried out in caregivers’ homes, each lasting approximately 30 minutes. Interviews were held in Spanish or, with a local translator, in the Warao language. An interview guide was used for each interview. To compose the interview guide, we performed an electronic search using the TRIP database, Cochrane Library and PubMed with the key words ‘vaccination’, ‘immunization’, ‘acceptance’, ‘refusal’, ‘coverage’, ‘decisions’, ‘attitudes’, ‘perceptions’, ‘knowledge’ and ‘indigenous’, in various combinations. Additional studies were identified by searching the reference lists of existing articles. We also based the interview guide on our own observations and experiences. The interview guide contained questions about general knowledge and risk perception of diseases in general and diseases related to Streptococcus pneumoniae specifically, as well as motivations for accepting or declining vaccines in general and PCV13 specifically. All questions were open-ended. The interviews were recorded and each interview was transcribed verbatim.
We followed the six phases of thematic analysis as described by Braun and Clarke. Two of the authors transcribed and familiarized themselves with the data. Initial codes were then assigned to segments of the raw data using the qualitative data analysis software MAXQDA (VERBI Software, Berlin, Germany). The two researchers then read the coded segments together in order to reach a consensus about the applied codes. The different codes were sorted into potential overarching themes by both researchers separately. Following this phase, discussions were held about the fit and validity of candidate themes and sub-themes. Finally, the authors agreed on final key themes. These themes were repeatedly checked against the transcripts in order to identify patterns. Interpretation of themes was informed by the literature, the objectives of the study, and additional discussions with two other authors with ample working experience in the Orinoco Delta.
The nature and objectives of the study were explained in Spanish and/or in the caregivers’ native language, and primary caregivers provided written informed consent. The study was approved by the ethical committee of the Instituto de Biomedicina and by community leaders (consejos comunales).
We performed a qualitative survey exploring parental decision-making about childhood vaccines. In total, 67 caregivers residing in the nine study communities (Fig 2) were eligible. Of these, 42 (63%) had accepted vaccination with PCV13 while 25 (37%) had declined one, several or all vaccines. Approximately half of the caregivers in each group were interviewed, resulting in 20 and 11 interviews respectively. Of the 11 vaccine-declining mothers, 9 had declined all vaccines and 2 had accepted the first dose but refused follow-up doses. Of the total of 31 included caregivers, 30 were the mother of the child and one primary caregiver was the child’s grandmother.
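The enrollment arithmetic above can be reproduced in a few lines as a sanity check; the counts are taken directly from the text, and percentages are rounded as in the paper:

```python
eligible = 67
accepted, declined = 42, 25  # PCV13 acceptors and decliners among eligible caregivers
assert accepted + declined == eligible

# Reported proportions: 63% accepted, 37% declined.
print(round(100 * accepted / eligible))  # 63
print(round(100 * declined / eligible))  # 37

# "Approximately half of the caregivers in each group were interviewed."
interviewed_accepting, interviewed_declining = 20, 11
print(interviewed_accepting / accepted)   # ~0.48
print(interviewed_declining / declined)   # 0.44
```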
The majority (55%) of the respondents were housewives. Others produced handicrafts or worked in the paddy fields for taro (tuber) cultivation. Almost one-third of the mothers did not have any education and roughly half had only attended elementary school. Most had large families; 13 caregivers (42%) had 3 to 5 children and 9 (29%) had more than 5 children. All caregivers attended Western health care facilities (community posts or hospitals) in case of illness, and over two thirds (68%) also visited traditional healers. Characteristics of the study population are summarized in Table 1. The 20 vaccine-accepting respondents all said they accepted vaccination because vaccines either prevent diseases (n = 12), prevent diseases from deteriorating (n = 7) or cure diseases (n = 1). In addition, some (n = 3) said that discussing the matter with family or community members helped them to make the decision to vaccinate their children. However, overall, fathers or other family members only seemed to play a minor role in decisions regarding vaccine acceptance. Of the total number of 31 vaccine acceptors and refusers, 28 mothers (90%) did not consult others in order to make a decision. Traditional healers or religion did not seem to play a role in the decision regarding vaccination or non-vaccination of children. Only two mothers had asked a traditional healer for his opinion concerning childhood vaccination, and none of the caregivers mentioned her religious background in relation to vaccine practices.
Table 1. Characteristics of study population of Warao Amerindian caregivers in Venezuela.
Interestingly, all 11 vaccine-declining mothers also believed that vaccines helped to prevent (the deterioration of) diseases. Despite this general recognition of vaccine health benefits, none of the 31 participants was aware of the immunological mechanism of vaccines, i.e. that they trigger the immune system to develop a response against a killed or attenuated pathogen. The most widely recognized vaccines were measles (mentioned by 52%), varicella (26%), pertussis (16%), tetanus (13%) and yellow fever (13%). Only two mothers mentioned a pneumonia vaccine, even though PCV13 had been offered to all mothers only six months before. Over one third of the respondents (35%) mentioned that vaccines were also administered against a variety of non-specific conditions, including vomiting, fever, diarrhea, headache and arthritis. Caregivers generally accepted ‘all’ or ‘no’ vaccines, and none of the respondents reported refusal of specific vaccines.
Reasons to refuse vaccination were discussed with vaccine-declining mothers. Also, vaccine-accepting mothers were asked about situations in which they would not vaccinate their children. Table 2 sums up the motives for (potential) vaccine refusal in the study population. Three main themes became evident: 1) vaccine refusal due to fear of side effects, 2) vaccine refusal due to perceived limited vaccine tolerance of young and sick children and 3) the empirical concept that side effects of vaccines are diseases.
Table 2. (Potential) vaccine refusal reasons mentioned by all and vaccine-declining Warao Amerindian caregivers in Venezuela.
All 11 vaccine-declining mothers mentioned concerns about unwanted effects as the main reason for refusing vaccination. In addition, 17 of the vaccine-accepting caregivers also mentioned a wide variety of symptoms and diseases caused by vaccination, but they stated that the beneficial effects of vaccination outweighed these side effects. The potential unwanted effects and the frequency with which they were reported are listed in Table 3. Notably, not only known side effects such as fever were mentioned. Diarrhea was also a side effect many (n = 7) mothers worried about. Four mothers even said that children could die because of vaccination.
Table 3. Effects of vaccination as mentioned by 28/31 Warao Amerindian caregivers in Venezuela.
Vaccination teams have come here on several occasions but I never choose to vaccinate my children. We fear vaccines, children get fevers, diarrhea and they can die too due to vaccination. And now all my children are healthy! […] They (the vaccination teams) say that the fever goes away, but it doesn’t always go away and if fevers get really high some children die. My children are growing up being well, we also grew up without vaccinations.
Side effects were commonly ascribed to all vaccines, although one mother mentioned specifically that the pentavalent vaccine causes fever. Four vaccine-declining mothers said they would consider having their children vaccinated if antipyretics were offered upon vaccine administration. Their fear that vaccine side effects would develop in the absence of access to medical care made them refuse vaccination altogether. Warao families generally live far from hospital facilities, and boat trips of up to seven hours have to be made to reach the nearest medical post. In addition, these medical posts are often under-equipped and short of medicines.
If children are vaccinated, they immediately get a fever, so they have to give us medication for these diseases. Because of the fever, the child gets hot from the inside, in its belly, and therefore also starts to vomit and gets diarrhea […] The people that vaccinate children do not provide us with medicine for the diseases caused by these vaccines and if I go to a medical post they can’t help me either, that’s when I get mad.
Three of the vaccine-declining mothers said they would consider vaccinating their children if given a more thorough explanation about the vaccine at the moment of administration. These mothers felt that vaccines were often given without a conversation with the primary caregiver which made them unwilling to participate.
More than half of the mothers (55%) said they would not vaccinate their children if they were ill at the moment the vaccine was offered. The definition of ‘illness’ that would lead to refusal of vaccines included fever, diarrhea, flu, vomiting, the appearance of teeth (teething), headache and abdominal pain (Table 2). The general opinion was that the child should be strong, and thus not suffering from any of the symptoms mentioned, in order to be able to handle a vaccination.
In addition, a common perception (mentioned by 19% of all respondents and 36% of vaccine-declining respondents) was that young children were too vulnerable or weak to be vaccinated. Older children were regarded as stronger and less prone to side effects. Two mothers explicitly mentioned that they did accept oral vaccines for young children.
To my youngest child, they only gave him the drops in his mouth, since he is too young, he cannot be injected. Because if it hurts him and he gets sick, we cannot get him to a hospital. We do not have a motorboat to get him to a hospital.
(Mother of 5 children, 31 years old).
In Warao communities, the age of children is often unknown and birthdays are not celebrated. Caregivers who refused vaccines for young children could not indicate at what age they would have their children vaccinated; some said the child would have to grow ‘a bit bigger’, some said that it would take ‘several months’ and some pointed to an older sibling as having the appropriate age to be vaccinated. Notably, in addition to the perception that young children were too vulnerable to be vaccinated, 9 mothers (29%) mentioned that young children are the ones most susceptible to (severe) diseases.
The distinction between ‘symptoms’ and ‘diseases’ was not made by any of the primary caregivers. When parents were asked about diseases in general, the ‘diseases’ mentioned first by more than half of the mothers (n = 18, 58%) were either diarrhea (39%) or fever (19%). Some mothers described clusters of symptoms that could occur together (e.g. diarrhea, vomiting and fever), but these were generally regarded as three separate diseases that could attack either in varying combinations or individually. Fever and diarrhea were also among the most commonly described effects of vaccination (mentioned by 84% and 23% of the 31 caregivers respectively, Table 3), and were then also described as ‘diseases caused by vaccination’. The term ‘(side) effects’ of vaccination was not used by any of the Warao mothers, and all vaccine-declining mothers regarded the fever caused by vaccination as a disease in itself that warranted adequate treatment but could still harm their children even when treated.
When they vaccinate our children, they develop a fever, that’s why we do not want to vaccinate them. After the vaccination teams leave, the vast majority of our children are ill, so we prefer not vaccinating them. We really fear the vaccines because of the fever […] I have not been vaccinated and I live in peace: working, eating, …. I do not suffer from diarrhea or fevers.
This study demonstrates which ideas about vaccination and diseases influence the acceptance or refusal of vaccines in Amerindian mothers of young children in the Orinoco Delta in Venezuela. All participants, both vaccine-accepting and vaccine-declining, claimed to see the benefits of vaccination on the grounds that it “prevents diseases” or “prevents the deterioration of diseases”. However, three main themes were identified that negatively influenced vaccine acceptance. Our findings provide starting points for the improvement of vaccine education strategies.
Concerns about adverse events are also among the main barriers to vaccination in other South American countries, as well as in low- and middle-income countries in other parts of the world. In fact, even in Western populations, concerns about vaccine safety and serious side effects are among the main reasons for delay or refusal of childhood vaccines. The perception that unwanted symptoms due to vaccination are diseases warranting adequate treatment has, to our knowledge, not been previously described in indigenous South American populations. Dugas et al. also described a major role of the empirical concept of childhood disease in low vaccination coverage rates among ethnic groups in Burkina Faso, Africa. In concordance with their study, the symptoms/diseases most often mentioned by Warao mothers resembled the frequently named side effects of vaccination, with most importance assigned to fever and diarrhea. What these symptoms/diseases and vaccine adverse effects have in common is their high prevalence in these rural villages and the ease with which they can be recognized. In the absence of biomedical screening tests, it is not surprising that the Warao people understand fever or diarrhea to be the diseases themselves, rather than symptoms of an underlying disease. In addition, the finding that one third of the respondents mentioned that vaccines were given in order to prevent non-specific symptoms, including fever and diarrhea, may further limit vaccine acceptance. Since most vaccines target a single disease, their perceived effectiveness may be disappointing when a fully vaccinated child develops diarrhea or fever, especially when these are the result of vaccine side effects.
The unclear hierarchical distinction between disease symptoms and disease processes (e.g., diarrhea versus parasitic disease) within the biomedical system has also been described in other South American indigenous groups [17, 18]. The concept of ‘symptom-oriented’ diseases among the Warao people has also been identified in anthropological studies [19, 20]. The basis for this perception probably lies in the idea that symptoms are caused by weakness of one or more of the four souls the Warao people believe are present in each human being. Specific combinations of ‘soul weaknesses’ are thought to cause specific symptoms, with a maximum of three symptoms occurring together. The finding that this symptom-based disease interpretation among the Warao affects concerns about side effects and vaccine acceptance is new and warrants further exploration. It may be necessary to specifically address the topic of weaknesses related to vaccine side effects in vaccine education programs.
Thoughts around weakness and vulnerability to vaccine side effects and disease may also underlie the common idea that young and sick children are not strong enough to handle vaccination. In rural Bangladesh, mothers’ perception of small infant size at birth was negatively associated with timely vaccination. Other studies also report lower vaccine coverage in young children in relation to parental beliefs, including that young children are too little, immature and fragile to handle immunizations [22, 23]. This phenomenon could explain why only 63% of the caregivers of young children aged ≤6 months had accepted vaccination, whereas PCV13 was overall accepted by 84% of Warao mothers of children <5 years in the study area. The fear that young children would suffer too much from vaccination was accompanied by the perceived vulnerability of these children to (severe) diseases. Mortality rates in Warao children are indeed extremely high in the first year of life, when 54% of childhood deaths occur. The observation that young children are the most likely to die when they get ill seems to fuel the concern that vaccine adverse events may more severely harm these vulnerable children. This fear is related to thoughts around the origin of diseases, but also to a lack of information concerning the actual effect of vaccines on the body and their potential side effects. The conflict between biomedicine and local understandings has been identified as a barrier to vaccination among ethnic groups in Africa as well.
Half of the vaccine-declining mothers would be willing to have their children vaccinated if more information and antipyretics were offered. Whereas in other societies parents often have access to a wealth of available information, e.g. through the Internet, the Warao generally lack Internet access, television, radio and newspapers. For information regarding vaccination, they rely on community members and the vaccination teams themselves. Vaccination teams are not community members; they enter the Orinoco Delta by boat and generally stay for only a couple of days. Cold boxes filled with big ice blocks are used to keep vaccines cold during transportation. The outside temperature of >25°C limits the lifetime of the ice blocks. Boat trips of around 6–10 hours (depending on the material of which the boat is made, the horsepower of the engines, and how heavily the boat is loaded) are needed to reach the villages in Antonio Diaz from the mainland. In these boats, 200-liter barrels of gasoline are shipped from the mainland to the River Delta in order to reach as many communities as possible during one trip. There is thus limited space onboard for cold boxes carrying extra ice. These logistical challenges lead to a strategy aimed at reaching as many communities as possible in as little time as possible. It is, however, unsatisfactory that one third of Warao mothers of young children refuse vaccination once communities are reached despite these logistical problems. We recommend that vaccine education programs be separated from vaccination campaigns so that time and effort can be put into addressing the concerns of caregivers. In particular, vaccine education programs that precede an upcoming vaccination campaign may be effective. A recently published systematic review shows that multicomponent and dialogue-based interventions to improve vaccine uptake perform best. There is, however, limited evidence for the effectiveness of interventions focused solely on raising knowledge and awareness.
It thus seems advisable to discuss proposed intervention strategies with community members before implementation. Moreover, quantification of the impact of educational interventions is necessary to determine their effectiveness. In addition to educational interventions, the standard distribution of antipyretics together with vaccine administration, accompanied by an explanation of their use, may increase vaccine compliance.
Other studies, including those of South American indigenous populations, have documented that the practice of traditional medicine, religious beliefs, or a lack of faith in Western medicine may negatively affect decisions around health care and vaccination acceptance [16, 28–31]. Although the majority of Warao caregivers visited traditional healers (68%) and were Christians (71%), a major role of these factors in vaccine decision-making was not identified. Over the past 20 years, acculturation has led to increased acceptance of Western medicine, as can be seen from the finding that all mothers claimed to visit Western health care facilities. Educational programs can build on this acknowledgment of the value of Western medicine in general to reassure the Warao people that vaccines have proven both successful and safe.
A strength of our study is the prospective recording of vaccine acceptance or refusal. Self-reported vaccine uptake limits the reliability of many studies addressing the factors underlying parental vaccination decisions. A limitation of this method was that we had to re-locate the specific caregivers with recorded information on vaccine refusal or acceptance rather than interview the women present when we performed the current study. This, together with logistical constraints, was the main reason that only half of the eligible caregivers were interviewed. Warao people are semi-nomadic and often migrate temporarily from their communities for agricultural purposes. In addition, because only two women were included who had refused several but not all vaccine doses, we were not able to distinguish possible differences between reasons for total and partial or temporary refusal.
We did not interview fathers or other family members and thus cannot comment on fathers’ attitudes and beliefs concerning vaccination. However, the primary caregivers in Warao families are usually mothers, as illustrated by the observation that 28/31 mothers stated they made vaccination decisions themselves, regardless of the input of family members.
This study provides a perspective on the low rate of vaccine acceptance among Warao Amerindian caregivers of young children in Venezuela. Three important findings emerged. First, fear of side effects was the most important immunization barrier. Second, caregivers regarded these side effects as diseases for which treatment should be offered. Finally, caregivers believed children needed a certain level of strength to be able to handle vaccination, which limited vaccine acceptance in young children.
Since all mothers believed vaccines could reduce disease incidence, it is important to set up vaccine education programs focused on explaining the origin of side effects, children’s age-related vulnerability to these effects, and their self-limiting nature.
S1 File. Topic list used by the interviewers (original in Spanish).
S2 File. Topic list used by the interviewers (translation in English).
The authors thank all local translators for their support and assistance. We also thank the field workers involved in the recruitment of participants, especially the students of the Universidad Central de Venezuela and Thor Küchler. We also thank J. Heldens and A.J.A. Felling from the Radboud University Nijmegen for helpful discussions.
Conceptualization: JB PWMH JHW LMV.
Formal analysis: JB BDN AU JHW LMV.
Funding acquisition: LMV BDN PWMH JHW.
Methodology: JB BDN PWMH JHW LMV.
Project administration: JB BDN JHW LMV.
Resources: JB BDN AU PWMH JHW LMV.
Supervision: JB BDN JHW PWMH LMV.
Validation: JB BDN JHW LMV.
Writing – original draft: JB LMV.
Writing – review & editing: JB BDN AU PWMH JHW LMV.
1. World Health Organization (WHO). Immunization surveillance, assessment and monitoring. Immunization profile—Venezuela (Bolivarian Republic of). http://apps.who.int/immunization_monitoring/globalsummary/countries?countrycriteria%5Bcountry%5D%5B%5D=VEN. Accessed 25 March 2016.
3. Frew PM, Hixson B, del Rio C, Esteves-Jaramillo A, Omer SB. Acceptance of pandemic 2009 influenza A (H1N1) vaccine in a minority population: determinants and potential points of intervention. Pediatrics. 2011;127 Suppl 1:S113–9.
8. Acosta-Ramirez N, Rodriguez-Garcia J. Inequity in infant vaccination coverage in Colombia 2000 and 2003. Rev Salud Publica (Bogota). 2006;8 Suppl 1:102–15.
9. Verhagen LM, Warris A, Hermans PW, del Nogal B, de Groot R, de Waard JH. High Prevalence of Acute Respiratory Tract Infections Among Warao Amerindian Children in Venezuela in Relation to Low Immunization Coverage and Chronic Malnutrition. Pediatr Infect Dis J. 2011.
13. Advisory Committee on Immunization Practices (ACIP). Licensure of a 13-valent pneumococcal conjugate vaccine (PCV13) and recommendations for use among children. MMWR 2010;59: 258–61.
16. Dugas M, Dube E, Kouyate B, Sanou A, Bibeau G. Portrait of a lengthy vaccination trajectory in Burkina Faso: from cultural acceptance of vaccines to actual immunization. BMC Int Health Hum Rights. 2009;9 Suppl 1:S9.
18. Izquierdo C, Shepard GH Jr. Matsigenka. In: Ember CR, Ember M, editors. Encyclopedia of Medical Anthropology: Health and Illness in the World's Cultures, Vol. 2: Cultures. New York: Kluwer Academic/Plenum Publishers; 2004. pp. 823–837.
19. Wilbert W. Environment, society, and disease: The response of phytotherapy to disease among the Warao Indians of the Orinoco Delta. In: Balick MJ, Elisabetsky E, Laird SA, editors. Medicinal Resources of the Tropical Forest: Biodiversity and its Importance to Human Health. New York: Columbia; 1996. pp. 366–385.
20. Wilbert W. Filoterapia warao: fundamentos teóricos. In: Freire G, editor. Perspectivas en salud indígena: cosmovisión, enfermedad y políticas públicas. Quito, Ecuador: Abya-Yala, Universidad Politécnica Salesiana; 2011. pp. 307–326. ISBN 978-9978-22-952-1.
New vaccine design approaches would be greatly facilitated by a better understanding of the early systemic changes, and those that occur at the site of injection, responsible for the installation of a durable and oriented protective response. We performed a detailed characterization of very early infection and host response events following the intradermal administration of the modified vaccinia virus Ankara as a live attenuated vaccine model in non-human primates. Integrated analysis of the data obtained from in vivo imaging, histology, flow cytometry, multiplex cytokine, and transcriptomic analysis using tools derived from systems biology, such as co-expression networks, showed a strong early local and systemic inflammatory response that peaked at 24 h, which was then progressively replaced by an adaptive response during the installation of the host response to the vaccine. Granulocytes, macrophages, and monocytoid cells were massively recruited during the local innate response in association with local production of GM-CSF, IL-1β, MIP1α, MIP1β, and TNFα. We also observed a rapid and transient granulocyte recruitment and the release of IL-6 and IL-1RA, followed by a persistent phase involving inflammatory monocytes. This systemic inflammation was confirmed by molecular signatures, such as upregulation of the IL-6 and TNF pathways and acute phase response signaling. Such comprehensive approaches improve our understanding of the spatiotemporal orchestration of vaccine-elicited immune responses in a live-attenuated vaccine model, and thus contribute to rational vaccine development.
Vaccine innovation would be aided by a better understanding of the basic mechanisms associated with processes that affect the duration, breadth, and efficiency of the host response, particularly the very early events that can be manipulated by antigen targeting, adjuvants and immune stimulants, dose, and delivery. Focusing on very early steps of the host response may also aid in finding ways to reduce reactogenicity to the injected material, which may limit vaccination acceptance, particularly for young children. Vaccination strategies have been improved by taking advantage of recent knowledge on pathogen sensing by key cells, such as dendritic cells (DCs), through pattern recognition receptors (PRRs) (1, 2). We and others have demonstrated that modifying the targeting of vaccine antigens to various DC receptors or blocking key cytokines at the injection site significantly affects the induced adaptive immune response (3–6). The interplay between innate and adaptive immunity is characterized by many other highly dynamic molecular and cellular modifications. Emerging ‘omic technologies may help to increase our knowledge and aid in the development of new immunization strategies (7, 8). The innate response in the blood can be triggered as early as 24 h after immunization when using adjuvants such as alum or toll-like receptor (TLR) agonists (9) or after immunization with modified vaccinia virus Ankara (MVA) (10–13). These studies suggest that very early events can have a strong impact on long-term responses to vaccines.
Skin is an ideal target for vaccine injection due to the diversity of resident and recruited immune cells, including macrophages, Langerhans cells, and several subsets of dermal DCs (14, 15). These cells, which express a wide variety of PRRs (16), can be used to tune the immune profiles of vaccines (17). Nevertheless, despite many advantages, the intradermal (i.d.) route is rarely used in the medical field due to the lack of reliability of the immunization technique (18). It remains, however, widely used in anti-tuberculosis vaccination, to minimize the toxicity of the Bacille Calmette-Guérin vaccine (19), and for rabies vaccines (20). In addition, the i.d. route may contribute to dose sparing (21–23), owing to the efficiency of local antigen-presenting cells (APCs) (6, 10), which could provide a decisive advantage for products with scale-up production constraints. Delivery technologies are being developed to enable skin vaccination (24).
Here, we characterized the early responses induced by i.d. injection of the MVA (25) attenuated vaccine into non-human primates (NHPs). This attenuated vaccinia virus has been safely used as a smallpox vaccine in humans (26) and is a promising vector for recombinant vaccines (27–30). It has been found to be well tolerated and highly immunogenic, making it a good model of an attenuated vaccine for describing immune responses. The use of NHPs was particularly relevant because the organization of their immune system is highly similar to that of humans and their response following vaccination is highly predictive of the response in humans (31–33).
In this study, we aimed to better understand the impact of early innate parameters on the elaboration of the adaptive response in a live attenuated vaccine model administered in the skin. The first step of the study consisted of an in-depth characterization of the innate immune responses in key locations, i.e., the injection site, lymph node (LN), and blood. The second step was to integrate the data using tools derived from systems vaccinology to identify key factors and to improve our understanding of the dynamics of the interactions between immune parameters.
Adult male cynomolgus macaques (Macaca fascicularis), imported from Mauritius, weighing from 4 to 6 kg, were housed in the CEA animal facilities (Fontenay-aux-Roses, France). The macaques were handled in accordance with national regulations (Permit Number A 92-32-02), in compliance with the Standards of the Office for Laboratory Animal Welfare (OLAW) under Assurance number A5826-01, and the European Directive (2010/63, recommendation No. 9). This project received the government authorization Number 12-013. Interventions were performed by veterinarians and staff of the “Animal Science and Welfare” core facility after sedation with ketamine hydrochloride (10 mg/kg, Imalgen®, Rhône-Mérieux, Lyon, France).
Prior to injection, the skin was shaved or depilated and cleaned with ethanol 70%. Each animal simultaneously received a total of 10 i.d. injections containing 4 × 107 PFU each of a rMVA expressing eGFP (MVATG15938, Transgene SA, Illkirch-Graffenstaden, France) in 200 µl, equally divided between two areas: (i) 5 to 10 cm from the left inguinal LN and (ii) on the top left of the back.
Prior to biopsies, the skin and inguinal area were cleaned with povidone-iodine solution (Vetedine®, Vetoquinol SA, Lure, France). Biopsies of 8 mm in diameter were collected at untreated sites and 24, 48, and 72 h after injection. A total of eight skin biopsies were performed for each animal (Figure S1 in Supplementary Material). These biopsies induced insignificant inflammatory responses in comparison to i.d. injection of MVA (Figure S2 in Supplementary Material). The inguinal LNs were collected 24 h post injection. Blood was collected in K3-EDTA tubes (Greiner Bio-One, Frickenhausen, Germany) for complete blood counts (HmX, Beckman Coulter, Roissy, France), plasma collection, and flow cytometry. Plasma for cytokine analysis was collected after 10 min of centrifugation at 2,000 g. Blood (500 µl) collected in lithium heparin (Vacutainer, BD) was mixed with 1 ml Tempus reagent (Thermo Fisher Scientific, Waltham, MA, USA) for transcriptomic analysis. Tissue biopsies for transcriptomic profiling were preserved in 1 ml RNAlater solution (Qiagen, Hilden, Germany) at 4°C overnight and then stored at −20°C.
Skin and LN biopsies were washed with PBS, weighed, cut into small pieces, and digested at 37°C under agitation (80 rpm) for 60 and 15 min, respectively, in 2 ml DMEM (Thermo Fisher Scientific) supplemented with 5% FCS (Lonza, Basel, Switzerland), 1% penicillin/streptomycin/neomycin (Thermo Fisher Scientific), 10 mM Hepes (Thermo Fisher Scientific), 2 mg/ml collagenase D (Roche, Basel, Switzerland), and 0.02 mg/ml DNAse I (Roche). The digest was then filtered using a 70-µm cell strainer and centrifuged. Supernatants were collected for cytokine analysis. Leftover tissue was shredded using a GentleMACS® dissociator (Miltenyi Biotec, Paris, France). The cell suspension was then washed with PBS and stained for flow cytometry analysis.
All steps of the staining were performed at 4°C. Cell suspensions were first stained with the viability dye LiveDead® (Thermo Fisher Scientific) for 15 min, then washed with PBS. Fc receptors were blocked using PBS supplemented with 5% macaque serum for 20 min. Cell suspensions were stained for 30 min with 90 µl antibody mix (Table 1) diluted in BD Horizon Brilliant® Stain Buffer (BD Biosciences, Franklin Lakes, NJ, USA). Cells were then washed twice in PBS and fixed using 150 µl BD Cellfix® (BD). Red blood cells were removed with 1 ml BD FACS Lysing® (BD) for 10 min at room temperature and washed twice with PBS. For the analysis of cell suspensions from tissues, the data were normalized to the weight of the initial biopsy. Data from blood cells were normalized to the complete blood counts. The design of the cytometry panel and the rationale for the gating strategies (Figure S3 in Supplementary Material) have been previously described (34).
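The normalization described above is simple enough to sketch in code. The following Python snippet (illustrative only; the event counts, biopsy weight, and leukocyte concentration are made-up numbers) shows how tissue event counts are converted to cells per gram and how blood population fractions are converted to absolute counts per milliliter using the complete blood count.

```python
def cells_per_gram(event_count, biopsy_weight_g):
    """Normalize flow cytometry events from a tissue biopsy to cells/gram."""
    return event_count / biopsy_weight_g

def cells_per_ml(population_fraction, leukocytes_per_ml):
    """Convert a gated population fraction to absolute cells/ml of blood,
    using the leukocyte concentration from the complete blood count."""
    return population_fraction * leukocytes_per_ml

# Hypothetical example: 12,000 granulocyte events from a 0.25 g skin biopsy
print(cells_per_gram(12_000, 0.25))  # 48000.0 cells/gram
# Hypothetical example: 25% granulocytes at 8e6 leukocytes/ml
print(cells_per_ml(0.25, 8e6))       # 2000000.0 cells/ml
```

Normalizing this way makes counts comparable across biopsies of different sizes and across blood draws with different leukocyte concentrations.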
Table 1. Panels of antibodies used for flow cytometry staining.
Ten micrograms of anti-HLA-DR AF700 (Biolegend) diluted in PBS were injected i.d., at the same time as the rMVA-eGFP injection, 4 h prior to biopsy at each injection site destined for confocal video-microscopy. Following fat tissue removal, skin biopsies were imaged for 18 h as previously published (35). The number of GFP+ cells and HLA-DR+ cells was calculated using Volocity® software (PerkinElmer, Coventry, England) after applying a fine filter and then excluding objects below 200 µm3 and those above 4,000 µm3.
In vivo FCFM was performed with a fibered confocal fluorescence microscope (Cellvizio®, Mauna Kea Technologies®, Paris, France). A probe recording the green fluorescence signal (488 nm excitation) was swept across the site of interest twice for 30 s each (36). GFP+ objects (size > 20 μm2) were counted in each frame using ImageJ® software (37). The analysis was performed on the 50 frames containing the highest number of GFP+ objects for each observed site. For each time-point, the number of fluorescent cells at up to three injection sites was measured and is expressed as the mean ± SD of GFP+ cells.
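As a rough sketch of this frame-selection step (written in Python rather than ImageJ, and using hypothetical per-frame counts), one keeps the frames with the most GFP+ objects and summarizes them as a mean and standard deviation:

```python
import statistics

def summarize_gfp_frames(counts_per_frame, top_n=50):
    """Select the top_n frames with the most GFP+ objects and return the
    mean and standard deviation of their counts (illustrative only)."""
    top = sorted(counts_per_frame, reverse=True)[:top_n]
    return statistics.mean(top), statistics.stdev(top)

# Hypothetical per-frame GFP+ object counts from one recording site
frames = [0, 2, 5, 9, 1, 7, 3, 8, 4, 6]
mean, sd = summarize_gfp_frames(frames, top_n=5)
print(mean, sd)  # top 5 frames are [9, 8, 7, 6, 5]
```

Restricting the summary to the brightest frames reduces the influence of frames where the probe was between fluorescent regions.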
Skin and LN samples were fixed at 4°C in 4% paraformaldehyde for 24 h and then stored in PBS at 4°C. Paraffin embedding was performed using a Microm HMS740 automated system (Thermo Fisher Scientific) by successively replacing the PBS with alcohol, xylene, and paraffin. Sections were then cut using a microtome and mounted on slides. Hematoxylin and eosin staining was performed using the Microm HMS740, alternating between xylene, alcohol, hematoxylin (Labonord, Templemars, France), and eosin (VWR International, Fontenay-sous-Bois, France) baths.
Prior to staining, slides bearing 3-µm sections were incubated at 37°C overnight. Paraffin was removed by washing the slides with xylene (VWR), then ethanol (VWR), and water using a Microm HMS740. Antigen unmasking was performed using citrate buffer (Diagnostic Biosystems, Pleasanton, CA, USA) in a 95°C bath for 20 min. The slides were then subjected to peroxidase blocking with 3% hydrogen peroxide (Sigma Aldrich, Saint-Louis, MO, USA) for 10 min and blocked with 10% normal horse serum (Vector Laboratories, Burlingame, CA, USA) and 5% BSA (Sigma Aldrich) in PBS for 30 min. Primary staining with an anti-GFP antibody (Abcam, Cambridge, UK; 7.5 µg per slide) diluted in blocking solution was then performed for 90 min, followed by secondary staining with a biotinylated anti-chicken IgG antibody (Abcam) diluted 1/200 in PBS for 30 min. The ABC complex (Vectastain Elite ABC Kit standard, Vector) and diaminobenzidine (Vector) were applied to the slides following supplier recommendations for 30 and 5 min, respectively. Slides were then counterstained using the Microm HMS740 by successively transferring them to baths of hematoxylin, water, and increasing concentrations of ethanol and xylene. The slides were washed three times for 5 min in PBS between each step of the procedure.
Supernatants of the LN and skin digestion medium, as well as plasma, were stored at −80°C. Cytokine levels were measured using a 23-plex MAP NHP Immunoassay kit PCYTMG-40K-PX23® (Millipore, Billerica, MA, USA) according to supplier instructions.
Biopsies were immediately immersed in 1/100 RLT-beta-mercaptoethanol lysis buffer (Qiagen, Courtaboeuf Cedex, France), then disrupted and homogenized using a TissueLyser LT (Qiagen), and the RNA purified using a Qiagen RNeasy Micro Kit. Whole blood RNA was recovered in Tempus tubes (Thermo Fisher Scientific) and purified using the Tempus™ Spin RNA Isolation Kit (Thermo Fisher Scientific). For both purifications, contaminating DNA was removed using the RNA Cleanup step of the RNeasy Micro Kit. Purified RNA was quantified using an ND-8000 spectrophotometer (NanoDrop Technologies, Fisher Scientific, Illkirch, France) and RNA integrity evaluated on a 2100 BioAnalyzer (Agilent Technologies, Massy Cedex, France). cDNA was synthesized and labeled with biotin using Ambion Illumina TotalPrep RNA Amplification Kits (Applied Biosystems/Ambion, Saint-Aubin, France). Labeled cRNA were hybridized to Illumina Human HT-12 V4 BeadChips. All steps were performed following supplier instructions. Raw and normalized microarray data have been deposited in the ArrayExpress database (38) under accession number E-MTAB-5907.
Cytometry, Luminex, and transcriptomic data were analyzed using R (39). Hierarchical clustering was based on the Euclidean metric with the Ward linkage method. Flow cytometry and Luminex measurements were normalized for heatmap representations. Differentially expressed genes were identified using the paired Student’s t-test (q-value < 0.01) combined with a 1.2-fold change threshold to discard genes with very low amplitudes of down- or upregulation. Multidimensional scaling (MDS) representations were generated using the singular value decomposition (SVD)-MDS algorithm (40). Kruskal stress (41) quantifies the quality of the MDS as the fraction of information lost during the dimensionality reduction process. Functional enrichment analysis of the differentially expressed genes was performed using Ingenuity Pathway Analysis software (IPA®, Qiagen, http://www.qiagen.com/ingenuity). Statistical analyses of flow cytometry and Luminex data were performed using Prism® 6.0 (GraphPad Software Inc., La Jolla, CA, USA). Two-sided Friedman tests followed by Dunn’s posttests comparing each time-point with baseline were performed. Significant correlations between Luminex, transcriptomic, and cytometry variables were identified using the Pearson correlation coefficient with a correlation threshold of 0.80 and a q-value threshold of 0.05. The co-expression network was visualized using Cytoscape® (42).
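To make the co-expression network construction concrete, here is a minimal sketch in Python (the original analysis was performed in R). It applies only the |r| ≥ 0.80 Pearson threshold to pairs of variables; the variable names and measurements are hypothetical, and the study's additional q-value < 0.05 filter is omitted for brevity.

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient (no external dependencies)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def network_edges(variables, r_threshold=0.80):
    """Return pairs of variables whose |r| meets the co-expression threshold.
    `variables` maps a name to its list of measurements across samples.
    (Illustrative sketch; significance filtering is not included.)"""
    names = sorted(variables)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson_r(variables[a], variables[b])
            if abs(r) >= r_threshold:
                edges.append((a, b, round(r, 2)))
    return edges

# Hypothetical measurements across four samples
data = {"IL6": [1.0, 2.0, 3.0, 4.0],
        "TNF": [2.1, 3.9, 6.2, 8.0],   # tracks IL6 closely
        "IL10": [4.0, 1.0, 3.5, 1.5]}  # weakly correlated with both
print(network_edges(data))  # only the IL6-TNF pair passes the threshold
```

The resulting edge list is what a tool such as Cytoscape would render as the co-expression network.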
Mantoux i.d. injection with rMVA expressing eGFP induced mild local lesions, detected as early as 24 h post injection (p.i.), which progressively evolved into an erythema (Figure 1A). Epidermal thickness increased by 24 h p.i. followed by mild parakeratosis 48 h p.i. and epidermal cell necrosis at 72 h p.i. (Figure 1B). In the dermis, we observed edema, associated with moderate perivascular neutrophilic and macrophage infiltration from 24 to 72 h p.i. There was a massive neutrophil and macrophage infiltration at the interface between the lower dermis and the hypodermis, associated with necrosis. The eGFP-expressing cells resulting from rMVA infection were tracked in the skin by repeated in vivo imaging using FCFM (Figure 1C). Expression of eGFP was detected as early as 24 h p.i., peaked at 48 h (56.2 ± 59.3 cells per mm2), persisted at a high level until day 3 (22.8 ± 24.7 cells per mm2), and disappeared by day 7 (Figure 1D). This kinetic profile was confirmed by confocal time-lapse microscopy (Figure 1E). In skin biopsies collected 4 h after in vivo injection of the rMVA, the first eGFP signals were detected 13–15 h p.i. and reached the maximum level by 18–22 h (Figure 1F). Some, but not all, eGFP+ cells also expressed HLA-DR indicating that the detection of the recombinant gene was not limited to skin APCs. In situ analysis revealed the heterogeneous morphology of the eGFP+ cells, which included DCs, macrophages, adipocytes, and granulocytes (Figure 2A). Most were localized to vascularized areas, such as the papillary–reticular dermis interface and the hypodermis layer. No eGFP+ cells were detected in the epidermis. We further characterized the phenotype of the eGFP+ skin cells by performing flow cytometry on cell suspensions obtained from the injection site, including the epidermis and dermis. 
Cells containing locally produced eGFP reached a peak between 24 (25.8 ± 4.9% of total living cells) and 48 h (19.5 ± 3.1%) p.i., persisted through 72 h (6.4 ± 5.9%), and completely disappeared by 168 h (Figure S4 in Supplementary Material), confirming our previous observations. The rMVA exhibited considerable cellular pleiotropism (Figure 2B) in vivo in NHPs, confirming previously reported observations (43, 44). The major targeted populations were granulocytes, monocyte/macrophages, and lymphocytes, with the proportion of infected cells ranging from 3 to more than 30% (Figure 2B). Granulocytes represented most of the locally recruited cells and were by far the predominant population containing eGFP antigen when normalized to the number of cells per gram of tissue (Figure 2C). It is thus likely that this population significantly influences the mechanisms that affect installation of the vaccine-specific response. The proportion and number of eGFP+ DCs, which are major resident sentinels in the skin, were low.
Figure 1. Dynamics of local inflammation and infection after intradermal injection of rMVA. (A) Image of the skin before and after Buffer or rMVA injection. Scale bars correspond to 2 mm. (B) HE staining of transversal sections of non-human primate skin biopsies taken at the site of injection before or 24, 48, or 72 h after Buffer or rMVA injection. Scale bars correspond to 100 µm. (C) Representative images recorded by in vivo confocal endo-microscopy. Recordings were performed at the site of injection in the dermis (at a depth of 100 ± 35 µm in the skin) with a probe detecting fluorescent signals after excitation at a wavelength of 488 nm. Green fluorescence corresponds to the production of GFP in rMVA-infected cells. Scale bars correspond to 50 µm. (D) Kinetic profile of GFP+ cells in the skin after rMVA injection. The graph represents the mean ± SD of the number of GFP+ cells/mm2 in the 50 frames with the most GFP+ cells. (E) Representative images from time-lapse video-confocal microscopy of HLADR+ (in red) and GFP+ (in green) cells after i.d. injection of rMVA. Skin biopsies were observed for 22 h p.i. Acquisition was performed at a depth of 50–150 µm in the skin (dermis) corresponding to a stack of 10 images. Visual representations correspond to a two-dimensional visualization of this stack at the indicated time p.i., after cropping the region of interest. Scale bars correspond to 20 µm. (F) Kinetic profile of GFP+ cell detection in the dermis during the first 22 h p.i. The graph indicates the number of GFP+ cells counted for each image (1 image each 15 min). Green objects between 200 and 4,000 µm3 were considered to be GFP+ cells. The approximate time of appearance of non-background GFP+ cells is indicated with the dotted line at 14–15 h p.i.
Figure 2. rMVA infects a wide range of skin cell types. (A) In situ localization of cells infected with rMVA. Transversal sections of paraffin-embedded skin biopsies were stained with an anti-GFP antibody and then colored with HE. E and D indicate the epidermis and the dermis, respectively. Scale bars correspond to 100 µm. Arrows indicate GFP+ cells as example. (B) Percentage of rMVA-infected cells in the skin. The dot plots show the evolution of recruitment of the main immune cells to the skin and the percentage of GFP+ cells. (C) Number of GFP+ cells per gram of skin biopsy. The numbers of GFP+ cells in the indicated cell population were normalized to the weight of the skin biopsies. The graph represents the mean ± SD (n = 3). See also Figure S3 in Supplementary Material for gating strategies.
Local cellular recruitment and cytokine production were highly similar over time among the three animals. Cells recruited to the site of injection were mainly inflammatory myeloid cells, with statistically significant increases (Friedman test; p < 0.05) in the numbers of macrophages, granulocytes, and monocytoid cells at 24 and 48 h p.i. compared to baseline (Figure 3A). As expected, the recruitment of inflammatory cells was associated with the local release of inflammatory cytokines and chemokines (Figure 3B). Local levels of a first cluster of pro-inflammatory molecules, including GM-CSF, IL-1β, IL-6, MIP1α, MIP1β, and TNFα, significantly increased (p < 0.05) as early as 24 h p.i. (no earlier time-points were tested), but then rapidly returned to baseline levels by 72 h. A second cluster of inflammatory cytokines, including MCP1, IL-12/23(p40), IL-8, IL-18, and G-CSF, remained high through 72 h (no later time-point was tested), in contrast to the first group. These molecules likely reflect macrophage and granulocyte activity. We also observed an increase in the levels of the angiogenic factors TGFα and VEGF, which are also part of the inflammatory process. This cluster additionally included anti-inflammatory mediators, such as IL-10 and IL-1RA, for which prolonged expression is expected. Finally, cytokines related to an adaptive response, including IL-2 and IL-13, were also detected in this cluster as early as 24 h p.i., indicating that local cells may contribute to the installation of the adaptive response, although no T or B cell recruitment was observed. Adaptive response mediators, including IFN-γ, IL-5, IL-4, IL-15, and IL-17α, were also present in a third cluster, detectable as early as 24 h p.i., and appeared to increase steadily until 72 h p.i. This group also included soluble CD40L, indicating that immunosuppressive factors are also induced, probably as a mechanism to limit the local reaction.
Figure 3. rMVA injection induces local cellular trafficking and cytokine release. (A) Heatmap representation of the cell counts in skin subset populations discriminated by flow cytometry. Cell populations were automatically sorted following hierarchical clustering represented by the dendrogram on the left. The number of cells was calculated by dividing the number of events by the weight of the biopsy. Values were standardized to display the same range of expression values for each cell population to properly visualize the cell population kinetics. See Figure S3A in Supplementary Material for gating strategies. (B) Heatmap representation of cytokine release in the skin after rMVA injection. Cytokine expression was automatically sorted following hierarchical clustering represented by the dendrogram. Values were standardized to display the same range of expression values for each cytokine to properly visualize the cytokine production kinetics. *0.05 > p-value > 0.01 in Friedman’s test over time.
We collected the LNs draining the vaccine injection site 24 h p.i. to study the dynamics of the interaction between tissues in response to rMVA injection. The contralateral LNs were collected as a control. We did not observe significant differences between LNs, as only three animals per group were included at only one time-point. However, one of the animals appeared to have an increase in the proportion of macrophages and granulocytes in the draining LNs (Figure 4A). This cell recruitment profile reflected the local increases observed in the skin. In addition, this animal also showed an increase in the proportion of APCs, such as CD1a+ DCs, as well as T lymphocytes (Figure 4A). Despite considerable heterogeneity, cytokine levels measured in total LN tissue extracts indicated increased levels of inflammatory cytokines, such as IL-6, TNFα, and particularly MCP1, G-CSF, and IL-8 (Figure 4B). This is consistent with the migration of macrophages and granulocytes. Additional cytokines related to adaptive responses, including IL-4, IFN-γ, IL-2, and IL-13, were highly elevated in the draining LNs. Conversely, IL-18 was consistently decreased after MVA immunization. Finally, analysis of transcriptomic profiles showed increased expression of genes associated with the initiation of immune responses in the LN draining the vaccine injection site. Specifically, we found 49 genes differentially expressed between MVA and control samples: 15 were upregulated and 34 downregulated. A pathway enrichment analysis was performed on this differentially expressed gene set. We identified one significantly enriched upstream transcriptional regulator, ATF3, associated with TLR4 signaling (45). Other markers of activation also appeared to be upregulated in the draining LN without reaching a significance threshold, likely because of the limited sample size. We used MDS representations to better visualize similarities and dissimilarities between transcriptomic profiles.
This approach represents the distances between samples of high-dimensional data with greater fidelity than more classical SVD-based methods (40). MDS analysis clearly segregated the transcriptomic signature of the draining LNs from that of the contralateral LNs (Figure 4C).
Figure 4. Immune reaction in the draining lymph nodes (LNs). (A) Heatmap representation of cell populations discriminated by flow cytometry in the inguinal LNs draining the sites of rMVA injection (dLNs) and the contralateral LNs (Non dLNs). Cell populations were automatically sorted following hierarchical clustering represented by the dendrogram on the left. The cell count/gram was calculated by dividing the number of events by the weight of the biopsies, which were collected 24 h p.i. Values were standardized to display the same range of expression values for each cell population to properly visualize the cell population kinetics. See Figure S3B in Supplementary Material for gating strategies. (B) Heatmap representation of cytokine release in the LNs after rMVA injection. Cytokine production was automatically sorted following hierarchical clustering represented by the dendrogram. Values were standardized to display the same range of expression for each cytokine to properly visualize the cytokine production kinetics. (C) Multidimensional scaling representation of transcriptomic signatures in LNs 24 h after rMVA injection. Biological samples are represented as dots in a two-dimensional space. The distances between the dots are proportional to the Euclidian distances between the transcriptomic profiles.
We examined the effect of local cellular and molecular events on systemic immunity in the blood of vaccinated animals. The number of blood CD66high granulocytes significantly increased (p < 0.05) very early (6 h) following vaccine injection (Figure 5A), consistent with the inflammation described in the skin and the draining LN. The level of these cells peaked from 6 to 24 h p.i. and then rapidly decreased to reach baseline levels by 72 h p.i. The increase in granulocyte levels in the blood preceded their recruitment to the skin. Classical monocyte (CD14+CD16−) numbers also increased in the blood by 6 h p.i., but then rapidly decreased by 24 h p.i., probably due to their differentiation into pro-inflammatory (CD14+CD16+) and non-classical (CD14−CD16+) monocytes, which indeed increased by 24 and 72 h, respectively. The increase in classical monocyte numbers also preceded the recruitment of monocyte/macrophages to the skin, similarly to granulocytes. In contrast to the myeloid populations, the number of T and B lymphocytes, NK cells, and pDCs strongly decreased in the blood early after vaccine injection (6 h), and then reappeared later, probably due to their migration to the sites of injection and draining lymphoid tissues. We also observed an increase (p = 0.0559) in the number of CD14+HLADR−CD33+ cells, expressing a phenotype corresponding to myeloid-derived suppressor cells (CD14+ MDSCs), at 24 h p.i., preceded by a transient increase in the number of early-stage Lin− MDSCs-like cells (46), between 6 and 24 h p.i. Early appearance of these cells may represent a natural mechanism to control initial inflammatory processes.
Figure 5. Systemic effects of intradermal rMVA injection. (A) Heatmap representation of blood cell populations discriminated by flow cytometry. Cell populations were automatically sorted following hierarchical clustering represented by the dendrogram on the left. The number of cells/milliliter was calculated using the number of leukocytes obtained by complete blood count analysis. Values were standardized to display the same range of expression values for each cell population to properly visualize the cell population kinetics. See Figures S3C,D in Supplementary Material for gating strategies. (B) Heatmap representation of cytokine release in the blood after rMVA injection. Plasma cytokine titrations were automatically sorted following hierarchical clustering represented by the dendrograms on the left. Values were standardized to display the same range of expression for each cytokine to properly visualize the cytokine production kinetics. (C) Multidimensional scaling representation of transcriptomic signatures in the blood after rMVA injection. Biological samples are represented as dots in a two-dimensional space. The distances between the dots are proportional to the Euclidian distances between the transcriptomic profiles. (D) Heatmap representation of functional enrichment. Data were extracted from differentially expressed genes relative to the baseline condition and functional enrichment expressed as −log(p-value), with a cutoff −log(p-value) > 3 for at least one time-point. Only differentially expressed canonical pathways and upstream regulators linked to the immune response are represented. Arrows correspond to the sign of the z-score, indicating the orientation of the expression of the group of genes (n = 3). *0.05 > p-value > 0.01; **0.01 > p-value > 0.001; ***0.001 > p-value in Friedman’s test over time.
Cytokine titration indicated that IL-6 levels transiently increased from 6 to 24 h in the blood, along with molecules associated with acute inflammation, such as IL-1β, MIP1α/β, and MCP1, which persisted in the blood through 72 h p.i. (Figure 5B). These cytokine profiles were similar to those observed in the skin. In contrast, TNFα clearly exhibited different kinetics in skin and blood, suggesting specific roles in the two compartments. We observed similar differences for IL-12 and TGFα. In addition, the levels of cytokines associated with adaptive responses, including IL-2, IL-4, IL-17α, IL-15, and IL-5, progressively increased up to the latest time-point (72 h) of the study. The cytokine titration profile in blood (Figure 5B) was clearly distinct from that of the skin (Figure 3B), suggesting that the cytokines found in the blood vessels of the skin had little, if any, influence on the cytokines measured in total tissue extracts.
We found a total of 742 genes differentially expressed in at least one time-point relative to control samples. A total of 123 genes were down- or upregulated at 6 h, 354 genes at 24 h, 361 genes at 48 h, and 89 genes at 72 h. MDS analysis restricted to the set of differentially expressed genes in blood revealed a clear segregation between time-points (Figure 5C). The 24 h p.i. time-point was the most distant from the baseline. Gene expression levels at 6 and 48 h p.i. showed an intermediate distance from baseline and had close gene expression profiles. Although the 72 h p.i. profile was the closest to the baseline values, it remained significantly distant. Functional enrichment analysis revealed that the expression levels of genes associated with several canonical pathways and upstream regulators were significantly modified (p-value < 10−3) in blood cells (Figure 5D). Genes associated with the inflammatory response (TNFα, IL-6, IL-32, and acute phase response signaling) were upregulated at early time-points, consistent with changes in the cellular composition of the blood and cytokine levels in plasma. In contrast, and consistent with T-cell migration out of the blood circulation, canonical pathways and regulators that contribute to the lymphocytic response, such as IL-2, CD28 signaling, and iCOS-iCOSL signaling, were downregulated. Transcripts that regulate important innate canonical pathways, e.g., antigen presentation pathways, CCR5 signaling in macrophages, IL-10 signaling, or NK-DC crosstalk, showed some change without any overall clear up- or downregulation (Figure 5D).
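The multidimensional scaling used here places samples in a two-dimensional plot so that inter-point distances reflect the Euclidean distances between transcriptomic profiles. A minimal classical-MDS sketch, assuming profiles are rows of a matrix (the sample counts, gene counts, and data are hypothetical, and the study's actual MDS implementation is not specified):

```python
import numpy as np

def classical_mds(X, k=2):
    """Project samples (rows of X) into k dimensions so that pairwise
    Euclidean distances are preserved as well as possible."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # squared Euclidean distance matrix between all sample pairs
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ sq @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]    # keep the k largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# hypothetical expression profiles: 4 samples x 5 genes
rng = np.random.default_rng(0)
pts = classical_mds(rng.normal(size=(4, 5)), k=2)
```

For an exact Euclidean distance matrix whose configuration is intrinsically k-dimensional, this recovers the original pairwise distances up to rotation and reflection.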
We integrated our data in a global analysis to provide a comprehensive overview of the early immune reaction following vaccination. We calculated Pearson correlations (cutoff: |R| > 0.8 and q-value < 0.05) for the whole dataset to establish links between the parameters of the three studied compartments (Tables S1 and S2 in Supplementary Material). This led to the construction of a co-expression network in which the nodes correspond to the biological variables and the connecting lines correspond to significant correlations (degree of connectivity) between two variables (Figure 6). Non-draining LNs were considered to be the untreated condition for draining LNs. The parameters were organized according to their type (cell, cytokine, or gene) and their origin (skin, blood, or LN). Parameters that did not lead to significant correlations are not represented. Cytokines measured in the skin strongly correlated with cytokines in draining LNs (48 connections), including inflammatory mediators such as TNFα, MIP1β, IL-6, and IL-1β, as well as factors associated with monocyte/macrophage and granulocyte recruitment, e.g., TGFα, G-CSF, and GM-CSF. Cytokines produced in the skin were strongly associated with gene products of the LNs (41 connections). However, three out of four of these gene products had unknown functions; the fourth (PLK1) is involved in cell proliferation. Blood and skin were also connected through their cytokines, but to a lesser extent (35 connections), with TNFα and TGFα emphasizing again the role of granulocytes and monocyte/macrophages, and IL-17α, IL-15, IL-5, and IL-12/23 suggesting the impact of the injection site on establishment of the adaptive response. In addition, CD14+ MDSCs from the blood strongly correlated with cytokines produced in the skin (11 connections). Gene products from the blood represented the largest group connected to the other parameters (38 genes with significant correlations).
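The thresholding step behind this network can be sketched as follows. The variable names and values are hypothetical, and the study's additional q-value < 0.05 filter (which requires a multiple-testing correction across all pairs) is noted in the comments but omitted from this sketch.

```python
from itertools import combinations

import numpy as np

def correlation_edges(data, names, r_cut=0.8):
    """Return network edges: pairs of variables whose Pearson correlation
    exceeds the |R| cutoff. In the study these edges would additionally be
    filtered on q-value < 0.05 after multiple-testing correction."""
    R = np.corrcoef(np.asarray(data, dtype=float))  # rows = variables
    edges = []
    for i, j in combinations(range(len(names)), 2):
        if abs(R[i, j]) > r_cut:
            edges.append((names[i], names[j], round(float(R[i, j]), 3)))
    return edges

# hypothetical variables measured at four time-points
data = [[1, 2, 3, 4],    # e.g., a skin cytokine
        [2, 4, 6, 8],    # an LN cytokine, perfectly correlated with it
        [4, 1, 3, 2]]    # an unrelated gene product
edges = correlation_edges(data, ["skin_TNFa", "LN_TNFa", "geneX"])
```

A node's degree of connectivity is then simply the number of edges it appears in, which is what sizes the nodes in Figures 6 and 7.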
The gene products with the highest degree of connectivity were involved in the regulation of apoptosis (MAGED1), cell activation (DDT, GNLY, GNB2L1), cellular signaling pathways (RASAL3, PRKCH), cell mobility (GRK5), and cell proliferation (LAS1L, NUSAP1, HINT2, KIF22, EML3, CENPB, REXO1, PARP9, ELF2). In addition, these gene products from the blood were strongly connected to the pro-inflammatory cytokine IL-18 produced in the LNs (28 connections). We also organized the correlated biological parameters according to their peak of expression over the time following vaccine injection to evaluate the chronology of the events (Figure 7). This representation associated the blood, skin, and the LNs at 24 h p.i., at the peak of the inflammatory response, with monocyte/macrophages, granulocytes, and related cytokines as major actors. This kinetic profile confirmed the early involvement of actors of inflammation relative to those of the adaptive responses. In addition, it also showed that the control of the inflammatory responses was initiated early, with Lin− MDSCs detected in the blood 6 h p.i. and CD14+ MDSCs 24 h p.i.
Figure 6. Co-expression network showing the correlations between each biological variable evaluated. Data were sorted in groups depending on the compartment (blood, lymph node, or skin) and type (cytokine, cell, or gene). Variables were then sorted depending on their size which is proportional to the number of correlations (degree of connectivity) they establish. All lines linking two parameters correspond to significant correlations between these variables (|R| > 0.8, q-value < 0.05).
Figure 7. Co-expression network based on the kinetics of recruitment of key effectors of the Modified virus Ankara-induced innate response. The most highly correlated variables (degree of connectivity > 10) were sorted according to their compartment, their type, and their kinetics of expression after rMVA injection.
Here, we aimed to better understand how cellular and molecular vaccine-induced innate immune parameters interact with each other and how they could contribute to eliciting a vaccine response profile. To our knowledge, this study is the first to simultaneously characterize the local and systemic early kinetics in an NHP model after vaccination. For this purpose, we used systems vaccinology approaches to integrate innate parameters and identify key factors that can impact the adaptive responses. We used a multimodal analysis strategy to characterize the early steps of the immune response to MVA infection at the local site of injection, in the draining LN, and in the blood. MVA cell infection started 15 h p.i., involved several cell populations, and persisted for 3 days. The local innate response was characterized by early massive recruitment of granulocytes, macrophages, and monocytoid cells, associated with local production of GM-CSF, IL-1β, MIP1α, MIP1β, and TNFα. The innate response was also initiated at the systemic level with rapid and transient granulocyte recruitment and the release of IL-6 and IL-1RA from 6 to 24 h p.i., followed by a persistent phase involving inflammatory monocytes. Systemic inflammation was confirmed by molecular signatures, such as those of genes that upregulate the IL-6 and TNF pathways and acute phase response signaling. Finally, integration of the data in a co-expression network allowed visualization of the relationships between the three compartments and the chronology of the events.
As a modified form of the vaccinia virus, MVA binds to a wide range of host cells and fuses with the cell membrane without the requirement of specific cell receptors, but the ability to fully complete the replication cycle is dependent on downstream intracellular events (44). The impaired ability of MVA to fully replicate in mammalian cells (47) has no apparent effect on recombinant gene expression (48). These properties are consistent with the detection of the recombinant gene (eGFP) in several cell subsets and the broad stimulation of innate immunity, suggesting that our observations are representative of diverse rMVA vectors expressing vaccine antigens, such as those for HCV, HIV, HBV, Ebola, and malaria. By infecting a large variety of cells, MVA is widely sensed by several PRRs, such as TLR2, TLR6, MDA5, NALP3 inflammasome pathways (11), or cGAS (49). These PRRs are expressed in several resident cell types of the skin, including immune cells, as well as keratinocytes (50). Consequently, the recruitment of inflammatory cells, mostly originating from the blood, was expected as a part of the early response to the vaccine (6, 51).
We observed no clear-cut DC migration, which was possibly outshone by the magnitude of the inflammatory response. Pathogen sensing and the important cellular trafficking observed are likely associated through the microenvironment modifications mediated by local cytokine release, particularly G-CSF, GM-CSF, IL-1β, IL-6, MIP1β, and TNFα which contribute to the recruitment of granulocytes and macrophages and increase their cytotoxic activity and migration to the inflammation site. This localized cytokine release, along with local apoptosis of granulocytes, led to resolution of the inflammation (52), shown by the decrease of pro-inflammatory cytokine levels and the increase of IL-2 and IL-15, opening the way to potential local specific T-lymphocyte responses.
The local immune response to rMVA was associated with considerable systemic inflammation which was initiated before the detection of the first infected cells expressing the recombinant antigen (eGFP). We postulate that the first immune signals were delivered by resident skin cells through inflammatory eicosanoid mediators soon after rMVA administration (52), leading to rapid granulocyte release from the bone marrow and migration to the site of injection through the blood. The level of IL-6 and IL-1RA and the upregulation of acute response pathways 6–24 h p.i. may be associated with the increase of granulocytes in the blood. The downregulation of several pathways related to lymphocyte responses could be linked with a decrease of lymphocytes in the blood. In addition to lymphocytes, CD16− NK cells and CD33+ DCs also left the systemic compartment, but were not observed at the site of injection. They may modify their phenotype upon migration and differentiation and later migrate directly to the LNs.
The correlations between the biological parameters suggest that the magnitude of the inflammatory response directly orchestrates the early steps of the adaptive response. This is illustrated by the early correlations between IL-2 levels in the skin and LNs. In addition to TNFα, this cytokine leads to a Th1 response, consistent with the primary T cell response profile induced by MVA (43, 53). We also identified IL-15 as an early key parameter detected in the skin and in the blood. This cytokine, which is strongly associated with IL-2 and mainly produced by monocytes and macrophages, could also modulate the immune response toward a Th1 profile (54). However, our analysis also revealed the coexistence of effectors of the Th2 response, such as IL-4, IL-5, IL-10, and IL-13, in the LN. Moreover, the level of the pro-inflammatory cytokine IL-18, which is associated with orientation toward the Th1 response, was decreased after MVA immunization and thus inversely correlated with several recruited cell types and cytokines. It may, however, be a signature of inflammasome activation in subcapsular macrophages before 24 h p.i. and thus be consistent with MVA-induced responses observed in mouse models (55). The MVA genome lacks functional copies of many genes that normally interfere with the host response to infection. Nevertheless, despite the strong inflammatory reaction, the only significantly modified upstream regulator associated with type 1 IFN genes was downregulated and only detected at 72 h p.i. in the blood. This might reflect the presence of interferon resistance genes, such as E3L, which is still functional in MVA (56).
The co-expression network highlighted interactions between the innate effectors by revealing hubs within the kinetics of the global immune response. Notably, it allowed identification of critical immune mediators, such as TNFα and MDSCs. TNFα was expected to be a key factor of the innate response as it is a central pro-inflammatory cytokine. This cytokine is indeed produced and released by granulocytes and macrophages and delivers signals leading to tissue necrosis. In certain contexts, TNFα could activate Langerhans cells (6), and induce T cell responses (57). The recruitment of CD14+ MDSCs was unexpected, since these cells are described as being mainly associated with immunosuppressive processes, such as tumor immune evasion and lymphocyte suppression through the production of reactive oxygen species (58, 59). Nevertheless, similar MDSCs were observed in the blood of rMVA-SIV-vaccinated macaques and were found to exert CD8+ T-cell suppression in vitro (60). In vaccine development studies, the induction of MDSCs was associated with the use of adjuvants (61, 62). Our results show that the involvement of MDSCs occurred as early as 24 h p.i., indicating that the resolution of the anti-MVA response may start very early. Further studies would be necessary to define whether the biological signatures identified in this model would be shared with the responses to other vaccines.
Our original approach combining in vivo imaging, histology, flow cytometry, multiplex cytokine analysis, and transcriptomic analysis using tools derived from systems biology, such as correlation networks, shows that the vaccine-induced immune response is a continuum of events which influence and synergize with each other. Of note, we took advantage of these bioinformatic tools to statistically capture, in an efficient manner, this expression continuum with a limited number of animals. These synergies correspond to a cascade of reactions that depend on the original stimulus. We showed that the magnitude of the response of early effectors, such as local granulocytes and TNFα release, is directly associated with the first steps of the orientation of the adaptive response in the draining LNs. Furthermore, MDSCs should be considered to be a potential component of the signature of the response. Identification of such signatures should improve our understanding of how to effectively orientate the immune response, and could contribute to rational vaccine development.
Adult male cynomolgus macaques (Macaca fascicularis), imported from Mauritius, weighing from 4 to 6 kg, were housed in the CEA animal facilities (Fontenay-aux-Roses, France). The macaques were handled in accordance with national regulations (Permit Number A 92-32-02), in compliance with the Standards of the Office for Laboratory Animal Welfare (OLAW) under Assurance number A5826-01, and the European Directive (2010/63, recommendation No. 9). This project received the government authorization Number 12-013. Interventions were performed by veterinarians and staff of the “Animal Science and Welfare” core facility after sedation with ketamin hydrochloride (10 mg/kg, Imalgen®, Rhône-Mérieux, Lyon, France).
Conceptualization: PR, YL, RG, FM, and CJ; methodology: PR, ND-B, A-SB, CC, RG, FM, and CJ; validation: PR, CC, RG, and FM; formal analysis: PR and NT; investigation: PR, LS, and HH; resources: NT and HH; writing—original draft: PR; writing—review and editing: PR, NT, LS, RG, FM, and CJ; funding acquisition: YL and RG; and supervision: YL, CC, RG, and FM.
This work was supported by the French government “Programme d’Investissements d’Avenir” (PIA) under Grant ANR-11-INBS-0008 funding the Infectious Disease Models and Innovative Therapies (IDMIT, Fontenay-aux-Roses, France) infrastructure, the PIA grant ANR-10-EQPX-02-01 funding the FlowCyTech facility (IDMIT, Fontenay-aux-Roses, France), and the PIA ANR-10-LABX-77 grant funding the Vaccine Research Institute (VRI, Créteil, France). The Agence Nationale de Recherche sur le SIDA et les Hépatites Virales (ANRS, Paris, France) supported this work. The European Commission Advanced Immunization Technologies (ADITEC) Grant FP7-HEALTH-2011-280873 also contributed to this research.
The Supplementary Material for this article can be found online at https://www.frontiersin.org/articles/10.3389/fimmu.2018.00870/full#supplementary-material.
Copyright: © 2018 Rosenbaum, Tchitchek, Joly, Stimmer, Hocini, Dereuddre-Bosquet, Beignon, Chapon, Levy, Le Grand and Martinon. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Great FCPX Event today in Hollywood!
Special thanks to Michael Cioni of Light Iron for a really well organized and informative event in Hollywood today. I got there before 10 AM and there were already over 100 people in line. I really think this will have a bigger impact than the Cold Mountain story had on FCP 3, as hundreds turned out to learn about the workflow on FOCUS. Light Iron, ARRI and yes even APPLE were there showing the post benefits of FCPX and ProRes.
Re: Great FCPX Event today in Hollywood!
We really should've done a better job of coordinating a COW meet up. Well, that's assuming we actually want to meet each other! haha I found Bill, but no one else.
I too thought the event was well done. Good job to Light Iron for coordinating/hosting it. The presentation was broader in scope than I had anticipated, which means there wasn't a lot of nitty-gritty detail, but the book and the FCP.co articles can fill in those gaps.
[Scott Witthaus] "Wait. FCPX and Hollywood? That just can't be! I mean, really..."
lol, I know huh? Seriously though, while light iron was obviously interested in selling their services, and Apple was there with hardware and running demos, this really was about the process of using FCP X in the workflow of the film. And from the point of view of the folks involved (co-director, producer, editorial, color, DIT etc) it was nothing like the Cold Mountain Clusterf*ck. Not without hiccups, but they really enjoyed it. The editor described it as "the happiest year of my life".
[Charlie Austin] "Anyway, pretty cool event, and there was free lunch as well. Very important! :-)"
More important than the free lunch was the free beer!
[Charlie Austin] "The editor described it as "the happiest year of my life"."
Did the editor talk about his/her experience in the past? What platforms, software, etc.? And why was this year so happy?
I was at a Sony /Avid 4k thing last week and the Avid demo artist just went through a list of features. I think she must have been tired. She just read off a piece of paper and it was like "oh this is cool, you can go to this menu and change ratio...." That type of demo. Nothing inspiring. Would be interesting to know why that Focus editor was so happy.
The Directors are also Guild Editors so, along with Editor Jan Kovac and 1st Asst. Michael Matzdorff, all praised FCPX over all other NLEs they've used and said they figured it had about a 3-to-1 fewer-clicks advantage over Avid.
They all liked it enough that they're already using it on their next feature and Warner Bros. is totally cool with it. They also mentioned Dean Devlin is cutting his next feature on it also.
[Lance Bachelder] " Dean Devlin is cutting his next feature on it also."
Lance, is this the same guy that was on Leverage? Wasn't that an X show?
Yes and there's a profile for his studio on the FCP site. Probably his next movie "Geostorm"? He wasn't at the event but apparently its a Warners show so FCPX alive and well on the lot.
[Lance Bachelder] "Yes and there's a profile for his studio on the FCP site. Probably his next movie "Geostorm"? He wasn't at the event but apparently its a Warners show so FCPX alive and well on the lot.
[Scott Witthaus] "And why this year was so happy?
[Scott Witthaus] "I was at a Sony /Avid 4k thing last week and the Avid demo artist just went through a list of features. I think she must have been tired. She just read off a piece of paper and it was like "oh this is cool, you can go to this menu and change ratio...." That type of demo. Nothing inspiring. Would be interesting to know why that Focus editor was so happy."
I think that can just be the difference between a demo artist who's made the same speech 100 times before and possibly never used/needed the features they are demoing. Whereas a user, if they have a really good experience, will have a really good story to go along with it.
For example, the first time I made multicam clips in Premiere Pro CC I wanted to jump up and hug the iMac I was working on. I was completely floored that, literally, what would've been hours worth of work in Avid or FCP 7 was done in minutes in PPro. The producers I was with were like "Yeah, that's pretty neat." and I was like, "Man, you are not appreciating the mountain of awesome that towers before us!"
[Andrew Kimery] "I think that can just be the difference between a demo artist who's made the same speech 100 times before and possibly never used/needed the features they are demoing. Whereas a user, if they have a really good experience, will have a really good story to go along with it."
I agree. If this Avid demo artist could have wrapped a workflow, or even a real world project into her demo, it would have been much more effective. As it was, she didn't sell any MC subscriptions that day.
[Scott Witthaus] "I agree. If this Avid demo artist could have wrapped a workflow, or even a real world project into her demo, it would have been much more effective. As it was, she didn't sell any MC subscriptions that day."
During the X panel there was a rep from Arri that gave a short presentation and he was very nervous. Good technical info but the DP from Focus probably could have done a better job at 'selling' the camera than the official rep did.
Really fun day! Got to hang with Andrew and Patrick and Leo and others. Matzdorff was kinda mobbed so all I had time to do was wave hi and goodbye. Bummed I missed others. Only sorry we didn't do something formal-er as well. Phil and Greg were generous to provide transport - and Mike Horton joined us for beers. Felt a bit like a mini NAB preview. Based on what was said from the stage, a big day for X. Cioni's talk about "cost per frame" and the general tone of how, after all the testing and tuning, X turned in a stellar workflow performance on Focus was really nice to hear. And not just from one perspective, but department after department. Stage mention of Dean Devlin's next project and other hints say this will not be a one time thing. There appears to be a legitimate new workflow in town. Who knows what's next in FCPX land.
My favorite line of the day from Mr. Cioni: "Don't settle for a 'default future'!"
That's good advice for everything in life... not just post production.
I concur Mark - I was sitting there in the theater (2 seats from you by the way) thinking this guy's the next CEO of Apple - very inspiring. His company/staff was just on it all day - far better than many of the NAB seminars etc.
[Mark Raudonis] ""Don't settle for a "default future"!"
Gone Girl Editor: "As an editing platform, it's similar to the others I use,"
Focus Director: "We worked with Avid editors on our features and it struck us that things just hadn’t progressed. There was this huge reinvention of how you edit and then it stopped. We had shot a movie digitally, but it felt as if we were still cutting on a flatbed."
[Charlie Austin] "Kind of interesting. The Focus director said he started using X because everyone hated it.. :-)"
In all of this did anyone notice that Panavision had purchased Light Iron?
[TImothy Auld] "In all of this did anyone notice that Panavision had purchased Light Iron?"
Until you mentioned it I had forgotten about that. Though there was a thread about it when it happened.
[Andrew Kimery] "Given the 'pirate' attitude they talked about I was surprised they shot on Arri (one of the oldest of the old guard) as opposed to, say, RED. And, interesting enough, Gone Girl shot on RED (though I think that was more for the 6K than for Jannard's attitude). ;)"
The Alexa is hardly old guard.
[Jeremy Garchow] "The Alexa is hardly old guard.
The Alexa is very easy to use, produces awesome imagery, and has a very easy workflow. Sound familiar? :)"
I'd argue a large part of their success in going digital is because they gave post and production a camera and workflow that seems familiar (as opposed to REDs approach of trying to blow the whole thing up).
[Andrew Kimery] "Arri has been in the motion picture camera business since the early 1900s. If that doesn't qualify as old guard, I don't know what would. ;)"
For film cameras, obviously, but hey, Apple has been around since 1976. In tech years, that's ancient. Certainly longer than Adobe, Avid, Pro Tools, Autodesk, and virtually any company here but the Japanese camera makers among us.
Microsoft is only 1 year older than Apple. HP workstations didn't come along until the 80s. Olllllld guard.
The point about Alexa NOT being in the old guard is that she was the first digital camera to record to ProRes. At Alexa's launch, she ONLY recorded to ProRes. Not the case anymore of course, but still. Hardly the behavior of the old guard.
Old guard thinking is sticking to RAW and proprietary formats, and trying to strong-arm manufacturers into supporting it. How many camera companies have done this over the years? Until Alexa, damn near all of them.
New guard thinking is to let customers dictate your corporate strategies -- which both Blackmagic and AJA (who of course pioneered hardware encoding to ProRes) learned from Alexa very, very well indeed.
Alexa remains critical to messaging about FCP/FCPX workflows because roughly 90% of episodic shooting is on Alexa, and while I don't have specific numbers, she dominates feature shooting as well. Not walking hand in hand with Alexa means marginalization, full stop.
NOT because Alexa is old guard, but because she combines the benefits of old guard commitment to imaging, with cutting edge adoption of and adaptation to what customers are actually doing, and want they want to do going forward.
Hence the "late" introduction of Alexa five years AFTER Red, but leaving it in the dust. Red is still trying to force old guard, workaround-laden workflows, and Alexa is way, way ahead of the game.
Arri has been in the motion picture camera business since the early 1900s. If that doesn't qualify as old guard, I don't know what would. ;)"
Arri is an old company with a rich history. Sure.
The Arri Alexa is not an old camera.
I think there's a big difference. The thought and engineering that went in to the Alexa (and now the Amira) does not represent a company that is relying on the 1900s for inspiration.
The Alexa does borrow what works well on a film workflow, from exposure, to ergonomics, to color science, but adopts fairly easy digital workflows for modern post. It uses Sony SxS cards that can be used in other cameras, it shoots to ProRes, DNxHD, and Raw.
Red, on the other hand, has a different workflow where it puts Raw and pixel size, first and foremost, and all that comes with it.
Oops. As always, Tim Wilson said it better.
At the Focus event there was lot of rhetoric from the presenters (well, not the Arri presenter) about wanting to rock the boat, wanting to be pirates, not doing their jobs unless they are making people uncomfortable, etc., and that's what I'm riffing on (though it obviously wouldn't be apparent if you weren't at the presentation). It's the posturing, not the product choices that caught my attention. I mean, when was the last time choosing to shoot a feature film on an Arri camera was a rebellious, radical, middle-finger-to-the-establishment decision?
Arri set out too make a great digital camera that could easily slip in to existing production and post workflows where as Apple set out to make a great NLE that would disrupt existing workflows. Sometimes very good products come from relatively young companies, sometimes very good products come from relatively old companies. Sometimes very good products cause disruption and sometimes very good products fit in to place without anyone missing a beat.
For the record, the Arri D-20 (2005) predates the RED One (2007).
[Andrew Kimery] "For the record, the Arri D-20 (2005) predates the RED One (2007)."
Yes, and D-20 is the quintessential old guard camera. Exactly the kind of camera you'd expect ARRI to produce, exactly the kind of camera that Red could plow under the ground in short order. And did.
I don't think it's fair to expect cameras to flip the bird. Which is the camera that did that? Red. Still flipping the bird. Yes, and innovating, with much innovating ahead of it. No disrespect intended. Fincher has been using them to fantastic effect.
So I AM making the observation that ARRI flipped the bird to other camera makers with Alexa. Maybe not flipping the bird as much as "throwing down," an image I've always loved. Completely upended the entire approach to customers, including the relationship of camera makers to NLE developers. On the camera front, they forced even the "little guys" to meet them on ground that ARRI defined.
Maybe it's because of the difference in our ages -- I'm old enough to be your father, so get off my lawn -- but I don't think that disruption requires a middle finger. Which middle finger did Apple flip when they introduced iPhone? None of them.
The other thing to keep in mind is that, until BMD and now AJA, even the "flip your middle finger" smaller cameras all sat in the old guard space -- "We pick OUR footage format to cheat in favor of our own imagers. Working with it is YOUR problem."
I'm conceding that on the whole, cameras don't lend themselves to flipping the bird. But my point is that there are still baseline requirements for making big-time, big-budget pictures.
The point of X on Focus is NOT that "we can make a guerilla-style run and gun feature with X." The point of X on Focus is that we can do exactly what we need to do in order for X to be at the heart of big-budget blockbuster filmmaking. And for that, you need grown-up, get off my lawn, images.
After all, Cold Mountain was shot on film. And for that matter, it wasn't the first feature cut with FCP. Who remembers Roger Avary's Rules of Attraction? Cut on FCP a full year before Cold Mountain, but it didn't move the needle. No feature shot on DV was EVER going to move the needle.
For that matter, Apple enthusiast Steven Soderbergh shot Full Frontal on DV at the same time Avary was shooting Rules of Attraction...and DP Soderbergh never asked Director/Producer Soderbergh to shoot DV again. DV was nominally revolutionary, but not in any way practical except for non-mainstream uses.
And the most revolutionary aspect of Cold Mountain and Focus is that they're mainstream. Big stars, big budgets, large format theaters, the whole thang.
Making the irony that, for maximum revolutionary bang for the NLE, the rest of the project has to flip the fewest birds possible. Or else the use of the NLE is part of the entire story of "this picture doesn't really matter for changing the game." That is, you have to PLAY the game to CHANGE the game.
[Tim Wilson] "I don't think it's fair to expect cameras to flip the bird."
I'm not expecting camera makers (or any other manufacturer) to flip the bird. I'm just pointing out that a group of self-professed pirate rebel disruptors of the status quo opted to use a camera that is as accepted by the establishment as their choice of NLE is rejected by the establishment.
[Tim Wilson] "The point of X on Focus is NOT that "we can make a guerilla-style run and gun feature with X." The point of X on Focus is that we can do exactly what we need to do in order for X to be at the heart of big-budget blockbuster filmmaking."
The presenters at the event repeatedly referred to themselves as pirates (and had repeated calls to action for all the other 'pirates' in the audience) and said that if they weren't doing/saying things that made other people uncomfortable they weren't doing their jobs. Their words, not mine. RED, which makes nice looking images too, would fit that mantra better than Arri, IMO. Does it really matter? No, just an observation I had while at the event.
[Tim Wilson] "I don't think that disruption requires a middle finger."
Neither do I, which is why I'm enjoying poking fun at some of the over-the-top rhetoric used by the presenters. The apparent smoothness of using X on Focus speaks for itself so the rhetoric, IMO, was unnecessary. The crowd ate it up though so maybe I'm just not the target demo for that line of salesmanship.
Coming back full circle, Charlie made the observation that while both Gone Girl and Focus used 'unproven' NLEs, Gone Girl went a more traditional route with PPro and Focus went a less traditional (some might say hated) route with X. In a cheeky response I just pointed out that when it came to the camera decision the roles seemed reversed.
[Andrew Kimery] "At the Focus event there was lot of rhetoric from the presenters (well, not the Arri presenter) about wanting to rock the boat, wanting to be pirates, not doing their jobs unless they are making people uncomfortable, etc., and that's what I'm riffing on (though it obviously wouldn't be apparent if you weren't at the presentation). It's the posturing, not the product choices that caught my attention. I mean, when was the last time choosing to shoot a feature film on an Arri camera was a rebellious, radical, middle-finger-to-the-establishment decision?
If the Focus team said they set out to pick the best tools for the job then that's one thing, but if you are selling yourself as a rebellious pirate bent on disrupting the status quo then, I'm sorry, you can't pick the most popular d-cinema camera made by one of the oldest companies in the film biz. haha Just doesn't fit the mantra."
I see. Well, yes. I wasn't there so I do not have the luxury of the presentation perspective.
In general, picking "Apple" (like picking "Arri") is a pretty safe bet: Apple computers (devoid of Apple NLEs) have a long-standing film and digital film track record and pedigree. Picking the specific tool (FCPX) is nowhere near as safe.
Not only that, for 85% of the film they used ProRes 4444, another Apple pick, rather than ARRIRAW (which they saved for VFX).
So while the Alexa has roots in an 'old guard' company, you could say that FCPX is from an 'old guard' company relative to their respective industries and time on Earth. What isn't old guard is the physical tool. The Alexa is tested, X is not. Not only that, they had to run this up the flagpole and get a studio to support them. That couldn't have been very easy, and it included a contingency plan of getting the whole feature back to FCP7 if needed.
[Andrew Kimery] "Arri set out to make a great digital camera that could easily slip into existing production and post workflows, whereas Apple set out to make a great NLE that would disrupt existing workflows. Sometimes very good products come from relatively young companies, sometimes very good products come from relatively old companies. Sometimes very good products cause disruption and sometimes very good products fit into place without anyone missing a beat."
Arri's camera fit into NEW existing workflows. Workflows where the offline and the online are not that far away from each other, where editorial and finishing are using the exact same media, where editing is happening right next to the set. That is a relatively new concept, and ProRes 4444 with FCPX fits really well here, even on the consumer iMac computers being used on set. It's not that other NLEs couldn't do this, it's that FCPX can do this, and the Arri Alexa helps them to do this in a pretty efficient manner at an extremely high quality.
I think the pirate comments are more about choosing FCPX and getting a studio to OK it, flying in the face of the internet backlash from professionals with loud typing voices that have been against the notion of FCPX from the very first day it was shown to the public.
[Jeremy Garchow] "...it's that FCPX can do this, and the Arri Alexa helps them to do this in a pretty efficient manner at an extremely high quality."
Speaking of the online workflow though, both the editor and director commented more than once about how easy it was to switch from the full res to the proxy files w/in FCP X which makes me think they utilized that somewhat often. With editing collaboration happening offsite it would make sense for the full res files just to exist onsite and for people to take the proxies w/them for offsite work. I didn't ask though so I don't know how often (and for what purposes) they switched between the online and offline quality footage.
[Jeremy Garchow] "I think the pirate comments are more about choosing FCPX and getting a studio to OK it, flying in the face of the internet backlash from professionals with loud typing voices that have been against the notion of FCPX from the very first day it was shown to the public."
[Andrew Kimery] "And I'm certainly not trying undermine anything the team did. At the time it was just a weird juxtaposition where there was a lot of talk about all these readily available tools (iMacs, iPads, FCP X, a laundry list of apps from the Mac App Store, etc.,) and then Arri. Suddenly the "one of the these things is not like the others" song from Sesame Street starts playing in my head. haha"
I guess we feel differently about the Alexa in general. I see it, albeit rather expensive to shoot and rent on a day to day basis, as a very accessible and rather easy to use tool with exceptional quality. As Tim W points out, there's a reason the Alexa is so pervasive, and besides the image quality, it's also the workflow.
There's a big difference between using off the shelf production gear (let's just say something like a DSLR) and off the shelf post-production gear. It goes to show what a lot of us have been arguing over the last few years. How much of a beast do you need to edit (truly edit) a feature? If you use the big budget movie of Focus as the example, it turns out you need just enough snazzle and not a metric ton of sizzle, and yet you are still able to pass the quality on to the finish (which in this case was Pablo).
For cameras, on the other hand, there's a big difference between The Alexa and a DSLR. One day soon, those two technologies will also converge. We all know it's already starting to happen.
I wonder if you are saying that FCPX isn't up to the task because it uses off the shelf hardware? I'd say that the Focus team proved that it could be done, and it took a lot of "yarrrrrggs!" to convince both the studio and the editing public. But even I don't think there'd be enough parrots and peg legs to persuade them to shoot on a DSLR.
[Andrew Kimery] "They also dislike the current state of post audio (really, really dislike the current state of post audio) and I think any part of the workflow that isn't electronic. lol"
Can you explain what you mean by this? Meaning they didn't use Sync-N-Link at first?
[Jeremy Garchow] " it took a lot of "yarrrrrggs!" to convince both the studio and the editing public."
You're exactly right, illustrating to me exactly where this metaphor falls completely apart for me. The filmmakers are all "Woo-hoo! We're pirates! We chose X to piss people off!" or whatever, but you know what? Pirates used the same boats the navies did, maybe just better tuned to the task. But the reason they were able to run from navies is because their ships were state of the art, and every bit as expensive.
Not only were they no less pirates for those ships, those ships are all that enabled them to BE pirates. That, and pirate-y attitudes and, uhm, workflows. The pirate part is who's ON the ship...but no fancy ship, no pirates.
That's why Alexa and Pablo have ZERO to do with whether or not these guys were pirates. I'm 100% on board with acknowledging them as pirates (see what I did there?), but expecting them to have used cheap cameras or cheap DI to maintain pirate cred is the OPPOSITE of what a pirate would do in this situation. Can't be running on the high seas with DV.
I know I'm belaboring a metaphor...but maybe THAT's what we should rename the forum. FCPX or Not: Belaboring Metaphors.
I do think this is an important one, though, and it speaks to the fundamental nature of a revolution like this, which necessarily includes absolutely state of the art tools. The point of the Focus with X story is to establish that X absolutely IS state of the art.
I mean, they're no less pirate-y for using Technocranes or a video village, either.
[Jeremy Garchow] "I guess we feel differently about the Alexa in general. I see it, albeit rather expensive to shoot and rent on a day to day basis, as a very accessible and rather easy to use tool with exceptional quality. As Tim W points out, there's a reason the Alexa is so pervasive, and besides the image quality, it's also the workflow."
My not-big-budget narrative background might have something to do with it as well. On low budget narrative projects I've been a part of (typically as a colorist, not an editor) I'm used to online quality media being used throughout because the infrastructure and organization wasn't in place to facilitate an offline/online workflow. They'd rather spend more money on a faster computer and some bigger, better drives than have to deal with the offline/online process. It also seemed archaic to many of them since they came up during the digital revolution so shooting to a tape/card/drive and plugging that footage straight into the NLE was the normal thing to do.
When RED came along this same market was super excited at the price point and the quality of the footage. The RAW workflow was especially alluring since it offered more control over the image than the cameras they were used to, which typically baked in Rec 709/601. Since then more cameras have started shooting in some form of LOG, so you get a happy medium: an easy to handle codec (as opposed to RAW) that isn't stuck in 709/601.
Where am I going with this? I have no idea. I stopped mid-post to feed the baby and completely lost my train of thought. I think I was just trying to give backstory on my perception of Arri, but lord only knows at this point. Feel free to correct anything I've gotten wrong here as the camera side isn't my forte though I do try and keep tabs on it.
[Jeremy Garchow] "I wonder if you are you saying that FCPX isn't up to the task because it uses off the shelf hardware? "
I never said (or meant to imply) that X wasn't up for the task. For what the Focus team wanted it was obviously up to the task. As far as off the shelf hardware goes, I feel like editorial has commonly been using off the shelf machines for at least a decade now. When I think of off the shelf I basically think any computer that can be readily purchased from a major supplier so, to me, even a super expensive nMP is still an off the shelf machine. Not off the shelf machines would be the SGI Octane boxes running IRIX that Smoke used to require, or the Linux rigs with PCI expansion boxes that heavy Resolve users run. Maybe even Avid back before they started selling MC as software only (even though those were pretty much still off the shelf computers that just had Avid's BOB).
[Jeremy Garchow] "Can you explain what you mean by this? Meaning they didn't use Sync-N-Link at first?"
I don't have much detail, just that the director of Focus seemed very annoyed at the ubiquitousness of ProTools. If someone in attendance could have offered a viable alternative I think the director would've hired that person on the spot.
[Tim Wilson] "Pirates used the same boats the navies did, maybe just better tuned to the task. But the reason they were able to run from navies is because their ships were state of the art, and every bit as expensive."
I make one off the cuff remark and it completely hijacks the thread. I love this forum.
"The directors are currently in Santa Fe, where they are filming a new untitled comedy with Tina Fey.
They are shooting it on the Blackmagic pocket cinema camera, and the Sony A7S mirrorless digital camera, and editing on Final Cut again."
[Andrew Kimery] "They are shooting it on the Blackmagic pocket cinema camera,"
[Andrew Kimery] "Maybe I spent too much time at REDUser back in the day and that warped my perception of Alexa. ;) "
[Andrew Kimery] "When RED came along this same market was super excited at the price point and the quality of the footage. The RAW workflow was especially alluring since it offered more control over the image than the cameras they were used to, which typically baked in Rec 709/601. Since then more cameras have started shooting in some form of LOG, so you get a happy medium: an easy to handle codec (as opposed to RAW) that isn't stuck in 709/601."
Ha! It won't be the last time that happens.
Getting off on yet another tangent, I think Red sold the indie dream for a really long time (4k for 4k, or 3k for 3k). Arri didn't have to. The ease of use of the Alexa vs the entire ramp up of the Red workflow didn't really compare, at least in my experience. The Red camera took a lot of study to familiarize yourself with it. What button does what, what slider does what, what does the FLUT do? The Arri is much more of a film-borrowed workflow. Pick an ASA, pick a color space, and shoot it. You can almost expose by eye; if it looks good there, it's going to look good later, and the idea of pushing and pulling exposure holds up in an Arri based workflow. Red has many steps in the process where that process is much harder to deal with and understand. The argument, though, is that since it's RAW, you can "redevelop" the footage to get what you need as long as it's not entirely overexposed.
Sure, Red was a more affordable camera (was, it isn't so much anymore as both companies offer some similar price points) but a Red workflow required lots of transcoding time. And if you didn't have that time, then you had to buy a Red coprocessor for a decent chunk of change. If you needed multiple computers to access the red raw footage, then you needed multiple red coprocessors for multiple decent chunks of change.
On the other hand, someone could hand you ProRes-based Alexa footage without knowing a single thing about Alexa footage, and you could import and start editing right away without any fuss and all at exceptional quality. You would have to figure out how to put on a LUT, but that's really easy, easier than ever in FCPX, certainly.
Both cameras take fantastic pictures. They both deserve a lot of credit, but Arri did have the luxury of time while watching Red bumble around for a while in the early days. Then there was corporate espionage, and the whole thing turned even weirder.
[Andrew Kimery] "I never said (or meant to imply) that X wasn't up for the task. For what the Focus team wanted it was obviously up to the task. As far as off the shelf hardware goes, I feel like editorial has commonly been using off the shelf machines for at least a decade now."
I guess I am still hung up on the "one of these things is not like the other" pirate comment. I wasn't there so again, I am probably missing the subtleties but I would assume they see themselves as 'pirates' because they took a chance on a product that was pretty much bad mouthed from day one, and ended up not only liking it, but introducing it on a big stage with the backing of a studio with big money. I'd feel like a pirate, too.
[Andrew Kimery] "[Jeremy Garchow] "Can you explain what you mean by this? Meaning they didn't use Sync-N-Link at first?"
I don't have much detail, just that the director of Focus seemed very annoyed at the ubiquitousness of ProTools. If someone in attendance could have offered a viable alternative I think the director would've hired that person on the spot."
Ah, got it. Thanks for clarifying.
[Jeremy Garchow] "Getting off on yet another tangent, I think Red sold the indie dream for a really long time (4k for 4k, or 3k for 3k). Arri didn't have to. The ease of use of the Alexa vs the entire ramp up of the Red workflow didn't really compare, at least in my experience."
I agree that RED's workflow was cumbersome. It certainly gave the user a lot of control but it took a lot of effort (relatively speaking). It was also constantly in flux. On one hand a lot of updates is good, but on the other hand trying to track what's going on becomes a full time job. After the RED One came out I tracked the workflow progress until the software hit build 15 or 16 and then I checked out. When I would get a RED project I would check in, grab the latest software, see what the latest workflow was, etc., but I gave up trying to stay current all the time.
I guess the downside to being so 'open' with your process is that you drag the public through all the ugly iterations of the project. Some people love it though so horses for courses.
[Andrew Kimery] "I guess the downside to being so 'open' with your process is that you drag the public through all the ugly iterations of the project. Some people love it though so horses for courses."
[Jeremy Garchow] "Oops. As always, Tim Wilson said it better."
I wonder how Aindreas Gallagher is these days?
Motion pictures actually released with FCPX; several TV networks using the workflow, including his beloved Beeb.
Now, that's as much information as we have on the mission at the moment, so I thought it would be helpful to hear your thoughts on the mission, how well prepared the Raven is to undertake it, any potential problems anyone sees, and so on.
McLean noticed nobody else was speaking up since most of them probably hadn't had a chance to look over their stations since the starbase's technicians had been working on them.
"Sir, I have not gotten a chance to do a complete inspection of the ship and all of its systems. After this meeting is done I can go through and complete my inspection and then report my thoughts to you," McLean said.
The mission seemed very straight forward, but McLean had to admit that more than anything he wanted to get back to the familiar grounds of his engine room. Now with the mission details laid out before the entire senior staff McLean was starting to fidget as he hoped the meeting would conclude soon.
Jere finished with his work in sickbay before the meeting ended. He placed his reports on the CMOs desk and left sickbay heading for the meeting after alerting the rest of the medical staff where he was going.
The trip to the bridge was short. He stepped from the turbolift and reentered the lounge quietly, standing near the door so as not to interrupt the meeting, which was still going on. He nodded to the Captain and remained silent, listening.
Sir, I have not gotten a chance to do a complete inspection of the ship and all of its systems. After this meeting is done I can go through and complete my inspection and then report my thoughts to you, McLean said to the Admiral across the conference table.
I get the feeling from looking around the table that this is the case across most of the departments. I propose that department heads review their departments status and report to Commander Smith within 2 hours. That will give us an hour to spare before leaving space dock to iron out any problems.
Is there anything else anyone wants to raise?
After finishing his last statement he looked around the table to see if anyone was about to speak. After a few seconds of silence, it was clear that no one was about to.
Well in that case, I look forward to reading each of your reports.
I'd be interested to hear what your thoughts on your new Asst CMO are over the next few days, Doctor.
The Doctor looked at the Captain as if she didn't know what to make of the request, but simply replied with an aye sir. Mc7's first thoughts about this being an easy mission were evaporating, and quickly.
Jere stood as the meeting ended and left the meeting room in silence heading for his quarters. His shift was done and he had nothing more to occupy his time till the next morning. He'd heard what had been requested by the CMO and let out a sigh as he headed out of the room and off the bridge.
Jere changed into something comfortable and retrieved a cup of tea from the replicator before settling on the couch to listen to some relaxing music before dinner.
E’Liana looked around the room of bored individuals and sighed.
She heard Zoarial chuckling beside her, and he prodded her arm. She could tell that he was getting very bored of all this waiting.
She raised her eyebrow at him sceptically. “Even with this silence, I doubt we can hear her from here…although I see your point” she smiled slightly at him, saying the latter part in a more hushed tone.
A few moments later, the meeting began. E’Liana had never been so relieved.
A few moments later, she noticed Opet come in and stand by the door. She rolled her eyes dramatically at Zoarial, and whispered “Obviously, someone doesn’t understand the role of the ASSISTANT CMO.” She grinned at him.
E’Liana made a mental note of that, but could not think of anything she needed to do, due to the marvellous Nurse Joy.
She had a funny feeling that the Admiral - ~Or is that Captain?~ she wondered to herself - was pointing this question to her, but held her tongue.
Five minutes later, the meeting was finished, and E’Liana rose from her chair and was about to leave with Zoarial when she heard the Admiral call her.
“I'd be interested to hear what your thoughts on your new Asst CMO are over the next few days, Doctor,” he said.
Not knowing how to respond, she merely nodded and responded with an automatic “aye sir”, before nodding and heading back to Sickbay.
Just ahead of her was Opet leaving for the end of his shift. Momentarily, she wished her shifts were that short - but that was the curse of being CMO. She heard him sigh, obviously having heard the Admiral’s request. At that she turned around to quickly speak to Mc7_of_9.
It didn't take long for Zoarial to finish his report. Unlike E'Liana, he had no problems whatsoever in his department. His team was reliable, accurate and able to cope with stress. Everyone knew the value of one another and there was no friction between the people. Mainly it was David and Zoarial stationed at the helm, with only in extreme situations an ensign taking over, but Zoarial made sure that they were all capable of handling different problems.
After Zoarial had finished the report on his padd and sent it to the first officer, a small window popped up, reminding him about the scheduled holodeck training for that day. Since they were about to leave, he wanted to be at his station, so he sent a small notice to every ensign concerned that the training was postponed until they were well underway.
He remembered that he also had a planned conversation with E'Liana, which they hadn't been able to finish during the shore leave, and when he accessed his 'to do' list he found it surprisingly long. *Better mark it priority then* Zoarial thought, but then noticed that all the other items were as well. *Well, at least it's not going to get boring this mission!* Zoarial thought, and headed for the bridge.
Zoarial had left David in charge of checking all the systems and writing a report about them. It seemed that at the moment there were no errors or glitches and everything would go as planned. *Just like every other mission* Zoarial thought. There was always something that went terribly wrong.
The turbolift opened its doors and revealed the bridge, which was nearly empty. Only a couple of engineers from starbase maintenance were replacing and polishing the panels to give the bridge a fresh new look.
Now the waiting began. Zoarial dared not leave his post, not wanting to be late for the second time in one day. Maybe he would be able to get at better reputation during the mission, or maybe during his leisure time.
OOC: Sorry it's so long...am a tad bored and don't want to do my work!!
E'Liana made her way back to Sickbay to complete the report for Commander Smith. When she got there, she saw Joy tidying away from the last physical. The other woman looked up and smiled. "Hello...how was the meeting?"
E'Liana rolled her eyes. "Don't ask. Please." She smiled lightly, and Joy grinned back. "I met the new ACMO." She said, making conversation. "....seems nice." E'Liana mumbled back whilst Joy suppressed a chuckle.
"I have to discuss him with the new Captain," replied the half-Klingon woman. "That'll be a meeting to look forward to. I only hope I don't get too angry...people tend to take it the wrong way." Joy nodded in understanding. "You seem to have got a better control over it..." she said, trying to make the young woman feel better.
"Did I ever tell you of my training mission?" enquired the Doctor. With a shake of the head, E'Liana carried on. "Long story short, I ended up punching the Commanding Officer...and SHE was the one under the influence!! I really hope Mc7 hasn't heard that...he may confine me just to make sure..."
Joy chuckled. "Don't worry about it, I'm sure he won't judge you. I've known people who've done much worse things than you..." she replied somewhat darkly.
There was an awkward silence for a few moments before E'Liana changed the subject. "I need to finish that report we were doing for Commander Smith. You have anything to add to it?"
Joy, taking the hint, nodded. "I've put a PADD on your desk." E'Liana nodded in understanding. "I've also put the latest physical results on there...I can't transfer them to the database, so if you could do it..." Joy trailed off. "I'll show you later." Joy thanked her.
Just as E'Liana was about to make her way into her office, she paused. "That reminds me...we need to do a physical of Mr Opet. He's a new staff member, after all." Joy nodded. "I'll contact him, and see when he wants to do it..."
E'Liana smiled. "Thanks" and went off to send her report to Commander Smith, not looking forward to the numerous meetings that it would result in.
Meanwhile, Joy finished tidying away, and sent a message to Opet regarding the need for a physical.
~I think I'd better do this one too...~ She thought wryly to herself.
The staff meeting couldn't have ended soon enough for McLean. Since the start of the meeting his thoughts had been on any problems or bugs that could be running around the system. Certainly I must have misread the information; there was no way there could be a glitch in the computer system, McLean thought.
Walking briskly, McLean made his way down to engineering to inform the engineers he and most likely the Admiral would be doing an inspection.
"Look sharp," McLean said. "I should be back down here in a few hours." With that he departed to check on the other sections of the ship.
Nobody liked surprise inspections, but if they had their choice, crew members would rather be inspected by the Chief Engineer than the Commanding Officer. McLean walked into the transporter room and watched the transporter operator snap to attention.
"Easy," McLean said. "I only have 2 pips; I am a couple short for that kind of action."
"Aye sir." Came the response.
"I need to look at your station," McLean said, walking toward the transporter control console.
"Are you new?" McLean asked with a sharp eyebrow.
"Yes sir, I am sir." The Ensign continued to look stiff as a board.
McLean chuckled to himself. It seemed normal for new crew members to snap to attention or to show signs of respect for the high ranking officers, but this ensign was green and it showed.
"Your station looks in order, good work," McLean said.
McLean started to walk away then stopped dead in his tracks at the door and turned around to face the ensign, "Relax."
Walking the length of the hallway, McLean entered a turbolift and took it up to the bridge. As the doors slid open McLean noticed the hum of the bridge as well as the lack of officers on the bridge. As McLean walked onto the bridge he took a moment to look around; the bridge looked brand new. The technicians did a good job making sure everything looked good for the Admiral.
McLean wasted no time going over to the Operations station and quickly downloading the station's information to his PADD. Next he went over to the science and tactical stations and performed the same task. Looking up front, McLean saw Commander Zoarial at the helm going through the motions.
"How does she feel Commander?" McLean asked hoping not to disturb the Klingon too much.
"How does she feel Commander?" Scott asked and Zoarial turned around, glad that he had something, or rather someone, to kill the time.
"I can't tell until we're out of the spacedock, Mr. McLean," Zoarial answered, smiling. "Although it looks like they fixed my console a little bit, it looks a bit brighter than before. I was almost afraid that they would update the console to that new version, but luckily they left it alone."
"Well... the layout troubles me. I have to tie my fingers in a knot to go a quarter impulse, and I almost have to use my feet to be able to go into warp. I heard that it has some new automated features, but you know what happened the last time they 'automated' the navigation system." Zoarial grinned at Scott, thinking back to the brain. He couldn't help wondering how Alex and Dan were doing out there. He hoped that they were safe, wherever they were.
Jere received the message about needing the physical and knew he'd had one just prior to arriving, so he didn't need one again. He had prepared for this and sent back a message that he was forwarding his medical records as well as his last physical, which was less than a month old. With a sigh he went back to drinking his tea, his mind wandering to what he should eat for dinner.
Nurse Packer received the message from Opet rather promptly and held back a sigh. She had to wonder why he was being so hostile. ~Surely a medical man such as him should know procedures aboard vessels such as ours...~ she mused quietly to herself.
Deciding against telling the young Doctor, she wrote back to him, informing him of the requirement that each new member of staff be personally assessed by the current medical team. She even added that the Admiral would also need to partake in a physical to ensure optimal health for all the crew, and thus proceeded to contact him as well.
Mc7 knew it would have to happen, but had hoped he would be able to avoid the issue a little longer than he had so far managed. However, after hearing that there were other members of the crew that were resisting undergoing their new assignment medical, he knew that he had to lead by example.
There are some duties of command that you just never get used to, he thought to himself as he headed towards the turbolift. When he stepped into the lift and instructed it where to take him, his mind began to wander.
How much have the tests changed since my last physical, almost five years ago? Officers were supposed to have a yearly physical exam, but other duties had conveniently always kept mc7 busy enough to avoid it. Until now.
As the lift came to a halt and the doors opened, the door to main sickbay faced him. Exiting the lift, he walked the short distance and entered the large medical facility.
Jere looked up as the computer console in his quarters blipped. He placed his half empty cup of tea on the table and walked to the desk, sitting down and checking to see who the message was from. The answer surprised him and caused him to smile. He opened the link and spoke. "Ethan .... what'd I do to deserve this honor?" he joked.
Ethan couldn't believe that it had been almost six months since they had spoken, but Jere was just like he was the last time they met. Smiling, joking, and enjoying life.
"Oh, I just wanted to talk to a friend," Ethan responded, smiling, "and so I called my best friend."
Jere chuckled. "How have you been? What have you been up to?" The last time they spoke seemed so long ago.
"Well, as you can see, I joined Starfleet," Ethan said, smiling, "and it has been very exciting, to say the least."
"I can't believe you finally took the plunge Ethan. Where the heck are you?" Jere settled back in the chair behind his desk as he and his best friend talked.
"Well, Captain Stone helped me graduate early and I have been stationed on the USS El Salvador," Ethan said, remembering his godfather, "and guess what type of ship the El Salvador is?"
Ethan didn't let Jere respond, stopping him mid-sentence and saying, "Yep, a Prometheus class. I'm so lucky."
Jere's mouth dropped. "You're ... in S47!?" he exclaimed.
"Yeah, and so are you," Ethan said matter-of-factly. He of course had been ecstatic when he found out that his friend was in the same division as him, and now it had finally come up with Jere.
"We'll have to get together sometime when we're both on leave." He was still in semi-shock from finding out that he and Ethan were so close.
"Yeah of course, when I found out, I almost checked to see where the Raven currently was," Ethan said admitting his need to talk to someone that he knew.
"So Jere, how are things on your side?"
"They could be better Ethan. I don't know I just don't fit in here and I'm not sure why." Jere replied with a frown.
Ethan had long ago learned how to read his friend, and he knew that Jere was being nice. He of course knew that not everyone adjusted well to their first assignment. Luckily for him, Jere had not picked up on Ethan's uneasiness. It wasn't that Ethan didn't like being on the El Salvador, but the people here intimidated him a bit.
"Jere, please tell me you didn't play any pranks on them?" Ethan said, hoping to break the tense nature of their conversation.
Jere could tell there was something Ethan wasn't sharing but knew better than to ask. "No, no pranks. I don't know what it is and I probably don't want to know." He shrugged. "I put in a transfer to SBB. I think I prefer the starbases over the ships anyway. I never did grow a set of wings."
"Jere, you grew a set of wings, but we all know that your mom just clipped them," Ethan said, troubled that Jere hadn't enjoyed his first assignment. Jere was a carefree, loving person who never had a problem with anything, but then again, he had just been proved wrong. "So are you still going to be a doctor?"
"Trying to be Ethan. Maybe I wasn't cut out for this?" he mused and grew silent.
"Jere, don't you dare speak like that," Ethan said, a flash of anger showing. "We all know that you are more than cut out for this. Just because you had a bad experience doesn't mean that it's that way everywhere you go. You have the smarts, and my friends tell me the looks, to be a great doctor and officer."
Jere let out a sigh and didn't make a reply right away. "So what are you up to? What are you doing on the El Sal?"
"Well, we just finished our current mission and we are heading back to the Assailant. Supposedly we're having a big party in a little while," Ethan responded.
"Parties are fun Ethan, you make sure to enjoy yourself for both of us." Jere said with a slight smile, one thing Ethan liked was a good party.
"Of course, since when don't I enjoy......" Ethan replied before being interrupted by the comm system.
Jere laughed. "Speaking of parties, you better get going."
Ethan didn't want to stop talking to Jere, but he had been told to make an appearance. The only comfort he took in this was the fact that he now knew where Jere was and could call him whenever he wanted.
"Jere, it's been good talking to you again," Ethan said, smiling. "I will talk to you soon."
"You won't lose contact now I promise." He closed the link before Ethan could get a word out knowing that it would be the only way to get him to go to that party.
Ethan almost laughed out loud; Jere knew him too well. Ethan turned off his console, thought about it a minute, and then made his way towards the party.
OOC: This was a JP between myself and Ethan Hope from the USS Salvador.
"I can't tell until we're out of the spacedock, Mr. McLean," Zoarial answered, smiling. "Although it looks like they fixed my console a little bit, it looks a bit brighter than before. I was almost afraid that they would update the console to that new version, but luckily they left it alone."
"Good, because I remembered you saying something last mission about how you weren't fond of the updated console. When I put in the paperwork with the station, I told them not to touch the helm console. What about the console don't you like?" McLean said.
"Well... the layout troubles me. I have to tie my fingers in a knot to go a quarter impulse, and I almost have to use my feet to be able to go into warp. I heard that it has some new automated features, but you know what happened the last time they 'automated' the navigation system." Zoarial grinned at Scott.
"Yes I do remember what happens when things become more automated. Let me know if there is anything you need." McLean said before leaving Zoarial.
The ship's inspections were going better than McLean had expected. Then again, it was one thing for a Chief Engineer to do an inspection; it was a completely different animal when the Captain of a ship, especially an Admiral, did the same inspection.
McLean found only a few things about Starfleet that he didn't like, one of them being the fact that the higher-ups always focused on the small things. It was tedious to McLean, not worthy of all the attention the COs always placed on it. McLean had always been an officer who focused on performance more than anything else. Probably why I won't end up as a Captain, McLean thought to himself.
McLean had saved Main Engineering as his last stop on the inspection before heading back up to the bridge to see the Admiral. As he entered, he found it just as he had left it: engineers running around going over systems, double-checking EPS grids, and making sure the Warp Core was running as efficiently as it possibly could.
"Alright, let me see the report," McLean said to a Lt. J.G. working hard on the console closest to the Warp Core.
"Aye sir." She said handing him a PADD.
"94%," McLean said out loud, sounding unimpressed. "Try doing a realignment on the injectors; that should bring us up a percent or so."
"Aye sir." She replied taking back the PADD.
The other stations looked exactly as they should, so McLean decided to quickly make his way back up to the bridge to report the ship's status. McLean figured the Admiral was going to do an inspection of his own at that point, so he mentally prepared for what might be.
"Computer, locate Admiral Mc7 of 9." McLean said.
"Admiral Mc7 of 9 is located in sickbay."
"Oh," McLean said to himself before tapping his COM Badge, "Admiral, this is Lieutenant McLean. I have finished my inspection. When will you be available to see my results?"
Joy received a prompt reply from the Admiral and got ready for the next physical. Sooner than expected, Mc7 arrived.
"Nurse Packer, I believe you want to subject me to all manner of horrible and nasty tests," he said, as a smile crept over his face.
She laughed. "I have a bad reputation already? Dear me..."
He smiled in reply. "Where would you like me?"
Reddening slightly, she managed to respond. "On the biobed over there, please," she said, indicating the nearest one.
She walked over with the usual equipment and scanned him with various tricorders and scanners. Finally, she asked him to lie down, to which he complied and she brought the scanner up, so that it encased him. For a flicker of a moment she thought she'd seen fear.
"Is this alright for you? It's the quickest way of doing these tests, but if you're not comfortable in this space I can do it another way..." she said, offering an alternative.
"Alright, we've done all the tests now, and overall you seem to be fine. However, I downloaded your medical file onto a PADD, and I know that you were at one point a member of the Borg collective. I've done a little research into this and know that implants can sometimes continue to affect the body. Are you experiencing any such signs?"
Once they were almost finished, E'Liana came out of her office, not knowing that he was there. She was reading a PADD and looked up in surprise.
"Hello Admiral," she said. "Here for your physical I see..." she noted dryly. "I see you're hoping to lead by example..."
He opened his mouth to respond, when his commbadge beeped.
"Admiral, this is Lieutenant McLean. I have finished my inspection. When will you be available to see my results?"
E'Liana smiled slightly. She was forever having interrupted conversations with people.
A little while later, Arya sat in her office, and was putting together a list of questions for her evaluations. She wondered what to ask them, and thought for a few minutes more.
Ten minutes later, she had the list of questions completed, and wondered who her first victim should be. She picked up a roster, and looked at it. She knew the captain and commander would be too busy now, as well as navigations. She wondered who would be good to start with, and then a name stood out from the rest.
After being instructed to sit on the biobed, mc7 made his way over to it and sat down. A moment later, Nurse Packer joined him by the side of the bed and raised her scanner up towards the new Captain. As she did this he flinched, almost unnoticeably, but having been in her job for a long time, Packer picked up on it instantly.
"No, carry on, Nurse. As you said before, your reputation precedes you. Just remember, if the Raven loses another commander so soon, Starfleet really won't be happy!"
the Admiral said, continuing the run of good-humoured conversation between the two of them.
A few moments, and many scans, later, Packer broke the silence by talking about mc7's medical history.
"Alright, we've done all the tests now, and overall you seem to be fine. However, I downloaded your medical file onto a PADD, and I know that you were at one point a member of the Borg collective. I've done a little research into this and know that implants can sometimes continue to affect the body. Are you experiencing any such signs?"
And with that, another smile crept over the man's face.
"But to answer your question, no, I've not had any trouble with my implants for a while now. They generally don't affect me, or my performance, much at all, unless I'm in a situation where my adrenaline levels are extremely high. Starfleet Medical never figured out why, but that really seems to spur the dormant nanoprobes in my body into action!"
After mc7 had given his answer to Nurse Packer, E'Liana came out of her office and walked over to the biobed to join the two officers already there.
"Hello Admiral. Here for your physical, I see. I see you're hoping to lead by example..."
As mc7 opened his mouth to reply, his comm badge beeped.
"Admiral, this is Lieutenant McLean. I have finished my inspection. When will you be available to see my results?"
Once the Lt had finished speaking, the Admiral looked to Nurse Packer to gauge roughly how long he would be in sickbay for, and she nodded, which he took to mean that he was all but finished.
"I'll be in my ready room in 15 minutes Lt. I shall see you there."
Tapping his comm badge to end the connection between his and the Lt's, he slid off the bed and stood.
And with that he nodded to them both and walked out of the medical bay, making his way to his ready room for his meeting with the Lt.
Having gone back into her office after seeing the Admiral, E'Liana went through the last few reports on her desk, glad that there were no patients with injuries. Just as she was about to ask Joy something, her commbadge bleeped.
"Eowyn to BeTor, could you please report to my office"
McLean had gotten a chance to finish the last of his inspections just as the fifteen minutes were up. Walking over to a nearby console, McLean compiled all the data he had collected from the various stations and placed it onto a single PADD for the Admiral to take a look at and endorse with his thumbprint.
McLean had to admit that the ship seemed to be running more smoothly than when he first arrived. He didn't attribute it to his doings at all; it was clear that the station's technicians had done a damn good job. The only complaint McLean had was the light level. It seemed to him that the station technicians liked the look of Red Alert, because the hallway lights were just much too dark.
I'll have to fix that once I get back down to engineering, McLean thought as he boarded the turbolift.
The door slid open revealing an empty hallway. McLean scratched his head and let the doors close in front of him.
"Let's try this again. BRIDGE," McLean said, trying to let the computer hear every word.
"What the devil. Deck 1," McLean said, sounding angry.
With the command the turbo lift began moving again. This time the doors slid open to reveal a busy bridge.
Stepping onto the bridge, McLean grumbled to himself, "Thank you, you good for nothing..."
Nobody seemed to take notice of the Chief Engineer's grumblings. Stepping in front of the Admiral's ready room, he pushed the door chime and waited to be let in. Listening closely, McLean heard 'enter' from the other side of the door and quickly proceeded into the Admiral's ready room.
"Here are the inspection results, Admiral," McLean said, handing the PADD over to Mc7 of 9.
She wondered if she should go straight into the questions, or if she should chat a bit. She wasn't sure, but she knew she didn't want to sit behind her desk for this. She grabbed her drink and list, and then headed over to the couch to get comfy.
It wasn't long before Commander BeTor was in her office. She tried to smile sweetly so as not to startle the woman.
"Hello, commander, and welcome. Please make yourself at home." She said motioning to the spot right next to her.
"So, how long have you been here? What is it like working with this crew?" She said trying to get some small talk going before the real questions came.
E'Liana made her way to the counsellor's office wondering what this could be about. She had a faint suspicion that it was a proper counselling session and shuddered at the thought. However, E'Liana resolved to give it a go.
She arrived at the office to see a suspiciously cheerful Arya and smiled at her slightly.
"Hello Ensign" she said, trying to be as equally cheerful.
"Hello, commander, and welcome. Please, make yourself at home." She said motioning to the spot right next to her.
This merely confirmed E'Liana's suspicions, but she thought better than to question it for now. She would humour the young counsellor, and so took a seat next to Arya.
The counsellor began. "So, how long have you been here? What is it like working with this crew?"
E'Liana took a deep breath. "I haven't been here that long, only a mission before you joined and before that I was at the Academy...and the crew seem very nice - for the most part." She finished, having realised what the counsellor could be questioning her about.
She had heard the response, and wondered where to go from there. She didn't think it a good place to start her questioning, and decided on something else.
"Ah, a new one like myself, good to know." She said and wrote something down, trying to make it feel like a session. She had no idea how to do more than that right now, and continued to smile sweetly.
"So, tell me a bit about yourself," she said, hoping to make a new friend. She remembered her questions, but since she remembered who they were for, she ignored the PADD.
"I see, but that can't be all bad, can it?" she said, no longer smiling. "Oh, wait, I see what you meant," she said, and shaking her head, she chuckled before hanging it. She looked up again. She had noticed that look on the commander's face earlier, and she wondered what that was about.
"Is there anything you wanted to talk about, something that might be bothering you?" She said trying to sound as if she knew what was going on, but she felt like she was a failure for not knowing. She hid her feelings, and took a sip from her now cold drink making a face before swallowing. She put the cup down, and pushed it away from her directing her attention back to the commander.
E’Liana sat in the office, still feeling somewhat uncomfortable but trying to ignore it.
"Ah, a new one like myself, good to know." The young counsellor said, writing something which E’Liana couldn’t read.
"So," Arya said, looking up, "tell me a bit about yourself."
“Well…I lived on Qo’Nos with my parents and my half brother and half sister - he’s Klingon and she’s human.” E’Liana stopped and Arya nodded, urging her to continue. “My mum’s a Doctor, I guess that’s how I became one too, only you don’t get too many Klingons studying medicine, what with the whole “honour of dying on the battlefield” and all, so I had some trouble back home when I said I‘d be doing medicine. Not my close family, but neighbours and other relatives. To be fair, they weren‘t exactly happy that my dad had married again, let alone to have a human wife and then a child too!…” E’Liana stopped talking again and looked surprised. She instantly clamped her mouth shut.
"I see, but that can't be all bad, can it?" Arya said, no longer smiling. "Oh, wait I see what you meant." She said. She seemed to notice the look on the commander's face earlier, and she wondered what that was about.
"Is there anything you wanted to talk about, something that might be bothering you?” She put the cup down, and pushed it away from her directing her attention back to the commander.
“Bothering me? What do you mean?” She wondered. ~Surely the nonsense in my department isn’t ship’s gossip already?~ she thought to herself.
She listened while the other woman talked, taking mental notes to put down later. She didn't want to write any of this down, because she didn't want it to look like she was going to put everything in her report. It wasn't necessary right now, but she had some notes she would need to write down when the woman left, personal notes.
"I thought that we would just talk, and let me get to know you a bit. I don't know if you are having trouble with something in your life or not. Only you know that right now, and if there is something, then maybe talking about it might help. I have no idea what is going on around here. At home I had heard several rumors. I hear rumors, but how true they are I don't know, and I prefer to turn a deaf ear to them. If it isn't fact coming from you, then it is probably false," she said, not sure what E'Lianna meant, and wondered what was being said about the doctor. She thought about investigating, but decided against it. She hadn't thought it right to listen to gossip when growing up, and she certainly didn't think it was right now. She had learned from her mother that gossip was nothing but jealous lips being loosed.
"Is there something you can think of that you might want to confide in me with?" She said hoping to gain the doc's trust. She didn't think it would work, but she felt she should at least try.
The Admiral took the PADD from McLean's hand and looked over the information quickly. Placing the PADD down on his desk he leaned back in his chair and folded his hands together.
"What is your opinion of the status of the ship?" Mc7 of 9 asked.
McLean was caught off guard for a second but quickly recovered. "It is my personal opinion that this ship is running as close to perfect as a starship can run," McLean said.
"Good," Mc7 of 9 said getting to his feet, "Then I will trust your judgment."
The Admiral motioned to the door, McLean nodded and stepped in front of Mc7 and walked onto the bridge quickly followed by the Admiral.
"I want you to head down to engineering, we are going to get underway now."
"Aye sir." McLean replied and walked back to the back of the bridge to catch the turbo lift down to engineering.
As McLean walked he could hear the Admiral giving the order to get underway. Stepping inside the turbolift, McLean took one last look at the bridge and noticed it seemed to have changed since the last time he had talked to the previous Captain of the Raven. McLean noted it was strange how the ship seemed to change shape and feel with each commanding officer.
The CO has asked me to get this mission underway while he fixes his browser problems with his computer. So let's head out toward our destination.
After Zoarial handed in his results, or rather left them next to the captain's chair, he went back to his post, mildly stroking the console. It looked as though the console knew that they were about to leave, as its lighting quickly went from dim to bright. As if it had also notified the captain, Mc7 and Scott both walked out of the ready room. Scott headed straight to the turbolift, without even nodding at a now even more surprised CNO.
The captain cleared his throat, notifying the attendees that they had to pay close attention.
"Aye sir!" Zoarial responded, and started the procedure of leaving the dock. Luckily there were exact coordinates of where they had to go, unlike many other missions, and after a couple of thousand kilometers, Zoarial went from impulse to warp. He didn't have to worry about a reduced speed for maximum efficiency of the scanners, but still he didn't exceed the speed of warp 8, giving the warp engines some time to warm up.
After about 5 minutes Zoarial cranked the speed up a bit and switched over to automatic pilot.
"The Raven is under way and will reach its destination in 5 days sir." Zoarial reported, knowing that the ETA indicator wasn't that reliable.
"Is there something you can think of that you might want to confide in me with?"
E’Liana shifted uncomfortably in her seat, feeling like it was an interrogation and yet knowing that it was all in her head again. She tried to smile at Arya but had a feeling that it looked forced.
“I hope you don’t take it personally, but I’m not too comfortable with counsellors. I don’t really know why; it’s just a thing about me, I guess. I’d be more than happy to help you in regards to other patients, but as for myself, I’m afraid it’ll be a little harder. You know what they say… ‘Doctors make the worst patients’”.
In Sickbay, Nurse Joy was finishing up her report on the Admiral’s physical and followed the hurried notes from E’Liana as to how to add them to the ship’s database. ~ I only hope I get this right…~ she mused, wondering when the CMO would be back. She didn’t like doing these things on her own when they could go wrong so easily.
Looking at the time, she noticed that it was almost time for the end of her shift, and yet the cover nurse had not yet arrived. Another new recruit; Joy had hoped that he would be somewhat more reliable, but had a sneaking suspicion that he wouldn’t be. However, as though to prove her wrong, a young Bajoran man came in wearing a medical uniform.
Joy momentarily wondered why he hadn’t come earlier that day, but he seemed to have a reason for everything. Either that, or he was telepathic.
She noticed the doctor seemed not only very uncomfortable, but nervous as well. She wondered what could be wrong, and hoped it was nothing she had said or done.
"Relax, E'Lianna, may I call you E'Lianna?" She asked.
"Ok then, well I am just trying to be your friend. I want to help you, but if you won't let me, I understand. It is totally up to you." She said not sure how to help the doctor. She listened to her statement and chuckled.
"I bet that happens a lot with you. I am not really a doctor, so I don't know that problem. My door is always open if you want to talk. If there is nothing else then?" She said not sure what else to do. She wasn't sure how to get someone talking if they didn't want to do so naturally.
"Have a good day then, and if there is anything I can do for you, don't hesitate to ask. Just don't think you can get me to pee into anything huge." She said trying to get rid of the serious air. She couldn't believe she said that with a straight face, and gave Lianna one of her hundred watt smiles.
A little while later, she felt she had failed on her first try as a counsellor. She wasn't sure if she would be any better on the next one. She decided it was time to try and get someone from command down for a session. She just hoped she didn't get the brush off when she tried.
She felt completely nervous about his response, and sat down to work on updating her report for the captain.
Steve was just on his way to the bridge when his commbadge chirped.
'What now?' He thought to himself, as he turned around and tapped his commbadge.
He set off towards his destination, wondering what he was needed for. What he needed right now was a chair, because for some strange reason, his legs were feeling a little weak.
Upon reaching Arya's office, he pressed the chime, and waited...patiently.
She didn't expect a response so promptly, or one that wouldn't give her the brush-off. ~It seems that I was informed wrongly about how command acts. I will have to remember this for next time too.~ she thought, and hoped it was this easy every time. She wondered if the captain would be so easy.
It wasn't long before she heard the door chime, and she thought that was fast. "Enter," she said, not sure if someone else needed her, or if Commander Smith had come already. Sure enough, the commander walked right in.
"Welcome, commander. If you will take a seat, I will try not to take up too much of your time. I know you are a busy man and I appreciate you taking the time out for this," she said, deciding to go more professional this time. She figured that might have been what she had done wrong with E'Lianna.
She gathered her nerves, and tried to smile to look like she was okay with doing this. She knew he was one of her commanding officers, and wondered how well this was going to go.
"How do you describe your relationship with your crew?" She said trying to get down to business. She hoped she looked like a professional, and wasn't sure how well this was going to turn out.
The availability of specific markers expressed in different regions of the developing nervous system provides a useful tool for the study of mouse mutants. One such marker, the transcription factor Pax2, is expressed at the midbrain-hindbrain boundary and in the cerebellum, spinal cord, retina, optic stalk, and optic chiasm. We recently described a group of diencephalic cells that express Pax2 as early as embryonic day (E) 10.5, and become part of the eminentia thalami by E11.5. The discovery of this previously undescribed cell population prompted us to examine Pax2 protein expression in the developing mouse forebrain in more detail.
We determined the expression pattern of Pax2 in the forebrain of wild type mouse embryos between E10.5 and postnatal day (P) 15. Pax2 expression was detected in the septum of the basal forebrain, hypothalamus, eminentia thalami and in the subfornical organ. To evaluate Pax2 as a marker for septal cells, we examined Pax2 expression in Pax6Sey/Sey mutants, which have an enlarged septum. We found that Pax2 clearly marks a population of septal cells equivalent to that seen in wild types, indicating its utility as a marker of septal identity. These cells expressed neither the GABAergic marker calbindin nor the cholinergic marker choline acetyltransferase, and were not detectable after P15.
Pax2 is expressed in populations of cells within the developing septum, hypothalamus, and eminentia thalami. It seems especially useful as a marker of the telencephalic septum, because of its early, strong and characteristic expression in this structure. Further, its expression is maintained in the enlarged septum of Pax6Sey/Sey mutants.
Pax2 is a member of the Pax family of transcription factors [1, 2], characterised by the presence of a paired-type homeodomain [3, 4]. Pax2 is expressed in a number of different organs in the developing mouse embryo, including the ureteric bud, kidneys [1, 2] and otic vesicle . In the developing nervous system, Pax2 is first detected at embryonic day (E) 7.5 in the neural plate, in the area of the presumptive midbrain-hindbrain region . At E8.0, Pax2 displays a broad expression domain in this region, which by E9.5 is restricted to the isthmus at the midbrain-hindbrain boundary . Pax2 expression in the isthmus ceases after E11 . In the cerebellum, Pax2 is specifically expressed by a subset of cerebellar GABAergic interneurons and their precursors, from E12 until the end of cerebellar development (postnatal day 15) . In the spinal cord, Pax2 expression is found in the intermediate zone as early as E10.5 [5, 7]. In the developing eye,Pax2 expression is initiated at E9 in the ventral half of the optic vesicle. By E11, after invagination of the optic vesicle, both Pax2 transcript and protein are detected at high levels in the ventral opening of the optic cup, the optic fissure, and the optic stalk, ending at the border with the diencephalon [5, 7, 9, 10]. After E12.5, Pax2 protein expression is still present in the ventral optic cup, although the level is decreased, and it is no longer detected after E16.5 . Pax2 is also expressed in glial cells in the optic nerve [5, 9, 10].
Pax6, another member of the Pax family, is expressed in the dorsal telencephalon, diencephalon, hindbrain regions and spinal cord, and throughout the developing optic cup, but is absent from the optic stalk and optic nerve. Mutant mice lacking functional Pax6 protein (Pax6Sey/Sey mice) display a wide range of nervous system defects, including absence of eyes, disruption of dorsoventral telencephalic patterning [14, 15] and of the diencephalic-mesencephalic boundary [16, 17]. Pax6 and Pax2 are expressed in neighbouring but mutually exclusive domains in the developing eye (with the exception of the ventral optic cup) and in the diencephalic-mesencephalic region. In the developing spinal cord, Pax2 is expressed by many types of early differentiated neurons, located in the mantle zone and surrounding Pax6-positive neural precursors in the ventricular zone. Fewer Pax2-positive interneurons are found in the Pax6Sey/Sey mutant, indicating that Pax6 is required for their development.
We have recently described a group of diencephalic cells at the border region between the diencephalon and the telencephalon that expresses Pax2 protein at E10.5. At E12.5, these Pax2-immunopositive cells form a distinct cell population at the most dorso-lateral tip of the eminentia thalami, a diencephalic structure that joins the ventral diencephalon to the dorsal and ventral telencephalon [22–24]. Here, we describe previously unidentified areas of Pax2 expression in the developing mouse forebrain. Further, we show that Pax2 expression is maintained in the septum of Pax6Sey/Sey mutants, which are known to have an enlarged septum. This study highlights the value of Pax2 as a novel marker of forebrain development.
We examined Pax2 expression in the early forebrain using immunohistochemistry on sagittal sections of E10.5 and E11.5 embryos (Fig. 1). At E10.5, a few Pax2-immunopositive cells were detected within the ventral telencephalon in lateral sagittal sections (Fig. 1A, a-arrowheads), with the staining becoming more intense in mid-sagittal sections (Fig. 1a'). In addition, a small population of Pax2-immunopositive cells was detected in the neuroepithelium of the anterior hypothalamus, adjacent to the optic recess area (Fig. 1B,b). At E11.5, strong Pax2 expression was found in the developing septum (Fig. 1C,c) with no Pax2-immunopositive cells detected in the neighbouring lamina terminalis (arrow in Fig. 1c). As in E10.5 embryos, a number of Pax2-positive cells were found close to the base of the hypothalamus but also in more dorsal areas of the anterior hypothalamus (Fig. 1C,c). At both E10.5 and E11.5, Pax2 expression was found in previously described regions such as the ventral neuroepithelium of the optic recess (Fig. 1B,b,C,c) [5, 7, 9, 10], the optic cup (arrowhead in Fig. 1A and not shown) [5, 7, 9, 10] and near the diencephalic-telencephalic boundary (asterisk in Fig. 1A and not shown), which will give rise to the eminentia thalami.
Pax2 protein expression in the mouse forebrain at E10.5 and E11.5. A-B, a-b: E10.5 sagittal sections showing Pax2 expression in the developing forebrain. In the ventral telencephalon, a few Pax2-positive cells can be detected in lateral sections (arrowheads in a) and become more abundant in more medial sections (a'). A few Pax2-positive cells are detected in the hypothalamus, close to the strongly Pax2-positive optic recess (or) area (B, b). The previously described staining in the future eminentia thalami of the diencephalon (asterisk in A) and the optic cup (arrowhead in A) are also shown. C, c: E11.5 sagittal sections revealing strong Pax2 expression in the developing septum (Se). No Pax2 expression is detected in the neighbouring lamina terminalis (LT) (arrow in c) at this developmental stage. Pax2-immunopositive cells are also found in the anterior hypothalamus (arrowheads in c) and may originate from the ventral neuroepithelium of the optic recess area (or). Expression in the eminentia thalami is not shown. Note the Pax2 expression in the spinal cord (SC) (arrows in C). a, b and c are high power images of the boxed areas in A, B and C respectively. a' is a high power image of a sagittal section at a more medial level than that depicted in a. Scale bars: C, 400 μm; A, B, c, 200 μm; a, a', b, 50 μm.
Pax2 protein expression was then examined at E12.5, on coronal and sagittal sections along the caudo-rostral axis of the developing mouse forebrain (Fig. 2). The strongest expression domain of Pax2 was detected in the telencephalon. The Pax2 antibody labelled most cells of the septal neuroepithelium, located in close proximity to the dorso-medial telencephalon (Fig. 2A,a). This strong and characteristic Pax2 expression in the septum was also observed in sagittal sections, in the region where the lamina terminalis joins the septum (Fig. 2D,d). Only a few Pax2-immunopositive cells were detected within the neighbouring lamina terminalis (arrows in Fig. 2d).
Pax2 protein expression in the mouse forebrain at E12.5 and E13.5. A-C: Low power views of coronal E12.5 forebrain sections immunoreacted with Pax2; the boxed areas are shown at higher magnification in panels a-c. D: Low power view of a sagittal E12.5 section immunoreacted with Pax2. Note the strong expression of Pax2 in the isthmic region (Is) and the spinal cord (SC), in accordance with previous reports. d, d': higher magnifications of the boxed areas in D. A, a, d: Strong Pax2 expression is detected in the septum (Se) of the basal forebrain, mainly in the septal neuroepithelium. A few Pax2-positive cells are detected in the lamina terminalis (LT) (arrows in d). b-c, d': Pax2 is detected in groups of cells found in the lateral hypothalamic area (arrowheads in b and d') and in the anterior hypothalamic neuroepithelium (arrows in b' and d', c). These cell populations might originate from cells located at the base of the hypothalamus (small arrows in b and d'). Pax2 expression can also be seen in the ventral neuroepithelium of the optic recess (asterisk in d'), as previously described. E-G: Low and high power images of E13.5 coronal sections reveal strong Pax2 expression in the septum, mainly in the neuroepithelium (F). A few immunopositive cells are found in the differentiating layer of the septum (G). The boxed areas in the inset panels in E-G indicate the areas shown in the respective high power images. A-C and E-G are sections from the same specimens respectively. Scale bars: D, E-G-insets, 1000 μm; A-C, 500 μm; a, d', F-high power, 200 μm; b-c, d, E, G-high power, 50 μm.
Specific Pax2 expression was also detected in small clusters of cells in regions of the hypothalamus. The Pax2 antibody labelled a group of cells located at the lateral hypothalamic area (Fig. 2B, arrowheads in b), and a narrow strip of cells parallel to the anterior hypothalamic ventricular zone (Fig. 2B, arrows in b'). A few Pax2-immunopositive cells were also detected along the base of the hypothalamus, excluding the midline region (Fig. 2b, arrows). These groups of Pax2-positive cells can also be distinguished in a sagittal plane (Fig. 2D,d'). The cell populations indicated by the arrowheads and small arrows (Fig. 2d'), located just above the optic recess area, correspond to the respective populations indicated in Fig. 2b. The Pax2-positive cells located close to the third ventricle, indicated with large arrows in Fig. 2d', correspond to those shown in Fig. 2b'. In coronal sections of the caudal forebrain, specific Pax2 expression was also detected in a small cluster of cells resembling a nucleus in the neuroepithelium of the anterior hypothalamus (Fig. 2C,c). In accordance with previous reports, Pax2 expression was also detected in the ventral neuroepithelium of the optic recess area at the base of the hypothalamus (Fig. 2d', asterisk), retina (not shown) [5, 7, 9, 10] and eminentia thalami at the level depicted in Fig. 2B (not shown).
Double immunofluorescence with Pax2 and β-tubulin III (Tuj1), a marker of early neural differentiation found in neurites, revealed that a very low proportion of septal cells labelled with Pax2 co-expressed β-tubulin III (Fig. 3A), suggesting that the majority of the Pax2-positive cells in this region are neural precursors. This was confirmed by double immunohistochemistry with Pax2 (black, nuclear staining) and nestin (brown, filament staining), an intermediate filament protein found in radial glia, which showed that the β-tubulin III-negative/Pax2-positive cells in the septum express nestin (Fig. 3B). In the hypothalamus, most of the Pax2-positive cells found close to the ventricle (panels 2b' and 2c) are also positive for β-tubulin III, showing that these cells are newly formed neurons (Fig. 3C). The Pax2-positive population indicated by arrowheads in panels 2b and 2d' also expressed β-tubulin III, even more extensively than the other hypothalamic Pax2-positive cells (Fig. 3D). Finally, the Pax2-positive cells located close to the hypothalamic ventral midline (small arrows in panels 2b and 2d') did not express β-tubulin III, but were positive for nestin, indicating that they correspond to neural progenitors (data not shown).
Pax2 is primarily expressed in neural progenitors in the septum and in early differentiated neurons in the hypothalamus. A: Double immunofluorescence with Pax2 (red) and β-tubulin III (Tuj1) (green) on a sagittal E12.5 telencephalic section shows that only a minority of Pax2-positive cells in the septum also express Tuj1, an early marker of differentiated neurons. This is further confirmed with double immunohistochemistry with Pax2 (black) and nestin (brown) (B), revealing that these Pax2-positive cells have nestin-positive filaments. Panels (A) and (B) correspond to high power images taken within the region depicted in Fig. 2D, d. In the hypothalamus (C, D), most Pax2-positive cells express the early neural marker β-tubulin III (green), showing that they correspond to early generated neurons. Pax2-positive cells in (C) correspond to those shown in Fig. 2b', and those in (D) correspond to the cells indicated by arrowheads in Fig. 2b. The position of the ventricular zone (VZ) is indicated in sections A, C and D. Scale bars: A, 20 μm; B-D, 5 μm.
At E13.5, Pax2 was detected in cells located at the septal midline (Fig. 2E). Expression was strong and specifically confined to the septal neuroepithelium, at the point where the septum joins with the future hippocampus via the lamina terminalis (Fig. 2F). At more rostral telencephalic levels, a small number of Pax2-immunopositive cells were found in the differentiating field of the septum (Fig. 2G).
Pax2 expression in the hypothalamus at this age was similar to that seen at E12.5, with the exception that the immunopositive cells in the lateral hypothalamic area and the base of the hypothalamus shown in Fig. 2b were no longer detectable (data not shown). No Pax2-immunopositive cells were found in the eminentia thalami after E13.5 (not shown).
At E14.5, coronal telencephalic sections revealed similar Pax2 expression to that described at E13.5 (data not shown). As at the earlier ages examined, Pax2 expression was mainly confined to the septum, in the neuroepithelium adjacent to the lamina terminalis, as depicted in the mid-sagittal section in Fig. 4A and 4a. In the E14.5 hypothalamus, Pax2 expression was limited to a very narrow band of cells, localized in the differentiating field of the anterior hypothalamus and reaching the medial horn of the lateral ventricle (Fig. 4B, arrows in b). We could not detect these immunopositive cells in the coronal plane, probably because of their narrow field of expression. Expression was also found at the optic recess area, as previously described (arrowhead in Fig. 4A,B).
Pax2 protein expression in the mouse forebrain at E14.5 and E16.5. A, a: Low and high power images of an E14.5 sagittal section showing strong Pax2 staining in the neuroepithelium of the septum (Se) adjacent to the foramen of Monro (FM). This intense Pax2 staining is observed at the level where the lamina terminalis (LT) joins the septum. a shows a higher magnification of the boxed area in panel A. B, b: In more lateral sagittal sections Pax2 expression is found in a cell population (arrows) within the anterior hypothalamus (AH) reaching the medial horn (mh) of the lateral ventricle. b shows a higher magnification of the boxed area in panel B. C: E16.5 coronal section showing Pax2 expression around the optic chiasm (oc) region, in the optic stalk epithelium and in a few cells of the hypothalamic neuroepithelium next to the suprachiasmatic nucleus (SCH) (arrows). D: E16.5 coronal section showing Pax2 expression in the differentiating layer of the medial septum surrounded by the axonal bundles of the fornix (fx). Scale bars: A, B, 1000 μm; a, C, D, 200 μm; C-inset, 250 μm; b, 100 μm.
By E16.5, the main Pax2 expression domain in the developing septum was located in the differentiating field of the medial septum. It was mainly found in a stripe-like cell arrangement, parallel to the midline and surrounded by the fibre bundles of the fornix (Fig. 4D). At this developmental stage, the only Pax2 expression detected in the hypothalamus was in the optic stalk epithelium, just above the optic chiasm region (Fig. 4C), in a few cells of the hypothalamic neuroepithelium, adjacent to the suprachiasmatic nucleus (arrowheads in Fig. 4C) and in a few scattered cells in the medial preoptic nucleus.
During early postnatal development only a few Pax2-positive cells were detected in the forebrain. At postnatal day (P) 1, Pax2-immunopositive cells were detected scattered in the most caudal sections of the septal area, at the level where the fornix is found in proximity to the anterior commissure (Fig. 5A). As at E16.5, a few cells were found close to the fornix, in the medial septal area, as well as in the border zone between the medial and lateral septum (Fig. 5A,B). Pax2 also labelled the medial preoptic nucleus of the hypothalamus (Fig. 5B). At P8, a similar expression pattern was observed (Fig. 5C, D). In addition, Pax2 was detected in the subfornical organ (Fig. 5E), one of the circumventricular organs of the brain involved in fluid balance. Pax2 expression was no longer detected at P15 (not shown).
Pax2 protein expression in the mouse forebrain during early postnatal development. A-E: Coronal forebrain sections immunostained for Pax2 at P1 (A-B) and at P8 (C-E), showing the presence of a few dispersed Pax2-positive cells in the septal area (Se) (A-C, high power in D), the medial preoptic nucleus (po) (asterisk in A and B), and the subfornical organ (SFO) (arrow in E). The fibre tracts are indicated in A and B for orientation purposes (ac, anterior commissure; cc, corpus callosum; fx, fornix). The boxed areas in A and C delineate the high power images depicted in B and D respectively. Scale bars: A, C, 100 μm; B, E, 25 μm; D, 12.5 μm.
To validate Pax2 as a marker of the septal neuroepithelium, we used the Pax6Sey/Sey mutant, which has been shown to have an enlarged septum. Using double immunofluorescence, we first examined expression of Pax2 and Pax6 in E12.5 wild type embryos. In rostral telencephalic sections, Pax6 was strongly expressed in the dorsal, lateral and ventral pallium, with its most ventral expression domain expanding into the lateral ganglionic eminence, just below the pallial-subpallial boundary (Fig. 6A), as previously described [11, 14, 24]. Pax6 was expressed at lower levels in the medial pallium but did not overlap with the expression domain of Pax2 in the septal neuroepithelium (Fig. 6A, arrows).
The Pax2 expression domain in the telencephalic septum is located in a more dorsal position in the Pax6Sey/Sey mutant than in wild type. Pax2 expression in the telencephalic septum in E12.5 wild type (wt) (A, B, D) and Pax6Sey/Sey mutant (Sey) (C, E) coronal sections. A: Double immunofluorescence with Pax2 (red) and Pax6 (green) reveals that the two proteins are expressed in non-overlapping, mutually exclusive domains. The arrows in A indicate the ventral and dorsal limits of Pax6 and Pax2 expression respectively. B-C: Pax2 expression in wt and Sey embryos, showing that the Pax2 expression domain is shifted dorsally in the Sey mutant (C) compared to wt (B). The septum and the Pax2 expression domain are indicated by dashed lines and arrows respectively. D-E: Double immunofluorescence with Pax2 (red) and Lim1/2 (green) in the septum shows that Pax2 expression is found within the Lim-positive domain in both wild types (D) and Sey mutants (E), suggesting that the shifted Pax2 expression domain in the Sey mutant is still within the limits of the ventral telencephalon. Note that panels B and C correspond to slightly more rostral sections than those shown in D and E respectively. Scale bars: A, B, C, 200 μm; D, E, 100 μm.
Using Pax6Sey/Sey mutant embryos, we examined whether loss of Pax6 affects Pax2 expression in the septum. No gross changes in the extent of the main Pax2 expression domain were observed, although the intensity of Pax2 staining appeared reduced in Pax6Sey/Sey mutants compared to the wild type (area between arrows in Fig. 6B,C). In addition, Pax2 expression was observed in a more dorsal area compared to wild type, revealing a larger septum in the mutant (compare areas between dotted lines in Fig. 6B and 6C), in accordance with previously published data. This result was consistent in all mutants examined (n = 6). Double immunofluorescence with antibodies for Pax2 and the septal marker Lim1 (also known as Lhx1), using an antibody that detects both Lim1 and Lim2 (Lim1/2) [26, 27], showed that in both wild types and Pax6Sey/Sey mutants, Pax2 expression was confined within the Lim1/2-positive domain (Fig. 6D,E). This shows that the shifted area of Pax2 expression observed in the Pax6Sey/Sey mutant most likely corresponds to ventral telencephalic tissue, and not to ectopic Pax2 expression in the dorsal telencephalon.
To gain further insight into the neurochemical properties of the differentiated Pax2 cells detected in the septum, we examined co-expression of Pax2 and markers of GABA-ergic and cholinergic neurons, the principal neuronal types of this structure [28–30].
To examine whether the Pax2-positive cells detected postnatally might be GABA-ergic, we performed double immunostaining experiments with Pax2 and calbindin, a calcium-binding protein that has been shown to label a large population of GABA-ergic somatospiny neurons in the adult septum [31, 32]. A large number of neurons were labelled with an antibody for calbindin in the postnatal septum (Fig. 7B). However, Pax2-positive cells did not colocalize with calbindin at either P1 (not shown) or P8 (Fig. 7A–C). To examine whether the Pax2-immunolabelled neurons might be cholinergic, we performed double immunostaining experiments with appropriate markers. At P1, we examined co-expression of Pax2 and Islet1, a protein that has been shown to label some populations of cholinergic septal neurons [33, 34]. No co-localisation of these two proteins was detected (not shown). Choline acetyltransferase (Chat), the acetylcholine-synthesizing enzyme in cholinergic neurons, cannot be detected clearly in septal neurons by means of immunohistochemistry before P8 [28, 30]. We examined co-localization of Chat and Pax2 at P8 by means of double immunostaining (immunohistochemistry followed by immunofluorescence). As shown in Fig. 7, at P8 a few cells in the septum start expressing Chat but do not express Pax2 (7D–F).
Pax2 expression in the medial septum does not co-localise with that of calbindin and choline acetyltransferase (Chat). A-C: Double immunohistochemistry with Pax2 (black) (A) and calbindin (magenta) (B) on P8 coronal sections reveals the presence of both calbindin-positive and Pax2-positive neurons in the septal area, but no co-expression of these proteins (C). Similarly (D-F), septal neurons that express Pax2 (green) (D) or Chat (brown) (E) do not co-express these proteins (F). The panels are high power images of the level depicted in Fig. 5C. Scale bar for all panels, 10 μm.
Specific markers expressed in different regions of the developing nervous system are widely used as tools to study neural development. Here, the expression of Pax2, a well-studied marker of the isthmus, spinal cord and developing eye, has been re-examined, using a polyclonal antibody, and novel areas of Pax2 protein expression have been identified in the ventral telencephalic septum and the developing hypothalamus. The polyclonal antibody used in the present study recognizes the same epitope described by Dressler and Douglass (1992) and has been previously used by several groups to characterize Pax2 protein distribution [2, 7, 9]. In this study, the use of paraffin sections subjected to microwaving for antigen retrieval, in contrast to the cryostat sections used previously, may have allowed the identification of the previously undescribed domains of Pax2 expression.
Pax2 staining in the hypothalamus is first detected at E10.5 and comprises a few cells in the ventricular zone, dorsal to the optic recess. By E11.5, Pax2-positive cells are found in a region that extends from the area of the optic recess to the lateral ventricle that will give rise to the anterior hypothalamus. Comparison of the distribution of Pax2-positive cells in this region at E12.5 and E10.5 suggests that they might follow a migratory path towards dorsal regions of the anterior hypothalamus. By E14.5, fewer Pax2-positive cells are present than at E12.5, and these are restricted to the dorsal anterior hypothalamus. Pax2 is expressed in the optic stalk, a structure that joins the optic cup to the brain and that ends at the base of the hypothalamus [5, 7, 9, 10]. It is therefore possible that the small population of Pax2/β-tubulin III-positive cells in the developing anterior hypothalamus described here may arise from the optic stalk and collaborate with other cellular hypothalamic cues in guiding the trajectory of the optic nerve.
Pax2 is also expressed in the eminentia thalami, a transient developmental structure of unknown function that joins the ventral diencephalon to the telencephalon [22–24]. Pax2 expression in the eminentia thalami is first detected at E10.5 at the dorsal border between the diencephalon and telencephalon, and this staining cannot be detected after E13.5. The Pax2-positive cells in the eminentia thalami do not express β-tubulin III, indicating that they are most likely to be neuronal precursors. The early appearance of these cells at the diencephalic-telencephalic boundary (this study and ) suggests that they might be important for the formation of this boundary.
Consistent, high levels of Pax2 expression were also observed in the septum of the basal forebrain. Pax2 is first expressed at E10.5 by a small number of cells located in the ventral telencephalon. It seems likely that these cells give rise to the Pax2-positive population observed in the septal area one day later. By E12.5, septal Pax2 expression, although strong, is confined to the septal neuroepithelium, mainly at levels proximal to the lamina terminalis. Only a few Pax2-positive cells are observed in the differentiating layer of the septum. Septal expression is also observed at later developmental stages, but by E16.5 it is downregulated and becomes restricted to a small population of differentiated cells in the medial septum. During septal development in rodents, cells migrate from the lateral ventricle towards the midline. The medial nucleus is one of the first nuclei formed in the septum [36–38]. Therefore, it is possible that the sparse Pax2-positive cells observed at E16.5 correspond to the Pax2-positive cells observed in the differentiating field of the septum between E12.5 and E14.5 (Fig. 3A, 2G and not shown).
Pax2 is still expressed by a few cells located in the medial and lateral septal areas during postnatal development, and it is not detected after P15. The majority of neurons in the medial and lateral septum are cholinergic or GABA-ergic [28, 30, 31]. Mature cholinergic neurons express the enzyme choline acetyltransferase (Chat) [39, 40], whose expression in the septum becomes detectable at around P8 [28, 30]. At this age, Pax2-expressing neurons are still found in the septal region but do not co-express Chat, suggesting that these cells might not be of the cholinergic type. However, it is also possible that the Pax2-immunopositive cells might develop into cholinergic neurons at later time points, when Chat expression has increased and Pax2 expression has been turned off. Similarly, calbindin is expressed by a large number of somatospiny GABA-ergic neurons in the adult septum [31, 32] and it is present in the postnatal septal area, but it does not co-localize with Pax2, indicating that the Pax2 septal neurons are not of this particular GABA-ergic type. As there are many different types of GABA-ergic neurons in the septum, it is possible that the Pax2-positive cells might develop into a different type, such as the septohippocampal projection neurons, a prominent GABA-ergic population comprised of parvalbumin-positive cells [28, 42, 43]. Although a few of these cells are first detected at around P8 [28, 30], when Pax2 expression is still detectable in the septum, they become clearly visible after P15, when Pax2-expressing cells are no longer present in the septum. Again, this expression pattern precludes us from drawing conclusions about the specific neuronal type of these cells based solely on immunostaining techniques. Cell fate experiments using a Pax2-cre mouse strain and an appropriate cre reporter strain would allow us to address which neurotransmitter fate and properties the Pax2-positive septal neurons adopt.
In the developing eye, there is a sharp boundary between the domains of Pax2 expression in the optic stalk and Pax6 expression in the optic cup [18, 44, 45]. Mutual cross-repressive interactions between Pax2 and Pax6 are essential for formation of this boundary. Here we show that Pax2 and Pax6 are expressed in neighbouring, non-overlapping domains in the rostral telencephalon, reminiscent of the pattern observed in the developing eye [18, 45]. Given this expression pattern, and the fact that the septum is enlarged in Pax6Sey/Sey mutants, we hypothesised that the Pax2 septal expression domain might be expanded in this mutant. However, we found no increase in the extent or intensity of Pax2 expression in this region in Pax6Sey/Sey mutants. Nevertheless, the Pax2 expression domain was found at a more dorsal position than in the wild type, consistent with the previously described size increase of the septum in the Pax6Sey/Sey mutant. The Pax2 expression domain still lies within the ventral telencephalon, as shown by co-expression of the septal marker Lim1 [26, 27].
There are several mouse models with different types of mutations in the Pax2 locus, including the Krd mice (Krd/+), a mutant with a chromosomal deletion that includes this locus [9, 46, 47], Pax2-/- null mutants [10, 48], and Pax21Neu mice with a frameshift mutation in Pax2. All of these mutants display defects in kidney formation, optic nerve trajectory, and inner ear patterning, consistent with previously identified expression domains of Pax2 transcript [10, 48–50]. However, defects in the midbrain-hindbrain region range from complete loss of the posterior mesencephalon and cerebellum in the Pax21Neu mouse, to no phenotypic alteration in the Pax2-/- mutant [10, 51], possibly as a consequence of differences in genetic background. It would be of interest to study the telencephalic septum in these different mutants, to identify any possible alterations due to loss of Pax2 expression in this region.
Pax2 is expressed in the anterior hypothalamus, eminentia thalami and telencephalic septum of the developing mouse forebrain, in neuronal progenitors and early born neurons. Between E11.5 and E14.5, it is strongly expressed in the septal neuroepithelium, at a level close to the lamina terminalis, and it is no longer detectable by P15. Further, the absence of functional Pax6 does not cause gross alterations of Pax2 expression in the septum. Thus, Pax2 represents an ideal marker for the study of the developing septum.
Animal care was in accordance with institutional guidelines and UK Home Office regulations. The day the vaginal plug was detected was considered E0.5. Wild type mice on a CBA genetic background were used for the Pax2 expression pattern analysis. Pax6Sey/+ heterozygotes, kept on a mixed CD1-Swiss genetic background, were intercrossed to generate homozygous null embryos. These were identified by the absence of eyes, as previously described. Wild type embryos for comparison were obtained by intercrossing wild type mice of the same genetic background.
Embryos were harvested between E10.5 and E16.5, and pups between P1 and P15. Whole embryos or heads of P1 pups were immersion fixed in 4% paraformaldehyde in 0.1 M phosphate buffer overnight at 4°C. P7 and P15 pups were anesthetized with Avertin and perfused through the heart with fixative, followed by tissue dissection and overnight incubation in fresh fixative. Embryonic and postnatal tissues were processed following standard conditions, embedded in paraffin and cut into serial 10 μm (embryonic tissue) or 12.5 μm (postnatal brains) sections in a coronal or sagittal plane. At least two wild type embryonic heads or postnatal brains were used for each age, and six Pax6Sey/Sey mutants with the corresponding wild types were examined with each marker.
For single immunohistochemistry experiments, the dark brown signal was revealed after incubation with the ABC kit (Vector), followed by standard diaminobenzidine (DAB, Sigma) and hydrogen peroxide incubation. For the double immunohistochemistry experiments, nuclear staining was first detected using a DAB-nickel detection kit (Vector), resulting in a grey/black staining. Sections were then incubated with the second primary antibody and the appropriate secondary antibody, and the dark brown filamentous signal was revealed after incubation with the ABC kit (Vector), followed by a DAB reaction using the DAB detection kit (Vector). For the double immunofluorescence experiments, Pax6 or Lim1/2 signal was amplified with a biotinylated anti-mouse IgG antibody, and signal was revealed after incubation with streptavidin conjugated to Alexa Fluor 488 dye (Invitrogen, 1:200). Monoclonal anti-β-tubulin isotype III (Sigma, clone SDL.3D10, 1:400) was detected using an anti-mouse IgG conjugated to Alexa Fluor 488 dye as secondary antibody (Invitrogen, 1:200). For double immunohistochemistry followed by immunofluorescence, after detection of the first antibody with DAB or DAB-nickel immunohistochemistry as described above, sections were incubated with a second antibody, which was detected by means of immunofluorescence. Polyclonal Pax2 and calbindin were detected using as secondary antibodies an anti-rabbit IgG conjugated to Alexa Fluor 488 dye and an anti-rabbit IgG conjugated to Alexa Fluor 568 dye, respectively (Invitrogen, 1:200). Appropriate controls were included in all cases by incubating some sections with all but the primary antibodies. No immunostaining occurred under these conditions.
A Leica microscope connected to a Leica DFC 480 digital camera was used to capture images of DAB and immunofluorescent labelled sections. Confocal images were captured with a Leica TCS NT confocal microscope.
Work in the authors' laboratory is funded by the BBSRC, Wellcome Trust and the MRC. We thank Christine Morrison and Tamsin Lannagan for excellent technical assistance, Trudi Gillespie for confocal imaging, Catherine Carr and Tian Yu for providing embryos, Dario Magnani and Petrina Georgala for postnatal brains and staff of the University of Edinburgh Biological Research Resource facility at Little France for animal care. The Lim1/2 antibody was generated by T. Jessell and S. Brenner-Morton, the nestin antibody by S. Hockfield and the Pax6 antibody by A. Kawakami. They were obtained from the Developmental Studies Hybridoma Bank developed under the auspices of the National Institute of Child Health and Human Development and maintained by the University of Iowa (Department of Biological Sciences, Iowa City, IA).
VF designed and carried out the experiments, analysed the results and wrote the manuscript. DJP and JOM participated in the analysis and writing of the manuscript. All authors read and approved the final manuscript.
The Origin of Life: A Scientific or Philosophical Issue?
A divergence of views illustrated by a partial snippet of an article, which can be viewed at the indicated URL, provides an occasion for commentary on a familiar theme.
>"A month ago I sent a letter to the journal First Things in response to an opinion piece by Robert T. Miller on why Intelligent Design should not be taught in public schools. Unfortunately, access to First Things is by subscription and his column is too lengthy to copy here. In any event, the recent issue has a very truncated version of my letter along with the submissions of others, including Michael Behe, to which Mr. Miller responds. My letter, slightly shortened, follows. The portions run by FT are italicized. I will post Mr. Miller's response tomorrow:"
>"Robert T. Miller asserts in his article Darwin in Dover, PA (April 2006) that ID "is not science but neither is it religion." He explains that it's not science, at least in the strong sense, because a designer does not operate by law-like necessity."
[Bradford]: That's a distinguishing feature of intelligence. Outcomes resulting from reason and choice are the conceptual opposite of outcomes determined by the necessity of natural forces. Or to put it differently, a prerequisite to an intelligent inference is data indicating that an outcome did not result from law-like necessity. For example, a hypothesis that a series of chemical reactions led to a living, self-replicating cell presumably would be falsifiable. Falsification could take the form of evidence that an essential property of life would not arise from unguided chemical reactions.
The coding conventions by which nucleic acids function would not result from a series of prebiotic chemical reactions. Any chemical process generating encoded nucleic acids must be one whose sequential nucleotide order already functions according to pre-established encoding conventions. A preordained functional link between codons and amino acids is found not in philosophy but rather in the nature of nucleic acids themselves. Without such a linkage, information about functional amino acid sequences can neither be stored nor passed on to descendants. Without the functional requirement there is no basis for selection. An encoded convention, and sequences ordered according to it by intelligent manipulation, circumvents the conundrum.
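To make concrete what is meant by a codon-to-amino-acid convention, here is a minimal lookup sketch using a small fragment of the standard genetic code table. The code is purely illustrative (the function name and the choice of codons are mine); it shows only that the mapping is a table of assignments, which is the feature under discussion.

```python
# A toy fragment of the standard genetic code: the mapping from codon
# to amino acid is a lookup convention realized by the translation
# machinery, not something deducible from the bases alone.
STANDARD_CODE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = STANDARD_CODE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGCUUAA"))  # ['Met', 'Phe', 'Ala']
```

Storing or transmitting a functional amino acid sequence presupposes some such table already being in force on both the writing and reading ends.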
>"ID, he concludes, is metaphysics, a branch of philosophy, and thus does not belong in a science classroom."
[Bradford]: Is the belief that life was generated by a process devoid of intelligent guidance grounded in science or philosophy? It is the latter. The belief that life arose that way is unsupported by the evidence. The argument for the insufficiency of the evidence is empirical. It has nothing to do with philosophy. The belief that ID is metaphysical and prevailing theories of origin are not is self-deception.
The following referenced article contains some revealing comments by Judge Jones of Dover school district fame. My comments are interspersed and identified.
"CARLISLE, Pa. -- A federal judge who outlawed the teaching of "intelligent design" in science class told graduates at Dickinson College that the nation's founders saw religion as the result of personal inquiry, not church doctrine.
U.S. District Judge John E. Jones gave the commencement address yesterday to 500 graduates at Dickinson College, his alma mater.
"The founders believed that true religion was not something handed down by a church or contained in a Bible, but was to be found through free, rational inquiry," said Judge Jones, who was thrust into the national spotlight by last year's court fight over the teaching of evolution in the Dover school district."
[Bradford]: There was a variety of views among the founding fathers as to what constituted "true religion." Acknowledging this is an essential step in avoiding the type of self-serving doctrinaire pronouncements that frequently accompany constitutional rulings about the "establishment clause." While the founding fathers clearly favored the freedom to choose one's own religious conviction, one man's view as to what is rational is another's view of the irrational and the founding fathers were aware of this.
"The founding fathers -- from school namesake John Dickinson to Alexander Hamilton and Thomas Jefferson -- were products of the Enlightenment, Judge Jones said.
Following a six-week trial last year that explored concepts in biology, theology and paleontology, Judge Jones concluded that the Dover Board of Education had violated the separation between church and state.
Intelligent design holds that living organisms are so complex they must have been created by some kind of higher being.
[Bradford]: This has become a trite cliché favored by opponents of ID. Most IDers argue that intelligence better explains the origin and diversity of life, and of the universe it is found in, than standard theories do.
"In his ruling, Judge Jones called it "an old religious argument for the existence of God" and accused the school board of "breathtaking inanity" in trying to teach it."
[Bradford]: It can be used to argue for the existence of God just as evolution and abiogenesis have been used by prominent atheists since Darwin to argue that God does not exist. The comment is irrelevant.
The school board had argued that it hoped to expose students to alternatives to Charles Darwin's theory of evolution.
The case cost the district more than $1 million in legal fees -- and cost school board members, who were turned out in November's election, their seats.
Judge Jones credited his liberal arts education at Dickinson, more than his law school years, for preparing him for what he called his "Dover moment."
"It was my liberal arts education ... that provided me with the best ability to handle the rather monumental task of deciding the Dover case," he said.
[Bradford]: I note he did not credit his scientific expertise.
Nor does the concept of 'irreducible complexity'. Complexity is always being reduced, modified, converted to a different kind of complexity, lost completely, made over again from a new starting point, throughout evolutionary history. The human body (and that of chimps, sheep ...) isn't an example of a perfectly designed machine, but a grab bag of bits and pieces put together over a long time. It far more resembles a sculpture made from junk than a Swiss watch.
[Bradford]: This is a story, not an exposition explaining the evolution of irreducibly complex systems. Biochemical interactions are more intricate and precise than the Swiss watch. Proteins are molecular machines. Their amino acid components are encoded precisely by their genes both as to identity and sequence. Their expression is dependent on other proteins and DNA sequence patterns to ensure that expression is both timely and adequate in quantity. Such proteins in turn have their own encoding genes. Yet protein synthesis would be impossible despite this level of organization were it not for dozens of different tRNAs, aminoacyl synthetases with multiple active sites, properly sequenced mRNAs, ribosomes and timely infused energy in the form of ATP. Some effective antibiotics are based on the idea of disabling one of the multiple components of this system. Proteins are not synthesized without this breathtakingly precise apparatus. These are precision parts, not a grab bag of junk. Horton, like others who make the same arguments, does not bother to account for how such an irreducibly complex system arose from a precellular environment. And for good reason. There is no evidence that it would.
If you take any organ in the body. ANY organ. And trace it back through evolutionary history you will see how it has evolved through more and sometimes less complex stages, ultimately back to the first multicellular species.
[Bradford]: Correction. You will not see how it evolved. Instead you will be shown other species in what is believed to be the same line of descent. You can then view the differences in an organ. No process of change is on display.
In many ways the big evolutionary jump was not from simple animals to complex ones but from single celled to multi-celled species (although even that may not have been such a big deal at the time - two cells which have failed to separate fully after division can potentially swim faster than any one cell, and so on).
[Bradford]: More stories, which get less entertaining as we go on. The differences between prokaryotic cells and eukaryotic cells are enormous. One can start with the basics and compare their genomes to illustrate the point. The reference to two cells swimming together is a childish bedtime story, not a scientific explanation.
Once you have a body with many cells, then the challenges of preventing water loss, moving, taking in oxygen, absorbing nutrients, getting rid off excess fluids and waste products, circulating oxygen and nutrients, responding to stimuli from outside the body, reproducing, can all be done in many different ways and combinations. And initially some of those ways will be quite simple - for example a straight gut with little difference from front to back, and later that gut will become longer and more coiled and with different functions along its length - more complex if you like. Both guts will function very well, and so will the intermediate stages. And this is not theory, we can see all those different ways in both the modern species and in the fossil record.
[Bradford]: Of course we see a variety of phenotypes. What Horton and others avoid is pinning down evolutionary causes that can be traced to genetics. For example, where are explanations describing the evolution of nucleosomes and histone acetylation and deacetylation mechanisms? Then we might tie this in with timely transcription before proclaiming the evolution of eukaryotes a done deal.
And, finally, of course complex structures are made up of simple parts. The bodies of all multi cellular animals are made of many cells. All organs are made up of cells, in various combinations and functionalities. All cells are fundamentally the same, but can become specialised, and the combinations of specialised cells are what make up complex organs.
[Bradford]: Cellular differentiation illustrates more not less difficulties with standard evolutionary explanations.
All of that makes sense when evolution is the result of natural selection operating on mutations in a varied and changing environment.
[Bradford]: Another assertion without merit. Natural selection does not explain why or how the different components of the protein synthesis function would evolve nor how that would be related to environmental factors.
What doesn't make sense is that an intelligent designer would come up with a middle ear made from what were originally jaw bones, or an appendix, or an upright species with a back originally evolved for walking on all fours.
[Bradford]: Misconceptions getting in the way of good theology.
David Horton puts on a show in an effort to convince the reader that he is answering a question posed by one of his readers. He is not. Instead we see an obfuscation tactic frequently in evidence when critics confront the issue of irreducible complexity. Comments follow the referenced URL.
"Two great questions about evolution in recent posts in response to my evolution blogs, and here is the second one. One of my readers asks 'How does an irreducibly complex structure appear in the first place? It can't evolve up from from something simple, right? How does a functioning whole come into being from parts?
This is another one of those cases where you read something and a light bulb goes off and you think, ah, that's the problem, that's the reason for the lack of comprehension. And then you think, yes, and this lack of comprehension is why children all over the world are being taught 'intelligent design' in the year 2006, a concept so simple minded that it was discredited 150 years ago."
[Bradford]: This is the first indication that something is seriously amiss in this analysis. Behe, who coined the phrase irreducible complexity, applied it to different biomolecular complexes that were not even dreamed of 150 years ago. Darwin and his contemporaries had no clue as to the make up of cells to say nothing of proteins and their encoding genes. Irreducible complexity is a descriptive term describing biochemical systems consisting of multiple proteins. Let's continue and find the real simple mindedness on display in this paper.
"Is the human body 'complex'? You betcha (but no more 'complex' than the bodies of gorillas and chimps and whales and sheep and bears and kangaroos and mice, and arguably less complex, in some ways, than the bodies of birds and snakes and fish). Did the 'complex' bodies of humans (and all other modern animal species) evolve directly from the 'primeval slime'? Of course not. Did they evolve from it indirectly over a long period of time? Of course."
[Bradford]: We are treated to the standard argument by assertion. Horton is holding a gun filled with blanks. Noone is contending that animals evolved from primeval slime. But Horton does not treat us to an explanation as to how a single cell did evolve in a prebiotic environment. Not surprising. Horton doesn't have a clue. To be fair neither does anyone else. But then why are we told that indirect evolution of course occurred. Horton is hoping for simple minded readers who do not pose questions he cannot answer.
"One big problem is the word 'complex'. Evolution doesn't work to make bodies more complex but more functional. Sometimes this might result in increased complexity, sometimes in increased simplicity. If by complexity people mean bodies with a lot of different organs then a human body is less complex than sheep or cattle which have very complex 'stomachs' or rabbits which have a functional caecum where we only have the remains of a non-functioning reduced caecum (an appendix). Birds have arms modified for flight, and bones modified to be light, fish have swim bladders instead of lungs, and so on. Fish can also analyse pressure variations in water, and some can analyse electrical signals, bats can send and receive very high frequency sounds in a process like radar, snakes can taste the air and receive vibrations through the ground, we can't do any of that stuff. The complexity concept makes no sense at all."
[Bradford]: What drivel. The author is saying nothing that has any relevance to points made by Behe or the questions posed by the reader. Evolutionists like to make the point that evolution has no direction. This fits in with their no intelligent causality position. But in fact the natural history of life on earth (the only planet known to have life) would entail a history of increasing complexity from organic chemicals to unicelluar organisms to eukaryotic organisms. Any legitimate model should explain events that actually took place. In this case increasing complexity would be an observable phenomenon in need of explaining. It makes no sense only to ideologically blinded Darwinists.
More about the article entitled 'Life without DNA Repair' by David M. Wilson III and Larry H. Thompson which can be accessed at the following address.
Some snippets from the article and related comments follow. The article focuses on a deficiency in a base excision repair (BER) component, AAG, a DNA glycosylase that excises damaged DNA bases.
"DNA glycosylases can be separated into two groups: those that possess only an N-glycosidic cleaving activity, and those that possess both an activity to remove substrate bases and an activity to incise the phosphodiester backbone immediately 3 of the resulting AP site via a -lyase mechanism (reviewed in ref. 9). The biological significance of the AP lyase activity, which produces a normal 5-phosphate and an obstructive 3-end (i.e., a 3-deoxyribose moiety or a 3-phosphate), is currently unclear. Furthermore, how, if at all, the type of initiating DNA glycosylase dictates downstream events during BER is unknown. It seems likely, however, that any glycosylase-initiated repair event would proceed through the short-patch pathway in which APE would act as the 3-repair diesterase to remove the abnormal AP lyase-generated 3-terminus before gap filling and ligation."
[Bradford]: This reminds us how many parts there are to the base excision repair mechanism. Not only are there multiple proteins, but there can be multiple active sites too.
"Engelward, Weeda, and colleagues (8) have genetically engineered animals deficient in AAG, a DNA glycosylase that removes a broad spectrum of base damages, including, but likely not limited to, 3MeA, 3-methylguanine, 7-methylguanine, 1,N6-ethenoadenine, hypoxanthine, and 8-oxo-7,8-dihydroguanine; AAG does not possess an AP lyase activity. It is worth mentioning that the mouse and human AAG proteins are only moderately conserved (80% identity at the amino acid level) and display some differences in their substrate preferences (32). Given this fact and considering the notable disparities that have been observed between certain repair-deficient mice and their counterpart human subjects, we must proceed with caution when interpreting data gathered from animal models. However, this caveat does not diminish the incredible wealth of information that is being obtained from these models (1).
Protein extracts from tissues of AAG (/) animals display essentially no detectable repair activity for 3MeA, 1,N6-ethenoadenine, and hypoxanthine base modifications, although a hint of a minor lung-specific glycosylase activity for 1,N6-ethenoadenine lesions was reported (8). Furthermore, the knockout embryonic stem cells show hypersensitivity to a variety of alkylating agents and, surprisingly, to mitomycin C (33). Thus, AAG likely represents the major repair glycosylase for alkylation base damages, whereas its role in protection against mitomycin C is unclear. The finding that AAG-deficient animals survive embryogenesis raises several issues, particularly in light of the embryonic lethality of the other BER knockouts (Table 1)."
[Bradford]: We see that AAG is likely the glycosylase repairing alkylation base damages but that AAG deficiency is not lethal during embryogenesis in contrast to other BER components. Next we find speculation as to the reason.
"The fifth, and perhaps most likely, explanation for the survival of these animals is that one or more of the other DNA repair systems substitutes for AAG in its absence. There may, in fact, be a minor DNA glycosylase activity that can cope with the normal level of alkylation base damage, but that goes undetected in the repair assays used. The ability to cross different genetically engineered repair-defective backgrounds may uncover any potential overlap of the various corrective systems. For instance, if two repair systems possess redundancy for a common cytotoxic lesion, then breeding the appropriate repair-deficient animals would lead to embryonic lethality of the double knockout. Measuring the distribution of the repair patch lengths in AAG (/) also may provide clues as to which pathway is adopted."
[Bradford]: The fifth possibility is thought to be perhaps the most likely. There may be functional redundancy that allows for other repair components to take up the AAG role. The DNA repair function is clearly a critical one and deserving of more attention from biology's natural historians.
If a judge is to use history to justify a judicial decision the judge should at least get the history right. Viewing the film 'Inherit the Wind', a slanted, fictionalized version of history, to familiarize himself with "historical context" would be comical if Judge Jones were bantering on stage rather than issuing judicial fiats. It is apparent that the judge had his mind made up in advance of the trial and was selectively searching for historical data to support a preconception. These comments appear at the following blog which incidentally is one of the more thought provoking ones on the web.
Below are the notes for my comments at the Traipsing into Evolution book party held at Discovery Institute yesterday. There the four authors discussed Judge Jones’ lengthy opinion in the Dover intelligent design trial, and touched on some of the highlights from the book, which was our response to his opinion.
My primary contribution to the book was comparing Judge Jones’ history of intelligent design with the true history of it I discovered in my research.
For instance, Jones suggests that the design argument began with St. Thomas in the Middle Ages. This was part of the judge’s attempt to depict intelligent design as fundamentally Christian. The problem is that the design argument dates back much further, to the pagan philosophers Socrates and Plato.
Jones also appears unaware of the modern design argument’s rich history in the 20th century, stretching back to discoveries by Albert Einstein and Edwin Hubble. This isn’t surprising since Judge Jones told the media that he planned to watch an old Hollywood film, Inherit the Wind, for “historical context.” Inherit the Wind is a thinly veiled account of the 1925 Scopes Monkey trial, where a man was tried for teaching evolution. Taken as history, the film grossly misrepresents the actual trial, a fact well attested to even by historians of science favorably predisposed to Darwinism.
The film’s central trope turns out to be Judge Jones’ central trope: Anyone who questions Darwinism is a dangerous creationist driven by Christian fundamentalist impulses.
In keeping with that trope, Jones suggests that intelligent design is just biblical creationism repackaged after a 1987 Supreme Court decision against biblical creationism.* If Jones had read key briefs submitted to him, he would know that the intelligent design arguments in biology pre-date that Supreme Court decision by several years, drawing on developments in information theory in the ‘50s and the information revolution in biology in the ‘50s and ‘60s.
One of the first to describe the significance of these discoveries was chemist and philosopher Michael Polanyi. In the late ‘60s, in essays published in the journal Science and in Chemical and Engineering News, he argued that DNA isn’t reducible to physics and chemistry any more than the sentences in a newspaper are reducible to ink and paper.
Who published the book? The Philosophical Library of New York, a publisher of more than twenty Nobel Laureates. When it appeared, the book was praised by several leading origin-of-life researchers as well as leading British philosopher Antony Flew, at the time an atheist.
These events never make it into the judge’s official history. Jones also ignores discoveries in physics and cosmology that began to reinvigorate the design argument as early as the 1920s.
These culminated in a growing body of evidence suggesting that the universe was fine tuned for life, a point attested to even by prominent scientists outside the intelligent design community. For instance, in 1982 prominent theoretical physicist Paul Davies described this growing evidence for fine tuning as “the most compelling evidence for an element of cosmic design.”[i] Physicist and agnostic Fred Hoyle and Nobel Laureate Arno Penzias made similar statements. Did Judge Jones dismiss their arguments as creationist drivel? Actually, Jones never addresses these matters because he’s apparently unaware of them. They didn’t fit his Inherit the Wind rubric, and so for him they don’t exist.
An excellent article entitled 'Life without DNA Repair' by David M. Wilson III and Larry H. Thompson can be accessed at the following address.
Some snippets from the article and related comments follow.
"The advent of gene targeting techniques has permitted the construction of specific genetic deficiencies to evaluate the biological contribution(s) of an individual protein. Mice lacking a precise DNA repair activity have been generated, and these mutants show various combinations of defective embryogenesis, tissue-specific dysfunction, hypersensitivity to DNA-damaging agents, premature senescence, genetic instability, and elevated cancer rates (1). That repair-deficient animals display such abnormalities underscores the fundamental importance of DNA repair in protecting against the mutagenic and cytotoxic effects of DNA damage."
[Bradford]: There is much evidence indicating that in the absence of DNA repair mechanisms genomes become corrupted in short order. This brings up a historic question. How would that first putative genome avoid the natural tendency to lose genetic information which occurs without a built in self-correction mechanism? In addition what evidence is there that a prebiotic environment would generate genetic information much less accumulation of such information at a pace that exceeds the information lost?
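The point that an unrepaired genome degrades can be illustrated with a toy simulation. Everything here is hypothetical: the per-base error rate, the repair efficiency, and the genome itself are made-up parameters chosen only to show the qualitative effect, not figures from the article.

```python
import random

def replicate(genome: str, error_rate: float, repair_eff: float) -> str:
    """Copy a genome; each base may be miscopied, and a repair step
    catches a fraction of those errors (toy model, made-up rates)."""
    bases = "ACGT"
    out = []
    for b in genome:
        if random.random() < error_rate and random.random() > repair_eff:
            out.append(random.choice([x for x in bases if x != b]))
        else:
            out.append(b)
    return "".join(out)

def mutations_after(generations: int, error_rate: float,
                    repair_eff: float, seed: int = 1) -> int:
    """Count sites differing from the original after serial replication."""
    random.seed(seed)
    original = "ACGT" * 250          # 1,000-base toy genome
    genome = original
    for _ in range(generations):
        genome = replicate(genome, error_rate, repair_eff)
    return sum(a != b for a, b in zip(original, genome))

# With repair disabled, differences from the original accumulate
# far faster than when most copying errors are corrected.
print(mutations_after(50, 0.001, 0.99))  # repair on: few mutations
print(mutations_after(50, 0.001, 0.0))   # repair off: many mutations
```

The sketch only dramatizes the one-way tendency under discussion: without a built-in correction mechanism, copying errors compound generation after generation.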
"Proteins participating in base excision repair (BER) cope with chromosomal damages that arise as spontaneous decomposition products or from reactions with metabolically or environmentally derived reactive chemicals (2)namely oxygen free radicals and alkylating agents."
[Bradford]: This brings up another point. Evolutionists have explained that as we look back in time to eras in which living organisms lacked biological capacities found in presently existing organisms, such organisms would have faced less competition because competing organisms also were less developed. Enhanced capacities are correlated to keener competition. However one type of challenge should have existed throughout history. Challenges to life based on environmental factors specifically, chemical reactions, should have existed at the outset as they exist today. General examples are cited by the authors. Nucleic acids are vulnerable to environmentally generated damaging reactions. Why would this not necessitate encoded self-correcting mechanisms at the origin of life? How do they fit Darwinian models?
An article at the referenced URL contains comments that are typical of exaggerated claims for evolution. A part of the article appears below along with my comments.
"This is like teaching chemistry and skipping (the) periodic table," she said. "Evolution is the idea that living things had common ancestors, and common ancestry of living things is what explains why biological phenomena are the way they are."
[Bradford]: Is that so? What is really amazing is how little is explained by evolution. How does evolution explain metabolic pathways? How does evolution explain how the transcription and translation functions evolved? How does evolution account for an initial replicating genome? Error detection and correction mechanisms that maintain genomic integrity? The evolution of a mechanism designed to cope with supercoiling?
"I go to church every Sunday," said panelist Joseph Travis, dean of FSU's College of Arts and Sciences and a biology professor who's taught evolution. Believing in evolution and God is not contradictory for many people, Travis said. Whether Darwin's theory of evolution explains everything already is questioned in science classes, he said. What concerns Travis is the view that understanding evolution is optional for students.
"The classic example are things like pathogens," he said. "They use methods from evolutionary biology to discern what strain of influenza to use to develop next year's vaccine. That affects a lot of people."
[Bradford]: Evolutionists like to hijack credit from disciplines like virology, molecular biology and more where the real action takes place.
Evolution "determines how we teach critical thinking, how we go about thinking what science is and how science is to be taught," said Frank Stephenson, editor of FSU's Research and Review magazine that is sponsoring the forum.
[Bradford]: Critical thinking is restricted to outcomes favorable to Darwinism.
Said Michigan State University philosophy of science professor Robert Pennock, who will be on the panel: "The Dover trial really was the test case for intelligent design. Creationists had lost in the courts in the 1980s and had to retool their position. Hopefully this will help teachers who want to teach good science to do that without worrying."
[Bradford]: Good science entails teaching students that the most basic biological systems and biochemical pathways are unexplained by Darwinian paradigms.
The following quoted paragraph is found in an article appearing in the referenced URL. It reflects a common practice of ignoring a glaring weakness of currently accepted biological paradigms that are said to account for the "diversity and complexity of life." The paragraph is broken into parts to allow for my comments.
>"The biologist, Randy Olson, accepts that there is no credible scientific challenge to the theory of evolution as an explanation for the diversity and complexity of life on earth."
[Bradford]: Evolution does not explain either. Evidence for adaptation by means of its random mutation, natural selection model is variation within species. Examples cited are generally rapidly reproducing unicellular organisms or insects which revert to prior allele frequency when the environment changes.
Nor does it explain the origin of life. If evolution begins with a living cell then we are left without an adaquate scientific explanation as to how life came about in the first place. Evolutionists can point out that this is within the parameter of abiogenesis, not evolution, but the impotency of scientific claims about life's origins is telling given the lack of meaningful explanations provided by origin of life proponents.
As long as advocates of evolution are able to assert that chemical reactions generated a minimal functioning genome in prebiotic conditions without having to explain what those reactions were, then they have effectively immunized their theory from competition. Generation of a genetic code, which is distinct from the matter associated with its manifestation, is assumed to arise without intelligence despite the lack of evidence that it does so. The objection that secondary inferences about the "supernatural" can be made from intelligent causality enables evolutionists to eliminate the possibility of challenges a priori.
>"He agrees that intelligent design's embrace of a supernatural "agent" puts it outside the realm of science."
[Bradford]: Why not squarely face the issue of how genetic information is generated through precellular chemical reactions while leaving the supernatural arguments to theology? How is the foregoing accomplished without intelligence?
Here is a human interest story with an evolutionary twist to it.
CHICAGO - Women looking for a long-term relationship like men who like children — and they can tell which guys might be interested in becoming fathers just by looking at their faces. Those are among the findings of a study of college students published Wednesday in a British scientific journal.
"This study suggests that women are picking up on facial cues that are perhaps related to paternal qualities," said James Roney, a University of California at Santa Barbara psychologist and lead author of the study. "The more they perceived the men as liking kids, the more likely they could see having a longer-term relationship."
"Experts said evolution has apparently programmed women to recognize men who might be interested in propagating the species by raising a family."
[Bradford]: It was evolution huh? How do random mutations program this? Let's see.
"The study wasn't all bad news for men not interested in settling down. It found that women can look at men's faces and figure out which of them have the highest testosterone levels. Those men — rated the most masculine by the women — turn out to be just the kind of guys they would want for a fling."
[Bradford]: You need a sense of humor at times like this. Evolution is credited with the "programming" needed to enable "women to recognize men who might be interested in propagating the species by raising a family" and also presumably for the capacity to figure out who the men are with the highest testosterone levels who "turn out to be just the kind of guys they would want for a fling." Now envision your putative cave woman having her fling and then getting pregnant with a child having the macho man's genes. The more sensitive fatherly type then cares for the child and ensures the survival of macho man's genes. These stories are priceless.
"Women make very good use of any information they get from a man's face," said co-author Dario Maestripieri, an associate professor of comparative human development at the University of Chicago. "Depending on what they want and where they are in their lives, they use this information differently."
"In the study, published in Proceedings of the Royal Society B: Biological Sciences, researchers looked at a group of 39 men, ages 18 to 33, at the University of Chicago. Each man was shown 10 pairs of photographs and silhouettes, one of an adult and the other of an infant, and asked to rate their preferences. Meanwhile, their saliva was tested to determine testosterone levels.
Photographs of the men's faces were then shown to 29 women, ages 18 to 20, at the University of California at Santa Barbara.
The women were asked to rate the men on four qualities: "likes children," "masculine," "physically attractive" and "kind." Then they were asked to rate how attractive they found each man for short-term and long-term romance.
The study found women did well at rating men on their interest in babies, and those they rated masculine generally had higher testosterone levels than the others.
For example, the men who indicated they liked children the most were rated as above average in liking children by 20 of the 29 women. The men who showed no interest in children were correctly rated as below average in that category by 19 of the women.
The higher the women rated the men for masculinity, the higher they were rated as potential short-term romantic partners. The higher they rated men for their interest in children, the higher they were rated for long-term romance.
The features that research has suggested denote high testosterone levels include a prominent jaw and a heavy beard.
The findings came as no surprise to those in the business of studying human behavior — and love.
"What this study illustrates is that there are genetic programs that increase survival of the species because there are hormones in women that are cuing their reactions to the hormones of the men," said Dr. Daniel Alkon, scientific director of the Blanchette Rockefeller Neurosciences Institute in Morgantown, W.Va., and Washington.
Or as Kristin Kelly, a spokeswoman for the online dating service Match.com, put it: "They call it `love at first sight' for a reason. They don't say `love at first sentence,' `love at first word.'"
It is unclear just what about the men's faces tipped the women off about their interest in children. While Maestripieri guessed it might have something to do with "a more rounded face, a gentler face," Roney said the answer might be found in the expressions on the men's faces.
He explained that after the study was completed, five graduate students were asked to rate on a scale of 1 to 7 whether the men looked angry or happy. Though the men were instructed to have a neutral look on their faces when photographed, some apparently looked happier than others.
"It seemed that the men who picked more infants in that test had a happier or more content look on their face," he said."
Excerpts from the article at the referenced web site discuss a familiar topic- where and if intelligent design should be discussed. My comments are included.
"If any ideas evolved at a forum on intelligent design at Palisades High School on Tuesday, it was that public schools should offer philosophy classes where questions about human origins could be discussed.
[Bradford]: I take it then that OOL or abiogenesis issues are also to be discussed in philosophy classes. If not why?
Sharon Mendelson, one of about 80 people who attended the panel discussion, said the science classroom is the wrong place to discuss whether a higher intelligence has had a role in life. A philosophy class is the better venue, she said, winning applause from audience members at the forum sponsored by the high school's Students for Social Change club."
[Bradford]: Do philosophy classes now accommodate questions related to the origin of the genetic code? If the study of organic chemistry yields no evidence favoring the view that the code was the result of unguided natural forces, then why would a discussion of such be suitable in a philosophy class as opposed to a class in biology, genetics, chemistry...?
"Two science teachers, including Palisades biology teacher Pat Raynock, disagreed that intelligent design should be discussed in the classroom. “We look for evidence, testable evidence, not revelation,” Raynock said."
[Bradford]: In that case why not hypothesize as to what the precursor of a minimal genome was and subject whatever that is to selective pressure in a prebiotic environment and observe what natural forces produce? At what point does it become obvious that unguided natural forces are not up to the task at hand?
"Lehigh University biology teacher Steven Krawiec, who noted his church membership, also said that evolutionists are not focusing on the origin of life. Instead, they focus on the changes in species over time, something that can be demonstrated."
[Bradford]: Genetic change is not controversial. The type of change needed to produce a eukaryotic organism from a prokaryote is, but this type of change is exactly what is not demonstrated.
A common objection to intelligent design is the claim that it is unscientific. Those advancing this objection can hope to project that fabled attitude of scientific objectivity. The claim rings hollow for many. Now they can cite students who are at least frank enough to acknowledge their real reasons. This blog entry from Telic Thoughts reveals the objection by association phenomenon. It occurs at one of our Ivy League schools, where one might expect a rejection based on evidence. Alas, this is not to be.
Students reject ID over motives, not science? Oh, the humanity!
I have been teaching a new course on the frontiers of science, required for all freshmen at Columbia. These students are mostly sharp, capable, and open-minded. Still, many of them think that intelligent design should be studied in the interest of being fair and balanced. What’s troubling is that even those who accept evolution often treat it as a matter of belief, of political persuasion, as if it were akin to being for or against free trade. And if they reject intelligent design it’s often not because they can see its vacuousness as a scientific theory, but merely because the religious and conservative stripes of ID can sometimes look a little uncool. As for science, reason, evidence — what’s that? Students rejecting intelligent design, not because of any knowledge of science, but because they associate it with that “uncool” religious right? Sounds like they’ve been listening to Peter Ward.
A review of "Information Theory and Molecular Biology" by Hubert Yockey located at the referenced website contains the following remarks.
"Contrary to engineering systems, there is no encoding process in the biological world. There is neither a Master Mind encoding information in DNA nor a natural process encoding information (from proteins or whatever source) into DNA. DNA is only decoded. In artificial systems, decoding implies encoding. Not so in genetic systems. Therefore, the encoding part of the 'information metaphor' is misleading."
[Bradford]: The author's conclusion reveals a philosophical bias. Since we are focused on a biological system we are to assume that genetic information is encoded in the absence of intelligence. This is both counter-intuitive and contrary to experience. Why refer to this as a "metaphor" while holding onto the remainder of the analogy? Why do so when no laws of chemistry are shown to predict that genetic information is generated from organic molecules under specified conditions?
Academic freedom is indeed no longer assured. It has become license for some. For others, whose views lie outside academic political norms, denial of tenure is a reality in spite of "academic freedom." Physicians are in a unique position to observe real effects of mutations. They are associated with disease and dysfunction in the real world. They are not observed leading to the formation of new functions composed of multiple novel proteins. The following statement can be found at the referenced URL.
"As medical doctors we are skeptical of the claims for the ability of random mutation and natural selection to account for the origination and complexity of life and we therefore dissent from Darwinian macroevolution as a viable theory. This does not imply the endorsement of any alternative theory.
As PSSI International, Inc. is a 501(c)(3) nonprofit corporation, contributions by PSSI members to the cost of the DVD distribution or other activities and events will be tax deductible. Our goal is to hold these educational events with a minimal admission fee, or no admission fee at all, to maximize attendance."
This article contains an interesting inference about the presence of nitrogen on another planet. It should not be there, at least not in abundance, argues Professor Kenneth Nealson. He explains that "substantial organic nitrogen deposits found in the soil of Mars, or of another planet, likely would have resulted from biological activity." Nealson's point is related to another unspoken assumption about life's origins. It is assumed to result from a series of chemical interactions that led to a cell. If this were so and the natural laws describing chemistry are the same throughout the universe then one would expect to find life under suitable conditions.
There is a missing factor though namely, intelligence. Life's information rich nucleic acids contain encoding properties enabling the synthesis of proteins as long as there is a cellular synthesis mechanism in place. The right conditions might make the formation of RNA possible outside a cellular environment. But nothing we know about organic chemistry predicts that a specified protein encoding sequence of nucleotides would be found in such nucleic acids. That is, not in the absence of intelligent guidance.
The narrow search for water can miss important clues, say USC geobiologists.
“If you found nitrogen in abundance on Mars, you would get extremely excited because it shouldn’t be there,” said USC College professor Kenneth Nealson.
The great search for extraterrestrial life has focused on water at the expense of a crucial element, say USC geobiologists.
Even if NASA were to find water on Mars, its presence only would indicate the possibility of life, said Kenneth Nealson, Wrigley Professor of earth sciences in USC College.
Co-author Douglas Capone, Wrigley Professor of environmental biology in USC College, said NASA should establish a nitrogen detection program alongside its water-seeking effort. He noted that next-generation spacecraft will have advanced sampling capabilities.
The authors also thanked NASA, the U.S. Department of Energy and the National Science Foundation for their financial support.
OSLO (Reuters) - Scientists have found about 10-20 new species of tiny creatures in the depths of the Atlantic in a survey that will gauge whether global warming may harm life in the oceans, an international report said on Thursday.
The survey, of tropical waters between the eastern United States and the mid-Atlantic ridge, used special nets to catch fragile zooplankton -- animals such as shrimp, jellyfish and swimming worms -- at lightless depths of 1-5 km (0.6-3 miles).
"This was a voyage of exploration ... the deepest parts of the oceans are hardly ever sampled," said Peter Wiebe, the cruise's scientific leader and senior scientist at the Woods Hole Oceanographic Institution in the United States.
"We found perhaps 10-20 new species of zooplankton," he said of the 20-day voyage by 28 scientists from 14 nations in April.
Most life, including commercial fish stocks, is in the top 1 km of water, but the scientists said the survey showed a surprising abundance even in the depths. The survey will provide a benchmark to judge future changes to the oceans.
New finds among thousands of zooplankton species caught included six types of ostracods, a shrimp-like creature, and other species of zooplankton such as swimming snails and worms.
Zooplankton are animals swept by ocean currents, mostly millimeters-long but ranging up to jellyfish trailing long tails.
Among 120 types of fish caught, the scientists found what may be a new type of black dragonfish, with fang-like teeth, growing up to about 40 cm (15 inches), and a 20-cm-long great swallower, with wide jaws and a light-producing organ to attract prey.
"By 2010, the research ... will provide a baseline against which future generations can measure changes to the zooplankton and their provinces, caused by pollution, over-fishing, climate change, and other shifting environmental conditions," said Ann Bucklin, lead scientist for the zooplankton census project at the University of Connecticut.
Most scientists believe the planet is warming because of a build-up of carbon dioxide in the atmosphere, mainly from human burning of fossil fuels in power plants, vehicles and factories since the Industrial Revolution.
The oceans absorb vast amounts of carbon dioxide but the process raises levels of carbonic acid in the seas. That build-up could threaten marine life, for instance by making it harder for crabs or oysters to build shells.
Zooplankton are a key to transporting carbon dioxide to the depths because they can swim 500 meters (yards) up and down daily. Many species eat their own weight every day in plant phytoplankton species near the surface.
By one estimate, 10,000 kg (22,000 lb) of plant phytoplankton is needed to feed 1,000 kg of small zooplankton.
The expedition was funded by the National Oceanic and Atmospheric Administration (NOAA), and used NOAA ship Ronald H. Brown. The findings are also part of a wider Census of Marine Life trying to map the oceans.
Scientists from Argentina, Australia, Britain, Canada, China, Germany, India, Japan, Mexico, Norway, Spain, Switzerland, Turkey and the United States took part.
Evolution proposes that an accumulation of gradual changes favored by natural selection produced the biological systems found among living organisms. It's a logical argument. But what do favorable changes "look like" and how do they produce biological systems? The focus of this post will be a particular system known as the lac operon; extensively studied in a prokaryotic organism known as E coli. Evolution is frequently spoken of in abstract terms but some general approaches to evolution on a molecular level can be gleaned from the study of the lac operon.
1. Is lac function possible without both ß-galactosidase and ß-galactoside permease? If not, how were they integrated into the operon and what was the sequence of events? Does permease have selective value in the absence of a means to metabolize lactose, and would a gene encoding ß-galactosidase be selected if permease could not be synthesized?
2. How would the promoter, operator and repressor protein integrate themselves into the regulatory process? What was the sequence of events?
Similar questions arise when examining other biological complexes, and the number increases in proportion to the complexity of the systems. Most are more complex than the lac operon. Defenders of evolution often argue that if they can envision or imagine pathways and intermediate functions then Behe's point about irreducible complexity has been adequately addressed. However, Behe's real point is an empirical one. Pathways to irreducibly complex systems are theoretical, not observed. As long as this is so, Behe's point remains unrefuted.
"A common weapon that is used to advance the "theory" of intelligent design is to posit that evolutionary biology cannot explain everything — that there remains uncertainty in the fossil record and that there is as yet no consensus on the origin or nature of the first self-replicating organisms. This, too, reflects a basic misunderstanding about how science works, for, in fact, all scientific theories, even those that are approaching 150 years of age, are works in progress."
[Bradford]: More accurately, advances in scientific knowledge are a continuing work in progress. Such advances can lend credence to a theory or detract from its credibility. The assumption that future knowledge favors a predetermined outcome indicates that objectivity has been compromised.
"Scientists live with uncertainty all the time and are not just reconciled to it but understand that it is an integral part of scientific progress. We know that for every question we answer, there is a new one to be posed. Indeed, the very word, "theory," is misunderstood by many who take it to mean an "idea" that has no greater or lesser merit than any other idea. The fact that Darwin's "ideas" on natural selection have stood the test of time through keen experimental challenge does not give his theory special status in their eyes."
[Bradford]: That lethal genetic changes are selected against is not in dispute. But where is experimental support for the concept that life arose through selected chemical outcomes or that universal metabolic pathways evolved through a selection process? How was a minimal genome selected and where is experimental support for the contention that the irreducibly complex translation function evolved? Add hundreds of other irreducibly complex biological systems to that list.
"There are also those who exploit the fact that scientists often disagree over the interpretation of specific findings or the design of experiments to argue that nothing is settled and thus anything is possible. The fact of the matter is that fierce disagreement is the stuff of scientific inquiry, and the constant give-and-take is needed to test the mettle of our ideas and sharpen our thinking. It is not, as many would claim, prima facie evidence for deep fissures in the central tenets of natural selection.
Of course, the real test of whether intelligent design is a scientific theory, comparable to Darwin's theory of natural selection and worthy of equal consideration in the biology classroom, is whether it poses testable hypotheses. Here the answer is self-evident — it does not — and therefore it has no place in the science curriculum of America's public schools, which rest on the premise that the state has no constitutional authority to impart supernatural truths."
[Bradford]: Let's take the last part first. What supernatural truth is revealed in the contention that the sequential order of nucleotides in nucleic acids, rather than their chemical composition, imparts their selective value? How is the position that this is relevant to whether a minimal genome would arise in a prebiotic environment, without intelligent input, a supernatural truth? There is nothing supernatural about DNA with encoding properties. There is, however, something amiss in assuming that on a planet devoid of both life and nucleic acids, an unspecified series of chemical interactions not only led to the formation of nucleic acids but also to at least one with both encoding properties and a biochemical means to translate and replicate the same.
"Rather than searching for explanations for the complexity that is surely present in each living organism, intelligent design accepts that this complexity is beyond human understanding because it is the work of a higher intelligence, leading logically to the conclusion that experimentation — the tried and true basis for scientific progress — is pointless. The result is an intellectual dead end."
[Bradford]: Baloney. ID does not claim that biological systems are beyond human understanding because they are the work of a higher intelligence as alleged. This kind of straw man promotes intellectual dishonesty. Who made this claim? Dembski? Behe? Stephen Meyer? IDers are as curious as anyone to uncover the unknown and are not excoriated because they differ as to the components of a biochemical pathway, the nature of a cellular system or anything of scientific significance. They are criticized for believing intelligence better explains the origin of particular biological systems. The impact of this is felt more in the realm of the philosophical than in science. Antagonism toward ID can also be traced to extra-scientific motives.
"In fact, because there is no prediction that can be tested, the future of intelligent design is dependent on the failure of experiments designed to test other hypotheses."
[Bradford]: Another misconception. If data is capable of supporting a paradigm that is dependent on the adequacy of biological mechanisms to generate sufficient selected changes over long time eras, then data that contravenes this is data that can be used to support an alternative paradigm.
Stephen Meyer cited numerous studies in his paper 'Intelligent Design: The Origin of Biological Information and the Higher Taxonomic Categories'1 to buttress his points that proteins and their encoding genes are highly specified to their functional roles and correspondingly sensitive to loss of function caused by sequence alterations. Limitations on amino acid residue variation, when combined with mutation rate data and evolutionary time frames, can suggest that a design paradigm is more consistent with what we know. Meyer cited one study indicating loss of protein function invariably occurred in cases involving multiple amino acid substitutions. Tilghman may claim such references illustrate a dependency on "failure of experiments," but the concern is not a failure of science to advance in knowledge but rather evidence of concern for a "failure" of data to support a preferred outcome.
"It is ironic that intelligent design's reliance on negative proof exacerbates what religious historians have called the "shrinking God" problem. Each time a natural phenomenon that has been attributed to divine inspiration is explained by scientific exploration, the role for an intelligent designer is diminished. In other words, they are setting up God to fail."
[Bradford]: Now we are treated to the author's theological concern. It is also another straw man; a variation of the God of the gaps charge. Lack of knowledge as to how x occurs is the basis for a belief in a divine cause, or so it goes. Actually most IDers, be they Christians, Muslims or Jews, believe the universe functions in an orderly manner that was preordained by God, the implication being that natural phenomena operate this way independently of our knowledge. Attribution of divine causality is not dependent on scientific developments. That is not the same as saying that intelligence is undetectable. That's Ken Miller's position and he is entitled to his theology. What he and others are not entitled to is maintaining that scientific evidence precludes intelligence as a causal component of biological origins.
The "shrinking God" argument is a strange one. It's indicative of motive, and a hidden one at that. If Darwinists wish to make the argument that either there is no God or that it is impossible to link natural phenomena to intelligent causality, then let them not hide behind science in doing so.
John Ritenbaugh reiterates the characteristics of a prophet, showing that both Moses and Aaron fulfilled this role. Jesus described John the Baptist as the greatest of all the Old Covenant prophets, distinctive by his austere dress and diet. Highly esteemed by the common people, John was unusually vital and strong, and consciously prepared the way for the Messiah. Although by no means a wild man, John, like the prophets of old, experienced alienation from people, especially the entrenched religious and political leaders within the system. His greatness lay in 1) the office he filled, 2) the subject he proclaimed, 3) the manner in which he did it, and receding into the background, 4) the zeal in which he performed his office, 5) the courage he demonstrated, 6) his lifetime service, and 7) the number and greatness of his sacrifices, performed in the spirit and power of Elijah, by which he restored and repaired family values, enabling people to see God.
To begin, we are going to go back to Deuteronomy 18:15-18. I want to use this scripture as a launching pad for this sermon because it is going to follow somewhat on the same line as Part 1 of this series. We are going to focus on one particular aspect, and this will also serve as a way of review.
Deuteronomy 18:15-18 The LORD your God will raise up unto you a Prophet from the midst of you, of your brethren, like unto me: unto him you shall hearken: According to all that you desired of the LORD your God in Horeb in the day of the assembly, saying, Let me not hear again the voice of the LORD my God, neither let me see this great fire any more, that I die not. And the LORD said unto me, They have well spoken that which they have spoken. I will raise them up a Prophet from among their brethren, like unto you, and will put my words in his mouth: and he shall speak unto them all that I shall command him.
All of us have the desire to know the future in order to be prepared for it. We want to be in control of as much of our destiny as possible, and not be merely at the mercy of events. However, some have this desire so strongly that they somehow maneuver themselves into the position of being the channel through which the future is given, and these people have misled many.
Deuteronomy 18, along with Deuteronomy 13, is a warning against such people. Whether these people are called diviners, charmers, spiritists, or channelers, using such methods as tealeaf reading, casting of lots, or séances, they are to be seriously and carefully avoided. This is because there is no absolute reality to their prognostications. Those seeking to know are being misguided, putting themselves at the mercy of lying demons, or at the very least, of imaginative men and women.
I think it is important for us to understand that prophets were not merely a temporary expedient God turned to on occasions. They played a vital and continuing role, especially in those times before the word of God was widely distributed. That is why provision is made for them right within the law.
God shows in many places that those He appoints to the prophetic office will always have the preaching of the keeping of the commandments of God as evidence of the source of their guidance. They will teach the conserving of truth that is past truth, even as they break new ground in terms of doctrine.
Isaiah 8:19-20 is an expansion on Deuteronomy 18:15-18.
Isaiah 8:19-20 And when they shall say unto you, Seek unto them that have familiar spirits, and unto wizards that peep, and that mutter: should not a people seek unto their God? For the living to the dead? To the law and to the testimony: if they speak not according to this word, it is because there is no light in them.
One of the outstanding characteristics of all of the prophets of God is related to us in Hebrews 3. Moses is used as an example, and Jesus Christ, who was also a Prophet, is the example. They were faithful in what they said, both as to their present message (be it something regarding the future) and to what had already been given in the past.
Prophets both forth tell (that is, they bring a message out truthfully, clearly, and authoritatively to those to whom it is intended), and they will on occasion, but not always, foretell; that is, they will give a message of events to occur before those events occur. In other words, a man can be a prophet without ever foretelling anything, but he will faithfully carry the message God gave him, and he will always stick to the line that God gave, beginning with Moses: "...like unto Moses, who was faithful in all of his house."
(1) The foundational pattern for the office was established through Moses. It says in verse 18: "Like unto me."
(2) The prophet will be raised up from among the Israelitish people. He says, "of your brethren." However, we find that a prophet might be drawn and appointed from any of the tribes, and from any occupation. In other words, a prophet did not have to be a Levite. A prophet did not have to be a priest. A New Testament parallel to this might be that if a prophet is raised up, he will be raised up from within the "Israel of God," which is the church.
(3) A prophet will perform the function of a mediator between God and men. That is in verses 16 and 18.
(4) Because of this, a prophet will stand apart from the system that is already installed. This means he will not be antagonistic to the system, because that is old truth, and he will conserve old truth, but he may very well be antagonistic to the sins of those within the system. We will see more of that a little bit later.
(5) A prophet is directly appointed and separated for his office by God, and therefore the thrust of his service as God's representative is direct and authoritative.
By way of contrast, the priest's function was from man to God by means of sacrifice. It was far less direct, and more appealing and pleading rather than demanding, as the prophet may do. The New Testament ministry combines both elements, but it tends to be somewhat more parallel to the prophet's direction than the priest's.
In simple broad definition, a prophet is one who is given a message by another of greater authority, and speaks to those for whom the message is intended in place of the original giver of the message. A good example is Moses, who was God's prophet, but Aaron was Moses' prophet. Both Moses and Aaron spoke for somebody else. Aaron spoke for Moses, and Moses spoke for God.
We are going to back into the New Testament to Matthew 11, because here is where a change of direction in the sermon begins. We are going to begin focusing on a very important personage who was a prophet.
Matthew 11:7-11 And as they departed, Jesus began to say unto the multitudes concerning John. What went you out into the wilderness to see? A reed shaken with the wind? But what went you out for to see? A man clothed in soft raiment? Behold, they that wear soft clothing are in kings' houses. But what went you out for to see? A prophet? Yes, I say unto you, and more than a prophet. For this is he of whom it is written, Behold, I send my messenger before your face, which shall prepare your way before you. Verily I say unto you, Among them that are born of women there has not risen a greater than John the Baptist: notwithstanding he that is least in the kingdom of heaven is greater than he.
As the last message ended we had reached the point of time of John the Baptist. As we begin here with John the Baptist, John is an Old Covenant prophet whose work is reported in the New Testament, but he is the last of the Old Covenant prophets.
Despite the greatness of the other Old Testament prophets that filters through the record of their deeds, Jesus declared that not a one of them was greater than His cousin John. In fact, several commentaries stated that Jesus' statement in verse 11 literally means that John was the greatest of all men who ever lived. Let that rattle around in your brain for a while! He was not merely the greatest prophet, but of all men born of women, he was the greatest.
When one considers people like Abraham, Isaac, Jacob, Joseph, Moses, and David, one must marvel at how great this man John the Baptist was, and yet we know so little of him.
The Greek of verse 9, where Jesus said, "and more than a prophet," literally says, "much more than a prophet." Jesus then goes on to say, in the larger context, that the reason for this is that John was the fulfillment of a prophecy. No other prophet was ever the fulfillment of a distinct prophecy, and what an important prophecy it was!
My specific purpose here is to give us a closer picture of John the Baptist, and not just closer, but clearer as well. We are going to use quite a number of scriptures. In fact, there will be a number of scriptures I use 3 or 4 times, flipping back because I want to emphasize one part of the scripture one time and another part another time. Almost invariably they apply either to what John said or did, or to what Jesus said about John, so that we can understand what a great man we are dealing with here.
We will first go to the book of Luke. Luke gives the most detailed record of the conception and birth of John the Baptist. We will read verses 5 and 7 this time, and then verses 13 through 17.
Luke 1:7 They had no child, because Elisabeth was barren, and they were now both advanced in age.
Luke 1:11-17 And there appeared unto him [Zachariah] an angel of the Lord standing on the right side of the altar of incense. And when Zacharias saw him, he was troubled, and fear fell upon him. But the angel said unto him, Fear not, Zacharias: for your prayer is heard; and your wife Elisabeth shall bear you a son, and you shall call his name John. And you shall have joy and gladness: and many shall rejoice at his birth. For he shall be great in the sight of the Lord, and shall drink neither wine nor strong drink: and he shall be filled with the Holy Spirit, even from his mother's womb. And many of the children of Israel shall he turn to the Lord their God. And he shall go before him in the spirit and power of Elijah, to turn the hearts of the fathers to the children, and the disobedient to the wisdom of the just; to make ready a people prepared for the Lord.
We might question what Zechariah was praying about because the immediate context gives the sense that he was praying about having a son. Now maybe he was doing that, but we have to remember he was performing his job, and so there is another way we can go with this.
What do you think Zechariah, who was a righteous man and a faithful servant of God, might have been praying about in the carrying out of his job? Was he praying at that time about having a son? It is a possibility, but I think not. What he was praying about, brethren, was the kind of thing we pray about when we get on our knees and see what is going on in our nation, about how unrighteous and immoral it is.
As Zechariah was carrying out his responsibility, he was praying for the salvation of Israel, that somehow or another the nation would get turned around. And so when the angel came to him, the angel said, "Your prayer has been heard." It is very possible that he meant, "Your prayer about the salvation of Israel is going to be answered, and besides that you are going to have that son which you prayed about before."
Now remember, they were well advanced in years. They were beyond the time that Elisabeth could have a child. It was very likely Zechariah had not prayed about having a child since the time that she was 45 or 50 years old. She was way beyond having a child. It was impossible. But they had prayed about it before, and who knows how long they had to wait. This is so encouraging. God did not forget!
Zechariah is going to have both prayers answered. God is going to take steps to work out the salvation of Israel. He is going to answer Zechariah's and Elisabeth's prayer for a child, and it is going to begin with that child that Israel is going to be saved. What an honor to be given to these two!
John's birth, like Isaac's and Jesus', was miraculously produced by God. The exception, though, is that Jesus' was achieved through a virgin with no involvement of a human male. Isaac's and John's were conceived normally, except that Sarah and Elisabeth were beyond childbearing age, but miraculous nonetheless!
John the Baptist appears in each of the four gospels, but in each case his story is subordinated to that of Jesus, and that is as it should be. But we are going to see that John was quite effective in what he did in preparing the way before Christ.
Josephus pays a bit of attention to John. Though Josephus devotes only a vague reference to Christ, he devotes an intriguing, fairly long paragraph to John. When what Josephus wrote is put together with brief interjections from the Bible, we get a picture of a very vigorous man of God who was turning that small nation of Judea on its spiritual ear.
They had no radio and no television to broadcast "Come out to see John!" but the knowledge of him spread quickly by word of mouth, because his ministry appears to have been short. We might guess it was about the same length as the three and one-half years allotted to Jesus; however, virtually all of that time was used prior to the beginning of Christ's ministry. The writers of a number of commentaries I read in preparing this sermon feel that perhaps John's ministry was just one year long. But boy, I will tell you, was he effective!
Mark 1:1-8 The beginning of the gospel of Jesus Christ, the Son of God, As it is written in the prophets, Behold, I send my messenger before your face, which shall prepare the way before you. The voice of one crying in the wilderness, Prepare you the way of the Lord, make his paths straight. John did baptize in the wilderness, and preach the baptism of repentance for the remission of sins. And there went out unto him all the land of Judaea, and they of Jerusalem, and were all baptized of him in the river of Jordan, confessing their sins. And John was clothed with camel's hair, and with a girdle of a skin about his loins, and he did eat locusts and wild honey; and preached, saying, There comes one mightier than I after me, the latchet of whose shoes I am not worthy to stoop down and unloose. I indeed have baptized you with water: but he shall baptize you with the Holy Spirit.
Mark 2:18 And the disciples of John and of the Pharisees used to fast: and they come and say unto him, Why do the disciples of John and of the Pharisees fast, but your disciples fast not?
From Mark 1:1-8 and Mark 2:18 we learn that John the Baptist was apparently distinctive from what was normal for the time in both his dress and his diet. His dress was durable and serviceable, the kind of clothing that would normally be associated with the very poor.
The same is true for his diet. Most of us would cringe at eating grasshoppers, but nonetheless a lot of his diet was made up of them. He looked distinctive, and his diet was distinctive. We have to remember that somehow or another God sustained him, because he was a man of great energy.
We are going to look at the scripture in Luke 1:80 just to confirm this.
Luke 1:80 And the child [John the Baptist] grew, and waxed strong in spirit, and was in the deserts till the day of his showing unto Israel.
I wanted to touch on this because it fits right in with the description in Mark 1:1-8, but it indicates an additional thing: despite his greatness (even before his birth the angel reported, "He shall be great," and Jesus said he was "the greatest"), John lived his early life in the deserts.
God kept him a poor man. He was not wealthy like Abraham, or David, or Solomon, and many of the others. This man, who was possibly the greatest of all men who have ever lived (other than Jesus Christ) was kept poor by God. People who live their entire lives in the desert do not become rich. His home, though undoubtedly not a hovel, was certainly nowhere near what we are accustomed to in rich Israel.
God does not owe us what our emotions tell us we would like to have, but He will always provide us with what we need to serve His purpose for us. There is a big difference between the two. Sometimes, brethren, we have to repent and adjust our expectations, and try to understand what it is that God is working out in us, and through us. John's diet would be unusual for us, but it was fairly common for the poor of his time.
I think we can be assured that since he had God's spirit from birth, as Luke 1:15 states—"He shall be great in the sight of the Lord, and shall drink neither wine nor strong drink, and he shall be filled with the Holy Spirit even from his mother's womb."—he was in no way the wild man you see him depicted as in movies, running around, ranting and raving, hair askew all over the place, and generally seeming like a fool that nobody would pay any attention to. When he spoke, people listened and considered deeply and carefully what this man said. You do not do this with wild men and fools. I am going to add a scripture here just to prove this point to you.
We also noted in Luke 1:5 that through both parents (Zechariah his father, and Elisabeth his mother) he was a Levite from Aaron's line. Both his father and his mother were from Aaron's line, and yet not one acknowledgement is made of John having any tie at all with the already-installed system of Temple worship.
I think it is interesting that the Bible positions John's ministry as "the beginning of the gospel of Jesus Christ." Apparently it does this because of the preparatory work for when Jesus came.
This confirms the impact of his ministry in that all Judea, including the Jerusalem folk, went out to hear and to be baptized of him, believing he was a prophet. The word "all" does not mean every last person, but it does indicate a very high percentage. A majority of the people went out to hear him.
If they counted John as a prophet like everybody else did, then they were condemned. "Why did you not repent?" Jesus would have said. So they were between a rock and a hard place. They had no answer for that kind of a question. What I want you to see is that John was held in very high regard by the people. The common people especially regarded him as a prophet, and indeed he was.
Something else we can pick up from this verse is that the very highest Jewish authorities—the chief priests, the Scribes, and the elders—were fully aware of John's reputation as a prophet, and they feared it. I do not think that these men who were accustomed to the use of power and authority within a nation would fear something they did not respect, and they would not respect a wild, crazy man. When John talked, people listened. They had something to lose by yielding to his preaching, and so they would not repent.
I think what we are beginning to see here is that in many respects John's work was of a magnitude very similar to Jesus'.
Mark 1:9-11 And it came to pass in those days that Jesus came from Nazareth of Galilee, and was baptized of John in Jordan. And straightway coming up out of the water, he saw the heavens opened, and the Spirit like a dove descending upon him. And there came a voice from heaven, saying, You are my beloved Son in whom I am well pleased.
This may be the series of verses with which we are most familiar in relation to John, because we know that he baptized Jesus. What I want to point out to you is that the "all" who went out to him (verse 5) included Jesus, both as believing John's message and as being baptized by him. It was at this time God fully revealed to John who the Messiah was. Recall what John said in Mark 1:7-8: "There comes one mightier than I after me, the latchet of whose shoes I am not worthy to stoop down and unloose. I indeed have baptized you with water: but he shall baptize you with the Holy Spirit."
So at this time God fully revealed to John who the Messiah was, but it is clear, from verses 7 and 8, that he already knew before baptizing Jesus that he was preceding someone—that he was preparing the way for someone. How would he know this? Mom and dad told him, because mom and dad were told before he was even born that he was going to precede the Messiah.
Despite the fact that he was no wild man, he was what we would call today "radically alienated" from those who were part of the system God had installed during the time of David one thousand years earlier, and then re-established when it fell apart. It was re-established under Hezekiah. Then it fell apart again and was re-established under Josiah. It then fell apart again when the Jews went into captivity. When they came out of captivity, it was re-established once again under Zerubbabel and Nehemiah.
We find at the time of Christ that some very disgusting attitudes and sin had infiltrated the system. John was not against the system. He was against the conduct and the attitude of those who were within the system.
This is common practice for a prophet. I mentioned to you in my previous sermon, and also earlier in this sermon, that the prophets are often shown as not being against the system, but being separated from it, leaving them free to be against those who are part of the system. Jeremiah and Amos were some of the best known in this position. We are going to take a look at Jeremiah 15:10-17.
Jeremiah 15:10 Woe is me, my mother, that you have borne me a man of strife and a man of contention to the whole earth!
Notice the position in which he puts himself. Life for a prophet of God was not easy. Life for Jeremiah was exceedingly difficult, and he was feeling very sorry for himself.
Jeremiah 15:10-17 I have neither lent on usury, nor men have lent to me on usury; yet every one of them does curse me. The LORD said, Verily it shall be well with your remnant; verily I will cause the enemy to entreat you well in the time of evil and in the time of affliction, Shall iron break the northern iron and the steel? Your substance and your treasures will I give to the spoil without price, and that for all your sins, even in all your borders. And I will make you to pass with your enemies into a land which you know not: for a fire is kindled in my anger, which shall burn upon you. O LORD, you know: remember me, and visit me, and revenge me of my persecutors; take me not away in your longsuffering: know that for your sake I have suffered rebuke. Your words were found, and I did eat them; and your word was unto me the joy and rejoicing of my heart: for I am called by your name, O LORD God of hosts. I sat not in the assembly of the mockers, nor rejoiced; I sat alone because of your hand: . . .
That is the way John the Baptist was too. He sat alone. I am sure that when push came to shove, Isaiah sat alone, and Hosea sat alone.
Jeremiah 15:17-18 . . . for you have filled me with indignation. Why is my pain perpetual, and my wound incurable, which refuses to be healed? [His wound was in his heart.] Will you be altogether unto me as a liar, and as waters that fail?
Amos 7:14-15 Then answered Amos, and said to Amaziah, I was no prophet, neither was I a prophet's son; but I was an herdman, and a gatherer of sycamore fruit: And the LORD took me as I followed the flock, and the LORD said unto me, Go, prophesy unto my people Israel.
So what did Amos get for doing this? Persecution. From that time on, Amos was separated away, and he was no longer part of the people.
Matthew 3:4-11 And the same John had his raiment of camel's hair, and a leathern girdle about his loins; and his meat was locusts and wild honey. Then went out to him Jerusalem, and all Judaea, and all the region round about Jordan. And were baptized of him in Jordan, confessing their sins. But when he saw many of the Pharisees and Sadducees come to his baptism, he said unto them, O generation of vipers, who has warned you to flee from the wrath to come? Bring forth therefore fruits meet for repentance: And think not to say within yourselves, We have Abraham to our father: for I say unto you, that God is able of these stones to raise up children unto Abraham. And now also the axe is laid unto the root of the trees: therefore every tree which brings not forth good fruit is hewn down, and cast into the fire. I indeed baptize you with water unto repentance: but he that comes after me is mightier than I, whose shoes I am not worthy to bear: he shall baptize you with the Holy Spirit, and with fire.
These words were a scathing attack against both the Pharisees and the Sadducees. The Pharisees had public power because they tended to be fairly successful people in private life, and they also had the admiration of the people. The Sadducees were largely drawn from the priesthood, and thus controlled the Temple; consequently they pretty much controlled the religious life of the people. But because they also tended to be wealthy and haughty in disposition, the feelings of the people were prejudiced against them. So John was sent from God to confront the leadership of the establishment. That was one of his jobs, and his was an unpopular message of judgment aimed directly at the powerful.
Luke 7:28-30 For I say unto you, Among those that are born of women there is not a greater prophet than John the Baptist: but he that is least in the kingdom of God is greater than he. And all the people that heard him, and the publicans, justified God, being baptized with the baptism of John. But the Pharisees and lawyers rejected the counsel of God against themselves, being not baptized of him.
These (the Pharisees and the lawyers) were the powerful men in the community. They rejected what John said.
Matthew 21:23 And when he [Jesus] was come into the temple, the chief priests and the elders of the people came unto him as he was teaching, and said, By what authority do you these things? And who gave you this authority?
I read this just so we could see who the people were that He was addressing.
Matthew 21:32 For John came unto you in the way of righteousness and you believed him not; but the publicans and the harlots believed him: and you, when you had seen it, repented not afterward, that you might believe him.
The powerful knew that Jesus was speaking about John the Baptist, and so in disdainful anger they rejected him, while the publicans and the harlots accepted his teaching.
Now John the Baptist had a foe who was more powerful than the scribes and the Pharisees, and that was Herod Antipas, who was tetrarch of Galilee. Herod and John had an interesting relationship. Herod respected John, and yet at the same time he feared him because of what he perceived to be John's growing political power and because of John's popularity with the people. In other words, in his mind's eye Herod could see this man as the point of a rebellion, the one the people would proclaim to be their leader.
Josephus gives us a bit of background that the Bible does not contain. Herod was married to the daughter of Aretas, who was king of Petra. However, sometime before John became quite popular, or became a "celebrity," as we would say today, Herod divorced the daughter of King Aretas and married his sister-in-law Herodias. This part about Herodias is in the Bible. This caused a problem because Herodias was already married to Herod's brother Philip.
It was right here that a convergence took place between John's rising influence with the people and Herod's and Herodias' adulterous and incestuous marriage, which clearly violated the sexual laws of Leviticus 18. You may not be aware that Herod was part Israelite. He was half Israelite and half Edomite, and so there was a kind of attachment to Israel and to the law of God in him.
We will now go to Mark 6 where there is a bit of fill in.
Mark 6:14 And king Herod heard of him; [The "him" here is Jesus.] (for his name was spread abroad:) and he said that John the Baptist was risen from the dead, and therefore mighty works do show forth themselves in him.
John the Baptist was already dead by this time, but Herod was wrongly thinking that Jesus was John the Baptist resurrected. Others, besides Herod, said that Jesus was Elijah.
Mark 6:15-17 Others said, That it is Elijah. And others said, That it is a prophet, or as one of the prophets. But when Herod heard thereof, he said, It is John whom I beheaded: he is risen from the dead. For Herod himself had sent forth and laid hold upon John, and bound him in prison for Herodias' sake, his brother Philip's wife: for he had married her.
It is interesting that the Bible still calls Herodias "his brother Philip's wife," even though Herod was married to Herodias too. But it was not a legal marriage in God's eyes. God tells us exactly what she was. She was still Philip's wife regardless of her living with King Herod.
Mark 6:19-20 Therefore Herodias had a quarrel against him [John the Baptist], and would have killed him: but she could not: For Herod feared John, knowing that he was a just man and an holy, and observed him: and when he heard him, he did many things, and heard him gladly.
Like I said, they had a strange relationship, for there was a great deal of respect in Herod for John the Baptist.
Apparently it was during the period of time that Herod had John the Baptist in prison that John made clear to Herod that he was involved in an adulterous relationship with Herodias. I guess Herod spilled the beans to Herodias, and she was boiling with anger.
Mark 6:21-27 And when a convenient day was come, that Herod on his birthday made a supper to his lords, high captains, and chief estates of Galilee: And when the daughter of the said Herodias came in, and danced, and pleased Herod and them that sat with him, the king said unto the damsel, Ask of me whatsoever you will, and I will give it you. And he sware unto her, Whatsoever you shall ask of me, I will give it you, unto the half of my kingdom. [The guy was daffy!] And she went forth, and said unto her mother, What shall I ask? And she said, The head of John the Baptist. And she came in straightway with haste unto the king [You can tell who was running things around there! Herodias was running things!], and asked, saying, I will that you give me by and by in a charger the head of John the Baptist. And the king was exceeding sorry; yet for his oath's sake, and for their sakes which sat with him, he would not reject her. And immediately the king sent an executioner, and commanded his head to be brought: and he went and beheaded him in the prison.
Well, the convenient occasion turned out to be this birthday dance.
Now an interesting thing happened. King Aretas was not out of the picture, and he was upset with Herod because Herod had dumped his daughter in favor of Herodias, and so King Aretas declared war against Herod. Herod had to assemble an army, and he did. His army and the army of King Aretas met on the field of battle, and Aretas just wiped out Herod's army.
The people who liked John reached a conclusion on their own, and that was that God had avenged John's spilled blood by causing Herod to have his army wiped out. It was a judgment, according to them. That appears in Josephus.
It is Luke that gives the most distinctive account of John's birth. The verses in Luke 1:5-25 are devoted to the announcement of John's birth to his father Zachariah, and verses 57 through 80 contain Zachariah's hymn of praise to God for John. Beginning in verse 76, these verses are devoted, without qualification, to John.
Luke 1:76-80 And you, child, shall be called the prophet of the Highest; [See, one who speaks for another.] for you shall go before the face of the Lord to prepare his ways; To give knowledge of salvation unto his people by the remission of their sins, Through the tender mercy of our God; whereby the dayspring from on high has visited us, To give light to them that sit in darkness and in the shadow of death, to guide our feet into the way of peace. And the child grew, and waxed strong in spirit, and was in the deserts till the day of his showing unto Israel.
John was a great man, and as we shall see, Jesus had very high regard for him, and so did the apostles who wrote the gospels. But at the same time, the gospels make clear that John is to be subordinated to Jesus. John and Jesus were allied together in the salvation scheme from the very beginning; however, the Bible shows in interesting ways how it subordinates John to Jesus.
Luke 1:36 And behold, your cousin Elisabeth, she has also conceived a son in her old age: and this is the sixth month with her who was called barren.
(1) Verses 40 and 41 show that when Mary and Elisabeth, who were related and were probably cousins, meet one another, it is John who leaps in Elisabeth's womb at the presence of Mary.
(2) It also shows that even though both women conceived in a miraculous way, Mary's was by far the greater, more miraculous conception.
(3) Verse 76 shows that John is to be "only"—I do not really like to use that word "only"—a prophet. But if we would read chapter 1, verses 32 through 35, it shows that Jesus is the Son of God and King over the house of David.
(4) John 1:6-9 There was a man sent from God whose name was John. The same came for a witness, to bear witness of the Light, that all men through him might believe. He was not that Light, but was sent to bear witness of that Light. That was the true Light which lights every man that comes into the world.
I think the best way to appreciate these four things that I have given you is to look at them in their context there in Luke 1 and in John 1, and try to put it back into the times these things were written, because the people held John in such high regard. Something had to be written in the gospels in order to show people who would read this that John was to be subordinated to Jesus. I just bring this to your attention because I want you to see John was held to be a very, very great man.
We have a tendency to think that John's ministry was little more than a blip on a radar screen. In terms of impact and importance, my personal belief is that there was never a ministry greater than John's, except for Jesus', in terms of fulfilling the responsibility of his office. But this wrong perception sets up the possibility of our not thinking very much of him.
John fulfilled Isaiah 40:3, and Malachi 3:1 as the messenger who prepared the way for the Messiah. In Luke 1:15-17, by God's own estimation, John would be great right off the bat. No other prophet that I know of was given that accolade from the highest source in the entire universe.
John's greatness lay in the office that he filled.
His greatness lay in the subject that he dealt with: repentance and preparing the way for Christ.
His greatness lay in the manner in which he did it; that is, in humility, calling no attention to himself, and voluntarily receding into the background when the Messiah appeared. Just like that—he turned himself away. You will see that in John 3:30.
He performed this function with great zeal.
His greatness lay in his personal attributes of character as being above reproach in terms of sin, of self-denial, and in terms of manner of life. He was courageous in the face of opposition.
He did his service for his entire life. I do not mean that he was preaching the whole time, but his entire life, from the womb, was devoted to God. John was "the crown" of the Old Testament prophets.
His greatness lay in the number and the greatness of his sacrifices, including his life in martyrdom.
Luke 1:17 And he shall go before him in the spirit and power of Elijah, to turn the hearts of the fathers to the children, and the disobedient to the wisdom of the just, to make ready a people prepared for the Lord.
John the Baptist resembled Elijah, as he did the work of Elijah. What was the work of Elijah? Elijah revealed the true God through a ministry devoted to preaching repentance, and the certainty of things contained in the scriptures regarding Christ. And brethren, John did it without miracles! It plainly says in John 10:41 "John did no miracle."
It is obvious that God does not measure a man's greatness by the miracles that he does. The public looks to wealth, celebrity, or in this case miracles—doing great things; but the public's ideas of great things are not the same as God's.
We are going to look at two separate occasions. We will look first at Matthew 11. Jesus is the speaker here.
Matthew 11:13-14 For all the prophets and the law prophesied until John. And if you will receive it, this IS Elijah, which was for to come.
Matthew 11:15 He that has ears to hear, let him hear.
Jesus used that phrase whenever He was saying something He wanted people to especially listen to, to have regard for. Jesus is saying that John fulfilled Malachi 4:5-6. John was Elijah!
John the Baptist fulfilled Malachi 4:5-6. John was Elijah. He was not Elijah risen from the dead. He resembled Elijah in the message that he brought, and he resembled Elijah in the disposition and the mannerisms in which he did what he did; but John still did no miracles. Jesus is saying that John the Baptist was Elijah in what he preached about, in the way he preached, and in the fulfillment of that prophecy.
We are going to go to Matthew 17:10-12. This occurred right after the Transfiguration, when they were coming down from the mountain.
Matthew 17:10-12 And his disciples asked him, saying, Why then say the scribes that Elijah must first come? And Jesus answered and said unto them, Elijah truly shall first come, and restore all things. But I say unto you that Elijah is come already, and they knew him not, but have done unto him whatsoever they listed [wanted to]. Likewise shall also the Son of man suffer of them.
In verses 11 and 12 Jesus is giving no indication that anybody is going to follow John the Baptist in that office, and I will show you this as we go along.
Part of the reason for the mention of Elijah is the prophecy given to Zechariah (John's father) in Luke 1:17, before John was even conceived in Elisabeth. Matthew 17:10-12 is Jesus' commentary on Malachi 4:5-6. Verse 12, I think, is one of the scriptures most commonly misunderstood in all the time we were in the Worldwide Church of God.
First of all, Jesus is not contradicting what He said earlier, even though the word "but" seems to introduce a contradiction. His statement in verse 11 is the key. In verse 11 He is saying that the scribes correctly interpreted Malachi 4:5-6; in other words, the scribes were teaching that before the Messiah would come, Elijah had to come first. And so the disciples come along and say, "Why did the scribes say this?" Jesus answered them by saying, "Elijah truly shall come first, but I say unto you HE HAS ALREADY COME!"
Now why would Jesus say that? It is because even though the scribes correctly interpreted Malachi 4:5-6, they were still looking for Elijah, and he had already come! Do you understand that? That scripture has already been fulfilled. Jesus is not saying one is going to come later on. He has already done it. Malachi 4:5-6 was fulfilled by John the Baptist; and so what is Malachi 4:5-6 about? It is about the arrival of the Messiah. The scribes had it correctly interpreted, but they did not recognize it when they heard it. They rejected it. Let me put this another way. Jesus is saying that Malachi 4:5-6 had been fulfilled by the greatest Old Testament prophet who ever lived.
Now what about "restoring all things" in verse 11? Verse 11 says, "Elijah truly shall first come, and restore all things." Is it referring to doctrine? It can, but not specifically. This is a very general statement. The Greek word that is translated "restore all things" literally means "put back again." It can mean "re-organize," or "set up again." In regard to health, it is used when somebody's health is restored and put back the way it should be. It can be used in the sense of authority, that is, "to put back the authority again" or "to reinstall a government." It means "to straighten out," to re-organize so that things are straightened out.
What did John the Baptist do when he restored all things? What was he preaching about? He was preaching about the coming of the Messiah. The scribes, the Pharisees, the Sadducees, and all of those other peoples' ideas and conceptions and notions about the Messiah were all screwed up. So John did put things back in the right order so they would be able to see the Messiah when He came. He destroyed all of their false ideas. And you know from what God says about John the Baptist, that he did it, because He was very pleased with what this man did.
Let us reflect back. What did Elijah do? "How long halt you between two opinions? If the Lord be God, follow him; but if Baal, then follow him." (I Kings 18:21) Elijah restored to people the knowledge of the true God. He enabled people to see God, and differentiate Him from all of the false Baals they were worshipping at the time. John the Baptist did the same thing, but he did it in reference to our God the Messiah. When He came, it was not a figment of peoples' imagination. John said, "This is the One. Follow Him." John enabled people to see God.
It was interesting the way one of those descriptions was worded, where it said, "They would not repent, that they might see God." They had to repent first, and God would have opened up their eyes; but they would not repent. John the Baptist did the work of Elijah by enabling people to see God.
Just remember that John the Baptist's ministry was to straighten out—that is, to restore all things concerning who was the true God, just like the original Elijah did in a slightly different setting.
As for John the Baptist turning the hearts of the fathers to the children, and the heart of the children to the fathers, logic demands that this refers to his preaching as having a positive impact on family life. First of all, this interpretation fits into the historical background of times in which Malachi was written.
Are you aware that in Malachi 2 God says He hates divorce? Are you aware that He said He instituted marriage so that He could have "holy seed"? What was happening when Malachi was written? It was all that trouble you see written in the books of Ezra and Nehemiah about the family problems, and especially those which Nehemiah had to confront.
And so Malachi prophesied in those times, and at the very end of the book we are told that the way that is going to be prepared for the coming of the Messiah is going to be through a knowledge of right family life. "Turning the hearts of the fathers to the children, and the hearts of the children to the fathers" is something I know this nation very sorely needs, because what are we being prepared for? To live in a family! We are not going to be in that family unless we know how to relate to one another, and to relate to the Father of that family.
The precursor of Jesus Christ taught about marriage and about divorce. Is it not interesting that his preaching about divorce cost him his life? There is a tie between "restoring all things." It is knowledge of family life that needs to be restored in order to enable us to be able to really see God. God IS a family! We are to begin practicing these things in our life, and to turn our hearts to one another in our own families, and within the church family of God as well.
Family problems were extant on both occasions: in the time of Malachi and in the time of John the Baptist. I think we have to be very careful that we do not take this statement "restore all things" beyond the scope of what was prophesied to be part of his ministry and get into all kinds of fanciful interpretations of what is to be restored. We already know. The book of Malachi tells us what it is. It is about family, and loving one another.
It also says in Malachi 4:5 that He is going to send Elijah "before the coming of the great and dreadful day of the LORD."
I John 2:18 Little children, it is the last time [the margin says the last hour]: and as you have heard that antichrist shall come, even now are there many antichrists: whereby we know that it is the last time.
The time of anti-christ had already begun. You can also read another telling scripture in I Peter 4:7 in regard to this.
I Peter 4:7 But the end of all things is at hand: be you therefore sober, and watch unto prayer.
The "last days" began with the arrival of Jesus Christ and John the Baptist. The prophesied Elijah appeared just before the "last days" began, and so he was the last and greatest of the Old Testament prophets, and his preaching turned the hearts of the fathers to the children as he prepared the way for the Messiah.
There was only one commentary that I looked into that delved into the possibility of a second Elijah to come just before Christ's second coming. Even as it did so, it claimed that the concept was weak, seeing that Jesus so clearly made His case that John the Baptist was the Elijah, and that no more were to come. I want us to take a look at a scripture that is used to support that concept.
Matthew 16:18 And I say also unto you, That you are Peter, and upon this rock I will build my church: and the gates of hell [the grave] shall not prevail against it.
Now does it not say that the church will never die out? Yes it does. However, the way chosen to translate one word in this statement clearly alters the focus of what Jesus said. It is the word "prevail." It also means, "stand"—to stand. By choosing to translate the word as "prevail," it changes the church from being on the offensive against the kingdom of Satan, as represented by Hades, to being on the defensive, because it is continually under attack.
Jesus is promising that He would enable His church to be triumphant against Satan and death. Is the church constantly under attack? Yes it is. There have been several times, as far as we know, that it seemingly almost died out, but always it has emerged triumphant, and continues on. How was this accomplished when it almost died out? Well, Jesus Christ raised up a man to go forth and once again preach the gospel. One of the people we are most aware of is Peter Waldo. He was one of these clear examples. In the process he became the one God used to call others into His truth, and around him a continuation of the Church of God formed.
What this commentary said was that, using this interpretation, even the First Century apostles, as they took the gospel into new areas, became a weak type of Elijah, and so did all those men used down through the ages, like Peter Waldo. Each one of them had to re-establish things and preach repentance as preparation for the receiving of the gospel and the Messiah, but not a single one of them was "the Elijah to come," because by Jesus' own words that office and that prophecy had already been filled, and there is no higher authority. John the Baptist was the Elijah. That is one of the major reasons why he was so great.
History - Welcome to Coul House.
Each year we learn a little more of the history of Coul House and the Mackenzies of Coul from various sources, including descendants of the Mackenzie family, who have had dwellings here on the Coul Estate since 1560, local historians, reference books, websites, and a multitude of other kind people who have shared their stories and connections to the house and estate.
As with any historic ‘facts’ some of what I will tell you will be hearsay, true or slightly embellished. Either way it will help you get a feel for its rich past. It has certainly enhanced our feeling of being the custodians of something special.
The Mackenzie Baronetcy, of Coul in the County of Ross, was created in the Baronetage of Nova Scotia on 16 October 1673 for Kenneth Mackenzie. His father Alexander Mackenzie of Coul was the illegitimate son of Colin Cam Mackenzie, 11th of Kintail, and half-brother of Kenneth Mackenzie, 1st Lord Mackenzie of Kintail, ancestor of the Earls of Seaforth, and of Sir Roderick Mackenzie, ancestor of the Earls of Cromarty. The third Baronet was involved in the Jacobite Rising of 1715. Being on the losing side, he was attainted and the baronetcy forfeited. The baronetcy was assumed by descendants of the brother of the third Baronet.
The presumed thirteenth and present Baronet has not successfully proven his succession and is therefore not on the Official Roll of the Baronetage. Perhaps I can apply to succeed and claim the title!
The house as it currently stands was built in 1821 for Sir George Steuart (an unusual spelling of Stuart) Mackenzie, 7th Baronet (1780–1848). It was designed by Edinburgh brothers Richard and Robert Dickson (usually referred to simply as R & R Dickson), who acted as architects in Scotland in the early and mid-19th century. Whilst most of their work is typified by remote country houses, they are best known for their magnificent spire on the Tron Kirk in the heart of Edinburgh on the Royal Mile.
The brothers designed in a variety of styles from Gothic to Classical. Their buildings are both sound and attractive, and most are now listed buildings, including Coul House, which is a category A listed building, primarily due to its ornate plasterwork on the ground-floor ceilings and its horizontal-pane windows. Many of these were changed in Victorian times, but some can still be seen in the lounge bar and in most of the ground-floor windows at the entrance side of the building.
Sir George Steuart Mackenzie's chief claim to fame was his interest in science, especially geology. He first became known to the scientific world in 1800, when he proved that the constituent of diamond was carbon, demonstrated by a series of experiments in which he is said to have made free use of his mother's jewels.
George's inquisitiveness later paid off when he became a fellow of the Royal Societies of both London and Edinburgh. In 1810 George undertook a journey to Iceland, and later to the Faroe Islands, to study their geology. On his return he presented an account of his observations before the Edinburgh Royal Society.
A book entitled “Travels in Iceland” was published, and George contributed sections concerning the voyage and the travels, the mineralogy, rural economy and the commerce of the islands. This was the first of a large number of learned publications to come from the pen of George Mackenzie of Coul. In addition to science he turned his pen to the subject of agriculture, but with this topic he entered rather more contentious ground.
We have heard but a few feeble voices exclaim against the necessity of removing the former possessors to make way for shepherds.
In 1831 Sir George's fourth son Robert Ramsay, at the tender age of 19, caused quite the scandal when he was exposed for having an affair with Captain James Murray's 29-year-old wife. He had to flee the country quickly, taking the first ship he could to Sydney, Australia, to join his brother James. Somewhat later Robert became the Premier of Queensland, Australia.
Sir Robert Evelyn Mackenzie, 12th Baronet of Coul, was the one who, due to hard times, had to sell off the land and ultimately the house, which sold in 1949 for three thousand pounds. The house was initially split into two halves with interior partitions, and separate landlords rented out the converted bedrooms as flats. It stayed that way till the 1960s, when each half was converted again and opened as two guest houses, before coming together as one in 1968. In 1978 the house sold again and opened up as a licensed hotel.
The Story Continues…Read our Annual Progress reports.
Well, we have survived the first year, which “they” say is the toughest, and we certainly hope “they” are right. We have now been here for eighteen months and the to-do list seems to be getting longer, not shorter.
We started by attacking the rhododendrons, and, believe it or not, we have cleared well over an acre, significantly reducing their numbers, particularly around the pitch and putt. We had to bring in some heavy equipment to dig up the roots and replace them with 700 tons of topsoil. Next the drainage needs to be sorted out, and then in the spring we can plant the grass seed.
Our bedroom and ground floor refurbishment plans were delayed by the significant investment needed in the kitchen, with new ovens, stovetops, dishwashers, mixers and a slicer all needing to be replaced. We have, however, now finished our first en suite bathroom upgrade, painted 60 exterior window frames, repaired some guttering, and painted and laid new carpets in several bedrooms. Our maintenance man Charlie Cleland, who has worked here for over 26 years, is a true handyman; without Charlie progress would be a lot slower. We also had to rewire and upgrade some of the electrical supply and replace many fuse boxes. We have had the driveway re-graded twice, the road sign replaced twice and now, after several months of wild goose chasing, we have the sign illuminated… so the upgrades and refurbishments continue.
We have been working with local architects, who had to create blueprints for the house so that we could submit a listed building consent to Historic Scotland for refurbishment. Our plans include the refitting of the old public bar, the relocation and upgrade of the gent’s toilets and the reinstatement of the main central double doors into the octagonal room. This will involve removing the existing reception area and creating a new opening in the wall opposite the log fire in the main hall.
The girls have started play group, nursery and ballet so we have added taxi service to our list of daily duties.
The business has already shown early signs of success, with many of our guests returning several times throughout the year. The restaurant is gaining a great reputation locally, with more and more locals coming in to dine. This success is, of course, down to our friendly staff, without whom we would be lost.
Only thirteen and a half years to go and the mortgage will be paid off…………….
Well, we have now been here for two and a half years and in many ways it feels like a lifetime. I can barely remember the sweltering heat, the leisurely golf games with chef Garry, the lazy days off lying by the pool with the blissful reassurance that I was not funding the luxury hotel development in the farming belt of rural Georgia.
The passing of time of course has many benefits: I can now confidently stride to my car knowing which side the steering wheel is on. I had become adept at making myself look busy in the passenger seat before getting back out of the car to find the correct side. I have also renewed my passion for the Highlands of Scotland. I seem to see the landscape, breathe the clean air and relish the weather with an unprecedented appreciation that I can only explain has come from my long absence from such beautiful surroundings.
The bane of my life this year has been the three “r’s”, roads, rhododendrons, and roofs.
After waiting for various expert opinions and quotes to fix the drainage problem left in the aftermath of the rhododendron clearance, none of which came to fruition, we met Mr. Archer an estate gardener from down South who was staying for a couple of nights at the hotel. Mr. Archer suggested we work with the wet soil to create a bog garden and cut trails through the remaining rhododendrons. So we finally planted the grass seed in October and started cutting trails, which are already providing great excitement for my two wee girls.
The half-mile driveway continues to challenge us. This year we invested a handsome amount of money in improving the road surface, which is already showing signs of deteriorating, and has managed to force the last brown hair on my head to turn grey.
During the occasional summer shower it became apparent that our roof was letting in water in several places and so once again we found ourselves calling in the experts. It is now evident that renewing some parts of the roof over the next several years would be prudent, and my receding hairline races to meet the nape of my neck.
Our refurbishment plans are slowly but surely taking shape; we now have a ground floor design plan and are waiting for the go-ahead from Historic Scotland, which we expect to have by early January 2006.
We managed to purchase new crockery and silverware for the restaurant and retire the old oval grey-rimmed plates to the attic for a rainy day.
My wee girls continue to be a source of joy and anguish. Aurora has now started school and is thoroughly enjoying it. Liah is in nursery and enjoys it most days.
The business continues to grow although more slowly than I would like.
Well, I feel like I need to rename the report this year, as progress is not what I think of when I look back at 2006. We did finally get our building warrant from town planning and approval from Historic Scotland for the first phase of the refurbishment at the end of March. However, at the same time we received a setback we had not anticipated: Susannah was diagnosed with breast cancer, and our focus changed completely for the next eight months.
Fortunately in the face of adversity good things often happen and this year was no exception. One of those good things was the opportunity in June to hire the assistance of an enthusiastic part-time gardener Anna Ross. She and Charlie have worked tirelessly to create a bog garden, finish cutting the trails through the remaining rhododendrons and create the foundation of our garden for the years ahead. Another good thing this year (proving all good things come to those who wait) was Chris MacLeod our General Manager. Susannah and I were delighted when Chris joined us in July from Tulloch Castle in Dingwall where he had been working. Chris is extremely service oriented and shares our desire to build the business into one of the finest country house hotels in the Highlands. Chris has given me the opportunity to take some time off on a regular basis and has been instrumental in the rehabilitation of my sanity.
This summer we decided to take matters into our own hands regarding the maintenance of the driveway: armed with a hired vibrating roller and ten tonnes of road chips, Charlie, Anna and I embarked on a potential new career. However, like the cowboys before us, our work was undone within months. So we continue to patch, taking solace in the fact that we are saving thousands of pounds by doing it ourselves.
We are proud to announce that we received one AA rosette award for our food, and we are delighted to say our kitchen brigade remains intact, with Garry at the helm aiming to achieve two rosettes next year. With the support of our valued guests' votes we also won two other awards this year, Rising Star and Hospitality Hotel, at the Hotel of the Year Awards held in Glasgow. Credit is certainly due to all the staff for their commitment and hard work.
I am delighted to say that Susannah is pretty much back to her old self now and planning to crack on with the refurbishment early in the New Year. Aurora and Liah continue to thrive and seem to have spent much of the year in anticipation of Christmas, not only because we decided to close for a few days to enjoy a family Christmas but also because of the eagerly awaited arrival of their first pets, delivered by Santa. We now have two more mouths to feed, namely Daisy and Bubble the gerbils. Much excitement, particularly for Liah… it remains to be seen for how long!
I have come to realise that, like a classic car, this building, and therefore the business within it, is truly a labour of love, requiring not only continuous maintenance but also the patience of a saint. Thank you all once again for your support and enthusiasm for what we do.
Well it is fair to say a lot has been accomplished this year. We finally started the long awaited refurbishment of the ground floor with many of the improvements now clearly visible and I must say we are all feeling quite optimistic about this coming year. Susannah is keeping well, the kids continue to thrive and the business continues to grow. This year I managed to join Fortrose Golf Club and even had some family holidays, a week away to Perthshire in April and a week in the south of France in October followed by five days on Arran where I played a little more golf than I should have. I practically had to be dragged off the golf course to get back to work. Thank goodness I have Chris here, to leave the place in such good hands takes strokes off my game without a doubt.
Refocusing our energy in the early part of the year on the refurbishment, it was time to find a builder who would help us. I am sure it comes as no surprise to many of you that this proved almost impossible. We were struggling to find a stonemason who would convert the old stone cellar into an office and break through the three-foot-thick stone wall into the front hall, making the opening for the new reception area. Then, in passing, Rory (our then second chef) announced he used to do a little stonemasonry work and, with Charlie's help, would love to have a go at it. So we had a meeting with our conservation architect Hector MacDonald and, with his guidance, Rory traded his apron for a crowbar! Charlie and Rory removed several tonnes of stone one wheelbarrow at a time, broke through the stone wall, reinstated the false window, laid a new floor, created a new ceiling and hooked the room up to the central heating. We now have a lovely new reception plus a great new office with desks for Chris, Yuliya and myself (oh, the luxury of space), although the kids think it's their games arcade: with two computers side by side they enjoy finding their way on to CBBC games and can spend hours (if we let them) playing selections of noisy games in stereo. So if you have called to make a reservation and heard kids' games in the background, it is not Chris and myself playing those games, it's the kids… honest.
Anna continues to nurture the garden on the limited budget I allow her and to her credit each year it is getting more and more established. We have had several lovely weddings this year and the grounds were a beautiful backdrop for all the pictures.
It’s been very encouraging for us this year once again to see more and more of our guests and their friends returning throughout the year and to have been featured in many independent travel guides including the “Good Hotel Guide”, “The Lonely Planet” and “Scotland the Best.” We have also had many great reviews on the web site “www.tripadvisor.com” so thank you to those who wrote to any of these guides and to all of you for your support. Credit is due to Chris, Garry and all the staff for their good nature, hard work and enthusiasm without which not only would I be insane by now but the guest experience would not be what I so often hear it is……………… “Deliciously relaxing”.
So here is to another successful year.
Five years done, fifteen or more to go. Maybe by then I will be satisfied with all we have achieved but, perhaps, one is never truly satisfied? Maybe this is what keeps us motivated? However, looking back, much has been accomplished, just not as much as we’d dreamt of when we first crossed the threshold. When we looked to the future our dreams then were untainted by the shackles of cash flow and the unforeseen challenges that lay ahead.
As it wasn't possible for us to view the hotel prior to purchasing, we were unable to ascertain the amount of investment needed both to maintain the old house and to upgrade it. Buying it sight unseen may seem foolish (or, if you are being kind, courageous!), but we have no regrets. There are some advantages to buying blind: it meant that our decision was based purely on the ability of the business to pay the debt and was not tainted by the physical condition of the house and the late-70s décor, like the shag pile carpet, dusty old dried flowers and painted plate collection that covered many of the walls on the ground floor.
With the help of a local interior designer Graham Grant we have now taken the ground floor back to a more Georgian feel. Graham worked with Brinton’s Carpets to design the unique carpet that now dominates the lower floor and stairwell. This set the tone for the Farrow and Ball vintage paint colours and the subsequent fabrics that we’ve re-upholstered all the furniture with.
Continuing on from last year’s good work, this year we managed to refurbish the bar and Regency lounge with new carpet, curtains and paint. We reinstated the original fireplace into the Regency lounge and Dougal Black, our local carpenter, made some authentic bookcases that, we feel, finish off the room beautifully. We also purchased forty new restaurant chairs and some swanky table skirts to give the restaurant an air of sophistication.
This year Charlie (our overworked handyman) celebrated his sixtieth birthday and, coincidentally, his thirtieth year of working here, at Coul House. To mark the occasion we hosted a little BBQ lunch with all the staff. Rather than buy Charlie the conventional gold watch we asked what he might like. Much to our surprise, he wanted a rowing machine. Now, Charlie has never been one for recreational exercise so I assumed this would be quickly relegated to the darkest corner of his house to gather dust or taken to the local car boot sale, however, to his credit, he continues to use it nightly and, combined with his new eating habits, has lost over three stone (48 lbs or 22 kilograms) in the last six months and is looking healthier than I have ever seen him.
Our annual family holiday (oh, I love how that sounds), thanks to the great staff who have made this possible two years on the trot, was to sunny Florida. This gave the girls a chance to meet Mickey Mouse and to test their new swimming skills without freezing their toes off in our local loch. Eighteen days away… and I felt like a new man.
So here’s to another successful year. A special thanks goes to you, the reader, guest and friend of Coul.
Onwards and upwards, in 2009 we will aim to refurbish as many of the bathrooms as we can.
I feel I should start off by saying that Susannah continues to be in good health three years on from her treatment, and we would like to thank everyone for their kind support and concern. Aurora and Liah are still thriving. Unfortunately the same can't be said for Flopsy the bunny rabbit, or Daisy and Bubble the gerbils, who all passed away this year. Before their passing our menagerie had grown this spring with the addition of nine ducks and later 20 ducklings and a number of baby bunnies (which, as you can imagine, delighted our girls!). Boy, those ducks are promiscuous, and we talk about rabbits. These ducks make Tiger Woods look like a choirboy. All joking aside, we, and many of our guests, have thoroughly enjoyed watching the ducks waddle around the property. Anyway, twenty-plus ducks were proving quite a handful, so we found a new home for most of them, and no, not the chef's freezer, but a farm near Culbokie.
I am delighted to say the growth of the business this year has been phenomenal, despite the doom and gloom predicted by the media. The strong Euro and our ever-increasing repeat and referred guests have combined to give us unprecedented levels of business over the last twelve months. Of course there is a down side to all of this…. I have been playing significantly less golf than I would like.
My cautious bank manager has not managed to quell my optimism despite his gloomy predictions for the hotel industry in the short term, so I've been pushing on with manageable bite-sized projects. We've proceeded with converting the room known as the Tartan Bistro (latterly used as our wine cellar) into swanky new toilets, which we are delighted with. We also managed to employ a much-needed roofer, Marc Beagent (GM Chris McLeod's brother-in-law, who was between contracts). Marc was a real trooper and battled on during a horrendously wet period of the summer, fixing and replacing as much as he could in six weeks. Hopefully he will be back again in the new year to continue what he started.
Another accomplishment this year by Charlie, Rory and Anna has been the addition of a winding path around the lawn on the entrance side of the building. It takes you past the duck pond, up to the top end of the property, and snakes back round to the car park. We've placed a few benches between the magnificent old trees from which to enjoy the surrounding wildlife and the good views of the architecture. A wendy house has also gone up for the kids, hidden amongst the trees on the other side of the house. I have been tempted to let it out several times this summer when we've been sold out, so be sure to book early for next summer as you never know where you might be accommodated!
2010 will see us slowly begin the bedroom and bathroom refurbishment that we have all been looking forward to starting. We will be fitting this in around our quieter periods of the year so as not to cause any disruption.
Susannah and I would like to take this opportunity to thank you for your continued support and patience. Your encouragement truly gives us the patience and perseverance to realise our vision for the place, and each year we get a step closer to achieving it. It would be remiss of me not to highlight once again the wonderful staff who have helped make 2009 so successful.
In China it was the year of the Tiger; here at Coul House, however, it was the year of compliance. Coul House had been grandfathered from many of the modern regulations, but unfortunately for us this year was the deadline for Coul House to become compliant. We have been required to install door closers and intumescent foam strips on thirty-eight of our doors, upgrade our fire alarm system and renew one of our oil tanks. We have also renewed three of the roof pitches with new cross-timbers and slates; only sixty-one to go. It looks like Marc, our full-time roofer/handyman, has a job for life!
I finished off last year's letter by optimistically saying that, “2010 will see us begin the bedroom and bathroom refurbishment that we have all been looking forward to”. Well, it is now December and all we have managed to do is paint eight bathrooms and renew all of the mattresses, duvets and pillows.
So often, when we plan to start something that seems fairly straightforward, it ends up escalating into a major project that gets beyond our budget very quickly. Our intention whilst upgrading the bathrooms has been to address the water pressure problems that hamper the enjoyment of having a shower. For those guests who are used to more pressure than Coul House currently offers (which, we acknowledge… is most people), I apologise; I know it causes frustration. Gone are the days of a weekly shower from a damp squib. Most of us these days have become used to being pressure-washed daily. Almost all of us have lost the old British art of water temperature regulation first thing in the morning with the advent of thermostatic temperature controls. No more impromptu dancing in the shower as the hot water turns frigid.
So with this in mind we have been quizzing plumbers since March on how best to address these issues. Then came all the new questions… vented or unvented pressurised water system? Will the old pipes handle the increased pressure? How will you heat the higher volume of water? Are the old boilers efficient enough? Do you have sufficient water pressure being supplied to the building? Is the main water supply pipe coming into the building big enough? Should we replace the boilers with a new wood-burning boiler and, if so, what type: solid fuel or wood chip? Should we consider solar panels? Are there grants available to assist with this type of capital improvement…? Ahhhhhh.
So many months and several other plumbers later we are still confused, but I believe there is some light at the end of the tunnel (ever the optimist). We have managed to pare it down to a more manageable project by eliminating what I now know we can’t afford. I am currently waiting for the latest solution and the dreaded quotation.
As well as all of the above, it has been an extremely busy year here again at the hotel. Once again we have increased the number of visitors and hope to keep our bank manager happy. The tough part starts now, with December through to March looking like it will be very quiet. So if you would brush the snow off your car, clear your driveway and come visit. I will have the log fire on for you, and a good meal is never far away with Garry and Gediminas in the kitchen.
Well, the winter season of 2010/11 was, as anticipated, a quiet one. With the weather conspiring to keep many off the roads and financial forecasters full of doom and gloom, things were looking bleak. So we battened down the hatches, tightened our belts, cancelled several projects, delayed hiring seasonal staff and held on tight for the "stormy months" ahead.
In my infinite wisdom I decided to finally take the plunge and buy a front desk reservation software system, with the idea of relegating the old trusted diary to the local museum. I realised that, with more and more guests using the Internet to book their accommodation, this would enable us to streamline our reservations process and free up some of our office admin time. Perhaps it would even let me get in touch with the ever-increasing number of repeat guests throughout the year by creating a database.
For reasons I am now still questioning, I also chose at the same time to install a new phone system and online reservations handler. The idea was that the new phone system would give us total wireless broadband access throughout the hotel and ultimately improve mobile phone reception. Boy oh boy, have I had a tough time coming to terms with all the changes; I thought I liked change! However I have been holding on to my old reservations diary like a comfort blanket and feeling quite overwhelmed with the whole process of integrating these new systems… we are still in the middle of it all as I write.
Although the year started out very slowly, when it did eventually gather momentum we ended up having some extremely busy months. There were lots more British visitors this summer, many staying several nights instead of the traditional one or two, a few weddings in the autumn and early winter, and a windfall of corporate business in September, October and into November that has helped put our figures back on track. So thank you all for your continued support and for being such fantastic guests.
Here’s to another year. No matter what challenges it brings it is made a lot easier with the support of our right hand man Chris McLeod and, of course, all of our other wonderful staff.
“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us” Charles Dickens A Tale of Two Cities 1859.
I find myself wondering if Charles Dickens was a hotelier in the Highlands before being regarded as the greatest novelist of the Victorian period.
It was certainly a roller coaster of a year (no, not as eloquently put as Dickens, but it is what it is)! It started with Susannah being diagnosed with more cancer and subjected to another gruelling schedule of chemotherapy, surgery and radiotherapy. At the same time the hotel was enjoying its best financial year thanks to some timely weddings and a troublesome water mains renewal that stretched from Strathpeffer to Garve. This meant several May Gurney employees needed to be resident at Coul House on and off for three or four months. The "Strathpuffer" (the annual 24 hour mountain bike endurance test for the crazy fools who seem to enjoy the weather the more extreme it gets!) was hosted once again in the local forest in early January, which always brings us welcome business in one of our quietest months of the year.
The new phone system I talked of in last year's letter continued to be a source of consternation throughout most of the year. It uses what is called VOIP (voice over internet protocol). Believe me when I say that it has been a major headache. We lost contact with the outside world on several occasions. Some might say that is not a bad thing, but who knows how much business we lost through it! I had to remind myself on several occasions what possessed me to buy such a phone system! Nowadays we need wifi to meet the needs and expectations of the majority of travellers. The new phone system was somehow relevant to that, but in what way I now can't remember, so we persevered!!!!! Charlie and Rory crawled through the tiny attic spaces of the entire house installing modems for the phone company to try and keep the costs manageable. Thankfully we now have a fairly reliable wifi service throughout… most of the time! As for the digital phone system, well, I am still trying to figure out how to use the hold feature properly.
The big project for this year was the installation of a new biomass wood pellet boiler. It came in its own heat cabin with back-up oil boiler, pellet and water storage. Costly, yes, but with the government's 20-year renewable heat incentive on offer it was a matter of let's do it now before the incentives go! The old oil boilers we had were 30-plus years old and costing more each year in maintenance (a bit like the wife) oops! Sorry darling, only joking! The challenge was how to finance it (the boiler, not my darling wife). The bank would not entertain the idea at all, and trying to find the money proved as tricky as looking for dog doo in the autumn leaves! I knew it was there; I just could not find it initially. However, the Carbon Trust and The Energy Saving Trust came to the rescue and the boiler is now up and running.
As usual I have run out of space and I limit myself to one page so as not to bore you too much! So thank you once again for your continued support and for being such fantastic guests, roll on 2013.
Well here I am again, late December having to recall the significant events of the last twelve months!?!
2013 marked our tenth year here at Coul House. In many ways I cannot believe ten years have passed, and then I look at my kids and see how much they have grown. I look at the business and see how much it has blossomed, provided a constant source of challenges and achieved many of the goals we set out to achieve. Then I look in the mirror, and I see each of the three thousand six hundred and fifty days that have passed and think… has it only been ten years!! This year I've had guests taking my father for my brother, Chris for my son and my kids for my grandkids!!?
No one told me it would be easy to own my own business, but it never ceases to amaze me how precarious a journey it is between success and failure! Last year (for some unknown reason) the Highlands had the worst October – February occupancy figures it has ever had. Not only that, but final payment was due for the newly installed biomass boiler (which at the time was not firing on all cylinders), the VAT payment was due and, to top it all off, the bank was looking to reduce our overdraft! Once again we needed to do some fast talking, belt tightening and the cancelling of maintenance projects to get through the winter on a wing and a prayer.
Salvation came in March with a fantastic review for the restaurant in the Inverness City Advertiser (a free local magazine). Restaurant revenue started flowing in with new guests being introduced to the culinary delights of Coul House. Then the good spring weather got the season off to a great start and the momentum seemed to continue right through to November, which was tremendous. I’m delighted to say that during the spring and summer months we were the #1 hotel in the Scottish Highlands on tripadvisor.com which proved most beneficial; introducing yet more new guests to Coul House. So a big thank you to all those who wrote reviews, recommended us to your friends and indeed for your continued support and repeat visits.
This summer was only marred by the passing of Charlie Cleland our maintenance man. Charlie had 35 years of service here at Coul House; he will be missed for years to come and is never far from our thoughts.
It is times like these that I count my blessings: Aurora is now thirteen, enjoying school and, to my delight, no boyfriends on the scene yet! Liah is eleven, currently playing tenor drum in two different pipe bands and looking forward to starting academy in August. Susannah, my gorgeous wife, and I are both fit and well and thankfully able to work hard. The biomass boiler is finally running well and saving us money, and of course we have the continued support of the hard-working staff.
Unfortunately during the high winds of early December several mature trees were blown over, including an Eastern Hemlock, a few Lawson Cypress, a Yew and a Portuguese Laurel. Two of these were found crushing the old shed at the top of the drive, which we believe had been made from a previous fallen Monkey Puzzle tree many years ago.
So after much reflection I have come to realise that the refurbishment of Coul House is a journey not a destination. Here’s to the next ten years and to the continued challenges that make it so……interesting!
Whoa, is it really December already… I cannot believe where this year has gone! It has certainly been a whirlwind (although that may just have been the side effects of all the coffee I have been drinking). I have discovered the joy of coffee this year, probably something to do with the new espresso machine that we got at the tail end of last year. It is especially good with a shot of Disaronno Amaretto and a slice of Garry's apple and frangipane tart!
…A successful whirlwind, I should add. We started the year with some online marketing. Yes, I finally realised that I was never going to get round to it myself, so I outsourced the marketing to a friend of Susannah's (Kate) who does marketing for other hotels throughout the UK. Kate wasted no time, having the first mail shot out in early January to our previously unused database. She made it look so easy I thought perhaps I should have done it myself!
Perhaps as a result of the marketing campaign we had a very busy start to the year. It may have also been the fact that we ran a Groupon offer (at a ridiculously low net rate, especially after their share and the VAT man's take) to boost our winter occupancy and provide a cash injection to assist us with the £50k refurbishment project that we embarked on in February. Rooms 14 & 16 now have fabulous new bathrooms and carpets, with room 14 becoming an open plan suite with a lovely chesterfield sofa bed, raised ceiling and a showcase bathroom. We have certainly raised the bar with these two rooms, making me more inclined to throw caution to the wind and do all of the rooms immediately! Patience and common sense, however, prevail, as I have never been one to buy lottery tickets and, despite Ana the gardener's best efforts (and all the fairy magic in the garden), she has not managed to grow us a money tree!
We also undertook to renew half of the roof valleys; the other half is scheduled for January 2015, along with the gutters. However, you will have to take my word for it due to all of the health and safety regulations. I have only been up once myself to see it, and that was just to see what I had paid all that money for. It looks great; £20k well spent, I'm sure!
Some of you may have met our very talkative local painter and keen fisherman James Mutch, who seems to know everyone (or someone they know from the western hemisphere). James has been working his way round the hotel both inside and out, taking off the old wallpaper (only one room left to go), painting the windows and generally freshening up the rooms.
More driveway maintenance and emergency lighting, new garden gazebo and garage, new ovens for the kitchen, new uniforms for the restaurant staff… all this and I am still smiling. No, really, I am; I'm just hiding it under the moustache!
Family, dogs, staff and hotel all doing well, let another year roll on and see what it brings.
Thanks as always for your support and positive feedback.
Well, what a phenomenal year. I really did not anticipate that 2016 could possibly surpass 2015 in such a spectacular way, but it certainly has. Some bright spark, or several bright sparks, from the North Highland Initiative came up with the brilliant idea of branding the northern road network simply as the North Coast 500, or NC500 as it's often referred to: 516 miles of glorious country roads packed with scenic diversity, centuries of history and of course great places to stay, eat and visit. A whole bunch of clever marketing, social media activity and some well-timed TV coverage and… BOOM… people started flocking to the Northern Highlands in numbers not seen since the days of cured herring packing in the 1860s. Needless to say, all of us here at Coul House have been delighted to be able to welcome so many travellers in need of a bed, or simply some sustenance, on their way round the route. Here's hoping that it continues to attract many new travellers to the area and to the delights of the Northern Highlands.
As a direct result of the economic boom, we have been able to accelerate our refurbishment plans this year during some rare quieter times (early spring and late autumn/winter). So it was new bedroom carpets throughout, and I am delighted to say that we have completed a total of ten new bathrooms this year to add to the two in the superior rooms we did previously. Each has been completely fitted out with new flooring, WCs, sinks and either a walk-in shower, a claw-foot tub with shower over, or both in some cases. Each of the new bathrooms has been plumbed into a new pressurised water system to provide consistent water pressure: finally, all mod cons for at least half of the house or more. Now if 2017 is as prosperous as 2016 we will be able to do the other nine! In fact we started two more rooms in early January, so we are certainly committed to maintaining the pace of refurbishment if we can.
It has not been all work, work, work; as a family we managed several short breaks this year, including a long weekend in London to celebrate my father's 75th birthday (yes, my father, not my older brother, as he occasionally gets mistaken for, which of course delights him), a couple of days in Barcelona before going on to the Costa Brava, plus five days in Venice in the October holidays to finish off our forays for the year. No crazy Penny Farthing stunts this year, choosing to keep my feet firmly on the ground. In an attempt to compete with my father's youthful looks, I decided it was time to shave off the Georgian-styled beard and get the running shoes back out. In fact I have been racking up the miles on the local running trails with Alfie and Bella typically following loyally behind. I am sure you will agree it has taken years off me! Ah, a boy can dream… fifty years old this year and still the stamina of a twenty year old.
Mark and Fionna Ellison from Reddishpink Media have been very patient with me over the last year, as I have been very pernickety over the content and text of our new website. However, we finally managed to get it launched in early December 2016, and I hope you all like it. Check it out and let me know what you think, but be gentle, as it was a lot of work and I am getting much more sensitive in my old age – or, as Susannah thinks, as a result of the Penny Farthing debacle!
Delighted to say that the foundation of our fantastic crew has remained in place and several new staff members have recently joined us to cope with the continued upturn in business.
Thank you once again for your support. Please continue to spread the word and of course remember to book early for your next visit to avoid disappointment. If you are (unlike me) riding the wave of social media, feel free to like us, join us or follow us on the various online avenues that I am tentatively embracing.
Well, what a phenomenal year. Yes, I know, I said that last year too! As 2016 came to an end I was giddy from having had such a successful year. I decided to share the joy and came up with a particularly generous offer, my "2017 Celebration Offer": £20.17 per person B&B to celebrate the start of the New Year. The offer was remarkably well received, with several couples taking advantage of it multiple times during both January and February. This impacted our occupancy positively in what is traditionally a quiet time of year. There was a lovely atmosphere, with log fires ablaze and many happy couples enjoying great food and much wine. As a direct result of the offer's uptake, we once again found ourselves needing to recruit additional staff during January. Needless to say, the Celebration Offer is here to stay.
The rest of 2017 was simply… eat, sleep, work, repeat! At least it felt like that at times. Being busier than we have ever been for teas, lunches, dinners, and accommodation is of course fantastic. Every business owner's dream, I hear you say. It is, of course, a dream come true and indeed what we have been working towards for the last fourteen years, but it does not make it any easier to manage. For me it was a tougher year than most, trying to maintain our service standards whilst training new staff and welcoming more guests than ever.
This summer there was a little girl staying in the hotel for a few nights with her family. One evening after her dinner she was wandering around asking questions. After a while, she asked me "why the frowny face?" I had just answered her umpteenth question whilst trying to write down an order from one of the many waiting diners and trying not to forget what they had ordered. I explained that I was simply concentrating, not angry. When I concentrate my brow furrows like a Shar Pei puppy's; wrinkles are clearly unfashionable, something you see less and less these days, particularly from our TV/movie stars as they battle to hold back the aging process!
Anyway, why the frowny face is something I have asked myself several times this year. Busy hotel, yes; happy guests, yes. So why the frowny face? The bane of my life this year has been the three "r's": no, not the road, rhododendrons, and roof, as was the trouble back in 2005. In 2017, recruitment, retention, and re-motivation were the biggest challenges. With the weak pound and the NC500 bringing prosperity to the entire area, fewer Europeans looking for work in these uncertain times, not to mention local unemployment at a record low, there are simply not enough people to go around. Consequently, we are often forced to take a leap of faith in the recruitment process, and this year I have had that faith questioned a number of times. We had staff that disappeared prior to their next shift, staff that thought a few days' notice was sufficient time to recruit replacement cover and staff that found being hospitable a bit tricky. So it was recruit, train, cover, recruit, train, cover until things settled down towards the end of the season.
Fortunately, over the years we've been blessed with longevity from many great key members of staff, including the omnipresent Chris McLeod, our GM; the ever creative and accommodating Head Chef Garry Kenley (whom we keep locked in the back for no one to see); Gedas, our talented second chef; and of course the long-serving Rory Macrae, who came with the building when we purchased it back in 2003: the definitive handyman who turns his hand to anything and everything, from cooking breakfast to mowing the lawn and a multitude of other maintenance tasks. Each of them has contributed to the Coul House success for more than ten years, and for that we are very grateful.
I'm delighted to say that this year we've managed to refurbish another five bedrooms and, more importantly, their en suites. Only four more rooms to do now, which is very exciting… ah, there is light at the end of the tunnel. Thank you once again for your support and encouragement throughout the year. It truly means a lot to us.
Ahh… December 23rd and here I am once again writing our progress report. It feels like it's been a long and certainly busy year. So much so I'm struggling to remember what exactly it was we did this year. I certainly hope that there will be more years ahead like this one in many respects.
I feel sure we must have knocked something off our to-do list..? Now I recall: we started the year with two more rooms being refurbished and the bathrooms being put onto the new pressurised water system – only two rooms left to go, which is fantastic. I have been teasing Susannah that we should keep one room 'untouched' to remind us of how far we've come – but she's not keen on that idea for some reason. In fact the last two rooms are scheduled to be done in February 2019 – finally! Although I'm not sure what we're going to do after they're all complete… ah, if only that were true; we're already looking at new furniture and fabrics to replace the incumbent furniture, fixtures and fittings, and on it goes.
I said goodbye to my cautious bank manager and established a new fifteen-year business loan to allow us to press on with our refurbishment plans. So August 2025 is, for now, the new target.
In 2011 I talked about the 'incredible' quote for a pressurised water system, and here we are now, seven years later, having spent £50k in the last two years upgrading the plumbing system one pressurised tank at a time. Mind you, at least this cost includes all new pipes and some of the bathroom hardware. Now, I know patience is a virtue, and one I am often credited with; however, trying to find a plumber to finish the job that they've started is beginning to test even my patience (and that's saying something). I'm hoping 2019 will bring better fortunes where plumbers are concerned. Whilst I have been frustrated with plumbers over the years, I have been blessed with sparkies: A. J. Morrison, and in particular Caly, Paul Morrison's right-hand man, has been extremely diligent and a pleasure to work with, as has Paul himself. Caly has installed outside lights, rewired more parts of the hotel than I care to mention, installed many USB sockets and patiently waited on a number of occasions for Susannah to inform him of her latest plans, or indeed amended plans, for the refurbished bedrooms' electrical requirements.
And as if each passing year is not enough of a measure of time, my gorgeous wee girlies are hardly recognisable, having grown into young women almost overnight (certainly before I was ready for it). They both helped out in the hotel this summer, Aurora up front in the restaurant and Liah with Garry in the kitchen. Aurora also passed her driving test this year (with some excellent tuition from Susannah) and left Contin for Edinburgh College in August to study contemporary dance which she is thoroughly enjoying. Liah is champing at the bit to pass her driving test in 2019 (once she’s finished her Highers). It barely seems like yesterday that we were moving into Coul House whilst trying to keep an eye on two wee tots running around, often in one dressing-up outfit or another!
We were lucky in August to add Lara McLeod (Chris's wife) to our management team, and our luck continued in September when Casey Mackenzie re-joined the team after a nine-month sabbatical. Susannah and I feel the whole team (front & back of house) has never been stronger, and we are extremely proud of how they have pulled together and worked hard to make the hotel a success for another year.
I will be posting the Arabic original as soon as possible—it is now in my possession in its entirety—but am posting first (because technically easier) the English translation. The minutes come from the same source in Sudan who provided to me the minutes of the 31 August 2014 meeting of senior regime security and military officials (available in English and Arabic) and also for the 1 July 2014 meeting (again available in English and Arabic). For a compendium of expert opinion on the authenticity of the former (leaked first), see http://wp.me/p45rOG-1w5.
I will be commenting further on the highly revealing words and policies articulated in this meeting. What the brutal men of this regime say among themselves—thinking that their words will never leave the room—gives us extraordinary insight into the minds of génocidaires.
I will focus in good measure on language that seems to be disturbing evidence of diplomatic malfeasance on the part of the African Union’s Thabo Mbeki, Haile Menkerios, and Mohamed Ibn Chambas. Their imbalanced mediation between the belligerents in Sudan’s ongoing civil wars, as well as their poisonous relations with South Sudan—particularly over Abyei—are put in a context not previously available from public sources.
After six years of representing the African Union diplomatically as a mediator in Sudan's conflicts (first in Darfur, to no effect), Thabo Mbeki is well known to this regime. And they are presumably in a position to know whether he would accept "money of the Islamic Movement that is deposited abroad," even though this is hardly a standard method of payment for what are supposed to be neutral and impartial diplomatic efforts. This is no small matter, since the focus of much of these minutes is on Mbeki, the September 2014 agreement in Addis he helped secure, and how that agreement will affect Khartoum's domestic political and electoral plans.
Text in blue italics highlights my mainly brief notes of clarification, although there is some editorializing. I have also put asterisks (***) at the beginning of paragraphs I think of greatest significance. This highlighting all presumes the authenticity of the document, which of course can't be established by means of a "chain of custody." My highly reliable and deeply honest Sudanese source communicated to me, indirectly but emphatically, that a number of people put their lives at risk to obtain and leak this document. Given the verdict on authenticity for the previously leaked minutes, I feel quite confident in assuming that these, too, are authentic—and that I have a deep obligation to disseminate them.
I have edited the English translation I received to remove typos, proofreading lapses, misspellings, unidiomatic and ungrammatical constructions; in a few places I have sought to clarify what seemed opaque; I have regularized the transliteration of Arabic names and sometimes made them more consistent with common usage; I have not changed the meaning of the text at any point.
President [al-Bashir] opened the meeting by welcoming the attendees, then commended the role played by the Secretary General [al-Zibeer Mohammed al-Hassan] in rebuilding the Islamic Movement institutions, which came back to life stronger than before despite the conspiracies and plots of the enemies and the retreat of many of the Movement's own narrow-minded sons. President al-Bashir appreciated the Secretary-General's ability to reconnect the Sudanese Islamic Movement with the rest of the Islamic Movements so that it could assume its leading role among Islamic Movements world-wide. The Secretary-General presented the Sudanese Islamic Movement as a model of good governance, saying it had been copied by many Islamic Movements abroad. Today the Sudanese Islamic Movement is feared by the enemies.
President [al-Bashir] said in this meeting that he wished to hear the opinions of those present on current events. "This is because you represent the ears we listen with, the eye we see through, and the hand we use to hit every conspirator working to bury the Ingaz Revolution (now known as the National Congress Party) and sue its leaders."
The Addis Ababa Agreement of September 2014, signed by the African Union High-Level Implementation Panel (AUHIP), the Sudan Revolutionary Forces (SRF) and Sadig al-Mahdi, constitutes a deception plan despite the apparent concessions made by the Sudan Revolutionary Forces, seen in the light of their original hardline position.
It is clear that the opposition is aiming to use this agreement in order to dismantle the Ingaz in a benign manner through negotiations after they suffered major defeats in the military operations theatre; but we won't allow that deception to work. As a matter of principle, we no longer agree with the idea of a transitional government or the holding of peace talks outside the country.
Instead we will use these peace talks to dismantle the rebel militias. But we will tell them that the Addis agreement is accepted, with the aim of giving the region and the international community the impression that Sudan is taking the political dialogue initiative seriously. But when it comes to the details, we will ask the rebels to disarm and demobilize their forces [in other words, unconditional surrender—ER]; in the event they refuse, we will turn public opinion against them. Meanwhile, the Decisive Summer Campaign military operations against them will continue until they surrender.
Haile Menkerios came to me in the office, and I discussed with him the issue of how we can demobilize and dismantle the Sudan People's Liberation Army/Movement-North (SPLA/M-N) and the Darfurian rebel forces. This is to enable them to catch up with the National Dialogue that will take place from within the country. On the other hand, we must stop South Sudan from supporting these armed movements on the other side.
I also discussed with him the necessity of the implementation of the joint cooperation agreement (between the Government of South Sudan [GOSS] and the National Congress Party [NCP]), including issues such as security arrangements, harboring and support to opposition forces, creation of a demilitarized zone, zero-line border demarcation, and trade crossing points under the supervision of the African Union. The purpose here is to expose the position of the Government of South Sudan, which refuses to implement agreements made with the world. Moreover, in the event that the army of the GOSS is defeated by Dr. Riek Machar’s rebel forces, nobody should blame Khartoum. We will support Dr. Riek’s forces in order to take South Sudan by surprise, and make their defeat a lesson to others. We will establish for them a radio broadcasting station.
*** By the way, Haile Menkerios is cooperating with us fully, and likewise are Thabo Mbeki and Mohammed Ibn Chambas, who are so keen to serve and protect our interests even more than we are. When they visited Qatar they were accorded a good reception and treated generously; they are now under our control. These are the ones [Mbeki, Menkerios, Chambas—ER] we use to dismantle the rebellion. In case the rebels resist, we will report them to the AU Peace and Security Council and the UN Security Council, and will depict them in the report as the party who rejected a peaceful settlement of the conflict. On the other hand, we will also use them [again, Mbeki, Menkerios, Chambas—ER] to subjugate the South to our will and implement the agreement the way we want. All of these envoys promised to submit to the African Union and the United Nations positive reports on Sudan's record on human rights and freedoms.
This is an opportunity that we should not miss, and we should allocate enough resources in order to exploit it.
I recommend the establishment of a separate office to attend to our security and military relations with Iran, kept far away from the eyes of all. We will leave the diplomatic relations file to the Ministry of Foreign Affairs alone.
*** Let us bless the agreement politically in the media and keep our real position tightly held among ourselves, working to achieve our goal using the agreement itself.
The ruling National Congress Party (NCP) on 29 March refused to attend a meeting in Addis Ababa to discuss issues pertaining to the national dialogue conference and its procedures. Khartoum said the mediation didn't coordinate with the government on who [would] participate in the meeting; it also said the meeting would be held at the wrong time, arguing they were busy with the elections of 13 April.
In a statement released on 1 April, the African Union High Level Implementation Panel (AUHIP) regretted the NCP refusal, saying the party had previously pledged to attend the consultations. The mediation also said the agenda of the two-day meeting dealt exclusively with the dialogue process, in line with its mandate, refuting claims that it aimed to postpone the elections.
[a] Lifting the blockade (sanctions).
[c] Alleviating the pressures on us.
[d] Dismantling the movements (Sudan Revolutionary Forces), since they showed no interest in dialogue, through demobilization of their forces. After demobilization we will be able to renege on implementing the agreement. The movements are pretending to be clever, but we can outwit them. Regarding the preparation for the elections, we are now training 100,000 policemen from the various states. On the other side, we organized a special operations force to deal with riots expected to take place on September 23rd (memorial day of the victims of September 2013), with clear orders to use gunfire against them, including any other saboteurs.
Those who want to express their views from the political parties or individuals are allowed to do so through the National Dialogue forums, not through demonstrations. The media must be controlled when covering news of the armed forces (the Sudan Armed Forces and the Rapid Response Forces). On the other side, any delay of the elections will demoralize our forces, so the elections should take place on time and should not be connected to the National Dialogue, which can continue for two to three years after the elections. It will make no difference.
Regarding our relations with South Sudan, we have expressed our opinion earlier that we should maintain and support an armed opposition to the government in the South in order to maintain the balance of forces. If they support our opponents, we will support their opposition. I agree to the idea of buying a radio broadcasting station for them. The Nuer tribe of Riek Machar is very close to us; they fought alongside the Sudan Armed Forces (SAF) against the Sudan People's Liberation Movement/Army (SPLM/A) throughout the 1990s.
I recommend that you support them. Open channels of support for them from Iran, because of the presence of the Americans and Israelis in South Sudan.
The information we got was that the same countries that supervised the Paris Declaration are the ones who formulated the draft of the Addis Ababa agreement, though they took care not to offend the National Congress Party. They dropped some of the opposition demands, such as the demand for regime change and the International Criminal Court issue. That explains why Abdel Wahid al-Nur refused to sit with the Sudan government delegation.
Our agents who sat with the opposition in Addis reported that they saw French, British, Americans, and Israelis, plus others, who met with them. Each one of those foreigners has got his own agenda aimed at destroying the Islamic Movement, abrogating the shari'a laws, and taking the nation's leader and his aides to the International Criminal Court. Based on the above information, we briefed all the members of the leadership and accordingly, they decided to welcome the agreement. They encouraged Ghazi Salahaddin al-Atabani [who was expelled from the NCP in 2013 for a memo calling for the end to violent measures such as those used against demonstrators in September 2013] to sign.
Leave the country without legitimate leadership.
Neutralize the government in regards to the Decisive Summer Campaign military operations.
Put the National Congress Party on [the defensive?] after they have succeeded in postponing the elections, releasing the political detainees, and gaining freedom for political parties, and then unite with them against us.
We managed to reach out to the sons of Sadig Al-Mahdi (Abdel Rahman and Bushra) through our agents. We told them if you allow your father to stay outside of the country, he will be [replaced by?] Nasaradin al-Mahdi. And Nasaradin will tell Sadig that his sons are supporting the regime. Automatically Nasaradin will inherit the party leadership from their father because he will have lost the support of the Ansar sect leaders and followers abroad [the Ansar sect is the primary base of support for the National Umma Party—ER].
Here Bushra asked how he could go to his father. We told him to inform his father that he (Bushra) is going to protect him (Bushra is a Military Intelligence officer in the rank and file of the Sudan Armed Forces [SAF]). We told Bushra that we want his father to return to the country, and that to achieve this, we should work together. We told him to travel to Addis Ababa to stay with his father, but always to sit far away and not interfere with his business: "You only keep records of those he meets, those who call him, and his destination if he is travelling. In Cairo you must stay with him, learn about his meetings, chat with him, and convey to us whatever information you get from him. We agreed to rescue your father from the hands of the rebellion and their foreign friends. In case he travels to any country or meets any VIP, you must attend the meeting."
Regarding the National Consensus Forces (NCF), they will not agree with the rebels or approve the Paris Declaration. This is because of our elements within the Arab Baath Party, whose vision is based on the central principle (One Arab Nation and an Eternal Mission); in their eyes the movements (Sudan Revolutionary Forces, SRF) are racists, because they reject the Arabs and Muslims. On the other side, the Democratic Forces Movement (DFM) also plays a big role in ensuring disagreement within the National Consensus Forces concerning the rebels, because they belong to the leftist groups. We should increase the support to this political party in order to enable them to replace the Communist Party, because the Democratic Forces Movement leadership is composed of youth, and its chairwoman is very active. They used to bring live information from the rebels, and the rebels trust them.
It has become a necessity also to make change within the Sudan Congress Party, and to replace Ibrahim al-Sheikh—the current leader—with his deputy Dr. al-Fatih Omer al-Sayid, who is our agent in that party. Recently, al-Fatih played a decisive role in widening the gap between the National Consensus Forces and the rebels. In relation to Abu Eisa, we have infiltrated his office through some of his staff, and we get a copy of every e-mail or telephone call he makes. No fear from his side.
*** At this stage we must welcome the agreement in order to give Thabo Mbeke and Mohamed Ibn Chambas the ability to be seen as productive and having achieved something. Accordingly, we must participate in the writing of the report that will be submitted by Mbeke to the African Union and the UN Security Council in order to ensure that it reflects the political transformation that is taking place in the Sudan: the release of the political detainees, the release of Ibrahim al-Sheikh on request from the AU Peace and Security Council in September. We must see that this report, which is going to the UN Security Council, contains what we want in terms of recommendations and resolutions. That meets our interest.
*** In this manner we will get rid of the crisis between us and the international community; then we will play politics with the Sudan Revolutionary Forces as we infiltrate their rank and file. At the same time we guarantee that the National Dialogue is going on within the country and the elections are taking place. We shall call on international NGOs to monitor the elections, and there will be no rigging because we don't need it: the voting will be done through the National Identification Number, and the majority of those who got it are NCP supporters.
*** This is an opportunity that will not repeat itself. We will be in a position to dictate our conditions on South Sudan using Mbeke and Haile Menkerios, who can play this role to enable us to control our borders. Additionally, we keep the peace-talks forums in Addis and Doha (Qatar) going on separately while we discuss the details of the agreement signed in Addis with the African Union High-level Implementation Panel (AUHIP).
We worked to change the form of coordination between us and the revolutionaries in Libya in order to avoid committing another mistake like the one that took place when the plane went last time.
Let us stick to our position regarding the Paris Declaration and never recognize it.
The agents we planted in the National Consensus Forces (NCF) who attended the meeting of September 10th managed to influence and convince the NCF to reject the Addis Ababa agreement and the Paris Declaration. Today the NCF are at loggerheads with their allies in the Sudan Revolutionary Forces. Our agents succeeded in doing this by engineering demands with a higher ceiling than the achievements of the Addis Ababa agreement. Additionally, we managed to infiltrate and monitor the activities of all the foreign missions in our country, in addition to our ability to cover all the rebels' activities. Through these agents we succeeded in discovering early that the Sudan Revolutionary Forces' agreement with the African Union High-level Implementation Panel (AUHIP) was a deception intended to embarrass our government. But thankfully we had information ahead of time about the concessions they would make and what they said in their meeting with Sadig al-Mahdi to achieve their plan.
Accordingly, we decided to welcome the agreement and that the African Union High-level Implementation Panel can sign onto it; then, however, we leave the rest to be discussed when we come to the details of the agreement. This is because we need better relations with the regional and international communities. Our relations with South Sudan should be based on the degree of animosity: either they are a friend to us or to the Sudan Revolutionary Forces. We will discover this in the upcoming meetings of the joint security committee. We will base our decisions on the outcome of that meeting as to whether or not to provide security and intelligence support to the fighters of Dr. Riek Machar. We should establish a radio broadcasting station for them [presumably the rebel forces of Riek Machar] in order to carry their voice to the outside world, and we should supply them with all types of logistics through a remote channel that won't be seen or discovered by anybody.
In order to get the demarcation of the [North/South] border and the demilitarized zone, Thabo Mbeke and Haile Menkerios are in agreement with us on the necessity of holding the joint security committee meeting. Holding that meeting will enable us to present our case and all the evidence we have on South Sudan's support to the movements [Sudan Revolutionary Forces].
In regards to South Sudan, let us support both parties to the conflict: Salva Kiir politically, because his presidential term is coming to an end, and Dr. Riek on his legitimate demands, especially given that Dr. Riek declared that [his rebel forces] are allies to us. So even if a peace agreement is reached today, we will have allies within the South, contrary to the case if Dr. Garang's Boys are in power. I support the establishment of a radio broadcasting station to help them in the war propaganda against Salva Kiir. In addition to intelligence and logistical support to Dr. Riek, this will constitute the beginning of the end of the rebels [again meaning Sudan Revolutionary Forces].
*** Our 4th Division Command in Damazin has succeeded in organizing a local militia in Maban from the sons of the area, under the command of Kamal Loma, a former R/S/M in SAF, with the aim of expelling the refugees of Blue Nile from the Maban area refugee camps. We supported the step and encouraged them.
*** Another attempt was made by Abdalbagi Garfa (a Sudan People's Liberation Movement/Army dissident) to establish a group from the sons of the Shat tribe in Yida refugee camp, and there is some progress reported on that. We have also had some consultations with some Nuer sons in order to convince them to go there (Yida refugee camp in Unity State) to disperse the Nuba refugees, and they said they are ready to go and execute the mission, because they don't want refugees in Yida. In fact, the gap between the Nuer and the movements (Sudan Revolutionary Forces) is widening, because the Nuer are accusing the movements of participation in the killing of their relatives. On our side we will support this narrative through the media.
*** Let us use Mbeke to help us finish the rebellion for good. We don’t accept anything called a “transitional government” or “constitutional conference.” It is up to the politicians to welcome the Addis Ababa agreement in order to attract the rebel Sudan Revolutionary Forces to the National Dialogue according to our conditions, but only if any National Dialogue taking place is presided over by the president.
The greatest threat to us is coming from the South. The meeting with the Mechanism [for border delineation and demarcation] must settle all the suspended issues immediately. It is also important to dry up all the sources of supply to the rebels, especially the Mountains Bank, which supports the war.
We must support and strengthen the Nuer tribe; they are strong fighters and lack only experience in operating tanks and managing missile-launching devices. They also need an FM radio broadcasting station to take their voice to the world, in addition to the organization of a strong military intelligence wing, qualified to lead and command military operations and political battles.
The agreement which was signed by the rebels and Sadig with the Mechanism was arranged earlier in terms of content and objectives, but the question is: why did they make that concession at this specific time? It is clear that all of this was arranged by foreigners and is not their own will.
We got the outcome of the opposition meetings with the foreign diplomats, and we passed it to the members of the 7+7 team going to Addis Ababa in advance, in order to sign the agreement and foil the opposition plan. The opposition conspiracy aimed to sabotage the internal National Dialogue and pit the National Congress Party against the international community, giving them the impression that the National Congress Party is not serious about dialogue and a peaceful settlement to the conflict in Sudan. We thought about how we could give concessions while working to allay the fears of our members in the National Congress Party. So the decision to release Ibrahim al-Sheikh was designed to coincide with Mbeke's visit to Khartoum and prior to the submission of his report to the AU Peace and Security Council and the UN Security Council.
At the same time our agents within the National Consensus Forces will work hard to prevent the unification of the political parties under the leadership of the rebel Sudan Revolutionary Forces. The National Consensus Forces must not be used, as Dr. Garang used the National Democratic Alliance in the past.
*** In fact, the concessions made by the rebels puzzled me, because the sons of the two areas (Nuba and Blue Nile) have not changed their positions for a long time. After that we decided, with the security organs, to wait and monitor the situation until we got full information about the motive behind their new position. So we decided to sign, since it is not a framework agreement and not binding on us; instead we have used their signature in propaganda that serves our party and shows that the National Congress Party is serious in regards to the National Dialogue. That way we will be able to mislead the countries supporting them so that they don't influence the European Union's positions. That is why we declared that the National Congress Party welcomes the Addis Ababa agreement of 2014. We decided to use the agreement for propaganda in the media, to be followed by the decision to release Ibrahim al-Sheikh.
Because we welcomed the agreement, our image has improved and we have won substantial support, which you can see from the statements issued by many international organizations commending our position. The proof is that the EU Ambassador and the American consul visited me and both commended the role of the mediation and our role in reaching that agreement. After that I met Mbeke and we agreed on the recommendations he should submit in his report to the AU Peace and Security Council and the report to the UN Security Council. That should include a request concerning the lifting of sanctions and support to Sudan; in addition, he should reflect a good image of the Government of Sudan. For now we have won the game.
When the time comes for negotiation, we shall use discussion of the details to dodge and buy time, with the aim of foiling the rebels' plans. Additionally, we will conduct internationally recognized elections, especially since the EU, AU, and Carter Center will participate in monitoring these elections. I have assured the American special envoy of our good will. He was so happy. [The Carter Center failed miserably in monitoring the 2011 gubernatorial elections in South Kordofan.] We also raised the issue of Abyei in the coming elections, in order to put pressure on the South to accept the implementation of the security agreements in the presence of Mbeke.
*** But whatever we do to thank Mbeke will not be sufficient to reward him fully for the things he did for our sake and on our behalf.
In regards to the elections, we are ready and have prepared for them.
I say that you must properly cover the movement of the weapons you are transporting to Libya, so that we avoid embarrassment next time. There is a general consensus within the National Consensus Forces that they should maintain their position and not sign the Addis Ababa Agreement, or unite with the Sudan Revolutionary Front and Sadig al-Mahdi. We provide the National Consensus Forces with the full freedoms requested in their statement so that we can use them in bargaining with the international community, which is currently supporting preparatory meetings in Addis Ababa for the National Dialogue to take place in Khartoum.
We must hurry to dry up the sources of supply to the rebels, especially the Nuba Mountains Bank, because it is supporting the rebellion; currently this Bank has two branches, in Juba and Nimule. You said before that you managed to create a problem within this Bank that could lead to its collapse, but nothing has happened to this point. Yes, the drying-up is going well, but this Bank has still not collapsed.
Kumundan Joda and Abdal-Ghani (ex-SPLM/A and currently pro-NCP politicians from Blue Nile) told me that their contacts with their supporters in SPLM/A-controlled areas of Blue Nile are going on well, but due to the rainy season and bad roads their supporters could not cross to National Congress Party-controlled areas yet. Also Daniel and his people (Daniel Kodi, an ex-SPLM/A commander from the Nuba Mountains who defected and joined the National Congress Party) said they are working to divide SPLM/A members in areas of the Nuba Mountains controlled by the SPLM/A. But we are also working on them from the other side, from a different direction. They are opposed to the Islamic Movement (they are Christians), and our campaign will continue until we liberate all our lands.
We shall activate both Doha and Addis forums to engage the rebels, but we will not recognize the Sudan Revolutionary Front or Paris Declaration at all. Additionally, the steps taken toward Sadig al-Mahdi are important: he must come back and get politically assassinated for good.
I agree with all that has been said. I say all the government institutions, the embassies, and our presence abroad are collecting information about the movements and meetings of the opposition in Europe, Egypt, the Emirates, and Addis. This has enabled us to fashion our strategy for dealing with the trap laid down for us and to escape it. Regarding the international and regional staff, we must study their personal tendencies and weaknesses in order to use and engage them. For the time being we have taken the initiative, and Mbeke will submit a report to the AU Peace and Security Council and the UN Security Council praising us; his report will include recommendations that we badly need.
Regarding the joint security committee meeting with the South let us use this agreement gradually and tactically in order to be able to cross to the other side of the river safely, meanwhile keeping the two negotiation forums (Doha/Qatar and Addis Ababa, Ethiopia) separate from each other. Regarding Abdel Rahman (son of Sadig al-Mahdi), I was assigned by the president of the Republic to meet him. I found that he is against the step taken by his father and we want Abdel Rahman to take over as Umma Party Chairman in the place of his father.
Regarding the Bank of the Mountains, even the people of the Mountains said this Bank is supporting the war. We want this Bank to be dismantled and its activities stopped. The issue of this Bank must be raised in the joint security committee meeting with South Sudan; tell them bluntly: this Bank is part of the support for the war.
In regards to corruption, we said that in order to eradicate it, you don't need to dismiss or expose a person, because our security organs know each and every case of corruption. If we adopted the policy of dismissal, we would dismiss the whole Party and disperse our members. By that I mean we would dismantle our party and ourselves. Then what benefit would we gain? Instead, any corrupt person can be called secretly and shown the documents or evidence that condemn him, and he can be asked to resign or pay back the money he embezzled confidentially. After that, if he makes any statement or changes allegiance or joins another party, we will tell him: "we will expose and defame you." This way we have succeeded; today large amounts of money have been recovered, and we have preserved the dignity of these people and they are staying with us as members in the party. We decided that all this recovered money (Tahlil money) must go to security and intelligence activities.
*** I agree with all that has been said. It is good that we accepted the agreement, which indicates political cleverness. I concentrate on the Mountains Bank, because many reports are talking about this bank's activities. And I say you must give incentives to Mbeke, his people, and Ibn Chambas from the money of the Islamic Movement that is deposited abroad.
“Praise be to Allah, Glory be to Allah, who has put this under our control though we were unable to control it” (end of quote).
First is the concession made by the rebels; they are decreasing the ceiling of their ambitions and this is evidence that they have been defeated on the battlefield; this is due to the effort made by our Mujahidin (Popular Defense Forces), the Sudan Armed Forces, and the Rapid Response Forces.
Second is the bypassing of the Paris Declaration.
Third is the position taken by the National Consensus Forces against unity with the Sudan Revolutionary Forces, in a clever political and methodological manner.
Fourth is the cleverness of our leaders and cadres in dealing with the agreement.
Fifth is the wide international support for the agreement, appreciating the role of the government.
Sixth is the maintenance of the two negotiation forums separately, which will enable us to dismantle the agreement any time we want during the discussion of the details.
Seventh is the proper infiltration and control of the envoys. After this, elections can take place with recognition, in a way that we will be mandated by the people and empowered to subjugate all the other political parties. At the same time, military operations will continue in order to liberate the land. It is also possible that the opposition may differ among themselves when they discover that they were cheated. They may simply disintegrate, and we can continue with our media propaganda in favor of the agreement and an internal dialogue that will continue even after the elections.
It is a must that South Sudan stop its support for the rebels, and that we support the Nuer; they are closer to us and they fought together with our army before. The weapons and equipment going outside Sudan must be secured, in addition to the necessity of ensuring the safety of all the Islamists who are accommodated in different places outside Khartoum.
Regarding the Mountains Bank, you must involve it in failed (broken) commercial deals through a third party, so that it incurs losses and collapses; the Islamic Movement can finance the whole deal and bear the consequences in terms of losses. That way we will rescue the country from a resource that is working to destroy us. Work hard on the infiltration and dismantling of our enemies; use all means, including deception and money, and it is Halal, since it is intended to serve the interest of Islam and Muslims.
Our consent in signing the [Addis] framework agreement with the Mechanism came after consultation with all the relevant organs and was supported with thorough information. Actually, we were in need of this agreement. Accordingly, we thank Mbeke, Haile Menkerios, Ibn Chambas, and Qatar for achieving this agreement. Regarding Mbeke, he is honest and we will work with him. The agreement will not deceive us, and our battle with the rebels is a long one and has not started yet. I salute the vigilance of our security organs and Military Intelligence. I don't want to add more to what you have said, except that 1st Vice President Bakri Hassan travelled to Chad and I asked him to talk to [Idriss] Déby on the issue of our plane taking weapons to Libya. He has to make Déby understand and deny all of this before any further involvement of Sudan in Libya. So in Libya we must work secretly; it is all about the requirements of the situation in this country. We have political detainees, and we released Ibrahim al-Sheikh; but the rest have been sentenced before courts and their fate is connected to a political agreement with the rebels after we force them to demobilize their forces. This will be done by means of the Decisive Summer Campaign military operations on one hand and continuing the National Dialogue on the other. We will go anywhere wearing the hat of "dialogue," and on this basis the negotiations will continue.
For our strategic relation with Iran we formed a security committee under First Vice President Bakri, with a membership of Abdel Rahim [Mohamed Hussein, Minister of Defense]; General Dalil al-Dhau, General Siddig Amir, Mohammed Atta [head of the National Intelligence and Security Services—ER] and Rashid Fagiri; the committee is under my direct supervision along with the Secretary General of the Islamic Movement.
A security committee to monitor and deal with the movement of the rebels and the dismantling of the alliance between the National Consensus Forces and the Sudan Revolutionary Forces. Members are: Ghandour, Mohamed Atta, Salah Al-Tayib, Mohammed al-Mustafa, Dr. Yusif Tibin, Dr. Hamid Siddig, Dr. Kamal Ebeed, and General Abdel Wahab al-Rashid.
A committee to prevent the unification of the internal opposition and its external offices with the rebels. Members are: General Hashim Osman, Dr. Mustafa Osman, Kamal al-Sunni, Abdel Gadir Mohamed Zeen, Dr. Al Fatih Izzadin, General Malik Hussein, and al-Dirdiri Mohamed Ahmed.
A committee on the evaluation and analysis of local, regional, and international political positions toward Sudan from a security and military perspective. Members are Khalafalla al-Rashid, General Mustafa Ebeed, Dr. Ibrahim al-Karuri, Prof. Abdalla Ali Al-Naim, Prof. Ahmed al-Majzub, General Ismail Birema, General Ahmed Abdalla al-Naw, Dr. al-Muz Farug, and General Imad Adawi.
• Supervising the weapons of those working with us under supervision of the five security organs: National Intelligence and Security Services, Military intelligence, popular security, religious security and personal security.
• Financing deals that can lead to the collapse of the Mountains Bank.
• Financing the establishment of a radio broadcasting station for Dr. Riek Machar.
*** Prevention of any demonstrations in this month of September by means of the arrest of anybody reported to have an intention to participate in demonstrations. Any demonstration to be fired at with live ammunition.
“Any demonstration to be fired at with live ammunition”—this is the face of the elections that will take place in Sudan mid-April.
2006-06-19: Assigned to Medtronic, Inc. Assignors: Lee, Michael T.; Goetz, Steven M.
A programming device used to program delivery of therapy to a patient by a medical device, such as an implantable neurostimulator or pump, maintains or accesses a programming history for the patient. The programming history may take the form of a record of programs, e.g., combinations of therapy parameters, tested during one or more prior programming sessions. The programming device may analyze, or otherwise use, the programming history to provide guidance information to a user, such as a clinician, which may assist the user in more quickly identifying one or more desirable programs during a current programming session.
This application is a continuation-in-part of U.S. application Ser. No. 11/186,383, filed Jul. 20, 2005, which claims the benefit of U.S. provisional application No. 60/589,348, filed Jul. 20, 2004. The entire content of both applications is incorporated herein by reference.
The invention relates to the delivery of therapy by medical devices and, more particularly, to programming the delivery of therapy by medical devices.
Medical devices that deliver a therapy to a patient often do so according to a program that includes a plurality of parameters. Each of the parameters of such a program defines an aspect of the therapy as delivered by the medical device according to that program. For example, the programs used by medical devices that deliver therapy in the form of electrical stimulation, such as neurostimulators, typically include parameters that define characteristics of the electrical stimulation waveform to be delivered. Where electrical stimulation is delivered in the form of electrical pulses, for example, the parameters for such a program may include a voltage or current amplitude, a pulse width, and a rate at which the pulses are to be delivered by the medical device. Further, where a medical device that delivers electrical stimulation is implantable and, as is typical for implantable neurostimulators, coupled to an electrode set including a plurality of electrodes, such a program may include an indication of the particular electrodes within the electrode set to be used to deliver the pulses, and the polarities of the selected electrodes. As another example, the programs used by medical devices that deliver therapy via infusion of a drug or other agent may include parameters that define flow rates, agent types or concentrations, and infusion type, e.g., continuous or bolus.
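To make the parameter sets described above concrete, here is a minimal sketch of how one such stimulation program could be represented. All field names and values are illustrative assumptions, not drawn from the application itself.

```python
from dataclasses import dataclass
from enum import Enum

class Polarity(Enum):
    """Role assigned to one contact in the electrode set (assumed model)."""
    OFF = "off"
    ANODE = "+"
    CATHODE = "-"

@dataclass
class StimulationProgram:
    """One combination of therapy parameters, i.e., a 'program'."""
    amplitude_volts: float   # voltage amplitude of each pulse
    pulse_width_us: int      # pulse width, in microseconds
    rate_hz: int             # rate at which pulses are delivered
    electrodes: tuple        # polarity of each contact in the electrode set

# A hypothetical 8-electrode program: one cathode, one anode, six unused.
program = StimulationProgram(
    amplitude_volts=2.5,
    pulse_width_us=210,
    rate_hz=40,
    electrodes=(Polarity.CATHODE, Polarity.ANODE) + (Polarity.OFF,) * 6,
)
print(len(program.electrodes))  # -> 8
```

A drug-infusion program of the kind mentioned at the end of the paragraph would carry analogous fields (flow rate, agent concentration, continuous vs. bolus) in place of the pulse parameters.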
In most cases, a clinician creates the one or more programs that a medical device will use to deliver therapy to a patient during an initial programming session. In the case of implantable medical devices, the initial programming session typically occurs shortly after the device is implanted in the patient. The values for each of the parameters of a program may have a significant impact on the efficacy and side effects of the delivery of therapy according to that program. The process of selecting values for the parameters that provide adequate results can be time consuming. In particular, the process may require a great deal of trial and error testing of numerous potential combinations of parameter values before a “best” program is discovered. A “best” program may be one that is better than other tested programs in terms of clinical efficacy versus side effects experienced. The process is particularly burdensome in the case of programming implantable neurostimulators for delivery of spinal cord stimulation (SCS) therapy, which are often coupled to an electrode set including eight or sixteen electrodes. The number of possible combinations of electrodes that could be tested during a programming session from a set of that size is substantial, e.g., potentially on the order of tens or hundreds of thousands, or even millions of possible electrode combinations.
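The size of that search space can be sanity-checked with a quick count. As an assumption for illustration only, suppose each electrode can be an anode, a cathode, or off, and that a usable combination needs at least one anode and one cathode; a real device may constrain the space differently.

```python
from itertools import product

def count_combinations_brute_force(n_electrodes):
    """Enumerate every assignment of {off, anode, cathode} and keep those
    with at least one anode and at least one cathode."""
    states = ("off", "anode", "cathode")
    return sum(
        1
        for assignment in product(states, repeat=n_electrodes)
        if "anode" in assignment and "cathode" in assignment
    )

def count_combinations_closed_form(n_electrodes):
    # Inclusion-exclusion: all 3^n assignments, minus the 2^n with no anode,
    # minus the 2^n with no cathode, plus the single all-off assignment
    # (which was subtracted twice).
    return 3**n_electrodes - 2 * 2**n_electrodes + 1

print(count_combinations_brute_force(8))   # -> 6050
print(count_combinations_closed_form(8))   # -> 6050 (matches the enumeration)
print(count_combinations_closed_form(16))  # -> 42915650
```

Even this simplified polarity-only model yields thousands of candidates for eight electrodes and tens of millions for sixteen, before amplitude, pulse width, and rate are varied, which is consistent with the scale the paragraph describes.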
In some cases, the clinician may test combinations of parameter values, i.e., potential programs, by manually specifying each combination to test based on intuition or some idiosyncratic methodology, and recording notes on the efficacy and side effects of each combination after delivery of stimulation according to that combination. During a programming session, the clinician may be required to make notations describing the parameters of a number of tested programs and feedback received from the patient regarding the perceived efficacy and side effects of each program. The clinician may then select the “best” program based on the notations.
Even after this often-lengthy process, the programs selected during an initial programming session may ultimately prove to be inadequate. The eventual inadequacy of the initial programming may be due to a variety of problems, including progression of symptoms and/or an underlying ailment, increased or changed symptoms or side effects during activities and/or postures that were not replicated in the clinic during the initial programming session, slow onset of side effects and, in the case of delivery of stimulation via electrodes located on implantable leads, lead migration. If the programs selected during an initial programming session prove to be inadequate, the patient must return to the clinic for a follow-up programming session. Multiple follow-up programming sessions may be required over the period of time that the medical device is used to deliver therapy to the patient.
During a follow-up programming session, the clinician may refer to any printed records, or his or her own memory of the previous programming sessions, i.e., of the previously tested programs and their efficacy and side effects. However, printed records and clinician memory of previous programming sessions are often absent or inadequate, and provide little assistance in more quickly identifying desirable programs during a current programming session. Consequently, the clinician typically must start the time-consuming program selection process anew during each follow-up programming session.
In general, the invention is directed to maintenance of a programming history for a patient. The programming history may be maintained or accessed by a programming device used to program delivery of therapy to a patient by a medical device, and may take the form of a record of programs, e.g., combinations of therapy parameters, tested during one or more prior programming sessions. The programming device may analyze, or otherwise use the programming history to provide guidance information to a user, such as a clinician, which may assist the user in more quickly identifying one or more desirable programs during the current programming session.
During a programming session, the clinician may specify a program using the programming device by selecting values for various program parameters. When a program is specified, the clinician may test the program by directing the programming device to control the medical device to deliver therapy according to the program to the patient. The clinician or patient may enter rating information into the programming device for each tested program. The rating information for a tested program may include information relating to effectiveness of delivery of neurostimulation therapy according to the program in treating symptoms of the patient, side effects experienced by the patient due to the delivery of neurostimulation therapy according to the program, the power consumption required for delivery of stimulation according to the program, or the like. During the programming session, the programming device may maintain a session log for that session with the patient that includes a listing of programs tested on the patient and rating information provided by the clinician or the patient for programs of the list. The listing may be ordered according to the rating information in order to facilitate the selection of programs from the list by the clinician.
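The session log described above can be illustrated with a short sketch. This is a hypothetical model, not an implementation from the disclosure: the class and field names (`SessionLog`, `TestedProgram`, `effectiveness`, `side_effects`) and the 0-10 rating scales are assumptions chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestedProgram:
    """One tested program: parameter values plus rating information."""
    params: dict          # e.g. {"amplitude_v": 2.5, "pulse_width_us": 210, "rate_hz": 60}
    effectiveness: int    # assumed scale: 0 (no relief) .. 10 (complete relief)
    side_effects: int     # assumed scale: 0 (none) .. 10 (intolerable)

class SessionLog:
    """Listing of programs tested on the patient during one programming session."""
    def __init__(self):
        self.entries = []

    def record(self, params, effectiveness, side_effects):
        """Store a tested program with rating information entered by the clinician or patient."""
        self.entries.append(TestedProgram(params, effectiveness, side_effects))

    def ordered(self):
        """Order the listing according to rating information: most effective first,
        with fewer side effects breaking ties, to facilitate program selection."""
        return sorted(self.entries, key=lambda p: (-p.effectiveness, p.side_effects))
```

A listing ordered this way surfaces the strongest candidates at the top, which is the stated purpose of ordering by rating information.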
The programming device may create the programming history during the initial programming session after the medical device is provided to, e.g., implanted in, the patient. The programming device may store all or selected ones of the programs within the session log for that session within the programming history. Similarly, the programming device may include all or selected ones of the programs from the session logs for follow-up programming sessions within the programming history, or may update the programming history based on retesting of programs during a follow-up programming session. The programming history may include the information stored for a program in the session log, e.g., information describing the parameters and rating information for the program, and may include clinician comments regarding the program and concomitant therapies delivered with the program.
During a current programming session, the programming device may retrieve information relating to the extent or times of use for one or more programs that were sent home with the patient, e.g., that the medical device was programmed with, during a previous programming session, and may update the record for those programs within the programming history to include this usage information. The programming device may also retrieve patient diary information associated with the one or more programs, which may include subjective comments regarding efficacy, side-effects, use, or the like, recorded by a patient during use of the programs, e.g., outside of the clinic setting. The programming device may also include objective sensor information collected via physiological sensors during use of the programs outside of the clinic. Usage information, patient diary information and sensor information may be stored by, and therefore retrieved from, one or both of the medical device and another programming device used by the patient to control delivery of therapy by the medical device, e.g., a patient programming device.
The programming device may display the programming history to the clinician during the current programming session, and the display of the programming history may assist the clinician in more quickly identifying desirable programs during the current programming session. The programming device may receive selection of a particular field within the programming history, e.g., effectiveness or side effects, and may order the programming history according to the selected field.
The programming device may analyze the programming history and, during the current programming session, may provide guidance information to the clinician to guide the selection and testing of programs. For example, the programming device may compare program parameters entered by the clinician while attempting to create a new program to the programs stored within the programming history. The programming device may identify the same or similar programs within the programming history, and may bring the record of such programs within the programming history to the user's attention, e.g., by displaying the record or a message and a link thereto. The clinician's decision of whether to proceed to test the program being entered may be informed by the results, e.g., rating, usage or patient diary information, when the same or similar programs were previously tested or used. Further, the programming device may identify same or similar programs within the programming history based on entry of only a portion of the parameters of a complete program, and may provide the parameters that would recreate one of the programs identified in the programming history based on the comparison to the clinician. In this manner, the programming device may act as a program generation “wizard,” allowing the clinician to decide whether to test the automatically completed program, or to manually complete the program with different parameter values.
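The comparison and "wizard" behavior described above can be sketched as follows. This is a simplified illustration under stated assumptions: the history record layout, the relative tolerance used to judge "similar," and the function names are all hypothetical, not taken from the disclosure.

```python
def matching_programs(history, partial_params, tolerance=0.1):
    """Return history records whose stored parameters match the (possibly
    partial) set of parameters entered so far, within a relative tolerance
    for numeric values."""
    matches = []
    for record in history:
        ok = True
        for name, value in partial_params.items():
            stored = record["params"].get(name)
            if stored is None:
                ok = False
                break
            if isinstance(value, (int, float)):
                if abs(stored - value) > tolerance * max(abs(value), 1e-9):
                    ok = False
                    break
            elif stored != value:
                ok = False
                break
        if ok:
            matches.append(record)
    return matches

def complete_program(history, partial_params):
    """'Wizard' behavior: propose the full parameter set of the best-rated
    matching historical program; the clinician may accept it, test it, or
    complete the program manually with different values."""
    matches = matching_programs(history, partial_params)
    if not matches:
        return None
    best = max(matches, key=lambda r: r["effectiveness"])
    return dict(best["params"])
```

With only an amplitude entered, several historical programs may match; adding a rate narrows the matches, and the best-rated one supplies the remaining parameters.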
As another example, during a previous programming session, or during use by the patient outside of the clinic, a program, group of programs, or particular program parameter value may have proven to be so ineffective or to have such undesirable side effects as to be “blacklisted” in the programming history. Blacklisting of programs or parameter values may be done automatically by the programming device based on rating information, or manually by the clinician. The programming device may provide, for example, a visual indication such as highlighting or a text message within the displayed programming history to indicate that the program is blacklisted, and may also present such an indication during an attempt to create a program with the same or similar parameters during a current programming session. In some embodiments, the programming device may “lock-out” the blacklisted program, e.g., prevent creation of programs with the same or similar parameters to a blacklisted program. Where a set of similar programs are blacklisted, the programming device or clinician may determine that a particular value or range of values for one or more individual parameters should be blacklisted, and the programming device may provide similar indications or messages when blacklisted parameter values are selected, or may lock-out selection of blacklisted parameter values. Further, in embodiments in which the programming device directs or suggests testing of parameter combinations according to a protocol, the programming device may modify the protocol to skip blacklisted parameter values or programs.
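A minimal sketch of automatic blacklisting and lock-out might look like the following. The thresholds, rating scales, and function names are assumptions for illustration only; the disclosure leaves the blacklisting criteria to the device or the clinician.

```python
SIDE_EFFECT_THRESHOLD = 8    # assumed scale: 0 (none) .. 10 (intolerable)
EFFECTIVENESS_THRESHOLD = 1  # assumed scale: 0 (no relief) .. 10 (complete relief)

def auto_blacklist(record):
    """Automatically flag a tested program as blacklisted based on rating
    information; the clinician may also blacklist a program manually by
    setting record["blacklisted"] directly."""
    if (record["side_effects"] >= SIDE_EFFECT_THRESHOLD
            or record["effectiveness"] <= EFFECTIVENESS_THRESHOLD):
        record["blacklisted"] = True
    return record.get("blacklisted", False)

def check_lockout(history, candidate_params):
    """Before a newly entered program is tested, refuse ('lock out') any
    candidate whose parameters are identical to a blacklisted program."""
    for record in history:
        if record.get("blacklisted") and record["params"] == candidate_params:
            raise ValueError("program is blacklisted: %s" % (candidate_params,))
```

A softer variant would display a warning instead of raising, matching the embodiments that merely indicate blacklisting rather than preventing creation.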
The programming device may perform a statistical or pattern matching analysis to correlate a parameter value or range of parameter values with rating information or usage information, e.g., an effectiveness or overall score, a particular side effect, or the amount of out of clinic use, and may provide guidance information to a user based on the results of the analysis. For example, the programming device may indicate that particular parameter values or ranges have proven effective, or have proven to be correlated with a particular side effect or severity of side effects. In some embodiments, the programming device may combine the identification of underutilized parameter values and such correlations to suggest untested programs, e.g., combinations of parameters, that may provide desirable efficacy and side effects as indicated by the correlations. Further, in embodiments in which the programming device directs or suggests testing of parameter combinations according to a protocol, the programming device may modify the protocol based on the correlations between parameter values or ranges and effectiveness or side effects. The programming device may perform such analysis on the current patient's programming history, or the programming histories for a plurality of patients, e.g., a plurality of patients with similar symptoms, medical device configurations, or the like.
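As a simple stand-in for the statistical analysis described above, one could bin a parameter's values and compute the mean effectiveness per bin; the binning scheme and record layout here are illustrative assumptions, not the method claimed.

```python
def correlate_parameter(history, param_name, bins):
    """Group historical ratings by half-open ranges [lo, hi) of one parameter
    and report the mean effectiveness per range, highlighting which value
    ranges have proven effective."""
    by_bin = {b: [] for b in bins}
    for record in history:
        value = record["params"].get(param_name)
        if value is None:
            continue
        for lo, hi in bins:
            if lo <= value < hi:
                by_bin[(lo, hi)].append(record["effectiveness"])
                break
    # None for a bin means no tested program fell in that range.
    return {b: (sum(v) / len(v) if v else None) for b, v in by_bin.items()}
```

An empty bin (reported as `None`) doubles as a crude detector of underutilized parameter ranges that might be worth suggesting for testing.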
In some embodiments, when a previously tested program is selected for retesting, the programming device may collect rating information after the program is retested. The programming device may then compare the currently collected rating information to previously collected rating information for the program. If the programming device identifies a significant change in the rating information over time, the programming device may alert the clinician of the possibility of, for example, symptom or disease progression, or lead failure or movement. Additionally or alternatively, the programming device may present trend charts or a diagram of rating information for one or more programs over time, which the clinician may use to detect, for example, symptom or disease progression, or lead failure or movement.
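The retest comparison above reduces to checking the difference between current and previously stored ratings against a threshold; the threshold value and message wording below are assumptions for illustration.

```python
def rating_change_alert(previous_rating, current_rating, threshold=3):
    """Compare the rating collected when a program is retested to the rating
    stored for it in the programming history; a significant drop may indicate
    symptom or disease progression, or lead failure or movement."""
    drop = previous_rating - current_rating
    if drop >= threshold:
        return ("effectiveness dropped by %d points; consider symptom/disease "
                "progression or lead failure/movement" % drop)
    return None  # no significant change; no alert
```

The same per-program time series of ratings could feed the trend charts mentioned above.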
In one embodiment, the invention is directed to a method in which a programming history stored in a memory is analyzed, and guidance information is provided to a user based on the analysis. The programming history includes information describing therapy programs tested on a patient during at least one prior programming session, and the information stored for each of the programs within the programming history includes information describing a plurality of parameters that define delivery of therapy according to that program and rating information for that program.
In another embodiment, the invention is directed to a system that includes a user interface, and a memory that stores a programming history, wherein the programming history includes information describing therapy programs tested on a patient during at least one prior programming session, and the information stored for each of the programs within the programming history includes information describing a plurality of parameters that define delivery of therapy according to that program and rating information for that program. The system further comprises a processor to analyze the programming history and provide guidance information to a user based on the analysis to guide the selection of therapy programs during a current programming session.
In an added embodiment, the invention is directed to a computer-readable medium comprising instructions that cause a processor to analyze a programming history stored in a memory, and provide guidance information to a user based on the analysis. The programming history includes information describing therapy programs tested on a patient during at least one prior programming session, and the information stored for each of the programs within the programming history includes information describing a plurality of parameters that define delivery of therapy according to that program and rating information for that program.
In another embodiment, the invention is directed to a method comprising controlling an implantable medical device to deliver therapy to a patient according to each of a plurality of programs with a programming device during a programming session, recording rating information for each of the plurality of therapy programs during the programming session within the programming device, receiving selection of a subset of the programs, storing the selected subset of the programs within the implantable medical device, wherein the subset of the programs are available to be selected by the patient to control delivery of therapy by the implantable medical device, and, for at least the subset of the programs, storing information identifying the programs and rating information associated with the programs in a programming history for the patient within the implantable medical device.
In another embodiment, the invention is directed to a computer-readable medium comprising instructions. The instructions cause a programmable processor to control an implantable medical device to deliver therapy to a patient according to each of a plurality of programs with a programming device during a programming session, record rating information for each of the plurality of therapy programs during the programming session within the programming device, receive selection of a subset of the programs, store the selected subset of the programs within the implantable medical device, wherein the subset of the programs are available to be selected by the patient to control delivery of therapy by the implantable medical device, and for at least the subset of the programs, store information identifying the programs and rating information associated with the programs in a programming history for the patient within the implantable medical device.
The invention may provide a number of advantages. For example, by maintaining a programming history, a programming device may be able to provide guidance information to a user, such as a clinician. The guidance information may allow the user to avoid repeated testing of unsuccessful programs or parameter values, and to more quickly identify programs that are desirable in terms of efficacy and side effects during follow-up programming sessions. The maintenance of a programming history may be particularly advantageous in the case of implantable stimulators, such as implantable neurostimulators that deliver SCS or deep brain stimulation therapy, where each programming session could involve testing a very large number of potential programs and/or redundant testing of a plurality of programs.
FIG. 1 is a conceptual diagram illustrating an example system for delivering therapy and programming delivery of a therapy to a patient.
FIG. 2 is a block diagram illustrating an example implantable medical device for delivering therapy to a patient according to one or more programs.
FIG. 3 is a block diagram illustrating an example patient programmer that allows a patient to control delivery of therapy by an implantable medical device.
FIG. 4 is a block diagram illustrating an example clinician programmer that allows a clinician to program therapy for a patient by creating programs, and that maintains a programming history for the patient according to the invention.
FIGS. 5-7 are conceptual diagrams illustrating an example graphical user interface that may be provided by a clinician programmer to allow a clinician to program neurostimulation therapy using a session log.
FIG. 8 is a conceptual diagram illustrating another example graphical user interface that may be provided by a clinician programmer to allow a clinician to program neurostimulation therapy using a session log.
FIG. 9 is a flow diagram illustrating an example method that may be employed by a clinician programmer to allow a clinician to program neurostimulation therapy using a session log.
FIG. 10 is a flow diagram illustrating an example method for recording rating information for a session log during programming of neurostimulation therapy.
FIG. 11A is a conceptual diagram illustrating display of an example stored programming history by a graphical user interface of a clinician programmer.
FIG. 11B is a conceptual diagram illustrating display of another example stored program history by a graphical user interface of a clinician programmer.
FIG. 12 is a flow diagram illustrating a method that may be employed by a clinician programmer to generate and update a programming history for a patient.
FIG. 13 is a conceptual diagram illustrating display of guidance information by an example graphical user interface of a clinician programmer based on comparison of program parameters to a stored programming history by the clinician programmer.
FIG. 14 is a flow diagram illustrating a method that may be employed by a clinician programmer to display guidance information based on comparison of program parameters to a stored programming history.
FIG. 15 is a conceptual diagram illustrating display of guidance information by an example graphical user interface of a clinician programmer based on analysis of a stored programming history by the clinician programmer.
FIG. 16 is a flow diagram illustrating a method that may be employed by a clinician programmer to display guidance information based on an analysis of a stored programming history.
FIG. 17 is a flow diagram illustrating a method that may be employed by a clinician programmer to display guidance information based on a comparison of currently collected rating information for a program to previously collected rating information for the program that is stored within a programming history.
FIGS. 18-20 are conceptual diagrams illustrating examples of graphical guidance information that may be provided to a clinician by a clinician programmer based on a stored programming history.
FIG. 1 is a conceptual diagram illustrating an example system 10 for delivering therapy to and programming delivery of a therapy for a patient. System 10 includes an implantable medical device (IMD) 14, which in the illustrated embodiment delivers neurostimulation therapy to patient 12. IMD 14 may be an implantable pulse generator, and may deliver neurostimulation therapy to patient 12 in the form of electrical pulses.
IMD 14 delivers neurostimulation therapy to patient 12 via leads 16A and 16B (collectively “leads 16”). Leads 16 may, as shown in FIG. 1, be implanted proximate to the spinal cord 18 of patient 12, and IMD 14 may deliver spinal cord stimulation (SCS) therapy to patient 12 in order to, for example, reduce pain experienced by patient 12. However, the invention is not limited to the configuration of leads 16 shown in FIG. 1 or the delivery of SCS therapy. For example, one or more leads 16 may extend from IMD 14 to the brain (not shown) of patient 12, and IMD 14 may deliver deep brain stimulation (DBS) or cortical stimulation therapy to patient 12 to, for example, treat movement disorders, such as tremor or Parkinson's disease, epilepsy, or mood disorders. As further examples, one or more leads 16 may be implanted proximate to the pelvic nerves (not shown) or stomach (not shown), and IMD 14 may deliver neurostimulation therapy to treat incontinence, sexual dysfunction, pelvic pain, gastroparesis, or obesity. Further, IMD 14 may be a cardiac pacemaker, and leads 16 may extend to a heart (not shown) of patient 12.
Moreover, the invention is not limited to systems that include an implantable pulse generator, or even an IMD. For example, in some embodiments, a system according to the invention may include an implanted or external pump that delivers a drug or other agent to a patient via a catheter, e.g., for alleviation of pain by intrathecal drug delivery. Systems for delivering therapy to and programming delivery of a therapy for a patient according to the invention may include any type of implantable or external medical device.
IMD 14 delivers neurostimulation therapy according to one or more programs. Each program may include values for a number of parameters, and the parameter values define the neurostimulation therapy delivered according to that program. In embodiments where IMD 14 delivers neurostimulation therapy in the form of electrical pulses, the parameters may include voltage or current pulse amplitudes, pulse widths, pulse rates, and the like. Further, each of leads 16 includes electrodes (not shown in FIG. 1), and the parameters for a program may include information identifying which electrodes have been selected for delivery of pulses according to the program, and the polarities of the selected electrodes. As another example, in embodiments which include a pump instead of or in addition to a neurostimulator, program parameters may define flow rates, agent types or concentrations, or infusion types, e.g., continuous or bolus.
System 10 also includes a clinician programmer 20. Clinician programmer 20 may, as shown in FIG. 1, be a handheld computing device. Clinician programmer 20 includes a display 22, such as a LCD or LED display, to display information to a user. Clinician programmer 20 may also include a keypad 24, which may be used by a user to interact with clinician programmer 20. In some embodiments, display 22 may be a touch screen display, and a user may interact with clinician programmer 20 via display 22. A user may also interact with clinician programmer 20 using a peripheral pointing device, such as a stylus or mouse. Keypad 24 may take the form of an alphanumeric keypad or a reduced set of keys associated with particular functions. Display 22 may also present so-called soft keys for selection by the user.
A clinician (not shown) may use clinician programmer 20 to program neurostimulation therapy for patient 12. As will be described in greater detail below, the clinician may select existing programs or specify programs by selecting program parameter values, and test the selected or specified programs on patient 12. The clinician may receive feedback from patient 12, and store information identifying the programs and rating information associated with the programs as a session log for patient 12, either in a fixed or removable memory of the clinician programmer, or within a memory of another computing device coupled to the clinician programmer, e.g., via a network. The clinician may use the session log to more quickly select one or more effective programs to be used for delivery of neurostimulation therapy to patient 12 by IMD 14 outside of the clinic.
System 10 also includes a patient programmer 26, which also may, as shown in FIG. 1, be a handheld computing device. Patient programmer 26 may also include a display 28 and a keypad 30, to allow patient 12 to interact with patient programmer 26. In some embodiments, display 28 may be a touch screen display, and patient 12 may interact with patient programmer 26 via display 28. Patient 12 may also interact with patient programmer 26 using peripheral pointing devices, such as a stylus or mouse.
Patient 12 may use patient programmer 26 to control the delivery of neurostimulation therapy by IMD 14. Patient 12 may use patient programmer 26 to activate or deactivate neurostimulation therapy and, as will be described in greater detail below, may use patient programmer 26 to select the program that will be used by IMD 14 to deliver neurostimulation therapy at any given time. Further, patient 12 may use patient programmer 26 to make adjustments to programs, such as amplitude or pulse rate adjustments.
Programs selected during a programming session using clinician programmer 20 may be transmitted to and stored within one or both of patient programmer 26 and IMD 14. Where the programs are stored in patient programmer 26, patient programmer 26 may transmit the programs selected by patient 12 to IMD 14 for delivery of neurostimulation therapy to patient 12 according to the selected program. Where the programs are stored in IMD 14, patient programmer 26 may display a list of programs stored within IMD 14 to patient 12, and transmit an indication of the selected program to IMD 14 for delivery of neurostimulation therapy to patient 12 according to the selected program.
IMD 14, clinician programmer 20 and patient programmer 26 may, as shown in FIG. 1, communicate via wireless communication. Clinician programmer 20 and patient programmer 26 may, for example, communicate via wireless communication with IMD 14 using RF telemetry techniques known in the art. Clinician programmer 20 and patient programmer 26 may communicate with each other using any of a variety of local wireless communication techniques, such as RF communication according to the 802.11 or Bluetooth specification sets, infrared communication according to the IRDA specification set, or other standard or proprietary telemetry protocols. Clinician programmer 20 and patient programmer 26 need not communicate wirelessly, however. For example, programmers 20 and 26 may communicate via a wired connection, such as via a serial communication cable, or via exchange of removable media, such as magnetic or optical disks, or memory cards or sticks. Further, clinician programmer 20 may communicate with one or both of IMD 14 and patient programmer 26 via remote telemetry techniques known in the art, communicating via a local area network (LAN), wide area network (WAN), public switched telephone network (PSTN), or cellular telephone network, for example.
As will be described in greater detail below, clinician programmer 20 may maintain a programming history for patient 12, which may take the form of a record of programs, e.g., combinations of therapy parameters, tested during one or more prior programming sessions. Clinician programmer 20 may create the programming history during the initial programming session that occurs after IMD 14 is implanted in patient 12. Clinician programmer 20 may store all or selected ones of the programs within the session log for that session within the programming history. Similarly, clinician programmer 20 may include all or selected ones of the programs from the session logs for follow-up programming sessions within the programming history, or may update the programming history based on retesting of programs during a follow-up programming session. The programming history may include the information stored for a program in the session log, e.g., information describing the parameters and rating information for the program, and may include clinician comments regarding the program. Clinician programmer 20 may store the programming history within, for example, a fixed or removable memory of the clinician programmer, patient programmer 26, IMD 14, or a memory of another computing device coupled to the clinician programmer, e.g., via a network. When not stored within clinician programmer 20, the clinician programmer may retrieve the programming history for use during a current programming session.
During a current programming session, clinician programmer 20 may also retrieve usage information, e.g., information relating to the extent or times of use for one or more programs that were sent home with the patient after a previous programming session, and may update the record for those programs within the programming history to include the usage information. Clinician programmer 20 may also retrieve patient diary information associated with the one or more programs, which may include subjective comments regarding efficacy, side-effects, use, or the like, recorded by patient 12 during use of the programs, e.g., outside of the clinic setting. Usage information and patient diary information may be stored by, and therefore retrieved from, one or both of IMD 14 and patient programmer 26.
Clinician programmer 20 may display the programming history to the clinician during the current programming session via display 22, and the display of the programming history may assist the clinician in more quickly identifying desirable programs during the current programming session. Clinician programmer 20 may receive selection of a particular field within the programming history, e.g., effectiveness or side effects, via display 22, keypad 24, or a pointing device, and may order the programming history according to the selected field. Further, as will be described in greater detail below, clinician programmer 20 may analyze, or otherwise use the programming history to provide guidance information to a user, such as a clinician, via display 22. The guidance information may assist the user in more quickly identifying one or more desirable programs during the current programming session.
In addition to maintaining a programming history for patient 12, clinician programmer 20 may also store some or all of the session logs for the patient in one or more of a fixed or removable memory of the clinician programmer, patient programmer 26, IMD 14, or a memory of another computing device coupled to the clinician programmer, e.g., via a network. When not stored within clinician programmer 20, the clinician programmer may retrieve the historical session logs for use during a current programming session.
In some embodiments, as discussed above, the clinician programmer only includes selected programs from each of the session logs in the programming history. For example, the programming history may include only information regarding programs that were in fact programmed into IMD 14 for long-term delivery of therapy at the end of a programming session, rather than all of the programs tested during the programming session. In this respect, the programming history may act as a summary of the various session logs. In some embodiments, in addition to accessing the programming history, the clinician programmer may access the historical session logs for information regarding programs or specific electrodes not included in the programming history to provide guidance information that may assist the user in more quickly identifying one or more desirable programs during the current programming session. In some embodiments, whether the information regarding a program or electrode is stored in a programming history or a historical session log, clinician programmer 20 may be able to access such information to provide guidance information to a user.
FIG. 2 is a block diagram illustrating an example configuration of IMD 14. IMD 14 may deliver neurostimulation therapy via electrodes 40A-H of lead 16A and electrodes 40I-P of lead 16B (collectively “electrodes 40”). Electrodes 40 may be ring electrodes. The configuration, type and number of electrodes 40 illustrated in FIG. 2 are merely exemplary.
Electrodes 40 are electrically coupled to a therapy delivery circuit 42 via leads 16. Therapy delivery circuit 42 may, for example, include an output pulse generator coupled to a power source such as a battery. Therapy delivery circuit 42 may deliver electrical pulses to patient 12 via at least some of electrodes 40 under the control of a processor 44.
Processor 44 controls therapy delivery circuit 42 to deliver neurostimulation therapy according to one or more selected programs. Specifically, processor 44 may control circuit 42 to deliver electrical pulses with the amplitudes and widths, and at the rates specified by the one or more selected programs. Processor 44 may also control circuit 42 to deliver the pulses via a selected subset of electrodes 40 with selected polarities, as specified by the selected programs. Where a plurality of programs are selected at a given time, processor 44 may control circuit 42 to deliver each pulse according to a different one of the selected programs. Processor 44 may include a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other discrete or integrated logic circuitry.
IMD 14 also includes a memory 46. In some embodiments, memory 46 may store programs 48 that are available to be selected by patient 12 for delivery of neurostimulation therapy. In some embodiments, processor 44 may record usage information 50, and store usage information 50 in memory 46. Memory 46 may also include program instructions that, when executed by processor 44, cause processor 44 and IMD 14 to perform the functions ascribed to IMD 14 herein. Memory 46 may include any volatile, non-volatile, fixed, removable, magnetic, optical, or electrical media, such as a RAM, ROM, NVRAM, EEPROM, flash memory, and the like.
IMD 14 also includes a telemetry circuit 52 that allows processor 44 to communicate with clinician programmer 20 and patient programmer 26. Processor 44 may receive programs to test on patient 12 from clinician programmer 20 via telemetry circuit 52 during a programming session. Where IMD 14 stores programs 48 in memory 46 for long-term use, processor 44 may receive programs 48 from clinician programmer 20 via telemetry circuit 52, e.g., at the end of a programming session, and later receive program selections made by patient 12 from patient programmer 26 via telemetry circuit 52. Where patient programmer 26 stores the programs, processor 44 may receive programs selected by patient 12 from patient programmer 26 via telemetry circuit 52.
In some embodiments, processor 44 receives patient diary information 51 entered by patient 12 using patient programmer 26 via telemetry circuit 52, and stores the diary information within memory 46. Clinician programmer 20 may retrieve usage information 50 and diary information 51 from memory 46 via telemetry circuit 52.
Further, in some embodiments, as discussed above, clinician programmer 20 stores a programming history 90 and session logs 86 within memory 46 of IMD 14, and retrieves the programming history and session logs from memory 46, via telemetry circuit 52. The clinician programmer may retrieve and update the programming history and session logs during each new session. Maintaining the programming history and session logs within memory 46 of IMD 14 may be advantageous in situations where patient 12 may visit a different clinic or, in any event, where IMD 14 may communicate with different clinician programmers 20. In such cases, the programming history 90 and session logs 86 are stored with patient 12 and available to all such clinics, clinicians, or clinician programmers.
Additionally, as illustrated in FIG. 2, IMD 14 may include one or more physiological sensors 53 that generate signals as a function of a physiological parameter of patient 12. For example, physiological sensors 53 may include accelerometers or other sensors that generate signals as a function of patient activity, gait or posture. As other examples, physiological sensors 53 may include electrodes that detect an electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), or impedance-based respiration of patient 12. Further, physiological sensors 53 may include known transducers that generate a signal based on a blood pressure or blood oxygen saturation of patient 12. Although illustrated in FIG. 2 as being located within IMD 14, sensors 53 may additionally or alternatively be coupled to IMD 14 wirelessly or via leads.
Processor 44 may store sensor information 54 in memory 46 based on the signals generated by sensors 53. The physiological parameters detected by sensors 53 and, therefore, sensor information 54, may reflect the severity or prevalence of pain, movement disorders, epilepsy, mood disorders, cardiac disorders, gastrointestinal or urinary disorders, or side-effects associated with the treatment of such symptoms or disorders. In some embodiments, processor 44 may associate sensor information 54 with the one or more of programs 48 presently used by therapy delivery circuitry 42 for delivery of therapy, e.g., delivery of neurostimulation therapy to patient 12. In this manner, sensor information 54 may advantageously provide objective information regarding the efficacy or side-effects associated with delivery of therapy according to the programs. In some embodiments, processor 44 may store sensor information 54 in association with the programs within the programming history 90, or may provide the sensor information to clinician programmer 20 for inclusion in a programming history and use in providing programming guidance.
Sensor information 54 may include raw data derived from the signals output by the sensors, averages or other statistical representations of such data, or any other metric derived from such data. For example, in embodiments in which sensors 53 include one or more accelerometers or EMG electrodes, processor 44 may periodically detect the severity of tremor, or may detect incidences where patient 12 falls or experiences immobility, e.g., a gait freeze. Processor 44 may record the information describing the severity of the tremor and times of detection, or numbers and times of falls or immobility, as sensor information 54. These examples of sensor information 54 may be particularly relevant as, for example, an objective indication of the efficacy of a movement disorder treatment program, e.g., a deep brain stimulation program.
In addition to the specific examples discussed above, sensors 53 may include any of the sensors, and sensor information 54 may include any of the activity, posture, sleep quality, or other metrics described in the following U.S. Patent Applications, each of which is incorporated herein by reference in its entirety: U.S. patent application Ser. No. 11/081,811, entitled “COLLECTING SLEEP QUALITY INFORMATION VIA A MEDICAL DEVICE,” filed on Mar. 16, 2005; U.S. patent application Ser. No. 11/081,872, entitled “COLLECTING POSTURE INFORMATION TO EVALUATE THERAPY,” filed on Mar. 16, 2005; U.S. patent application Ser. No. 11/081,786, entitled “DETECTING SLEEP,” filed on Mar. 16, 2005; U.S. patent application Ser. No. 11/081,785, entitled “COLLECTING ACTIVITY INFORMATION TO EVALUATE THERAPY,” filed on Mar. 16, 2005; U.S. patent application Ser. No. 11/081,857, entitled “COLLECTING ACTIVITY AND SLEEP QUALITY INFORMATION VIA A MEDICAL DEVICE,” filed on Mar. 16, 2005; U.S. patent application Ser. No. 11/081,155, entitled “CONTROLLING THERAPY BASED ON SLEEP QUALITY,” filed on Mar. 16, 2005; and U.S. patent application Ser. No. 11/106,051, entitled “COLLECTING POSTURE AND ACTIVITY INFORMATION TO EVALUATE THERAPY,” filed on Apr. 14, 2005.
FIG. 3 is a block diagram illustrating an example configuration of patient programmer 26. Patient 12 may interact with a processor 60 via a user interface 62 in order to control delivery of neurostimulation therapy as described herein. User interface 62 may include display 28 and keypad 30, and may also include a touch screen or peripheral pointing devices as described above. Processor 60 may also provide a graphical user interface (GUI) to facilitate interaction with patient 12, as will be described in greater detail below. Processor 60 may include a microprocessor, a controller, a DSP, an ASIC, an FPGA, discrete logic circuitry, or the like.
Patient programmer 26 also includes a memory 64. In some embodiments, memory 64 may store programs 66 that are available to be selected by patient 12 for delivery of neurostimulation therapy. In some embodiments, processor 60 may record usage information 68, and diary information 69 entered by patient 12 via user interface 62. Processor 60 stores the usage and diary information in memory 64.
Usage information 68 for a program may indicate, as examples, the number of times patient 12 selected a program, the average or median length of time that a program was used when selected, the average amount of time a program was used over a period of time, such as on a per day, week or month basis, or the total amount of time or percentage of time that the program was used since the most recent programming session. Diary information 69 may include, for example, textual or numerical comments or ratings entered by a patient for a program, which may be related to, for example, the efficacy or side-effects resulting from delivery of neurostimulation according to the program. Processor 60 may associate diary information 69 with the one or more of programs 66 to which it pertains, e.g., the programs according to which therapy was delivered when the diary information was collected. In this manner, diary 69 may provide a subjective indication of the efficacy or side effects associated with delivery of therapy according to a program.
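The usage metrics described above, e.g., total or percent time that a program was used since the most recent programming session, can be sketched as a simple aggregation over program-selection intervals. This is an illustrative helper only; the function and record names below are hypothetical and not part of the disclosure.

```python
from collections import defaultdict

def usage_summary(intervals, period_seconds):
    """Aggregate (program, start_s, end_s) selection intervals into
    per-program total-use and percent-of-period metrics, as a patient
    programmer might when compiling usage information."""
    totals = defaultdict(float)
    for program, start, end in intervals:
        totals[program] += end - start
    return {
        program: {"seconds": seconds,
                  "percent": 100.0 * seconds / period_seconds}
        for program, seconds in totals.items()
    }

# One hypothetical day of selections: "Walk" used 6 h, "Sleep" used 8 h.
day = 24 * 3600
intervals = [("Walk", 8 * 3600, 14 * 3600), ("Sleep", 0, 8 * 3600)]
summary = usage_summary(intervals, day)  # Walk -> 25%, Sleep -> ~33.3%
```

A real programmer would accumulate such intervals from selection events over days or weeks; the per-day structure here is only for illustration.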
In some embodiments, processor 60 may prompt patient 12 to enter diary information 69 for a program. Further, in some embodiments, the prompting by processor 60 may include questions intended to elicit responses from the patient relating to efficacy or side effects of the program, or other metrics of the quality of the patient's life during delivery of the program. For example, processor 60 may ask patient 12 to indicate the severity of pain, or how often patient 12 has fallen or experienced immobility, during delivery of therapy according to a program. Such information in a diary 69 may provide a subjective indication of the efficacy or side effects associated with a pain or movement disorder therapy program, e.g., a spinal cord or deep brain stimulation program.
Memory 64 may also include program instructions that, when executed by processor 60, cause patient programmer 26 to perform the functions ascribed to patient programmer 26 herein. Memory 64 may include any volatile, non-volatile, fixed, removable, magnetic, optical, or electrical media, such as a RAM, ROM, CD-ROM, hard disk, removable magnetic disk, memory cards or sticks, NVRAM, EEPROM, flash memory, and the like.
Patient programmer 26 also includes a telemetry circuit 70 that allows processor 60 to communicate with IMD 14, and input/output circuitry 72 that allows processor 60 to communicate with clinician programmer 20. Processor 60 may receive program selections made by patient 12 via user interface 62, and may transmit either the selection or the selected program to IMD 14 via telemetry circuitry 70 for delivery of neurostimulation therapy according to the selected program.
Where patient programmer 26 stores programs 66 in memory 64, processor 60 may receive programs 66 from clinician programmer 20 via input/output circuitry 72 that were selected for long-term use as a result of a programming session. Processor 60 may also provide usage information 68 and diary information 69 to clinician programmer 20 via circuitry 72. Circuitry 72 may include transceivers for wireless communication, appropriate ports for wired communication or communication via removable electrical media, or appropriate drives for communication via removable magnetic or optical media.
Further, although not illustrated in FIG. 3, in some embodiments, as described above, clinician programmer 20 may store programming history 90 and session logs 86 for patient 12 in memory 64 of patient programmer 26. Clinician programmer 20 may store, retrieve and update the programming history and session logs via input/output circuitry 72. In some embodiments in which the programming history is stored in memory 64, processor 60 may update the programming history with usage information 68 and diary information 69 periodically or as such information is collected. In other embodiments, clinician programmer 20 may retrieve usage information 68 and diary information 69 from memory 64, and update the programming history based on such information.
FIG. 4 is a block diagram illustrating an example configuration of clinician programmer 20. A clinician may interact with a processor 80 via a user interface 82 in order to program neurostimulation therapy for patient 12 as described herein. User interface 82 may include display 22 and keypad 24, and may also include a touch screen or peripheral pointing devices as described above. Processor 80 may also provide a graphical user interface (GUI) to facilitate interaction with a clinician, as will be described in greater detail below. Processor 80 may include a microprocessor, a controller, a DSP, an ASIC, an FPGA, discrete logic circuitry, or the like.
Clinician programmer 20 also includes a memory 84. Memory 84 may include program instructions that, when executed by processor 80, cause clinician programmer 20 to perform the functions ascribed to clinician programmer 20 herein. Memory 84 may include any volatile, non-volatile, fixed, removable, magnetic, optical, or electrical media, such as a RAM, ROM, CD-ROM, hard disk, removable magnetic disk, memory cards or sticks, NVRAM, EEPROM, flash memory, and the like.
A clinician may program neurostimulation therapy for patient 12 by specifying programs to test on patient 12. The clinician may interact with the GUI and user interface 82 in order to specify programs. Processor 80 transmits the selected or specified programs to IMD 14 for delivery to patient 12 via a telemetry circuit 88.
Processor 80 may maintain a session log 86 for patient 12 during programming of neurostimulation therapy for patient 12 by the clinician. Upon delivery of a selected or specified program, the clinician may receive feedback relating to the tested program from patient 12, and enter rating information relating to the tested program via the GUI and user interface 82. Processor 80 may store information identifying tested programs and associated rating information as part of session log 86. Information identifying tested programs may include the parameters for the tested programs. Processor 80 may present a listing of tested programs and associated rating information to the clinician in order to facilitate selection of programs for programming IMD 14. Session logs 86 may be stored in a volatile medium of memory 84, or may be stored within a non-volatile medium of memory 84, e.g., within a database of patient information.
Processor 80 may transmit programs created by the clinician to IMD 14 via telemetry circuitry 88, or to patient programmer 26 via input/output circuitry 92. In this manner, processor 80 may be used to control IMD 14 to deliver neurostimulation therapy for purposes of evaluating effectiveness of particular programs. I/O circuitry 92 may include transceivers for wireless communication, appropriate ports for wired communication or communication via removable electrical media, or appropriate drives for communication via removable magnetic or optical media.
Processor 80 may also maintain a programming history 90 for patient 12, which may take the form of a record of programs, e.g., combinations of therapy parameters tested during one or more prior programming sessions. During an initial programming session, processor 80 may create the programming history by storing all or selected ones of the programs within the session log 86 for that session within programming history 90. Similarly, processor 80 may include all or selected ones of the programs from the session logs 86 for follow-up programming sessions within the programming history 90 for patient 12, or may update the programming history 90 based on retesting of programs during a follow-up programming session. The programming history may include the information stored for a program in the session log 86, e.g., information describing the parameters and rating information for the program, and may include clinician comments regarding the program. The rating information may rate a program in terms of therapeutic efficacy, side effects, IMD power consumption associated with delivery of therapy according to the program, or the like.
In some embodiments, during a current programming session, processor 80 may retrieve usage information 50, 68, diary information 51, 69 and sensor information 54 from IMD 14 or patient programmer 26, and may update the record for those programs within the programming history 90 to include the usage, diary and sensor information. Processor 80 may display the programming history 90 to the clinician during the current programming session via the GUI provided by user interface 82, and the display of programming history 90 may assist the clinician in more quickly identifying desirable programs during the current programming session. Processor 80 may receive selection of a particular field within the programming history 90, e.g., rating information related to effectiveness or side effects, via user interface 82, and may order the display of the programming history 90 via the GUI according to the selected field. Further, as will be described in greater detail below, processor 80 may analyze, or otherwise use the programming history 90 to provide guidance information to a user, such as a clinician, via user interface 82. The guidance information may assist the user in more quickly identifying one or more desirable programs during the current programming session. Additionally, in embodiments in which the programming history 90 and session logs 86 for patient 12 include different information or different quantities of information, clinician programmer 20 may access historical session logs 86 as necessary during a programming session for presentation to a clinician or formulation of guidance information.
Although illustrated in FIG. 4 as stored within memory 84 that is within clinician programmer 20, programming histories 90 need not be stored within a fixed memory of the clinician programmer. Memory 84 may include removable media on which programming histories 90 may be stored, or programming histories 90 may be stored within a memory of another computing device accessible to processor 80, e.g., via a network. Further, processor 80 may store the programming histories for patient 12 within memory 46 of IMD 14 or memory 64 of patient programmer 26, and may retrieve the programming history during a current programming session for use during the programming session.
FIGS. 5-7 are conceptual diagrams illustrating an example graphical user interface (GUI) 100 that may be provided by clinician programmer 20 to allow a clinician to program neurostimulation therapy for patient 12 using a session log 86. The configuration of GUI 100 illustrated in FIGS. 5-7 is merely exemplary and is provided for purposes of illustration.
FIG. 5 illustrates a portion of GUI 100 that may be used by a clinician to specify a new program to test on patient 12. GUI 100 may, as shown in FIG. 5, include a field 110 which the clinician may use to name a new program for the session log 86. GUI 100 also includes fields 112-116, which the clinician may use to program parameter values such as pulse amplitude, pulse width and pulse rate for the new program, and a field 118, which the clinician may use to select particular electrodes 40 and assign polarities of selected electrodes 40 for the program. In particular, the clinician may select individual electrodes, e.g., with a stylus, to identify electrodes to be included in an electrode combination, and also specify polarities for the electrodes. For example, clicking once on an electrode within field 118 may specify selection of the electrode with a positive polarity, clicking twice on an electrode may specify selection of the electrode with a negative polarity, and clicking three times on an electrode may specify de-selection of the electrode and removal of the electrode from the pertinent electrode set for the program.
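The click-to-cycle electrode selection described above amounts to a three-state cycle per electrode: unselected, positive, negative, then back to unselected. The following is a minimal sketch of that behavior; the function name, the state encoding, and the electrode labels are hypothetical, not part of the disclosure.

```python
# Each click advances an electrode through: None (unselected) -> "+" -> "-" -> None.
CYCLE = {None: "+", "+": "-", "-": None}

def click_electrode(selection, electrode):
    """Advance one electrode's polarity state in response to a click in field 118.

    selection: dict mapping electrode label -> "+" / "-" / None
    Returns a new dict; the caller's state is left untouched.
    """
    selection = dict(selection)
    selection[electrode] = CYCLE[selection.get(electrode)]
    return selection

# One click selects with positive polarity, a second flips to negative,
# a third deselects the electrode and removes it from the electrode set.
state = {}
state = click_electrode(state, "E3")  # {"E3": "+"}
state = click_electrode(state, "E3")  # {"E3": "-"}
state = click_electrode(state, "E3")  # {"E3": None}
```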
FIG. 6 illustrates a portion of GUI 100 that may be used by a clinician to enter rating information for a program tested on patient 12. Rating information may include information relating to the degree of effectiveness of the tested program in treating symptoms of patient 12 and the degree and/or types of side effects experienced by patient 12 due to the delivery of neurostimulation therapy according to the program. Effectiveness of a program may encompass both the coverage area provided by the program and degree of symptom relief. As an illustration, for spinal cord stimulation, symptom relief may be expressed in terms of overall pain relief on a numeric scale. Rating information may also, for example, include information relating to the performance of IMD 14 during delivery of neurostimulation according to the program.
Rating information may include information relating to at least one metric for rating the program, and may, as illustrated in FIG. 6, include numerical values. For example, as shown in FIG. 6, the clinician is prompted to enter a numerical rating for the effectiveness of the tested program using field 120. Multiple metrics may be used. For example, the clinician may provide a rating for the severity of side effects in general, for specific side effects, or for more particular measures of the effectiveness of a particular type of therapy. For example, different metrics may be applicable to pain, movement disorders, incontinence, sexual dysfunction, and gastrointestinal disorders. The clinician may select which of these types of metrics are to be used to evaluate tested programs.
Field 120 is merely exemplary, and numerical values for metrics may be entered using any type of field, such as a text box, drop-down menu, slider-bar, or the like. Moreover, rating information is not limited to numerical values, and may also, for example, include percentages or graphical or textual descriptions of the effectiveness, side-effects, and the like. An example of a graphic description is selection of one of a series of facial expressions representing a range between poor and good effectiveness, similar to pain scales used in many clinics. The clinician may use fields 122-126 to identify the location of the effectiveness of the tested program as reported by patient 12, and this location information may be used as a name for the tested program within session log 86.
Further, rating information can include a visual analog scale (VAS) rating for the program, entered by the clinician or patient 12 by, for example, moving a slider bar along a visual analog scale from 1 to 100 as provided by GUI 100. In some embodiments, GUI 100 may provide an outline or image of a human body, and the clinician or patient may indicate areas of pain, and areas of paresthesia for each program, on the body image. The paresthesia map and/or a calculation of overlap between the indicated pain and paresthesia regions may be stored as rating information.
FIG. 7 illustrates a portion of GUI 100 that may be used by clinician programmer 20 to present a list 130 of the programs identified within session log 86 and associated rating information. As shown in FIG. 7, list 130 may be ordered according to the rating information. In embodiments where more than one metric is used to rate programs, list 130 may be ordered according to a metric selected by the clinician, or an overall rating may be calculated based on a number of metrics, and the list may be ordered according to the overall rating. For an overall rating, weighting factors, which may be selected by the clinician, may be applied to the metrics.
Ordering of list 130 according to rating information may facilitate comparison of the programs and quick program selection by the clinician. The clinician may select a program from list 130 for inclusion in a parameter set based on the rating information. List 130 may also facilitate retransmission of multiple programs from list 130 to IMD 14 for side-by-side comparison, e.g., if multiple programs directed toward a particular symptom are closely rated. In such embodiments, clinician programmer 20 may prompt the clinician to add one of the compared programs to a parameter set, or remove one of the compared programs. In some embodiments, clinician programmer 20 may automatically select programs from session log 86 for inclusion in a parameter set based on the rating information.
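The weighted overall rating and ordering described above can be sketched as follows. The program records, metric names, weights, and the convention that higher side-effect scores mean fewer side effects are all illustrative assumptions, not part of the disclosure.

```python
def overall_rating(program, weights):
    """Weighted combination of per-metric ratings for one tested program."""
    total_weight = sum(weights.values())
    return sum(weights[m] * program["ratings"][m] for m in weights) / total_weight

def order_session_log(programs, weights):
    """Return programs sorted best-first by weighted overall rating,
    as list 130 might be ordered when multiple metrics are in use."""
    return sorted(programs, key=lambda p: overall_rating(p, weights), reverse=True)

programs = [
    {"name": "Prog A", "ratings": {"effectiveness": 4.0, "side_effects": 2.0}},
    {"name": "Prog B", "ratings": {"effectiveness": 3.5, "side_effects": 4.5}},
]
# Clinician-selected weights: effectiveness counts twice as much as
# freedom from side effects (higher side_effects score = fewer side effects).
weights = {"effectiveness": 2.0, "side_effects": 1.0}
ranked = order_session_log(programs, weights)  # Prog B first
```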
FIG. 8 is a conceptual diagram illustrating another example graphical user interface (GUI) 134 that may be provided by a clinician programmer 20 to allow a clinician to program neurostimulation therapy using a session log. Like GUI 100 described above with reference to FIGS. 5-7, GUI 134 provides a field 135 that a clinician may use to select electrodes for a program to be tested on patient 12, as well as fields 136 that the clinician may use to select other parameters for the program, such as pulse rate, pulse width and amplitude. In the illustrated example, field 136 includes arrow-buttons that the clinician or patient may use to change the amplitude for a program. GUI 134 also provides buttons 137 and 138 that the clinician or patient may use to enter rating information and side effect information. Selection of buttons 137 and 138 may cause text boxes or drop down menus, as examples, to appear, which the clinician or patient may use to enter or select rating information or side effects.
In some embodiments, the user may use the amplitude buttons of field 136 to increase amplitude for the program until beneficial or adverse effects, e.g., side effects, are first detected. The clinician or user may then select one of buttons 137 or 138 to enter rating or side effect information. The user may continue to increase amplitude, and select one of buttons 137 and 138 whenever the rating or side effects increase or otherwise change. When the user reaches an amplitude at which the negative effects of the stimulation are intolerable by patient 12, or the amplitude cannot be further increased, the user may select button 139. In response to selection of button 139, clinician programmer 20 may cause the program, including amplitude, rating and side effect information for the program, to be stored in session log 86.
In some embodiments, programmer 20 may store all amplitudes tested for a program in session log 86. In other embodiments, programmer 20 may store only the amplitudes at which efficacy or side effect rating information was entered or changed by the user, along with the rating information. Programmer 20 may store such information in session logs as ranges of amplitudes associated with a particular efficacy rating or side effect. For example, for a program stored in a session log, a range of 2.5-4.0 Volts may be associated with an efficacy rating of 4.5 on a numeric scale between 0.0 and 5.0, and a range of 3.5-5.0 Volts may be associated with a particular side effect, such as discomfort or speech difficulty. Programmer 20 may further reduce the amount of data stored within session log 86 for a particular program by, for example, only storing the amplitude at which the effects of the stimulation are first perceived by patient 12, the amplitude at which the best efficacy from the stimulation is experienced by patient 12, and the amplitude at which the side effects associated with stimulation are no longer tolerable.
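The data reduction described above, storing amplitude ranges per rating rather than every amplitude tested, can be sketched as a run-length compression over (amplitude, rating) samples. The function name and tuple layout are illustrative assumptions only.

```python
def compress_amplitude_log(samples):
    """Collapse an ordered sequence of (amplitude, rating) samples into
    (low, high, rating) ranges, keeping one entry per run of a constant
    rating instead of every amplitude tested."""
    ranges = []
    for amplitude, rating in samples:
        if ranges and ranges[-1][2] == rating:
            low, _, r = ranges[-1]
            ranges[-1] = (low, amplitude, r)  # extend the current range
        else:
            ranges.append((amplitude, amplitude, rating))  # start a new range
    return ranges

# Hypothetical sweep: efficacy rating 4.5 from 2.5-4.0 V, then 2.0 above that.
samples = [(2.5, 4.5), (3.0, 4.5), (3.5, 4.5), (4.0, 4.5), (4.5, 2.0), (5.0, 2.0)]
compressed = compress_amplitude_log(samples)
# -> [(2.5, 4.0, 4.5), (4.5, 5.0, 2.0)]
```

Six samples reduce to two range records, which matters most when the log is stored in the limited memory of IMD 14.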
In some embodiments, clinician programmer 20 may store an indication of a “therapeutic range,” e.g., the range from the amplitude of first effect perception to the amplitude at which side effects are intolerable, for each program in session log 86. The clinician programmer may store the therapeutic range as a range between the two end points, or as a value equal to the difference between the end points of the range. The therapeutic range may be a particularly useful indicator of the overall effectiveness of a program, because it indicates the extent to which patient 12 will be able to adjust amplitude to address changing symptom intensity, side effects, or therapy accommodation, without requiring selection of a new program, or an additional in-clinic programming session.
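As a concrete sketch of the therapeutic range just described, both representations, the two end points and their difference, can be derived from the same pair of amplitudes. The function and field names are hypothetical.

```python
def therapeutic_range(first_effect_v, intolerable_v):
    """Therapeutic range for a program, given the amplitude (in volts) at
    which effects are first perceived and the amplitude at which side
    effects become intolerable. Returns both end points and the width."""
    if intolerable_v < first_effect_v:
        raise ValueError("intolerable amplitude must be >= first-effect amplitude")
    return {
        "low_v": first_effect_v,
        "high_v": intolerable_v,
        "width_v": intolerable_v - first_effect_v,  # single-value form
    }

rng = therapeutic_range(1.5, 4.0)  # width_v = 2.5
```

A wider `width_v` suggests more headroom for the patient to self-adjust amplitude without a new program or in-clinic session.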
In some embodiments, to further reduce the amount of data stored within session log 86, programmer 20 may only store the therapeutic range for a program, rather than all amplitude points, or some larger subset of the amplitude points. Further, in addition to or instead of session logs 86, the clinician programmer may apply this or any of the other data reduction techniques discussed above to programs stored in programming history 90. Reducing the amount of data stored for each program in one or both of the session logs and programming history may be particularly beneficial in embodiments in which the clinician programmer 20 stores such data in IMD 14, which may include less memory than an external programmer.
FIG. 9 is a flowchart illustrating a method that may be employed by clinician programmer 20 to allow a clinician to program neurostimulation therapy using session log 86. Clinician programmer 20 receives a program to test that is specified by the clinician (140), and transmits the program to IMD 14 to control delivery of neurostimulation therapy according to the program (142). The clinician receives feedback from patient 12, and records rating information, which may include information related to efficacy and/or side effects, as described above (144). Clinician programmer 20 displays a list 130 of programs and rating information from session log 86 (146), which may be ordered according to the rating information, and may update the list after each new program is tested (148). When the clinician has completed testing programs, clinician programmer 20 may receive selections from list 130 for creation of parameter sets (150).
FIG. 10 is a flow diagram illustrating an example method for recording rating information for a session log during programming of neurostimulation therapy. In particular, FIG. 10 illustrates an example method in which particular stimulation amplitudes are identified, rating information is collected at the amplitudes, and a therapeutic range is identified. Although described with reference to amplitude for neurostimulation, the method of FIG. 10, including the recording of rating information at particular points and the identification of a therapeutic range, may be applicable to other therapies. For example, rating information may be recorded at particular points and a therapeutic range may be similarly identified during increases in the titration rate of a drug therapy.
According to the illustrated example method, clinician programmer 20 directs IMD 14 to increase the amplitude for a program as directed by a user, e.g., a clinician or patient 12 (152). The user may direct programmer 20 to increase the amplitude via one of GUIs 100 and 134 as described above. The user may increase the amplitude until the user first desires to record rating information for the program, e.g., until a beneficial effect resulting from delivery of stimulation according to the program is first perceived (153). The clinician programmer 20 may then record any of the types of rating information discussed above for the program, e.g., a numeric efficacy rating, based on input from the user (154).
The clinician programmer 20 may continue to increase amplitude (152), and record beneficial and side effect rating information (154) as directed by the user, until the user indicates that the stimulation has reached an amplitude such that the side effects are no longer tolerable by patient 12 (155). As discussed above, the user may direct the clinician programmer to record rating information whenever it changes, e.g. when the efficacy or side effects increase or decrease, or a new side effect is perceived. Clinician programmer 20 may also determine the therapeutic range for the program, e.g., based on the amplitudes at which beneficial effects are first perceived and at which stimulation is no longer tolerable (156).
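The range determination at (156) — record the amplitude at which a beneficial effect is first perceived and the amplitude at which side effects become intolerable, then derive the therapeutic range from the two — could be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and event labels are assumptions.

```python
def therapeutic_range(events):
    """Derive a therapeutic range from (amplitude, event) pairs recorded
    while stimulation amplitude is increased.

    The lower bound is the amplitude at which a beneficial effect was
    first perceived; the upper bound is the amplitude at which side
    effects were no longer tolerable.
    """
    lower = next(amp for amp, event in events if event == "benefit_perceived")
    upper = next(amp for amp, event in events if event == "intolerable")
    return lower, upper

# Example titration: benefit first felt at 1.0 mA, intolerable at 4.0 mA.
events = [(1.0, "benefit_perceived"), (2.5, "side_effect"), (4.0, "intolerable")]
print(therapeutic_range(events))  # (1.0, 4.0)
```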
FIG. 11A is a conceptual diagram illustrating display of an example stored programming history 90 by GUI 100 of clinician programmer 20. In particular, FIG. 11A illustrates display of the programming history 90 in the form of a list 160A of programs tested on patient 12 across one or more prior programming sessions. In the illustrated embodiment, list 160A displays information stored as part of the programming history 90, which includes a date tested, electrode configuration, pulse parameters, effectiveness related rating information, and side effect related rating information for each program.
Programming history 90 and list 160A further include an indication of whether patient 12 was sent home with the program at the end of the session in which it was tested, any usage information 50, 68 collected by IMD 14 or patient programmer 26 for each program, and any comments entered by the clinician for each program. Usage information may, as indicated in FIG. 11A, include a percent or amount of time that the program was used, and may also indicate what times of day or timeframe within the period since the last programming session that the program was most frequently used. Although not illustrated in FIG. 11A, programming history 90 and list 160A may include patient diary information, which may be in the form of textual comments regarding efficacy, side-effects, or use of a program, as well as other information regarding the quality of life experienced by patient 12 during use of the program, entered by a user using patient programmer 26. Further, programming history 90 and list 160A may include sensor information 54, such as various metrics of patient activity, as discussed above.
FIG. 11B is a conceptual diagram illustrating display of another example stored programming history 90 by GUI 134 in the form of a list 160B. In general, the programming history of FIG. 11B and list 160B include substantially similar information to the programming history and list 160A of FIG. 11A. List 160B includes different electrode configuration representations and side effects than list 160A, and also includes a therapy range for each program. While both of lists 160A and 160B may represent relevant programming history information for any type of neurostimulation, the electrode configurations and side effects in list 160A are generally associated with delivery of spinal cord stimulation to treat pain. The electrode configurations and side effects in list 160B, on the other hand, are generally associated with delivery of deep brain stimulation via one or more independent, hemispherical leads to treat, for example, a movement disorder.
FIG. 12 is a flow diagram illustrating a method that may be employed by clinician programmer 20 to generate and update programming history 90 for patient 12. Clinician programmer 20 and, more particularly, processor 80 of clinician programmer 20 searches memory 84 to determine whether a programming history 90 has previously been created for patient 12 (170). If patient 12 is a new patient, clinician programmer 20 creates a new programming history 90 for patient 12 (172). Creation of the programming history 90 may occur at the beginning, end, or any time during a programming session, which will generally be the initial programming session after implant of IMD 14 within patient 12.
As described above, clinician programmer 20 maintains a session log 86 for the current programming session that includes information describing the parameters, e.g., electrode configuration and pulse parameters, and rating information for each tested program. Clinician programmer 20 may create or update the programming history 90 by replicating the information included in the session log 86 to the programming history 90. Clinician programmer 20 may replicate records from the session log 86 to the programming history 90 automatically, based on individual selection by the clinician of programs, or based on some user-configurable preference, such as “save all,” “save none,” “rating>X,” “rating<Y.” As another example, as described above, clinician programmer 20 may replicate only the records for programs selected for long term use in IMD 14 during a programming session from the session log to the programming history. The records relating to such programs may provide the most relevant information relating to the most efficacious programs to a clinician for review during future programming sessions.
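The replication preferences described above (“save all,” “save none,” “rating>X,” “rating<Y”) amount to a simple filter over session-log records. A hypothetical sketch — the record fields and function name are assumptions, not the patent's implementation:

```python
def replicate(session_log, preference):
    """Select session-log records to copy into the programming history
    according to a user-configurable preference string."""
    if preference == "save all":
        return list(session_log)
    if preference == "save none":
        return []
    if preference.startswith("rating>"):
        threshold = float(preference.split(">")[1])
        return [rec for rec in session_log if rec["rating"] > threshold]
    if preference.startswith("rating<"):
        threshold = float(preference.split("<")[1])
        return [rec for rec in session_log if rec["rating"] < threshold]
    raise ValueError(f"unknown preference: {preference}")

log = [{"program": "P1", "rating": 3}, {"program": "P2", "rating": 8}]
print(replicate(log, "rating>5"))  # [{'program': 'P2', 'rating': 8}]
```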
If a programming history 90 was already created for patient 12 during a prior programming session, clinician programmer 20 may initially interrogate IMD 14 and patient programmer 26 for usage information 50, 68, diary information 51, 69, and sensor information 54, and may update the usage, diary and sensor information for one or more of the programs stored in the programming history 90 (176). As discussed above, in embodiments in which the programming history is stored in the IMD or patient programmer, the IMD or patient programmer may update the programming history with usage, diary or sensor information prior to transmitting the programming history to clinician programmer 20. Clinician programmer 20 may also display programming history 90 to the clinician to aid in the selection and testing of programs during the current programming session, e.g., display list 160 via GUI 100 (178). In some embodiments, clinician programmer 20 may receive a selection of one of the fields within the programming history 90 (180), e.g., effectiveness or side effects rating information, and may order list 160 according to the selected field (182). Ordering list 160 in this manner may allow the clinician to more easily identify relevant information about previously tested programs. Where a previously tested program is retested during the current programming session, clinician programmer 20 may update programming history 90 with newly collected information, e.g., rating information, for the retested program.
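Ordering list 160 according to a clinician-selected field, as described above, is a straightforward sort. A minimal sketch with hypothetical record fields:

```python
# Hypothetical programming-history records; field names are illustrative.
records = [
    {"program": "A", "efficacy": 6, "side_effects": 2},
    {"program": "B", "efficacy": 9, "side_effects": 4},
    {"program": "C", "efficacy": 4, "side_effects": 1},
]

# Order the displayed list by the field the clinician selected, best first.
selected_field = "efficacy"
ordered = sorted(records, key=lambda rec: rec[selected_field], reverse=True)
print([rec["program"] for rec in ordered])  # ['B', 'A', 'C']
```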
FIG. 13 is a conceptual diagram illustrating display of guidance information by example GUI 100 of clinician programmer 20 based on comparison of program parameters to a stored programming history 90 by clinician programmer 20. The illustrated portion of GUI 100 includes a parameter entry portion 190, which may correspond to the parameter entry portion of GUI 100 described with reference to FIG. 5. The illustrated portion of GUI 100 also includes a guidance information alert box 192.
Alert box 192 may be displayed by GUI 100 when, based on an analysis of the programming history 90, clinician programmer 20 identifies relevant guidance information that should be brought to the clinician's attention. In the illustrated example, alert box 192 indicates that a new program fully or partially entered by the clinician matches or is similar to a previously tested program within the programming history 90. Clinician programmer 20 uses alert box 192 to bring the previously tested program, and its associated rating and usage information stored within the programming history, to the attention of the clinician.
FIG. 14 is a flow diagram illustrating a method that may be employed by clinician programmer 20 to display guidance information based on comparison of program parameters to information stored within a programming history 90. Clinician programmer 20 receives a complete program or one or more parameters entered by the clinician via user interface 82, e.g., via parameter entry portion 190 of GUI 100, when the clinician attempts to create a new program for testing (200). Clinician programmer 20 compares the one or more parameters to information stored within the programming history 90, e.g., the parameters for previously tested programs stored in the programming history 90 (202). Clinician programmer 20 identifies previously tested programs within the programming history 90 that are the same as or similar to the new program (204), and may bring the record of such programs within the programming history to the user's attention, e.g., display a message within alert box 192, as guidance information (206).
The clinician's decision of whether to proceed to test the program being entered may be informed by the results, e.g., rating and usage information, from when the same or similar programs were previously tested. Further, when clinician programmer 20 identifies same or similar programs within the programming history 90 based on entry of only a portion of the parameters of a complete program, clinician programmer 20 may provide to the clinician the parameters that would recreate one of the programs identified in the programming history based on the comparison. In this manner, clinician programmer 20 may act as a program generation “wizard,” allowing the clinician to decide whether to test the automatically completed program, or to manually complete the program with different parameter values.
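The comparison at (202)-(204) — matching a fully or partially entered program against previously tested programs — could look like the following sketch. The record layout and function name are assumptions; a real programmer would likely also score near matches rather than require exact equality.

```python
def matching_programs(history, entered):
    """Return previously tested programs whose stored parameters agree
    with every parameter the clinician has entered so far.

    `entered` may be a partial program, so this also supports the
    "wizard" behavior: any returned record supplies values for the
    parameters the clinician has not yet filled in.
    """
    return [record for record in history
            if all(record["params"].get(name) == value
                   for name, value in entered.items())]

history = [
    {"params": {"electrode": 0, "amplitude": 2.0}, "efficacy": 7},
    {"params": {"electrode": 1, "amplitude": 3.5}, "efficacy": 4},
]
# Partial entry: the clinician has only chosen electrode 0 so far.
print(matching_programs(history, {"electrode": 0}))
```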
As another example, during a previous programming session, or during use by the patient outside of the clinic, a program, group of programs, or parameter value may have proven to be so ineffective or to have such undesirable side effects as to be “blacklisted” in a session log 86 and, consequently, within programming history 90. Blacklisting of programs or parameter values may be done automatically by clinician programmer 20 based on rating or usage information, or manually by the clinician. As an example, a particular electrode 40 on a lead 16 may have resulted in relatively severe side effects at relatively low amplitudes. In such cases, the electrode may be blacklisted, and clinician programmer may provide a warning when an electrode configuration including the electrode is selected.
Clinician programmer 20 may provide, for example, a visual indication such as highlighting or a text message within list 160 to indicate that the program or parameter value is blacklisted, and may also present such an indication during an attempt to create a program with the same or similar parameters during a current programming session. In some embodiments, clinician programmer 20 may “lock-out,” e.g., prevent creation of programs with the same or similar parameter values as blacklisted parameter values, e.g., with an electrode configuration including a blacklisted electrode or adjacent electrodes, or the same or similar parameter values as a blacklisted program. As an example of blacklisting of a parameter value, a particular electrode 40 may be blacklisted due to undesirable side effects if, for example, it is located over a nerve root. Further, where a set of similar programs are blacklisted, clinician programmer 20 or clinician may determine that a particular value or range of values for one or more individual parameters should be blacklisted.
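The lock-out behavior described above reduces to testing a candidate program against blacklisted electrodes and blacklisted parameter sets. A hypothetical sketch — the data shapes and function name are assumptions for illustration:

```python
def is_locked_out(program, blacklisted_electrodes, blacklisted_param_sets):
    """Return True if the candidate program uses a blacklisted electrode
    or duplicates a blacklisted program's parameter values."""
    if any(e in blacklisted_electrodes for e in program["electrodes"]):
        return True
    return program["params"] in blacklisted_param_sets

# Electrode 3 was blacklisted, e.g., for severe side effects at low amplitude.
candidate = {"electrodes": [2, 3], "params": {"amplitude": 5.0}}
print(is_locked_out(candidate, blacklisted_electrodes={3},
                    blacklisted_param_sets=[]))  # True
```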
FIG. 15 is a conceptual diagram illustrating display of guidance information by example GUI 100 of clinician programmer 20 based on analysis of a stored programming history 90 by clinician programmer 20. In the illustrated portion, GUI 100 presents a representation 212 of electrode set 16, and a variety of guidance information boxes 210A-E (collectively “guidance information boxes 210”) through which clinician programmer presents guidance information to the clinician.
In particular, boxes 210A, B and E present the result of statistical or pattern analysis of programming history 90 to identify correlations between parameter values and rating information. Boxes 210A and B indicate that one or more electrodes are correlated with a particular side effect and high efficacy scores, respectively. Box 210E indicates that parameter values above an identified threshold are associated with a side effect. Box 210C indicates a result of analysis of programming history 90 to identify under-tested parameter values, and specifically identifies electrodes that have been under-utilized. Box 210D indicates that an electrode has been blacklisted.
Although advantageous for any type of neurostimulation, the type of electrode-specific information represented by boxes 210A-D may be particularly advantageous in embodiments in which deep brain stimulation is delivered to patient 12. In general, for deep brain stimulation, the correlation between electrodes and particular positive or negative effects is clear and can vary significantly from electrode to electrode. The increased correlation in the case of deep brain stimulation is based primarily on the different locations of the electrodes, which may be located in regions or structures of the brain that result in very different types of side effects or degrees of efficacy.
FIG. 16 is a flow diagram illustrating a method that may be employed by clinician programmer 20 to display guidance information based on an analysis of a stored programming history 90. According to the method, clinician programmer 20 analyzes the programming history (220), and provides guidance information to the clinician based on the analysis (222). For example, clinician programmer 20 may identify parameter values or ranges of parameter values that have not yet been tested or have not been frequently tested on patient 12, and can indicate these values or ranges to the clinician as illustrated in FIG. 15. The clinician may then choose to test programs that include under-tested parameter values or parameter value ranges.
Further, the clinician programmer 20 may perform a statistical or pattern matching analysis to correlate a parameter value or range of parameter values with rating information or usage information, e.g., an effectiveness or overall score, a particular side effect, or the amount of out-of-clinic use, and may provide guidance information to a user based on the results of the analysis. For example, clinician programmer 20 may, as illustrated by boxes 210 of FIG. 15, indicate that particular parameter values or ranges have proven effective, or have proven to be correlated with a particular type of side effect or severity of side effects. In some embodiments, clinician programmer 20 may combine the identification of underutilized parameter values and such correlations to suggest untested programs, e.g., combinations of parameters, that may provide desirable efficacy and side effects as indicated by the correlations. Further, in embodiments in which clinician programmer 20 directs or suggests testing of parameter combinations according to a protocol, clinician programmer 20 may modify the protocol based on the correlations between parameter values or ranges and effectiveness or side effects to, for example, skip or add programs or parameter values.
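The statistical analysis described above can be as simple as computing, per parameter, the correlation between tested values and a rating score across the programming history. A minimal sketch using the Pearson coefficient — the field names are assumptions, and the patent does not specify any particular statistic:

```python
def parameter_rating_correlation(history, param, rating_key):
    """Pearson correlation between a parameter's tested values and a
    rating score across the programming history records."""
    xs = [rec["params"][param] for rec in history]
    ys = [rec[rating_key] for rec in history]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Toy history in which efficacy rises linearly with amplitude.
history = [{"params": {"amplitude": a}, "efficacy": 2 * a} for a in (1, 2, 3, 4)]
print(parameter_rating_correlation(history, "amplitude", "efficacy"))
```

A correlation near +1 for an efficacy score (or near a side-effect score) is the kind of pattern boxes 210A, B and E surface to the clinician.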
FIG. 17 is a flow diagram illustrating a method that may be employed by clinician programmer 20 to display guidance information based on a comparison of currently collected rating information for a program to previously collected rating information for the program that is stored within a programming history. Clinician programmer 20 receives a selection of a previously tested program from the programming history 90 (230), and directs IMD 14 to retest the program (232). Clinician programmer 20 collects rating information based on the retesting of the program (234), and compares the currently collected rating information to rating information previously collected for the program that is stored in the programming history 90 (236). Clinician programmer 20 provides guidance information to the clinician based on the comparison (238). For example, if clinician programmer 20 identifies a significant change in the rating information over time, clinician programmer 20 may alert the clinician of the possibility of, for example, symptom or disease progression, or lead failure or movement. Additionally or alternatively, clinician programmer 20 may present trend charts or diagrams of rating information for one or more programs over time, which the clinician may use to detect, for example, symptom or disease progression, or lead failure or movement.
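The comparison at (236)-(238) could be sketched as a simple drift check between the rating just collected on retest and the rating stored in the programming history. The threshold, function name, and message are assumptions for illustration, not the patent's implementation:

```python
def rating_drift_alert(current_rating, stored_rating, threshold=2):
    """Return a guidance message if the retest rating differs from the
    stored rating by at least `threshold`; otherwise return None."""
    delta = current_rating - stored_rating
    if abs(delta) >= threshold:
        return (f"Rating changed by {delta:+d} since last test: "
                "consider symptom/disease progression or lead failure/movement.")
    return None

print(rating_drift_alert(3, 8))  # flagged: efficacy dropped by 5 on retest
print(rating_drift_alert(7, 8))  # None: within normal variation
```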
FIGS. 18-20 are conceptual diagrams illustrating examples of graphical guidance information that may be provided to a clinician by a clinician programmer based on a stored programming history. For example, FIG. 18 illustrates a graph 240 that depicts the therapeutic ranges for various programs in a side-by-side manner. The programs are identified in the figure by their electrode configurations on the x-axis. For example, the configuration “0−” indicates that an electrode at a “0” position on a lead is a cathode, while an indifferent electrode, e.g., the housing of IMD 14, is an anode. With the therapeutic ranges illustrated in this manner, a clinician may be able to more readily identify a desirable program to select for long-term programming of INS.
As other examples, FIG. 19 illustrates a graph 250 that places programs in a region on a two-dimensional therapeutic and side effect scale, while FIG. 20 includes bar graphs for each of a plurality of programs illustrating the magnitude of both therapeutic and side effects as a function of stimulation amplitude. In FIG. 19, the size of the region for each program illustrates the size of the therapeutic range for the program.
Various embodiments of the invention have been described. However, one skilled in the art will appreciate that various modifications may be made to these embodiments without departing from the scope of the invention. For example, although described herein in the context of implantable stimulators, the invention may be practiced in relation to programming of medical devices that are not implanted or are not stimulators.
Further, although guidance information has generally been described herein as being provided by a clinician programmer based on information stored in a programming history, the invention is not so limited. In some embodiments, the clinician programmer may additionally display historical session logs, or generate guidance information based on the historical session logs, alone or in combination with the programming history. Such embodiments may allow a clinician to receive guidance based on all or a larger subset of programs tested on a patient when a smaller subset of information is stored in the programming history, e.g., when only information relating to programs sent home with the patient are stored in the programming history.
storing within the implantable medical device information identifying the programs and rating information associated with the programs for each of the plurality of programs as a session log separate from the programming history, wherein the session log is associated with the programming session.
updating the programming history stored in the implantable medical device to include the rating information.
3. The method of claim 1, wherein storing rating information associated with the programs in a programming history comprises storing a subset of the rating information associated with each of the programs in the programming history within the implantable medical device.
4. The method of claim 3, wherein storing a subset of the rating information comprises storing a therapeutic range for each of the programs, wherein the therapeutic range for a program is determined during the programming session, and based on an amplitude of first effect perception and an amplitude at which side effects are not tolerated.
5. The method of claim 3, wherein storing a subset of the rating information comprises storing information identifying amplitudes at which rating information changed for each of the programs.
storing the physiological sensor information in association with the one of the selected subset of programs in the programming history within the implantable medical device.
7. The method of claim 1, wherein controlling an implantable medical device to deliver therapy comprises controlling the implantable medical device to deliver one of spinal cord stimulation or deep brain stimulation.
providing guidance information to a user based on the analysis to guide the selection of therapy programs during the subsequent programming session.
wherein providing guidance information comprises providing information relating to the individual electrode.
10. The method of claim 8, wherein providing guidance information comprises displaying a graphical representation of the programming history that includes efficacy and side effect information for a plurality of the programs.
11. The method of claim 8, wherein providing guidance information comprises displaying a graphical representation of the programming history that includes a therapeutic range for each of a plurality of the programs.
12. The method of claim 1, wherein the session log includes information identifying the programs and rating information associated with one or more programs tested during the programming session that are not part of the selected subset of the programs.
providing guidance information to a user based on the programming history and the session log to guide the selection of therapy programs during the subsequent programming session.
14. The method of claim 1, wherein the programming session is a first programming session, the plurality of programs is a first set of programs, and the session log is a first session log, and wherein the method further comprises storing information identifying a second set of programs and rating information associated with the second set of programs within the implantable medical device as a second session log separate from the programming history, wherein the second session log is associated with the second programming session.
15. The method of claim 1, wherein the selected subset of the programs includes at least two different programs.
16. The method of claim 1, wherein the programming session is a part of a plurality of programming sessions, wherein the session log is part of a plurality of session logs, and wherein each session log is associated with a respective programming session.
wherein the implantable medical device stores the information identifying the programs and rating information associated with the programs for each of the plurality of programs as a session log separate from the programming history, wherein the session log is associated with the programming session.
wherein the implantable medical device stores the updated programming history.
19. The system of claim 17, wherein the programming device provides a subset of the recorded rating information associated with each of the programs to the implantable medical device, and the implantable medical device stores the subset of the rating information in the programming history.
20. The system of claim 19, wherein the subset of the rating information comprises a therapeutic range for each of the programs, wherein the therapeutic range for a program is determined during the programming session, and based on an amplitude of first effect perception and an amplitude at which side effects are not tolerated.
21. The system of claim 19, wherein the subset of the rating information comprises information identifying amplitudes at which rating information changed for each of the programs.
22. The system of claim 17, wherein the implantable medical device records physiological sensor information during delivery of therapy according to one of the selected subset of programs within the implantable medical device, and stores the sensor information in association with the one of the selected subset of programs in the programming history.
23. The system of claim 17, wherein the implantable medical device delivers one of spinal cord stimulation or deep brain stimulation.
24. The system of claim 17, wherein the programming device retrieves the programming history from the implantable medical device during a subsequent programming session, analyzes the retrieved programming history, and provides guidance information to a user based on the analysis to guide the selection of therapy programs during the subsequent programming session.
wherein the programming device associates information from the programming history with at least one individual electrode from the electrode set, and provides information relating to the individual electrode.
26. The system of claim 24, wherein the programming device displays a graphical representation of the programming history that includes efficacy and side effect information for a plurality of the programs.
27. The system of claim 24, wherein the programming device displays a graphical representation of the programming history that includes a therapeutic range for each of a plurality of the programs.
store within the implantable medical device information identifying the programs and rating information associated with the programs for each of the plurality of programs as a session log separate from the programming history, wherein the session log is associated with the programming session.
29. The non-transitory computer-readable medium of claim 28, wherein the instructions that cause a programmable processor to store rating information associated with the programs in a programming history comprise instructions that cause a programmable processor to store a subset of the rating information associated with each of the programs in the programming history within the implantable medical device.
30. The non-transitory computer-readable medium of claim 29, wherein the instructions that cause a programmable processor to store a subset of the rating information comprise instructions that cause a programmable processor to store a therapeutic range for each of the programs, wherein the therapeutic range for a program is determined during the programming session, and based on an amplitude of first effect perception and an amplitude at which side effects are not tolerated.
store the physiological sensor information in association with the one of the selected subset of programs in the programming history within the implantable medical device.
provide guidance information to a user based on the analysis to guide the selection of therapy programs during the subsequent programming session.
Stonehenge has been a legally protected Scheduled Ancient Monument since 1882 when legislation to protect historic monuments was first successfully introduced in Britain. The site and its surroundings were added to UNESCO‘s list of World Heritage Sites in 1986. Stonehenge is owned by the Crown and managed by English Heritage; the surrounding land is owned by the National Trust.
Stonehenge could have been a burial ground from its earliest beginnings.
Deposits containing human bone date from as early as 3000 BC, when the ditch and bank were first dug, and continued for at least another five hundred years.
The Oxford English Dictionary cites Ælfric‘s tenth-century glossary, in which henge-cliff is given the meaning “precipice”, or stone, thus the stanenges or Stanheng “not far from Salisbury” recorded by eleventh-century writers are “supported stones”.
Christopher Chippindale‘s Stonehenge Complete gives the derivation of the name Stonehenge as coming from the Old English words stān meaning “stone”, and either hencg meaning “hinge” (because the stone lintels hinge on the upright stones) or hen(c)en meaning “hang” or “gallows” or “instrument of torture” (though elsewhere in his book, Chippindale cites the “suspended stones” etymology). Like Stonehenge’s trilithons, medieval gallows consisted of two uprights with a lintel joining them, rather than the inverted L-shape more familiar today.
The “henge” portion has given its name to a class of monuments known as henges. Archaeologists define henges as earthworks consisting of a circular banked enclosure with an internal ditch. As often happens in archaeological terminology, this is a holdover from antiquarian use, and Stonehenge is not truly a henge site as its bank is inside its ditch. Despite being contemporary with true Neolithic henges and stone circles, Stonehenge is in many ways atypical—for example, at more than 7.3 metres (24 ft) tall, its extant trilithons supporting lintels held in place with mortise and tenon joints, make it unique.
Stonehenge was a place of burial from its beginning to its zenith in the mid-third millennium BC. The cremation burial dating to Stonehenge’s sarsen stones phase is likely just one of many from this later period of the monument’s use and demonstrates that it was still very much a domain of the dead.
Stonehenge evolved in several construction phases spanning at least 1500 years. There is evidence of large-scale construction on and around the monument that perhaps extends the landscape’s time frame to 6500 years. Dating and understanding the various phases of activity is complicated by disturbance of the natural chalk by periglacial effects and animal burrowing, poor quality early excavation records, and a lack of accurate, scientifically verified dates. The modern phasing most generally agreed to by archaeologists is detailed below. Features mentioned in the text are numbered and shown on the plan, right.
Archaeologists have found four, or possibly five, large Mesolithic postholes (one may have been a natural tree throw), which date to around 8000 BC, beneath the nearby modern tourist car-park. These held pine posts around 0.75 metres (2 ft 6 in) in diameter which were erected and eventually rotted in situ.
Three of the posts (and possibly four) were in an east-west alignment which may have had ritual significance; no parallels are known from Britain at the time but similar sites have been found in Scandinavia. Salisbury Plain was then still wooded but 4,000 years later, during the earlier Neolithic, people built a causewayed enclosure at Robin Hood’s Ball and long barrow tombs in the surrounding landscape. In approximately 3500 BC, the Stonehenge Cursus was built 700 metres (2,300 ft) north of the site as the first farmers began to clear the trees and develop the area.
A number of other adjacent stone and wooden structures and burial mounds, previously overlooked, may date as far back as 4000 BC. Charcoal from the ‘Blick Mead’ camp 2.4 kilometres (1.5 mi) from Stonehenge (near the Vespasian’s Camp site) has been dated to 4000 BC.
The first monument consisted of a circular bank and ditch enclosure made of Late Cretaceous (Santonian Age) Seaford Chalk, measuring about 110 metres (360 ft) in diameter, with a large entrance to the north east and a smaller one to the south. It stood in open grassland on a slightly sloping spot.
The builders placed the bones of deer and oxen in the bottom of the ditch, as well as some worked flint tools. The bones were considerably older than the antler picks used to dig the ditch, and the people who buried them had looked after them for some time prior to burial. The ditch was continuous but had been dug in sections, like the ditches of the earlier causewayed enclosures in the area. The chalk dug from the ditch was piled up to form the bank. This first stage is dated to around 3100 BC, after which the ditch began to silt up naturally. Within the outer edge of the enclosed area is a circle of 56 pits, each about a metre (3 ft 3 in) in diameter, known as the Aubrey holes after John Aubrey, the seventeenth-century antiquarian who was thought to have first identified them. The pits may have contained standing timbers creating a timber circle, although there is no excavated evidence of them.
A recent excavation has suggested that the Aubrey Holes may have originally been used to erect a bluestone circle. If this were the case, it would advance the earliest known stone structure at the monument by some 500 years. A small outer bank beyond the ditch could also date to this period.
In 2013 a team of archaeologists, led by Mike Parker Pearson, excavated more than 50,000 cremated bones of 63 individuals buried at Stonehenge. These remains had originally been buried individually in the Aubrey holes, were exhumed during a previous excavation conducted by William Hawley in 1920, considered unimportant by him, and subsequently re-interred together in one hole, Aubrey Hole 7, in 1935.
Physical and chemical analysis of the remains has shown that the cremated were almost equally men and women, and included some children. As there was evidence of the underlying chalk beneath the graves being crushed by substantial weight, the team concluded that the first bluestones brought from Wales were probably used as grave markers. Radiocarbon dating of the remains has put the date of the site 500 years earlier than previously estimated, to around 3000 BC.
Analysis of animal teeth found at nearby Durrington Walls, thought to be the ‘builders camp’, suggests that as many as 4,000 people gathered at the site for the mid-winter and mid-summer festivals; the evidence showed that the animals had been slaughtered around 9 months or 15 months after their spring birth. Strontium isotope analysis of the animal teeth showed that some had travelled from as far afield as the Scottish Highlands for the celebrations.
Evidence of the second phase is no longer visible. The number of postholes dating to the early third millennium BC suggests that some form of timber structure was built within the enclosure during this period. Further standing timbers were placed at the northeast entrance, and a parallel alignment of posts ran inwards from the southern entrance. The postholes are smaller than the Aubrey Holes, being only around 0.4 metres (16 in) in diameter, and are much less regularly spaced. The bank was purposely reduced in height and the ditch continued to silt up.
At least twenty-five of the Aubrey Holes are known to have contained later, intrusive, cremation burials dating to the two centuries after the monument’s inception. It seems that whatever the holes’ initial function, it changed to become a funerary one during Phase 2. Thirty further cremations were placed in the enclosure’s ditch and at other points within the monument, mostly in the eastern half.
Stonehenge is therefore interpreted as functioning as an enclosed cremation cemetery at this time, the earliest known cremation cemetery in the British Isles. Fragments of unburnt human bone have also been found in the ditch-fill. Dating evidence is provided by the late Neolithic grooved ware pottery that has been found in connection with the features from this phase.
Archaeological excavation has indicated that around 2600 BC, the builders abandoned timber in favour of stone and dug two concentric arrays of holes (the Q and R Holes) in the centre of the site. These stone sockets are only partly known (hence on present evidence are sometimes described as forming ‘crescents’); however, they could be the remains of a double ring. Again, there is little firm dating evidence for this phase.
The holes held up to 80 standing stones (shown blue on the plan), only 43 of which can be traced today. It is generally accepted that the bluestones (some of which are made of dolerite, an igneous rock), were transported by the builders from the Preseli Hills, 150 miles (240 km) away in modern-day Pembrokeshire in Wales. Another theory is that they were brought much nearer to the site as glacial erratics by the Irish Sea Glacier although there is no evidence of glacial deposition within southern central England.
The long distance human transport theory was bolstered in 2011 by the discovery of a megalithic bluestone quarry at Craig Rhos-y-felin, near Crymych in Pembrokeshire, which is the most likely place for some of the stones to have been obtained.
Other standing stones may well have been small sarsens (sandstone), used later as lintels. The stones, which weighed about two tons, could have been moved by lifting and carrying them on rows of poles and rectangular frameworks of poles, as recorded in China, Japan and India. It is not known whether the stones were taken directly from their quarries to Salisbury Plain or were the result of the removal of a venerated stone circle from Preseli to Salisbury Plain to “merge two sacred centres into one, to unify two politically separate regions, or to legitimise the ancestral identity of migrants moving from one region to another”.
Each monolith measures around 2 metres (6.6 ft) in height, between 1 and 1.5 m (3.3 and 4.9 ft) wide and around 0.8 metres (2.6 ft) thick. What was to become known as the Altar Stone is almost certainly derived from the Senni Beds, perhaps from 50 miles east of Mynydd Preseli in the Brecon Beacons.
The Heelstone, a Tertiary sandstone, may also have been erected outside the north-eastern entrance during this period. It cannot be accurately dated and may have been installed at any time during phase 3. At first it was accompanied by a second stone, which is no longer visible. Two, or possibly three, large portal stones were set up just inside the north-eastern entrance, of which only one, the fallen Slaughter Stone, 4.9 metres (16 ft) long, now remains.
Other features, loosely dated to phase 3, include the four Station Stones, two of which stood atop mounds. The mounds are known as “barrows” although they do not contain burials. Stonehenge Avenue, a parallel pair of ditches and banks leading 2 miles (3 km) to the River Avon, was also added. Two ditches similar to Heelstone Ditch circling the Heelstone (which was by then reduced to a single monolith) were later dug around the Station Stones.
During the next major phase of activity, 30 enormous Oligocene–Miocene sarsen stones (shown grey on the plan) were brought to the site. They may have come from a quarry, around 25 miles (40 km) north of Stonehenge on the Marlborough Downs, or they may have been collected from a “litter” of sarsens on the chalk downs, closer to hand. The stones were dressed and fashioned with mortise and tenon joints before 30 were erected as a 33 metres (108 ft) diameter circle of standing stones, with a ring of 30 lintel stones resting on top.
The lintels were fitted to one another using another woodworking method, the tongue and groove joint. Each standing stone was around 4.1 metres (13 ft) high, 2.1 metres (6 ft 11 in) wide and weighed around 25 tons. Each had clearly been worked with the final visual effect in mind; the orthostats widen slightly towards the top in order that their perspective remains constant when viewed from the ground, while the lintel stones curve slightly to continue the circular appearance of the earlier monument.
The inward-facing surfaces of the stones are smoother and more finely worked than the outer surfaces. The average thickness of the stones is 1.1 metres (3 ft 7 in) and the average distance between them is 1 metre (3 ft 3 in). A total of 75 stones would have been needed to complete the circle (60 stones) and the trilithon horseshoe (15 stones). It was thought the ring might have been left incomplete, but an exceptionally dry summer in 2013 revealed patches of parched grass which may correspond to the location of removed sarsens. The lintel stones are each around 3.2 metres (10 ft) long, 1 metre (3 ft 3 in) wide and 0.8 metres (2 ft 7 in) thick. The tops of the lintels are 4.9 metres (16 ft) above the ground.
Within this circle stood five trilithons of dressed sarsen stone arranged in a horseshoe shape 13.7 metres (45 ft) across with its open end facing north east. These huge stones, ten uprights and five lintels, weigh up to 50 tons each.
They were linked using complex jointing. They are arranged symmetrically. The smallest pair of trilithons were around 6 metres (20 ft) tall, the next pair a little higher and the largest, single trilithon in the south west corner would have been 7.3 metres (24 ft) tall. Only one upright from the Great Trilithon still stands, of which 6.7 metres (22 ft) is visible and a further 2.4 metres (7 ft 10 in) is below ground.
The images of a ‘dagger’ and 14 ‘axeheads’ have been carved on one of the sarsens, known as stone 53; further carvings of axeheads have been seen on the outer faces of stones 3, 4, and 5. The carvings are difficult to date, but are morphologically similar to late Bronze Age weapons; recent laser scanning work on the carvings supports this interpretation. The pair of trilithons in the north east are smallest, measuring around 6 metres (20 ft) in height; the largest, which is in the south west of the horseshoe, is almost 7.5 metres (25 ft) tall.
This ambitious phase has been radiocarbon dated to between 2600 and 2400 BC, slightly earlier than the Stonehenge Archer, discovered in the outer ditch of the monument in 1978, and the two sets of burials, known as the Amesbury Archer and the Boscombe Bowmen, discovered 3 miles (5 km) to the west.
At about the same time, a large timber circle and a second avenue were constructed 2 miles (3 km) away at Durrington Walls overlooking the River Avon. The timber circle was oriented towards the rising sun on the midwinter solstice, opposing the solar alignments at Stonehenge, whilst the avenue was aligned with the setting sun on the summer solstice and led from the river to the timber circle.
Evidence of huge fires on the banks of the Avon between the two avenues also suggests that both circles were linked, and they were perhaps used as a procession route on the longest and shortest days of the year. Parker Pearson speculates that the wooden circle at Durrington Walls was the centre of a ‘land of the living’, whilst the stone circle represented a ‘land of the dead’, with the Avon serving as a journey between the two.
The Y and Z Holes are the last known construction at Stonehenge, built about 1600 BC, and the last usage of it was probably during the Iron Age. Roman coins and medieval artefacts have all been found in or around the monument but it is unknown if the monument was in continuous use throughout British prehistory and beyond, or exactly how it would have been used. Notable is the massive Iron Age hillfort Vespasian’s Camp built alongside the Avenue near the Avon.
A decapitated seventh century Saxon man was excavated from Stonehenge in 1923. The site was known to scholars during the Middle Ages and since then it has been studied and adopted by numerous groups.
Stonehenge was produced by a culture that left no written records. Many aspects of Stonehenge remain subject to debate. A number of myths surround the stones.
The site, specifically the great trilithon, the encompassing horseshoe arrangement of the five central trilithons, the heel stone, and the embanked avenue, are aligned to the sunset of the winter solstice and the opposing sunrise of the summer solstice. A natural landform at the monument’s location followed this line, and may have inspired its construction. The excavated remains of culled animal bones suggest that people may have gathered at the site for the winter rather than the summer. Further astronomical associations, and the precise astronomical significance of the site for its people, are a matter of speculation and debate.
There is little or no direct evidence revealing the construction techniques used by the Stonehenge builders. Over the years, various authors have suggested that supernatural or anachronistic methods were used, usually asserting that the stones were impossible to move otherwise due to their massive size. However, conventional techniques, using Neolithic technology as basic as shear legs, have been demonstrably effective at moving and placing stones of a similar size.
How the stones could be transported by a prehistoric people without the aid of the wheel or a pulley system is not known. The most common theory of how prehistoric people moved megaliths has them creating a track of logs on which the large stones were rolled along.
Another megalith transport theory involves the use of a type of sleigh running on a track greased with animal fat. Such an experiment with a sleigh carrying a 40-ton slab of stone was successful near Stonehenge in 1995. A dedicated team of more than 100 workers managed to push and pull the slab along the 18-mile journey from Marlborough Downs.

Proposed functions for the site include usage as an astronomical observatory or as a religious site.
More recently two major new theories have been proposed. Professor Geoffrey Wainwright, president of the Society of Antiquaries of London, and Timothy Darvill, of Bournemouth University, have suggested that Stonehenge was a place of healing—the primeval equivalent of Lourdes.
They argue that this accounts for the high number of burials in the area and for the evidence of trauma deformity in some of the graves. However, they do concede that the site was probably multifunctional and used for ancestor worship as well.
Isotope analysis indicates that some of the buried individuals were from other regions. A teenage boy buried around 1550 BC was raised near the Mediterranean Sea; a metal worker from 2300 BC dubbed the “Amesbury Archer” grew up near the alpine foothills of Germany; and the “Boscombe Bowmen” probably arrived from Wales or Brittany, France.
There are other hypotheses and theories. According to a team of British researchers led by Mike Parker Pearson of the University of Sheffield, Stonehenge may have been built as a symbol of “peace and unity”, indicated in part by the fact that at the time of its construction, Britain’s Neolithic people were experiencing a period of cultural unification.
Another idea has to do with a quality of the stones themselves: Researchers from the Royal College of Art in London have discovered that some of the monument’s stones possess “unusual acoustic properties”—when they are struck they respond with a “loud clanging noise”. According to Paul Devereux, editor of the journal Time and Mind: The Journal of Archaeology, Consciousness and Culture, this idea could explain why certain bluestones were hauled nearly 200 miles—a major technical accomplishment at the time. In certain ancient cultures rocks that ring out, known as lithophones, were believed to contain mystic or healing powers, and Stonehenge has a history of association with rituals. The presence of these “ringing rocks” seems to support the hypothesis that Stonehenge was a “place for healing”, as has been pointed out by Bournemouth University archaeologist Timothy Darvill, who consulted with the researchers. Some of the stones of Stonehenge were brought from near a town in Wales called Maenclochog, a name which means “ringing rock”.
The Heel Stone lies north east of the sarsen circle, beside the end portion of Stonehenge Avenue. It is a rough stone, 16 feet (4.9 m) above ground, leaning inwards towards the stone circle.
It has been known by many names in the past, including “Friar’s Heel” and “Sun-stone”. Today it is uniformly referred to as the Heel Stone. At the summer solstice an observer standing within the stone circle, looking north-east through the entrance, would see the Sun rise in the approximate direction of the Heel Stone, and the Sun has often been photographed over it.
A folk tale relates the origin of the Friar’s Heel reference.
The Devil bought the stones from a woman in Ireland, wrapped them up, and brought them to Salisbury plain. One of the stones fell into the Avon, the rest were carried to the plain. The Devil then cried out, “No-one will ever find out how these stones came here!” A friar replied, “That’s what you think!”, whereupon the Devil threw one of the stones at him and struck him on the heel. The stone stuck in the ground and is still there.
Brewer’s Dictionary of Phrase and Fable attributes this tale to Geoffrey of Monmouth, but though book eight of Geoffrey’s Historia Regum Britanniae does describe how Stonehenge was built, the two stories are entirely different.
Some claim “Friar’s Heel” is a corruption of “Freyja’s He-ol” from the Germanic goddess Freyja and the Welsh word for track.
In the twelfth century, Geoffrey of Monmouth included a fanciful story in his Historia Regum Britanniae that attributed the monument’s construction to Merlin. Geoffrey’s story spread widely, appearing in more and less elaborate form in adaptations of his work such as Wace’s Norman French Roman de Brut, Layamon’s Middle English Brut, and the Welsh Brut y Brenhinedd.
According to Geoffrey the rocks of Stonehenge were healing rocks, called the Giant’s dance, which Giants had brought from Africa to Ireland for their healing properties. The fifth-century king Aurelius Ambrosius wished to erect a memorial to 3,000 nobles slain in battle against the Saxons and buried at Salisbury, and at Merlin’s advice chose Stonehenge.
The king sent Merlin, Uther Pendragon (Arthur’s father), and 15,000 knights, to remove it from Ireland, where it had been constructed on Mount Killaraus by the Giants. They slew 7,000 Irish but, as the knights tried to move the rocks with ropes and force, they failed. Then Merlin, using “gear” and skill, easily dismantled the stones and sent them over to Britain, where Stonehenge was dedicated. After it had been rebuilt near Amesbury, Geoffrey further narrates how first Ambrosius Aurelianus, then Uther Pendragon, and finally Constantine III, were buried inside the “Giants’ Ring of Stonehenge”.
As well as the Historia Regum Britanniae, there is also place-name evidence to connect Ambrosius with nearby Amesbury.
In another legend of Saxons and Britons, in 472 the invading king Hengist invited Brythonic warriors to a feast, but treacherously ordered his men to draw their weapons from concealment and fall upon the guests, killing 420 of them. Hengist erected the stone monument—Stonehenge—on the site to show his remorse for the deed.
Stonehenge has changed ownership several times since King Henry VIII acquired Amesbury Abbey and its surrounding lands. In 1540 Henry gave the estate to the Earl of Hertford. It subsequently passed to Lord Carleton and then the Marquess of Queensberry. The Antrobus family of Cheshire bought the estate in 1824. During World War I an aerodrome (Royal Flying Corps “No. 1 School of Aerial Navigation and Bomb Dropping”) was built on the downs just to the west of the circle and, in the dry valley at Stonehenge Bottom, a main road junction was built, along with several cottages and a cafe.
In the late 1920s a nationwide appeal was launched to save Stonehenge from the encroachment of the modern buildings that had begun to rise around it.
By 1928 the land around the monument had been purchased with the appeal donations, and given to the National Trust to preserve. The buildings were removed (although the roads were not), and the land returned to agriculture. More recently the land has been part of a grassland reversion scheme, returning the surrounding fields to native chalk grassland.
The first such Neo-druidic group to make use of the megalithic monument was the Ancient Order of Druids, who performed a mass initiation ceremony there in August 1905, in which they admitted 259 new members into their organisation. This assembly was largely ridiculed in the press, which mocked the fact that the Neo-druids were dressed up in costumes consisting of white robes and fake beards.
Between 1972 and 1984, Stonehenge was the site of the Stonehenge Free Festival. After the Battle of the Beanfield in 1985, this use of the site was stopped for several years and ritual use of Stonehenge is now heavily restricted.
Some Druids have arranged assemblages of monuments styled on Stonehenge in other parts of the world as a form of Druidist worship.
The access situation and the proximity of the two roads has drawn widespread criticism, highlighted by a 2006 National Geographic survey. In the survey of conditions at 94 leading World Heritage Sites, 400 conservation and tourism experts ranked Stonehenge 75th in the list of destinations, declaring it to be “in moderate trouble”.
As motorised traffic increased, the setting of the monument began to be affected by the proximity of the two roads on either side—the A344 to Shrewton on the north side, and the A303 to Winterbourne Stoke to the south. Plans to upgrade the A303 and close the A344 to restore the vista from the stones have been considered since the monument became a World Heritage Site.
However, the controversy surrounding expensive re-routing of the roads has led to the scheme being cancelled on multiple occasions. On 6 December 2007, it was announced that extensive plans to build Stonehenge road tunnel under the landscape and create a permanent visitors’ centre had been cancelled.
On 13 May 2009, the government gave approval for a £25 million scheme to create a smaller visitors’ centre and close the A344, although this was dependent on funding and local authority planning consent.
On 20 January 2010 Wiltshire Council granted planning permission for a centre 2.4 km (1.5 miles) to the west and English Heritage confirmed that funds to build it would be available, supported by a £10m grant from the Heritage Lottery Fund.
On 23 June 2013 the A344 was closed to begin the work of removing the section of road and replacing it with grass. The centre, designed by Denton Corker Marshall, opened to the public on 18 December 2013.
[Image: The completed visitors’ centre at Stonehenge.]
Throughout recorded history Stonehenge and its surrounding monuments have attracted attention from antiquarians and archaeologists. John Aubrey was one of the first to examine the site with a scientific eye in 1666, and recorded in his plan of the monument the pits that now bear his name. William Stukeley continued Aubrey’s work in the early eighteenth century, but took an interest in the surrounding monuments as well, identifying (somewhat incorrectly) the Cursus and the Avenue. He also began the excavation of many of the barrows in the area, and it was his interpretation of the landscape that associated it with the Druids.
Stukeley was so fascinated with Druids that he originally named Disc Barrows as Druids’ Barrows. The most accurate early plan of Stonehenge was that made by Bath architect John Wood in 1740.
His original annotated survey has recently been computer-redrawn and published. Importantly, Wood’s plan was made before the collapse of the southwest trilithon, which fell in 1797 and was restored in 1958.
William Cunnington was the next to tackle the area in the early nineteenth century. He excavated some 24 barrows before digging in and around the stones and discovered charred wood, animal bones, pottery and urns. He also identified the hole in which the Slaughter Stone once stood. Richard Colt Hoare supported Cunnington’s work and excavated some 379 barrows on Salisbury Plain, including some 200 in the area around the Stones, some excavated in conjunction with William Coxe.
To alert future diggers to their work they were careful to leave initialled metal tokens in each barrow they opened. Cunnington’s finds are displayed at the Wiltshire Museum. In 1877 Charles Darwin dabbled in archaeology at the stones, experimenting with the rate at which remains sink into the earth for his book The Formation of Vegetable Mould Through the Action of Worms.
William Gowland oversaw the first major restoration of the monument in 1901 which involved the straightening and concrete setting of sarsen stone number 56 which was in danger of falling. In straightening the stone he moved it about half a metre from its original position.
Gowland also took the opportunity to further excavate the monument in what was the most scientific dig to date, revealing more about the erection of the stones than the previous 100 years of work had done. During the 1920 restoration William Hawley, who had excavated nearby Old Sarum, excavated the base of six stones and the outer ditch. He also located a bottle of port in the Slaughter Stone socket left by Cunnington, helped to rediscover Aubrey’s pits inside the bank and located the concentric circular holes outside the Sarsen Circle called the Y and Z Holes.
Richard Atkinson, Stuart Piggott and John F. S. Stone re-excavated much of Hawley’s work in the 1940s and 1950s, and discovered the carved axes and daggers on the Sarsen Stones. Atkinson’s work was instrumental in furthering the understanding of the three major phases of the monument’s construction.
In 1958 the stones were restored again, when three of the standing sarsens were re-erected and set in concrete bases. The last restoration was carried out in 1963 after stone 23 of the Sarsen Circle fell over. It was again re-erected, and the opportunity was taken to concrete three more stones.
Later archaeologists, including Christopher Chippindale of the Museum of Archaeology and Anthropology, University of Cambridge and Brian Edwards of the University of the West of England, campaigned to give the public more knowledge of the various restorations and in 2004 English Heritage included pictures of the work in progress in its book Stonehenge: A History in Photographs.
In 1993 the way that Stonehenge was presented to the public was called ‘a national disgrace’ by the House of Commons Public Accounts Committee. Part of English Heritage’s response to this criticism was to commission research to collate and bring together all the archaeological work conducted at the monument up to this date. This two-year research project resulted in the publication in 1995 of the monograph Stonehenge in its landscape, which was the first publication presenting the complex stratigraphy and the finds recovered from the site. It presented a rephasing of the monument.
More recent excavations include a series of digs held between 2003 and 2008 known as the Stonehenge Riverside Project, led by Mike Parker Pearson. This project mainly investigated other monuments in the landscape and their relationship to the stones — notably Durrington Walls, where another “Avenue” leading to the River Avon was discovered. The point where the Stonehenge Avenue meets the river was also excavated, and revealed a previously unknown circular area which probably housed four further stones, most likely as a marker for the starting point of the avenue.
In April 2008 Tim Darvill of the University of Bournemouth and Geoff Wainwright of the Society of Antiquaries, began another dig inside the stone circle to retrieve dateable fragments of the original bluestone pillars. They were able to date the erection of some bluestones to 2300 BC, although this may not reflect the earliest erection of stones at Stonehenge. They also discovered organic material from 7000 BC, which, along with the Mesolithic postholes, adds support for the site having been in use at least 4,000 years before Stonehenge was started.
In August and September 2008, as part of the Riverside Project, Julian Richards and Mike Pitts excavated Aubrey Hole 7, removing the cremated remains from several Aubrey Holes that had been excavated by Hawley in the 1920s, and re-interred in 1935. A licence for the removal of human remains at Stonehenge had been granted by the Ministry of Justice in May 2008, in accordance with the Statement on burial law and archaeology issued in May 2008. One of the conditions of the licence was that the remains should be reinterred within two years and that in the intervening period they should be kept safely, privately and decently.
A new landscape investigation was conducted in April 2009. A shallow mound, rising to about 40 cm (16 inches) was identified between stones 54 (inner circle) and 10 (outer circle), clearly separated from the natural slope. It has not been dated but speculation that it represents careless backfilling following earlier excavations seems disproved by its representation in eighteenth- and nineteenth-century illustrations. Indeed, there is some evidence that, as an uncommon geological feature, it could have been deliberately incorporated into the monument at the outset.
A circular, shallow bank, little more than 10 cm (4 inches) high, was found between the Y and Z hole circles, with a further bank lying inside the “Z” circle. These are interpreted as the spread of spoil from the original Y and Z holes, or more speculatively as hedge banks from vegetation deliberately planted to screen the activities within.
In July 2010, the Stonehenge Hidden Landscape Project discovered a “henge-like” monument less than 1 km (0.62 miles) away from the main site.
This new hengiform monument was subsequently revealed to be located “at the site of Amesbury 50”, a round barrow in the Cursus Barrows group.
On 26 November 2011, archaeologists from the University of Birmingham announced the discovery of evidence of two huge pits positioned within the Stonehenge Cursus pathway, aligned on the midsummer sunrise and sunset when viewed from the Heel Stone.
The new discovery is part of the Stonehenge Hidden Landscape Project, which began in the summer of 2010. The project uses non-invasive geophysical imaging techniques to reveal and visually recreate the landscape. According to team leader Vince Gaffney, this discovery may provide a direct link between rituals and astronomical events and activities within the Cursus at Stonehenge.
On 10 September 2014 the University of Birmingham announced findings including evidence of adjacent stone and wooden structures and burial mounds, overlooked previously, that may date as far back as 4000 BC.
An area extending to 12 square kilometres (1,200 ha) was studied to a depth of three metres with ground-penetrating radar equipment. As many as seventeen new monuments, revealed nearby, may be Late Neolithic monuments that resemble Stonehenge. The interpretation suggests a complex of numerous related monuments. The survey also revealed that the cursus track is terminated by two five-metre-wide, extremely deep pits, whose purpose is still a mystery.
1999-12-03 Assigned to GOLDMAN SACHS reassignment GOLDMAN SACHS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LAVICKA, MATTHEW, NGAI, DAVID W., SILVERMAN, ANDREW F.
A computerized order centric method and system for tracking orders implemented on a trading floor exchange. The system automatically routes orders to a booth and a floor broker according to a symbol associated with the particular security being traded. The method for processing an order for a security on the floor of an exchange includes representing a security with a symbol and allocating a set of symbols to a booth. In addition, a set of symbols is allocated to a floor broker ID. An order relating to a symbol is entered into a computer and transmitted to a computer server. The order is routed through the server to a computerized booth station associated with the booth to which the order symbol had been allocated. In addition, the order is routed through the server to the floor broker ID to which the symbol associated with the order has been allocated. Typically the floor broker ID is logged into a computerized handheld device. Typically, multiple booths are utilized with a unique set of symbols allocated to each booth station. The set of symbols allocated to a floor broker ID is a unique subset of the set of symbols associated with a booth. Additionally, a heartbeat signal from the handheld device to the server can be required within a predetermined time period. A floor broker can be automatically logged off of the server in the event the server does not receive a predetermined number of heartbeats.
This application cross-references and incorporates by reference the application Ser. No. 09/413,150 entitled Handheld Trading System Interface filed Oct. 6, 1999.
The present invention generally relates to a system and method for tracking orders on an exchange floor. More specifically it relates to an integrated computer system for allocating, tracking and reporting orders and trades executed in the context of an exchange setting, such as the New York Stock Exchange (NYSE).
There exist several types of financial markets in which securities, commodities, and other negotiable instruments are traded. An auction market, such as a stock exchange, is one such financial market. In an auction market, buyers and sellers congregate on an exchange floor and announce their respective bid prices (offer to buy) and ask prices (price acceptable to sell). A trade in any particular security will occur at no more than the highest price a buyer is willing to pay and at no less than the lowest price a seller is willing to accept.
Among the players on the floor of an exchange are specialists and floor brokers. A specialist calls out the best bid and ask prices received from the various brokers, ensures that trades are posted, facilitates trades, and acts to ensure liquidity. A floor broker roams the exchange floor and acts as an agent to transact orders on behalf of investors (buyers and sellers).
A typical transaction originates when an order is placed with an off-the-floor trading desk to buy or sell a particular security. The trading desk may then convey the order to an exchange clerk who notes the parameters of the order including whether the order is a buy or sell order, the symbol of the security, the quantity, the price, any special conditions associated with the order, and the time that the order is placed. The clerk then delivers the order to a floor broker for execution. Traditionally, orders are transcribed onto order slips that are delivered to floor brokers by pages or runners. A floor broker executes an order, notes the executed order on a slip of paper, and subsequently returns the notated slip of paper to the clerk via a runner.
In addition to buy and sell orders, investors may request a “look” from the floor of the exchange. In response to a “look” request, a broker notes his or her observations with respect to what is happening in the market for a particular security. The “look” information noted by the broker may vary depending on the particular broker and what he has observed. For example, “look” information may include recent buyer and seller identities, trade sizes and prices, appraisal of market interest, a broker's opinion and any other information that a broker may wish to provide.
There is currently a significant manual component to processing an order once the order reaches the floor of an exchange. Typically, an order will be entered into a computerized order processing system of a trading establishment. For example, these orders can be entered by a trader 120 at a listed desk. The order is then routed to an order management system for exchange listed securities. The order is displayed via an order management system application in a trading booth that handles orders for the given security. An order ticket is then automatically printed in the booth.
A clerk takes the ticket from the printer and prepares it for handoff, pages a broker, and acknowledges the order in the order management system. The broker, upon being paged, returns to the booth to get the machine-printed ticket and briefly discusses any special handling instructions with the clerk. Alternately the broker may telephone the booth to get necessary information and write it on a piece of paper.
A broker must update running totals representing how many shares of a particular security to buy or sell, incorporating the new order. The broker executes a trade for all or part of the order. The broker must convey some or all of the details of the trade to the booth. The broker can convey the information over the phone or write the information on a piece of paper and walk it back to the booth. Alternatively, the broker can send the paper to the booth via an exchange runner.
A clerk typically records the verbal execution into an online management system and performs an allocation of a portion of the shares of a security amongst a variety of orders.
Contra breakdowns, which track what was traded with whom, eventually arrive in the booth on a piece of paper if they were not attached to the verbal report. This information could have been penned by the broker or by a specialist. The clerk files the contra breakdowns in a special location, to be picked up by a firm runner. The contra breakdowns are taken to a bank of firm typists located near the exchange floor. The typists enter the information into a firm trading system, and this information is both used by the trading firm systems and sent to the exchange's order reconciliation system (OCS).
The contra information should be entered within an hour after the trade took place. The typists file the paper containing the verbal and written information. This paper is kept on hand for several days and is then archived.
It would be useful to have a system capable of achieving greater order processing efficiency. Orders need to be routed more quickly to brokers operating on the floor of the exchange, thereby leading to more timely customer service. In addition it would be useful to capture some of the order information digitally at the point of sale, whereby costly transcription errors can be reduced.
Accordingly, this invention provides an order centric method and system for tracking orders implemented on a trading floor exchange. The system automatically routes orders to a booth and a floor broker according to a symbol associated with the particular security being traded.
In one aspect of the invention, a method for processing an order for a security on the floor of an exchange includes representing a security with a symbol and allocating a set of symbols to a booth. In addition, a set of symbols is allocated to a floor broker ID. An order relating to a symbol is entered into a computer and transmitted to a computer server. The order is routed through the server to a computerized booth station associated with the booth to which the order symbol had been allocated. In addition, the order is routed through the server to the floor broker ID to which the symbol associated with the order has been allocated. Typically the floor broker ID is logged into a computerized handheld device.
In addition, a record of an action relating to the order can be sent to the server and logged into a memory at the server. The record can also be routed through the server to the booth station associated with the booth to which the order symbol had been allocated. Typically, multiple booths are utilized with a unique set of symbols allocated to each booth station. The set of symbols allocated to a floor broker ID is a unique subset of the set of symbols associated with a booth.
In another aspect of the invention, a heartbeat signal from the handheld device to the server within a predetermined time period. In one embodiment, any communication between the handheld device and the server suffices as a heartbeat. A floor broker can be automatically logged off of the server in the event the server does not receive a predetermined number of heartbeats. For example, the predetermined number of missed heartbeats can be two.
In another aspect, an order is for shares of a security stock described by a symbol and the system calculates an aggregate number of shares of stock for standing orders relating to a particular symbol. Additionally, the system can calculate an aggregate of pending orders that meet a threshold price. The orders can include buy orders or sell orders.
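The aggregation described above can be sketched as a sum over standing orders, optionally filtered by a threshold price. The interpretation of "meets a threshold price" (buys at or above the threshold, sells at or below) is an assumption for illustration.

```python
def aggregate_leaves(orders, symbol, side, limit_price=None):
    """Sum the remaining shares (leaves) of standing orders for one
    symbol and side. If limit_price is given, include only orders whose
    price meets it: buys at or above, sells at or below (assumed rule)."""
    total = 0
    for o in orders:
        if o["symbol"] != symbol or o["side"] != side:
            continue
        if limit_price is not None:
            if side == "buy" and o["price"] < limit_price:
                continue
            if side == "sell" and o["price"] > limit_price:
                continue
        total += o["leaves"]
    return total
```

A handheld could call this whenever its order list changes, keeping the broker's running totals current without manual arithmetic.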
This invention can also include a computerized system for processing an order in a trading exchange. The system can include a computerized booth station and a handheld computing device linked by a computer server. Software operative with the computer server can route an order to a particular booth station according to a security symbol associated with the order. In addition it can route the order to a particular handheld computing device according to the security symbol associated with the order.
In still another aspect, the computerized system can be linked to a computerized order management system and a computerized recording station.
A specific embodiment of the present invention will be described with reference to the following drawings, wherein:
FIG. 1 illustrates an order centric tracking system.
FIG. 2 illustrates a flowchart of order processing steps.
FIG. 3 illustrates a flow of a heartbeat synchronization process.
FIG. 4 illustrates a flow of an allocation of traded shares.
Referring now to FIG. 1 a networked computer system 100 for tracking an order executed on an exchange floor is illustrated. A trader 120 can initiate an order to be executed on the floor of an exchange. The order is entered into an online management system 130. The online management system 130 can transmit the order to a Handheld Server (HHS) 113 and to a computerized booth station 161-162. The HHS 113 can transmit the order to a handheld computing device 114-116.
The order centric trading system 100 includes a network connecting the computerized Handheld Server (HHS) 113 and handheld computing devices 114-116. The system can also include computerized booth stations 161-163, computerized trader stations 166, computerized recording stations 150, computerized customer stations 140 and a computerized online management system 130. Each of the computerized devices 114-116 130 140 150 161-166 can include a processor, memory, a user input device, such as a keyboard and/or mouse, and a user output device, such as a video display and/or printer. The computerized devices 114-116 130 140 150 161-166 can communicate with each other to exchange data. Interactions with the Handheld Server 113 and the online management system 130 can proceed as if each was a single entity in the network 100. However, the HHS 113 and the online management system 130 may include multiple processing and database sub-systems, such as cooperative or redundant processing and/or database servers 164-165, that can be geographically dispersed throughout the network 100. A local server 164-165 may be a proxy server or a caching server. The HHS 113 may also include one or more databases 145 storing order related information.
Referring now to FIG. 2, a trader, customer or other person with access to the Order Management System 130 initiates a trade by entering an order 210 into a network access device such as, for example, a computer. The Order Management System 130 processes the order by properly logging the order and allocating it to a broker ID and a booth, according to the symbol of the security involved in the order. The Order Management System 130 then performs the step of transmitting the order to a booth station 212 and the step of transmitting the order to the Handheld server 213. The handheld server in turn transmits the order to a handheld computing device onto which a Broker ID associated with the security symbol is logged.
Brokers can enter executions according to orders received into a handheld computing device 114-116. The information relating to the orders is transmitted to an online management system for exchange-listed securities.
The broker can click “buttons” and other user interface devices displayed on the screen of the handheld computing device 114-116 thereby recording the symbol, side, price, and quantity of an execution. Contra breakdowns and other relevant information, such as an “as of” time can also be captured on a handheld computing device 114-116. Trading firm personnel, such as booth clerks, can perform allocations of the executions using an online management system. The handheld computing device 114-116 can receive updated leaves based on the clerk's allocation. A typist at a recording station 150 can enter written information using the broker's digital records.
A handheld computing device 114-116 used on an exchange floor can be capable of TCP/IP communication over a wireless network 119. The wireless network is typically supported by the trading exchange. However, the handheld computing devices 114-116 can also establish a direct TCP/IP socket connection to a handheld server 113 and not be required to use exchange middleware wireless networks 119.
Each order that arrives at the handheld 114-116 can be accepted or rejected by the broker. If an order is rejected, it can appear in a “ghosted” state until explicitly dismissed by the broker.
A broker will be able to execute trades in accordance with outstanding orders that have been transmitted to the handheld computing device 114-116. The order centric system is able to keep a broker aware of how many shares to buy and sell of a particular security and at what price levels are acceptable. A handheld 114-116 can be used to assist a broker in this task by maintaining a list of outstanding orders and aggregating the leaves of like orders.
The broker will be able to record executions on a handheld computing device 114-116. In one embodiment, order processing functionality can include execution information captured semantically such as the symbol, side, quantity and price relating to the trade. Information including contra information, time of day, special instructions, and almost any other information relating to an order can also be recorded via a handheld computing device 114-116.
The order centric system 100 can record, in a history log, a number of significant events that occur relating to an order. The history log can be stored in an electronic storage medium such as a magnetic disc drive or a compact disc (CD). The log can provide a means whereby a broker can review information during the trading day. Tasks can be presented to a user in a manner that will give the user a quick view of what actions have been performed relating to an order or a group of orders. Tasks tracked by the order centric system can be displayed in chronological order, or according to filtering and sorting functionality.
Users can include a trader 120, a booth clerk, a broker, a typist or others with access to the order centric system. In one embodiment, a user can be a customer 140 with remote access to the order centric system. Customers 140 may be given access rights to view orders they have placed. In addition, if desired, customers can be given the ability to track trades placed by others whereby the customer can get a “feel” for the trading environment at any particular time without specifically requesting a floor look.
In one embodiment, task history data will also be stored on a handheld computing device 114-116. Data can be purged from a handheld computing device 114-116 at the beginning of each new trading day or more frequently as required based on device memory constraints. Purging can be subject to network failure recovery as discussed in more detail below.
In addition to the general history, a separate database can be maintained on the HHS 113 to store executions that have been entered during the course of the day. The separate database will allow brokers to reconcile executions with the booth in failure recovery situations.
To increase security, the order centric system can have the ability to encrypt the message stream between a handheld 114-116 and the HHS 113.
A Handheld Server 113 can manage communication between existing trading firm systems 130, trading exchange systems and the handheld computing devices 114-116. Each handheld 114-116 can establish a communication session with HHS 113 over a wireless network, and HHS 113 will participate in order processing systems on behalf of the handheld computing devices 114-116. HHS 113 can also maintain login session state for the handheld computing devices 114-116. HHS 113 can act as a pass-through, performing protocol conversion between a trading firm's Order Management Architecture and a handheld messaging protocol.
An order centric system can allow an order to be entered into a computerized order management system. Typically, an order is entered by a trader on the Listed desk of a firm. The order is routed to the order management system for exchange-listed securities. The order can also be displayed in the order management system application in the booth that handles orders for the given security. In one embodiment, the order centric system automatically routes the order to a broker who handles orders for that particular security. In another embodiment, a clerk or trader can route the order to a broker.
The order centric system pages the floor broker. No paper ticket needs to be generated. The broker, upon being paged, notices the new order on his handheld 114-116. The broker accepts the order, and the order is added to the list of active orders.
The online management system display updates and shows that the designated broker has accepted the order. The broker can execute a trade for all or part of the order on the exchange floor. The broker can record the symbol, side, price, and quantity by clicking or otherwise operating programmable user interface devices on the screen of the handheld. The broker can also record the contra breakdowns with a freehand image or “digital ink”. The image recorded in digital ink can be processed for character recognition or sent as a simple image. When the broker is satisfied with the content of the recording, they can click Send to transfer the recording to the server. The handheld 114-116 can estimate an allocation of the shares of the security traded and update the leaves to reflect allocation. The execution is transmitted into the online management system.
The clerk, typically located in the booth, can access a display of an execution that has been transmitted to the online management system. The clerk can perform an allocation with the traded securities if appropriate. The image of the contra breakdowns will also be available to the clerk.
Allocations performed by the clerk are in turn transmitted to the HHS 113 and logged. The allocations are also transmitted from the HHS 113 to the floor broker via a handheld computing device 114-116. The handheld 114-116 receives the updated leaves according to the clerk's allocation and the effects are displayed.
In addition, the allocated execution is also transmitted to the typists with any inked breakdowns whereby they can record the “writtens.” The file containing the image of the breakdowns can also be archived. Archives can be accomplished, for example, in an electronic storage medium, such as a disc drive or CD.
In one embodiment, the trading firm can utilize an application that permits bulk display and/or printing of the inked breakdown images.
An order centric trading system can also include a failover procedure. In the event of a primary network failure, the handheld 114-116 can attempt to connect to a backup server 131.
In addition, an order centric trading system 100 can include features such as the ability to digitally accept all execution information at the point of sale, enhanced messaging between brokers, traders, and clerks, electronically deposit orders with the specialist, and receipt of analytics on the HHD. Analytics can include market data, statistics, trends or other information useful to accomplishing an educated trade.
In addition, the order centric system can operate over an intermediary network system, such as a network system installed in an exchange for communication to and from the floor of an exchange.
A login session in the order centric system will include any actions entered by a broker after they have logged in to a computerized handheld device 114-116. An execution history database can be utilized to record order requests, executions, and other detailed information about a login session. In one embodiment a history can be cleared whenever a new login session is initiated, or more frequently as needed based on available RAM.
An execution history database can be used in recovery situations (such as when a session was abnormally terminated) to reconcile executions entered on the handheld 114-116 with those received by the online management system. Once entered on the handheld 114-116, an Execution can be stored in this execution history database. The database can remain until it is manually removed by the user or until the handheld 114-116 receives allocation data for the execution. In one embodiment the execution history database can be implemented with a Windows CE database rather than volatile application memory so that the data can exist across application sessions. In another embodiment, the contents of the database will be deleted the first time that the handheld 114-116 application is launched each day.
For example, information about an execution stored in the database can include: HHD Execution ID (an execution identifier created by the HHD during the creation of the execution), Online Management Execution ID (an identifier assigned by the online management system), security ticker symbol, side, quantity, price, timestamp, and status (unsent, sent, confirmed, allocated, or failed).
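The record layout above can be sketched as a small dataclass. The field names are paraphrased from the list above; the specification does not fix a schema, so treat this as one plausible shape.

```python
from dataclasses import dataclass
from typing import Optional

# The status values enumerated in the specification.
STATUSES = {"unsent", "sent", "confirmed", "allocated", "failed"}


@dataclass
class ExecutionRecord:
    """One entry in the handheld's execution history database."""
    hhd_execution_id: str            # created by the HHD at execution time
    online_execution_id: Optional[str]  # assigned later by the online system
    symbol: str
    side: str
    quantity: int
    price: float
    timestamp: float
    status: str = "unsent"

    def advance(self, new_status):
        """Move the record to a new lifecycle status, validating it first."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status
```

Because the record carries both identifiers, a recovery routine can match handheld-side entries against executions the online management system confirms.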
In another embodiment, the digital ink image can be discarded to conserve memory after the handheld 114-116 receives a message indicating that the execution has been allocated.
The online management system can communicate with the server in the context of a “session”. A session uniquely identifies a handheld 114-116 and messages that have been sent to a particular handheld 114-116. Any response or message originating from the handheld 114-116 (except the initial login-request message, when the identifier is not yet available) will include the session identifier so that the server can correctly process the message information. Similarly, all messages arriving from HHS 113 will also contain the session identifier. Both HHS 113 and HHD can compare the session identifier, along with other tracking information in the header of all messages, to their internal values to help determine if there has been a communications or application error.
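The header comparison described above can be sketched as follows. The field names (`session_id`, `seq`) and the use of a monotonically increasing sequence number are assumptions for illustration; the specification only says the session identifier and "other tracking information" are compared to internal values.

```python
def validate_header(header, session):
    """Compare a message header's tracking fields against local session
    state; a mismatch signals a communications or application error."""
    if header.get("session_id") != session["session_id"]:
        return False
    # Messages are assumed to arrive in sequence; a gap implies lost traffic.
    if header.get("seq") != session["expected_seq"]:
        return False
    session["expected_seq"] += 1
    return True
```

Both the HHS and the handheld would run a check like this on every message, treating a `False` result as grounds for failure handling.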
Referring now to FIG. 3, the system can utilize a transmitted Heartbeat mechanism to determine if a current session is still active. Upon login, the system can initiate a heartbeat 310 between a handheld computing device 114-116 and the HHS 113. Typically, the handheld device will send a heartbeat information packet to the HHS 113 during idle periods. The HHS 113 can log the time interval of the heartbeat 311. The server can also interpret the receipt of the heartbeat or any other message as an indication that the HHD session is still active. A test can be performed at the completion of a maximum heartbeat interval. The system can test whether a heartbeat was received within the predetermined time 312. The HHD will likewise interpret the receipt of the server heartbeat or any other message as an indication that the session is still alive 313. A successful heartbeat can loop the process back to initiate a heartbeat and begin the time interval logging.
In one embodiment, if a message is not received within the specified timeout period, the online management system will assume that the session has terminated and will notify the user of the failure 314. Information that has not been sent will be lost except for executions stored in the Execution History database.
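The server-side bookkeeping for the heartbeat mechanism can be sketched as a per-session counter of missed intervals, with the broker logged off after a threshold (two missed heartbeats in the example given earlier). The class and method names are illustrative assumptions.

```python
class HeartbeatMonitor:
    """Tracks missed heartbeat intervals per session on the HHS side."""

    def __init__(self, max_missed=2):
        self.max_missed = max_missed
        self.missed = {}  # session id -> consecutive missed intervals

    def message_received(self, session_id):
        # Any traffic from the handheld counts as a heartbeat,
        # so receipt of any message resets the counter.
        self.missed[session_id] = 0

    def interval_elapsed(self, session_id):
        """Call once per maximum heartbeat interval with no traffic seen.
        Returns True when the session should be logged off."""
        self.missed[session_id] = self.missed.get(session_id, 0) + 1
        return self.missed[session_id] >= self.max_missed
```

On a `True` result the server would terminate the session and reroute the broker's pending orders to the responsible booth, as described in the failure-handling passages below.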
A Send Data Thread can wait for the SendData event, the Terminate event, or for the wait to time out. The Terminate event signals that the thread should shut down. Signaling of the SendData event indicates that there is data to send. If the wait times out while waiting for one of these two events, it is an indication that no traffic has been sent to the server. The thread will therefore send a heartbeat to the server in accordance with the design of connection maintenance.
In one embodiment, a Receive Data Thread can perform blocking socket reads. The socket can be configured with a read timeout set equal to twice the heartbeat interval, or another multiple of it. If a socket read fails with a timeout error, meaning no messages have been received from the server within the required interval, the connection from the handheld 114-116 to the server is assumed to be down. The thread can then call a routine, such as CloseSocket, and the user will be notified of the failure. Communication failures can result from socket termination, network failure, severe network latency, or a server or handheld application error.
In the event that communications are terminated, the User Interface can notify the broker of the failure. The handheld 114-116 can make an automatic attempt to re-establish communications with the server. In addition the broker can manually direct the handheld 114-116 to attempt to re-establish communications with the server.
In addition, the user will be allowed to operate in an "offline mode". While in this mode limited application functionality can be available. This mode can be enabled, for example, to allow a broker to continue working should a failure occur at a critical moment such as while executing orders in the crowd. In offline mode, the broker will not be able to send or receive looks or messages or orders. However, the broker will be able to record executions on the handheld 114-116. The handheld 114-116 can essentially function as a recording device for executions. These executions can be maintained in the Execution History database so that the user can eventually reconcile them with the clerk manually.
If the HHS 113 detects a communications failure, it can automatically send a new order, that would otherwise be routed to a terminated handheld 114-116, to the booth responsible for the symbol corresponding with the security comprising the order. When the broker logs back in on the same or a different HHD, the broker will automatically receive all of the active orders that are still assigned to them in the online management system. Orders that were pending can again be displayed as Pending; orders that were accepted will be automatically accepted on the HHD. However, in one aspect it can be possible that an order that was accepted on the HHD may return to the pending state if the “Order Accept” message was lost during the communication failure.
Orders that had been sent back to the booth during the communications failure can be “manually” sent back to the broker's HHD from the online management system.
An exchange wireless infrastructure can provide two redundant networks. If a connection cannot be established on a current network, the HHD can prompt the broker to a “fail-over” mode in which the HHD will log into a backup network. The broker can assent or decline to perform a fail-over.
Referring now to FIG. 4, Allocation Estimation is a process for assigning specific quantities of shares that are traded to the orders that are eligible to participate in a trade. When a broker enters an execution 410 into a handheld computing device 114-116, the programmable code can estimate the quantity of shares that are allocated to various eligible orders 411 on the handheld 114-116. This allocation estimates how the executed shares are distributed among the various remaining orders. Along with the estimation 411, the handheld device 114-116 transmits the order information 412 to the associated booth station 161-163.
A clerk can perform a final allocation for an execution in the booth 413. After the clerk finishes the allocations for an execution, the allocation can be transmitted to the HHS 113 and logged 414. In addition, the allocation can be transmitted to the handheld 114-116. In one embodiment, the allocations can be transmitted automatically. "Unwinding" is the process of replacing the handheld's 114-116 estimated allocation with a final allocation determined in the booth.
In one embodiment, an estimation allocation can be performed irrespective of whether the handheld 114-116 is able to transmit the execution to the server. Estimates can thus be calculated even if a broker is working in offline mode.
In another embodiment, an order receives an allocation estimate for an execution when the order is for the same security and on the same side as the execution. Accordingly, Long and Short Exempt sell orders can be eligible to participate in any sell execution, but Short sells can only be eligible for short sell executions. The order price must also be satisfied: if the side is Buy, the execution price should be less than or equal to the order price; if the execution is on the sell side, the execution price should be greater than or equal to the order price. A market order can satisfy any execution price.
The order centric system can also track an order timestamp. The order timestamp is the time the order reached the floor; an execution timestamp is the time of execution. For an order to participate in an allocation, the order timestamp should be earlier than the execution timestamp, indicating that the order reached the floor before the execution was performed.
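One way to read the eligibility rules above is as a predicate over an order and an execution. The sketch below is a hypothetical illustration; the class shape and the field names (side, price, timestamp) are assumptions for this example, not anything defined in the patent.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str        # "BUY", "SELL", "SHORT", or "SHORT_EXEMPT" (assumed labels)
    price: float     # None represents a market order
    timestamp: float # time the order reached the floor

@dataclass
class Execution:
    symbol: str
    side: str
    price: float
    timestamp: float

def eligible(order, execution):
    """Return True if the order may participate in the execution's allocation."""
    if order.symbol != execution.symbol:
        return False
    # Long and Short Exempt sells can join any sell execution, but Short
    # sells are eligible only for short-sell executions.
    if execution.side in ("SELL", "SHORT"):
        if order.side == "SHORT" and execution.side != "SHORT":
            return False
        if order.side not in ("SELL", "SHORT", "SHORT_EXEMPT"):
            return False
    elif order.side != "BUY":
        return False
    # Price check; a market order (price None) satisfies any execution price.
    if order.price is not None:
        if order.side == "BUY" and execution.price > order.price:
            return False
        if order.side != "BUY" and execution.price < order.price:
            return False
    # The order must have reached the floor before the execution occurred.
    return order.timestamp < execution.timestamp
```

Under this reading, a Buy limit order at 50 is eligible for a buy execution at 49.5, while an order timestamped after the execution is not.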
One calculation that can be used for allocation estimates first determines the set of eligible orders and computes an average allocation quantity per order: average qty ≈ quantity to allocate / number of eligible orders. The quantity of shares allocated to an order can be limited to a multiple of 100; if the average qty < 100, then the average qty = 100. The system can sort the eligible orders by remaining quantity in ascending order. For each order, the system can estimate an allocation equal to the minimum of the average estimate and the order's remaining quantity. Whenever an estimated allocation is less than the average, a new average based on the remaining shares can be recalculated and allocated across the remaining eligible orders. If the remaining quantity to allocate is 0, the allocation routine can stop.
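As a sketch, the calculation above might be implemented as follows. This is one plausible reading of the steps, with assumed function and variable names; the recalculated average is realized by recomputing it before each remaining order.

```python
def estimate_allocations(qty_to_allocate, eligible_orders, lot=100):
    """Estimate per-order allocations for an execution.

    eligible_orders: list of (order_id, remaining_qty) pairs.
    Returns a dict mapping order_id to its estimated allocation.
    """
    # Sort by remaining quantity ascending so small orders fill first and
    # freed-up shares can be re-averaged over the larger remaining orders.
    queue = sorted(eligible_orders, key=lambda pair: pair[1])
    allocations = {}
    while queue and qty_to_allocate > 0:
        # Average per remaining order, limited to a multiple of 100 shares,
        # with a 100-share floor.
        average = (qty_to_allocate // len(queue)) // lot * lot
        average = max(average, lot)
        order_id, remaining = queue.pop(0)
        # Each order gets the minimum of the average estimate and its
        # remaining quantity (never more than what is left to allocate).
        allocation = min(average, remaining, qty_to_allocate)
        allocations[order_id] = allocation
        qty_to_allocate -= allocation
    return allocations
```

For example, 700 shares over an order with 200 remaining and an order with 1,000 remaining first averages to 300, fills the small order with its full 200, then re-averages the remaining 500 onto the larger order.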
Typical trading firm business rules and SEC regulations dictate that Agency orders receive priority over Principal orders. For example, if there are 700 shares to allocate to two orders that differ only in capacity, the Agency order must receive 400 shares and the Principal order 300.
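One way to realize the Agency-over-Principal rule in the example above is to deal 100-share lots round-robin, with Agency orders taking each round first. Both this interpretation and the names below are illustrative assumptions, not taken from the patent.

```python
def allocate_by_capacity(total_shares, agency_ids, principal_ids, lot=100):
    """Deal lot-sized blocks round-robin, Agency orders before Principal."""
    priority_order = list(agency_ids) + list(principal_ids)
    allocations = {order_id: 0 for order_id in priority_order}
    i = 0
    while total_shares >= lot:
        # Agency orders appear first in the rotation, so any odd final lot
        # in a round goes to an Agency order.
        allocations[priority_order[i % len(priority_order)]] += lot
        total_shares -= lot
        i += 1
    return allocations
```

With 700 shares and one order of each capacity, this yields 400 shares to the Agency order and 300 to the Principal order, matching the example above.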
The execution quantity can exceed the sum of the remaining quantities of the eligible orders. In such cases the excess allocation quantity can be stored in an execution object. This excess quantity can remain and be factored into the total leaves for the affected security until an execution is unwound with the actual allocation from the booth.
The handheld 114-116 can operate with limited functionality if it loses its connection to the server. In addition, the application can enter Offline mode if a critical data error occurs.
Offline mode can be implemented in all layers of an online order application. For example, a Communication Manager can be responsible for detecting a lack of heartbeats and notifying the Data Manager. The Data Manager can disconnect the Communication Manager and notify the User Interface that order functions, lookup functions, and messaging are unavailable. The User Interface can then notify the broker and take the necessary actions to disable features as appropriate. For example, disabling can include disabling certain windows and/or ignoring user input such as stylus taps.
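Heartbeat-loss detection of the kind described above might be sketched as follows. The class shape, interval, and missed-heartbeat threshold are illustrative assumptions; the rule that any server message counts as a heartbeat follows the patent's claims.

```python
import time

class CommManager:
    """Detect loss of connection to the server via missing heartbeats."""

    def __init__(self, interval_s=5.0, missed_limit=2):
        self.interval_s = interval_s      # expected heartbeat period (assumed)
        self.missed_limit = missed_limit  # heartbeats that may be missed (assumed)
        self.last_heartbeat = time.monotonic()

    def on_message(self):
        # Any communication from the server suffices as a heartbeat.
        self.last_heartbeat = time.monotonic()

    def connection_lost(self, now=None):
        """True once more than missed_limit heartbeat periods have elapsed."""
        now = time.monotonic() if now is None else now
        return now - self.last_heartbeat > self.interval_s * self.missed_limit
```

On detecting a lost connection, the Communication Manager would notify the Data Manager, which in turn disables the affected User Interface features.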
In one embodiment, a Send button included in an Orders dialog, such as on an Execution Entry page, will remain enabled even if the Orders system is unavailable. If a broker or other user enters and sends an execution while the Orders system is down, the execution can be persisted to the Execution History database and the user can receive a reminder that they must reconcile with the booth.
Failure Recovery can be implemented at Login time for a handheld computing device 114-116. An optional part of a Login-Reply message is a Server-Status element, which optionally contains the Recovery-Orders and Recovery-Executions elements. The Recovery-Orders element contains Order-Request messages for all of the orders that are currently assigned to the user logging in. These orders can be used to populate a Data Manager. This can be useful in the case where a broker logs in after a failure and his/her orders are still assigned to him/her; assuming that there have been no changes to orders in the online management system since the failure, the handheld 114-116 will be able to display essentially the same information as when the failure occurred.
A Recovery-Executions message can contain a history of executions that have been entered into the online management system by the broker during the day. The Data Manager can use this data to update and/or reconstruct the Execution History database.
The Data Manager can process the executions and bring the state of the Execution History database in line with what is currently in the online management system if that data is more recent. However, items that were already in the database but are not present in this message will not be removed. It is considered an error if the status of such an entry is Confirmed or Allocated.
The Data Manager can populate its lists with the contents of the Recovery-Orders element. After population is accomplished, the handheld 114-116 can perform an allocation estimation for any executions that have still not been allocated.
In the event there are un-reconciled executions stored on a handheld computing device 114-116, such as after a network failure, the order centric system can reserve the Execution History database on the device and refuse further use until the database is reconciled. The handheld computing device 114-116 can be programmed to record the User ID of a broker after a successful logon and compare this value to the User ID associated with an un-reconciled database stored in the handheld device 114-116. If the User ID of the current broker is different from the User ID associated with the un-reconciled database, the current broker will not be allowed access. This effectively prevents executions from being overwritten.
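The login guard just described reduces to a small check; this sketch is an illustrative interpretation, and the function and parameter names are assumptions rather than anything specified in the patent.

```python
def may_use_execution_db(current_user_id, unreconciled_owner_id):
    """Guard access to the handheld's Execution History database.

    unreconciled_owner_id is the User ID recorded at the last successful
    logon, or None when no un-reconciled executions remain on the device.
    """
    if unreconciled_owner_id is None:
        # No un-reconciled data: any broker may use the device.
        return True
    # Only the broker who owns the un-reconciled executions may proceed,
    # which prevents another broker from overwriting them.
    return current_user_id == unreconciled_owner_id
```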
A handheld computing device 114-116 can enter various states during normal use with an order centric tracking system. Table 1 illustrates specific examples of various handheld computing device 114-116 states and actions a programmable User Interface (UI) may associate with the specific states listed. In addition, Table 1 illustrates examples of programmable functions such as a DataManager function, a storage function, and a CommManager function that can be utilized with a handheld computing device in one embodiment of an order centric tracking system. Programmable actions associated with each state are listed to further exemplify features of this invention.
[Table 1 is garbled in the source; the recoverable fragments describe states such as login Successful (display main application information; restore previous storage from the HHS 113 to the Data Manager), login Failure (Comm Manager informs the User Interface about the login failure), and Pending Order (add the order to internal data structures; create an entry in the Executions database with default values depending on the context and send it to the HHS 113).]
The invention may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Apparatus of the invention may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps of the invention may be performed by a programmable processor executing a program of instructions to perform functions of the invention by operating on input data and generating output.
A number of embodiments of the present invention have been described. It will be understood that various modifications may be made without departing from the spirit and scope of the invention. All such modifications are deemed to be within the scope and spirit of the invention as defined by the appended claims.
routing the order through the computer server to the handheld computing device into which the floor broker ID to which the symbol has been allocated has been logged.
sending a record of an action relating to the order to the computer server and logging the record into a memory at the computer server.
3. The method of claim 2 additionally comprising the step of routing the record of the action through the computer server to the booth station associated with the booth to which the order symbol had been allocated.
4. The method of claim 1 wherein multiple booths are utilized and a unique set of symbols is allocated to each booth station.
5. The method of claim 4 wherein the set of symbols allocated to a floor broker ID is a unique subset of a set of symbols associated with a booth.
6. The method of claim 1 additionally comprising sending a heartbeat signal from the handheld device to the server within a predetermined time period.
7. The method of claim 6 wherein any communication between the handheld device and the server suffices as a heartbeat.
8. The method of claim 6 additionally comprising the step of logging a floor broker off of the server in response to the server not receiving a predetermined number of heartbeats.
9. The method of claim 8 wherein the predetermined number of heartbeats is two.
10. The method of claim 1 wherein an order comprises shares of stock described by the related symbol.
11. The method of claim 10 additionally comprising calculating an aggregate number of shares of stock for standing orders relating to a particular symbol.
12. The method of claim 11 additionally comprising calculating an aggregate number of shares of stock for standing orders relating to a particular stock wherein the orders meet a threshold price.
13. The method of claim 12 wherein the orders comprise buy orders.
14. The method of claim 12 wherein the orders comprise sell orders.
routing an order to a particular one of the at least one handheld computing device according to the security symbol associated with the order.
16. The computerized system of claim 15 additionally comprising a computerized order management system linked to the computer server.
17. The computerized system of claim 15 additionally comprising a computerized recording station linked to the computer server.
route the order through the server to the handheld computing device associated with the floor broker ID to which the symbol associated with the order has been allocated.
19. The computer readable medium of claim 18 wherein the software code additionally causes a handheld computing device onto which the floor broker ID is logged to calculate an allocation of shares of a security traded to orders that are eligible to participate in a trade, following the execution of an order.
20. The method of claim 11 or 12 additionally comprising the step of displaying the aggregate number of shares of stock for standing orders on the handheld computing device.
". . . Soon to be Followed by Coffee, Sugar & Cocoa", Wall Street Letter, Oct. 12, 1992, p. 7.
"AMEX Chairman Pitches Mart to Growing Biotechs as Handheld Hits Options Floor this Week", Wall Street Letter, vol. 25, No. 15, Apr. 19, 1993, p. 5.
"AMEX Expects to Pilot Handhelds by Year End", Wall Street Letter, Aug. 17, 1992, p. 8.
"AMEX Gets a Hand", InformationWeek, Mar. 22, 1993, p. 15.
"AMEX's Handheld Pilots Rescheduled for 2Q", Wall Street Letter, vol. 25, No. 8, Mar. 1, 1993, p. 7.
"Bug Fixes Will Push Audit Test Into April", Wall Street Letter, vol. 25, No. 8, Mar. 1, 1993, p. 7.
"Buttonwood to Buck Rogers -3: Avoiding Nasdaq-Like Snafu", Dow Jones News Service-Ticker, Aug. 5, 1994.
"Categories of Futures/Options Firms", Futures: The Magazine of Commodities & Options, vol. 21, No. 15, Jan. 1, 1993, pp. 7-27.
"Chicago Marts Will Pilot Handhelds in March", Wall Street Letter, vol. 25, No. 41, Oct. 11, 1993, p. 9.
"COMEX Introduces Handhelds for Pit Reporters . . . ", Wall Street Letter, Oct. 12, 1992, p. 7.
"Electronic Trading: Primex Faces Much Competition", Pensions & Investments, Jun. 14, 1999, p. 47 <http://library.northernlight.com/CK19990630010000495.html?cb=0/&sc=0>.
"Equities and Options Marts Lead the Way in Hand Helds", Wall Street Letter, vol. 26, No. 24, Jun. 20, 1994, p. 2.
"Financial Information Exchange Protocol (FIX)", Jun. 30, 1999, first published Mar. 31, 1999.
"Goldman Swoops on Electronic Trader", New York Times, Jul. 14, 1999 <http://www.smh.com.au:80/news/9907/14/text/business25.html>.
"Hand-Held Price Reporting System Launched", Wall Street & Technology, vol. 10, No. 4, p. 14.
"Hand-Held Terminal Pilots", AMEX Technology Update, Oct. 1993.
"ICV Adds to Comstock, Rolls Out New Packages", Dealing with Technology, vol. 3, No. 14, Jul. 5, 1991.
"Insider", Dealing with Technology, vol. 3, No. 23, Nov. 22, 1991.
"New York Stock Exchange Getting Electronic Facelift", Chicago Tribune, Aug. 22, 1994, at Business p. 4.
"NYSE Eyes CBOT/CME's Audit", Wall Street Letter, vol. 25, No. 25, Jun. 28, 1993, p. 2.
"NYSE Group Pitches Options Hand Held to AMEX, Eyes CBOE", Wall Street Letter, vol. 26, No. 24, Jun. 20, 1994, p. 6.
"NYSE Plans to Launch Electronic Trading System", Yahoo News, Nov. 5, 1999 2:41 PM ET <http:dailynews.yahoo.com/h/nm/19991105/wr/markets_nyse_1.html>.
"NYSE Requests SEC Approval for Wireless Technology", Wall Street Letter, vol. 27, No. 23, Jun. 12, 1995, p. 8.
"NYSE Stalled On Choosing Network Integration", Wall Street Letter, vol. 26, No. 40, Oct. 10, 1994, p. 8.
"NYSE to Implement Floor-Wide Re-Engineering Scheme", Wall Street Letter, vol. 26, No. 2, Jan. 17, 1994, p. 1.
"NYSE to Launch Electronic Trading System", Yahoo News, Nov. 8, 1999 12:36 AM ET <http:dailynews.yahoo.com/h/nm/19991108/wr/markets_nyse_5.html>.
"NYSE to Upgrade Floor Broker Support System This Year", Wall Street Letter, vol. 25, No. 2, Jan. 18, 1993, p. 10.
"Pacific Exchange Implements Proxim RangeLAN2 Wireless LAN to Extend Information and Electronic Trading Onto Floor", Proxim (accessed Jul. 19, 1999 10:23 AM ET) <http://www.proxim.com/solution/finance/pse.shtml>.
"Papyrus Technology: Institutional Trading for Individual Investors with Clarity", Futures: The Magazine of Commodities & Options, vol. 21, No. 8, Jul. 15, 1992, p. 63.
"Patents Awarded to Connecticut Residents", The Hartford Courant, Aug. 2, 1999, at D15.
"Personal Digital Assistants (PDAs)", NTIS, Aug. 1993 <http://library.northernlight.com/DG19990402060683778.html?cb=0&sc=0>.
"Pick a Card, Any Card?", Waters, Winter 1993, pp. 22-25.
"Software Still Missing", InformationWeek, Jun. 13, 1994, p. 62 <http://library.northernlight.com/C...17020010864.html?cb=0&dx=1004&sc=0>.
"The PDA Seeks its Fortune", InformationWeek, Jun. 13, 1994, p. 60 <http://library.northernlight.com/C...17020010823.html?cb=0&dx=1004&sc=0>.
"Tiny Terminals are Large Concern in '94 for Amex", Wall Street Letter, vol. 26, No. 2, Jan. 17, 1994, p. 9.
"Trading Floors/Electronic Trading Systems", City of Bits WWW Team (1995-1997) <http://mitpress.mit.edu/e-books/Ci...loors Electronic TradingSystems.html>.
"Two-Dollar Broker Becomes NYSE Member to Ensure Handheld Use", Wall Street Letter, vol. 26, No. 47, Nov. 28, 1994, p. 6.
"Virtual Managers", InformationWeek, Jun. 13, 1994, p. 42 <http://library.northernlight.com/C...17020010740.html?cb=0&dx=1004&sc=0>.
Abdelrahim, Yasser, et al., "The Securities Industry: A Comparison of the Technologies Used in Open Outcry vs. Electronic Trading", Sep. 24, 1998 <http://misdb.bpa.arizona.edu/~mis6...rts/Industry/Security/securiti.htm>.
Aken, B.R., Kenney, G.Q., IBM Technical Disclosure Bulletin, Dec. 1973, pp. 2330-2338.
Bernhard, Todd, "Wireless Messaging & Unix Expo '94", Rochester Sun Local User Group (last modified Mar. 27, 1997) <http://guinan.cc.rochester.edu/UCC/Groups/RocSLUG/wireless.html>.
Blades, J.A., IBM Technical Disclosure Bulletin, Dec. 1994, pp. 115-116.
Boydston, Barbara, "US Marts Ready Handheld Technology", Wall Street Letter, vol. 27, No. 25, Jun. 26, 1995, p. S1.
Broida, Rick, "Hewlett-Packard OmniGo 120", Home Office Computing, Dec. 1996, p. 122 <http://www.productreviewnet.com/abstracts/4/4084.htm>.
Broida, Rick, "Motorola Envoy 150", Home Office Computing, Dec. 1996, p. 123 <http://www.productreviewnet.com/abstracts/4/4085.htm>.
Bunker, Ted, "Computers & Automation", Investor's Business Daily, Jun. 8, 1993, p. 4.
Burke, Gibbons, "Computers and Trading Growing Together", Futures: The Magazine of Commodities & Options, vol. 21, No. 8, Jul. 15, 1992, pp. 8-9.
Burnett, Richard, "Laserlight Plugs Into Stock Market", Orlando Sentinel Tribune, Mar. 15, 1993, at Central Florida Business p 15.
Burns, Greg, "A Handheld Computer that's Combat-Hardened", Business Week, Apr. 18, 1994, pp. 94-96.
Church, Emily, "Global Network Ambitions", CBS MarketWatch, last updated 12:24 PM ET Jul. 13, 1999 <http://cbs.marketwatch.com:80/arch...rrent/egrp.htx?source=htx/http2_mw>.
Clarke, Roger, "Commodity Futures Trading at the CBOT", Xamax Consultancy Pty Ltd., Mar. 1994, <http://www.anu.edu.au/people/Roger.Clarke/EC/PaperOLTCBOT.html>.
Cover, Robin, "The XML Cover Pages; FIXML-A Markup Language for the FIX Application Message Layer", <http://www.oasis-open.org/cover/fixml.html>, last modified Mar. 31, 1999.
Crawford Jr., William B., "Exchanges Choose Team to Produce Trading Card", Chicago Tribune, Jun. 4, 1993, at Business p. 1.
Currie, W. Scott, "The ISO Link Layer and Above", LANs Explained: A Guide to Local Area Networks, Ellis Horwood Limited, England, 1989, Ch. 11.
Davis, Stephen, "1992 in Review; Technology", Wall Street Letter, vol. 25, No. 1, Jan. 11, 1993, p. 16.
Dommel, Hans-Peter, and Garcia-Luna-Aceves, J.J., "Floor Control for Networked Multimedia Applications", Position Paper for the SIGCOMM '95 Technical Symposium, 1995 <http://www.cse.ucsc.edu/research/c...peter.sigcomm95.mwws.pospaper.html>.
Eyerdam, Rick, "Plugged In: Real Time Stock Data Takes Off", South Florida Business Journal, vol. 13, No. 26, Feb. 19, 1993, Sec. 1, p. 3.
Frankhauser, Mahlon M., "New and Intrusive Regulation of the Futures Market May Affect the Securities Markets", Insights, vol. 8, No. 1, Jan. 1994, p. 21.
Groz, Marc M., "Revolution on Wall Street", PC Magazine Online, Jul. 1, 1998 <http://www.zdnet.com/pcmag/news/trends/t980701a.htm>.
Hoffman, Thomas, "Amex Seeks Wireless Trades", Computerworld, May 17, 1993, p. 6.
Hoffman, Thomas, "Handhelds Beat Paper on Stock Exchange Floor", Computerworld, Nov. 25, 1998 <http://cnmn.com/TECH/computing/9811/25/handhelds.idg/>.
Ivey, Barbara, "Tool Project Wireless Email" (last updated May 10, 1995) <http://www.cox.smu.edu/class/mis4350h/people/bivey/tool/tool.html>.
Jordan, Miriam, "Translation: "Profits': Gadgets Spell Success for Group Sense", The Asian Wall Street Journal, Mar. 3, 1993, p. 4.
Kalish, David E., "NYSE Takes $125M Computer Step to Combat Moves by Rival Nasdaq", The Star-Ledger Newark, NJ, Aug. 12, 1994.
Kalish, David E., Electronic Big Board NYSE Automating Trading, but Keeps Some Human Contact, Pittsburgh Post-Gazette, Aug. 20, 1994, at B7.
Lashinsky, Adam, "Pits Prayer: Palms Someday", Crain's Chicago Business, Aug. 9, 1993, p. 18.
Levinson, Alan, "Wall Street Warms to the Confused World of Wireless", Wall Street & Technology, vol. 11, No. 9, Jan. 1994, pp. 44-48.
Lu, Cary, "The PDA Comeback?", Macworld, Jul. 1996, p. 102 <http://www.productreviewnet.com/abstracts/4/4215.htm>.
Munford, Christopher, "Chicago's Exchanges Hold Electronic Hands", American Metal Market, vol. 101, No. 29, Feb. 12, 1993, p. 16.
Natarajan, K.S. "Efficient Group Acknowledgement Scheduling in Wireless Data Links", IBM Technical Disclosure Bulletin, Apr. 1994, pp. 533-534.
Oberholtzer, Gregory, "Software Reviews: Composer and Clarity", Futures: The Magazine of Commodities & Options, vol. 20, No. 4, Mar. 1991, p. 70.
Pettit, Dave, "Buttonwood to Buck Rogers -2: NYSE to Detail Plans Next Week", Dow Jones News Service-Ticker, Aug. 5, 1994.
Pettit, Dave, "Buttonwwod to Buck Rogers: NYSE Charting High Tech Course", Dow Jones News Service-Ticker, Aug. 5, 1994.
Pettit, Dave, "More Electronic Gadgetry Seen As NYSE Brokers Go Wireless", Dow Jones News Service-Ticker, Aug. 11, 1994.
Pettit, Dave, "Technology Outruns Even the Runners on Wall Street", Wall Street Journal, Aug. 19, 1994, at B6D.
Picot, Arnold, Bortenlaenger, Christine, and Roehrl, Heiner, "The Automation of Capital Markets", JCMC, vol. 1, No. 3 (as of Oct. 25, 1999) <http://www.ascusc.org/jcmc/vol1/issue3/picot.html>.
Riordan, Teresa, "Patents; an Appeals Court Says a Mathematical Formula Can be Patented, if it is a Moneymaker", N.Y. Times, Aug. 3, 1998, at D2.
Roser, F.J., "Customer Service Dispatch Aid", IBM Technical Disclosure Bulletin, Jun. 1979, pp. 198-199.
Schmerken, Ivy, "Off-Exchange Trading Chips Away at NYSE Volume", Wall Street & Technology, vol. 10, No. 4, p. 42.
Semilof, Margie, "AMEX to Test Wireless Devices", InternetWeek, Mar. 29, 1993, p. 19.
Sena, Michael L., "Computer-Aided Dispatching", Computer Graphic World, vol. 13, No. 5, May 1990, pp. 34-42.
Smith, Carrie R., "Tossing Out Tickets", Wall Street & Technology, vol. 11, No. 13, pp. 52-58.
Smith, Ian, "CS6751 Summer '94 Course Notes", Georgia Inst. of Technology, Summer 1994 <http://www.cc.gatech.edu/computing...751_94_summer/noteshome.html>.
Stein, Jon, "The Lennox System; Evaluation", Futures: The Magazine of Commodities & Options, vol. 20, No. 3, Feb. 1991, p. 64.
Thomas, Peter, "Personal Information Systems: Business Applications", Stanley Thornes, Cheltenham, UK, 1995.
Walter, Mark, "Electronic Delivery: Matching Technology to Requirements", Seybold Report on Desktop Publishing, vol. 7, No. 4, Dec. 1, 1992, pp. 3-25.
Wells, Rob, "Stock Exchanges Fine Tune Business to Retain and Attract Companies", Associated Press, Aug. 1, 1993, at Business News.
Williams, Stephen M., "High Technology Now Aiding Investors", The Hartford Courant, Mar. 7, 1993, at C1.
WLI Forum Web Site <http://www.wlif.com/>.
Our Center has published more than 170 research articles on the topic of autism in leading journals such as Science, New England Journal of Medicine, Proceedings of the National Academy of Sciences, and the Journal of the American Medical Association.
Our center publishes research findings that capitalize on a wide range of research techniques and span many topics in autism. For example, our research studies examine behavior using eye tracking technology, brain imaging using magnetic resonance imaging (MRI) and genetics.
Lombardo, M. V., Lai, M., & Baron-Cohen, S. (2019). Big data approaches to decomposing heterogeneity across the autism spectrum. Molecular Psychiatry. doi:10.1038/s41380-018-0321-0.
Autism is a diagnostic label based on behavior. While the diagnostic criteria attempt to maximize clinical consensus, they also mask a wide degree of heterogeneity between and within individuals at multiple levels of analysis. Understanding this multi-level heterogeneity is of high clinical and translational importance. Here we present organizing principles to frame research examining multi-level heterogeneity in autism. Theoretical concepts such as ‘spectrum’ or ‘autisms’ reflect non-mutually exclusive explanations regarding continuous/dimensional or categorical/qualitative variation between and within individuals. However, common practices of small sample size studies and case–control models are suboptimal for tackling heterogeneity. Big data are an important ingredient for furthering our understanding of heterogeneity in autism. In addition to being ‘feature-rich’, big data should be both ‘broad’ (i.e., large sample size) and ‘deep’ (i.e., multiple levels of data collected on the same individuals). These characteristics increase the likelihood that the study results are more generalizable and facilitate evaluation of the utility of different models of heterogeneity. A model’s utility can be measured by its ability to explain clinically or mechanistically important phenomena, and also by explaining how variability manifests across different levels of analysis. The directionality for explaining variability across levels can be bottom-up or top-down, and should include the importance of development for characterizing changes within individuals. While progress can be made with ‘supervised’ models built upon a priori or theoretically predicted distinctions or dimensions of importance, it will become increasingly important to complement such work with unsupervised data-driven discoveries that leverage unknown and multivariate distinctions within big data.
A better understanding of how to model heterogeneity between autistic people will facilitate progress towards precision medicine for symptoms that cause suffering, and person-centered support.
Courchesne, E., Pramparo, T., Gazestani, V. H., Lombardo, M. V., Pierce, K., & Lewis, N. E. (2018). The ASD Living Biology: From cell proliferation to clinical phenotype. Molecular Psychiatry, 24(1), 88-107. doi:10.1038/s41380-018-0056-y.
Autism spectrum disorder (ASD) has captured the attention of scientists, clinicians and the lay public because of its uncertain origins and striking and unexplained clinical heterogeneity. Here we review genetic, genomic, cellular, postmortem, animal model, and cell model evidence that shows ASD begins in the womb. This evidence leads to a new theory that ASD is a multistage, progressive disorder of brain development, spanning nearly all of prenatal life. ASD can begin as early as the 1st and 2nd trimester with disruption of cell proliferation and differentiation. It continues with disruption of neural migration, laminar disorganization, altered neuron maturation and neurite outgrowth, disruption of synaptogenesis and reduced neural network functioning. Among the most commonly reported high-confidence ASD (hcASD) genes, 94% express during prenatal life and affect these fetal processes in neocortex, amygdala, hippocampus, striatum and cerebellum. A majority of hcASD genes are pleiotropic, and affect proliferation/differentiation and/or synapse development. Proliferation and subsequent fetal stages can also be disrupted by maternal immune activation in the 1st trimester. Commonly implicated pathways, PI3K/AKT and RAS/ERK, are also pleiotropic and affect multiple fetal processes from proliferation through synapse and neural functional development. In different ASD individuals, variation in how and when these pleiotropic pathways are dysregulated will lead to different, even opposing effects, producing prenatal as well as later neural and clinical heterogeneity. Thus, the pathogenesis of ASD is not set at one point in time and does not reside in one process, but rather is a cascade of prenatal pathogenic processes in the vast majority of ASD toddlers.
Despite this new knowledge and theory that ASD biology begins in the womb, current research methods have not provided individualized information: What are the fetal processes and early-age molecular and cellular differences that underlie ASD in each individual child? Without such individualized knowledge, rapid advances in biological-based diagnostic, prognostic, and precision medicine treatments cannot occur. Missing, therefore, is what we call ASD Living Biology. This is a conceptual and paradigm shift towards a focus on the abnormal prenatal processes underlying ASD within each living individual. The concept emphasizes the specific need for foundational knowledge of a living child's development from abnormal prenatal beginnings to early clinical stages. The ASD Living Biology paradigm seeks this knowledge by linking genetic and in vitro prenatal molecular, cellular and neural measurements with in vivo post-natal molecular, neural and clinical presentation and progression in each ASD child. We review the first such study, which confirms the multistage fetal nature of ASD and provides the first in vitro fetal-stage explanation for in vivo early brain overgrowth. Within-child ASD Living Biology is a novel research concept we coin here that advocates the integration of in vitro prenatal and in vivo early post-natal information to generate individualized and group-level explanations, clinically useful prognoses, and precision medicine approaches that are truly beneficial for the individual infant and toddler with ASD.
Moore, A., Wozniak, M., Yousef, A., Barnes, C. C., Cha, D., Courchesne, E., & Pierce, K. (2018). The geometric preference subtype in ASD: identifying a consistent, early-emerging phenomenon through eye tracking. Molecular Autism, 9, 19. PMID: 29581878.
BACKGROUND: The wide range of ability and disability in ASD creates a need for tools that parse the phenotypic heterogeneity into meaningful subtypes. Using eye tracking, our past studies revealed that when presented with social and geometric images, a subset of ASD toddlers preferred viewing geometric images, and these toddlers also had greater symptom severity than ASD toddlers with greater social attention. This study tests whether this "GeoPref test" effect would generalize across different social stimuli.
METHODS: Two hundred and twenty-seven toddlers (76 ASD) watched a 90-s video, the Complex Social GeoPref test, of dynamic geometric images paired with social images of children interacting and moving. Proportion of visual fixation time and number of saccades per second to both images were calculated. To allow for cross-paradigm comparisons, a subset of 126 toddlers also participated in the original GeoPref test. Measures of cognitive and social functioning (MSEL, ADOS, VABS) were collected and related to eye tracking data. To examine utility as a diagnostic indicator to detect ASD toddlers, validation statistics (e.g., sensitivity, specificity, ROC, AUC) were calculated for the Complex Social GeoPref test alone and when combined with the original GeoPref test.
RESULTS: ASD toddlers spent a significantly greater amount of time viewing geometric images than any other diagnostic group. Fixation patterns from ASD toddlers who participated in both tests revealed a significant correlation, supporting the idea that these tests identify a phenotypically meaningful ASD subgroup. Combined use of both original and Complex Social GeoPref tests identified a subgroup of about 1 in 3 ASD toddlers from the "GeoPref" subtype (sensitivity 35%, specificity 94%, AUC 0.75). Replicating our previous studies, more time looking at geometric images was associated with significantly greater ADOS symptom severity.
CONCLUSIONS: Regardless of the complexity of the social images used (low in the original GeoPref test vs high in the new Complex Social GeoPref test), eye tracking of toddlers can accurately identify a specific ASD "GeoPref" subtype with elevated symptom severity. The GeoPref tests are predictive of ASD at the individual subject level and thus potentially useful for various clinical applications (e.g., early identification, prognosis, or development of subtype-specific treatments).
Bacon, E. C., Courchesne, E., Barnes, C. C., Cha, D., Pence, S., Schreibman, L., Stahmer, A. & Pierce, K. (2017). Rethinking the idea of late autism spectrum disorder onset. Development and Psychopathology, 1-17. PMID: 28803559.
A common theory of autism spectrum disorder (ASD) symptom onset includes toddlers who do not display symptoms until well after age 2, who are termed late-onset ASD cases. Objectives were to analyze differences in clinical phenotype between toddlers identified as ASD at initial evaluations (early diagnosed) versus those initially considered nonspectrum, then later identified as ASD (late diagnosed). Two hundred seventy-three toddlers recruited from the general population based on a failed developmental screening form or parent or physician concerns were followed longitudinally from 12 months and identified as early- and late-diagnosed cases of ASD, language delayed, or typically developing. Toddlers completed common standardized assessments and experimental eye-tracking and observational measures every 9-12 months until age 3. Longitudinal performance on standardized assessments and experimental tests from initial evaluations were compared. Delay in social communication skills was seen in both ASD groups at early-age initial assessment, including increased preference for nonsocial stimuli, increased stereotypic play, reduced exploration, and use of gestures. On standardized psychometric assessments, early-diagnosed toddlers showed more impairment initially while late-diagnosed toddlers showed a slowing in language acquisition. Similar social communication impairments were present at very early ages in both early-detected ASD and so-called late-onset ASD. Data indicate ASD is present whether detected or not by current methods, and development of more sensitive tools is needed.
Fingher, N., Dinstein, I., Ben-Shachar, M., Haar, S., Dale, A. M., Eyler, L., Pierce, K. & Courchesne, E. (2017). Toddlers later diagnosed with autism exhibit multiple structural abnormalities in temporal corpus callosum fibers. Cortex, 97, 291-305. PMID: 28202133.
Interhemispheric functional connectivity abnormalities are often reported in autism and it is thus not surprising that structural defects of the corpus callosum (CC) are consistently found using both traditional MRI and DTI techniques. Past DTI studies, however, have subdivided the CC into 2 or 3 segments without regard for where fibers may project to within the cortex, thus placing limitations on our ability to understand the nature, timing and neurobehavioral impact of early CC abnormalities in autism. Leveraging a unique cohort of 97 toddlers (68 autism; 29 typical), we utilized a novel technique that identified seven CC tracts according to their cortical projections. Results revealed that younger (<2.5 years old), but not older toddlers with autism exhibited abnormally low mean, radial, and axial diffusivity values in the CC tracts connecting the occipital lobes and the temporal lobes. Fractional anisotropy and the cross-sectional area of the temporal CC tract were significantly larger in young toddlers with autism. These findings indicate that water diffusion is more restricted and unidirectional in the temporal CC tract of young toddlers who develop autism. Such results may be explained by a potential overabundance of small-caliber axons generated by excessive prenatal neural proliferation as proposed by previous genetic, animal model, and postmortem studies of autism. Furthermore, early diffusion measures in the temporal CC tract of the young toddlers were correlated with outcome measures of autism severity at later ages. These findings regarding the potential nature, timing, and location of early CC abnormalities in autism add to accumulating evidence, which suggests that altered inter-hemispheric connectivity, particularly across the temporal lobes, is a hallmark of the disorder.
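The tract measures named in this abstract (fractional anisotropy and mean, radial, and axial diffusivity) are standard scalar summaries of the diffusion tensor's three eigenvalues. A minimal sketch of the textbook formulas follows; the example eigenvalues are illustrative values (in units of 10^-3 mm^2/s), not study data.

```python
# Standard DTI scalar metrics from the diffusion tensor eigenvalues
# (sorted l1 >= l2 >= l3). These are the conventional definitions, not
# code from the study; the example eigenvalues below are made up.
import math

def dti_metrics(l1, l2, l3):
    md = (l1 + l2 + l3) / 3.0    # mean diffusivity: average over all directions
    ad = l1                      # axial diffusivity: along the principal fiber axis
    rd = (l2 + l3) / 2.0         # radial diffusivity: perpendicular to the fiber axis
    # fractional anisotropy: 0 = isotropic diffusion, 1 = fully unidirectional
    fa = math.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                   / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return fa, md, ad, rd

fa, md, ad, rd = dti_metrics(1.7, 0.4, 0.3)
print(round(fa, 3), round(md, 3), ad, rd)
```

"More restricted and unidirectional" diffusion, as described above, corresponds to lower mean/radial/axial diffusivity together with higher fractional anisotropy in these formulas.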
Lombardo, M. V., Courchesne, E., Lewis, N. E., & Pramparo, T. (2017). Hierarchical cortical transcriptome disorganization in autism. Molecular Autism, 8(1), 29. PMID: 28649314. PDF.
BACKGROUND: Autism spectrum disorders (ASD) are etiologically heterogeneous and complex. Functional genomics work has begun to identify a diverse array of dysregulated transcriptomic programs (e.g., synaptic, immune, cell cycle, DNA damage, WNT signaling, cortical patterning and differentiation) potentially involved in ASD brain abnormalities during childhood and adulthood. However, it remains unclear whether such diverse dysregulated pathways are independent of each other or instead reflect coordinated hierarchical systems-level pathology.
METHODS: Two ASD cortical transcriptome datasets were re-analyzed using consensus weighted gene co-expression network analysis (WGCNA) to identify common co-expression modules across datasets. Linear mixed-effect models and Bayesian replication statistics were used to identify replicable differentially expressed modules. Eigengene network analysis was then utilized to identify between-group differences in how co-expression modules interact and cluster into hierarchical meta-modular organization. Protein-protein interaction analyses were also used to determine whether dysregulated co-expression modules show enhanced interactions.
RESULTS: We find replicable evidence for 10 gene co-expression modules that are differentially expressed in ASD cortex. Rather than being independent non-interacting sources of pathology, these dysregulated co-expression modules work in synergy and physically interact at the protein level. These systems-level transcriptional signals are characterized by downregulation of synaptic processes coordinated with upregulation of immune/inflammation, response to other organism, catabolism, viral processes, translation, protein targeting and localization, cell proliferation, and vasculature development. Hierarchical organization of meta-modules (clusters of highly correlated modules) is also highly affected in ASD.
CONCLUSIONS: These findings highlight that dysregulation of the ASD cortical transcriptome is characterized by the dysregulation of multiple coordinated transcriptional programs producing synergistic systems-level effects that cannot be fully appreciated by studying the individual component biological processes in isolation.
Lombardo, M. V., Moon, H. M., Su, J., Palmer, T. D., Courchesne, E., & Pramparo, T. (2017). Maternal immune activation dysregulation of the fetal brain transcriptome and relevance to the pathophysiology of autism spectrum disorder. Molecular Psychiatry. PMID: 28322282.
Maternal immune activation (MIA) via infection during pregnancy is known to increase risk for autism spectrum disorder (ASD). However, it is unclear how MIA disrupts fetal brain gene expression in ways that may explain this increased risk. Here we examine how MIA dysregulates rat fetal brain gene expression (at a time point analogous to the end of the first trimester of human gestation) in ways relevant to ASD-associated pathophysiology. MIA downregulates expression of ASD-associated genes, with the largest enrichments in genes known to harbor rare highly penetrant mutations. MIA also downregulates expression of many genes also known to be persistently downregulated in the ASD cortex later in life and which are canonically known for roles in affecting prenatally late developmental processes at the synapse. Transcriptional and translational programs that are downstream targets of highly ASD-penetrant FMR1 and CHD8 genes are also heavily affected by MIA. MIA strongly upregulates expression of a large number of genes involved in translation initiation, cell cycle, DNA damage and proteolysis processes that affect multiple key neural developmental functions. Upregulation of translation initiation is common to and preserved in gene network structure with the ASD cortical transcriptome throughout life and has downstream impact on cell cycle processes. The cap-dependent translation initiation gene, EIF4E, is one of the most MIA-dysregulated of all ASD-associated genes and targeted network analyses demonstrate prominent MIA-induced transcriptional dysregulation of mTOR and EIF4E-dependent signaling. This dysregulation of translation initiation via alteration of the Tsc2–mTor–Eif4e axis was further validated across MIA rodent models. MIA may confer increased risk for ASD by dysregulating key aspects of fetal brain gene expression that are highly relevant to pathophysiology affecting ASD.
Pierce K, Courchesne E, Bacon E. To Screen or Not to Screen for ASD Universally is Not the Question: Why the Task Force Got it Wrong. The Journal of Pediatrics. 2016;176:182-194. doi:10.1016/j.jpeds.2016.06.004. PDF.
There is widespread agreement across the American Academy of Pediatrics (AAP), expert panels, parents, and autism advocacy organizations, as well as the US Department of Health and Human Services Interagency Autism Coordinating Committee that early identification and intervention for toddlers with autism spectrum disorder (ASD) is a high public health priority and that universal early screening in pediatric populations is an essential tool for early ASD risk detection. The AAP guidelines to implement universal early screening for autism as standard of care are one of the most positive and successful public health policies ever created for children affected by autism. Indeed, such a policy has led to the regular detection and treatment of autism by the second birthday in cities with systematic screening programs. Early screening by pediatricians is becoming commonplace in the US and is objectively successful. Studies show that using standardized screening tools is the most accurate approach to early at-risk autism detection, even compared with pediatrician judgment and surveillance. Private and public resources and research have together resulted in the development of early screening approaches that, when implemented, can detect ASD 2-3 years sooner than the national average of 4 years of age. Implementing ASD screening as standard-of-care is particularly important for children from low socioeconomic status and minority backgrounds who are consistently overlooked and underdetected and, as a result, have a later age of first diagnosis and delayed access to services relative to other children. Early detection importantly allows for intervention to begin earlier, which is considered essential to achieving the best outcomes.
Research suggests that individuals with positive outcomes, including gains in IQ, adaptive skills, and reduction in ASD core symptoms, as well as those with optimal outcomes who no longer meet criteria for ASD over time, are more likely to have been identified and treated before 3 years of age. In the midst of this major accomplishment and advance over the past, when children with ASD commonly went undetected and untreated for years across childhood, the US Preventive Services Task Force (USPSTF) released its recommendations about early universal screening for ASD. Earlier this year, the USPSTF (referred to here as “Task Force”) released a report on ASD screening that stated: “Current evidence is insufficient to assess the balance of benefits and harms of screening for ASD in young children for whom no concerns of ASD have been raised by their parents or a clinician.” In essence, the Task Force failed to recommend universal screening for ASD in the general pediatric population because it claims there is insufficient evidence of its benefits (but later admits its “harms” are minimal).
Solso, S., Xu, R., Proudfoot, J., Hagler, D. J., Campbell, K., Venkatraman, V., ... & Eyler, L. (2016). Diffusion tensor imaging provides evidence of possible axonal overconnectivity in frontal lobes in autism spectrum disorder toddlers. Biological Psychiatry, 79(8), 676-684. PMID: 26300272. PDF.
Theories of brain abnormality in autism spectrum disorder (ASD) have focused on underconnectivity as an explanation for social, language, and behavioral deficits but are based mainly on studies of older autistic children and adults.
In 94 ASD and typical toddlers ages 1 to 4 years, we examined the microstructure (indexed by fractional anisotropy) and volume of axon pathways using in vivo diffusion tensor imaging of fronto-frontal, fronto-temporal, fronto-striatal, and fronto-amygdala axon pathways, as well as posterior contrast tracts. Differences between ASD and typical toddlers in the nature of the relationship of age to these measures were tested.
Frontal tracts in ASD toddlers displayed abnormal age-related changes, with greater-than-normal fractional anisotropy and volume at younger ages but an overall slower-than-typical apparent rate of continued development across the span of years. Posterior cortical contrast tracts had few significant abnormalities.
Frontal fiber tracts displayed deviant early development and age-related changes that could underlie impaired brain functioning and impact social and communication behaviors in ASD.
Pierce K, Marinero S, Hazin R, McKenna B, Barnes C, Malige A. Eye Tracking Reveals Abnormal Visual Preference for Geometric Images as an Early Biomarker of an Autism Spectrum Disorder Subtype Associated with Increased Symptom Severity. Biological Psychiatry. 2015 Apr 11. doi: 10.1016/j.biopsych.2015.03.032. PDF.
A study published on April 11, 2015 in Biological Psychiatry by Karen Pierce and colleagues included 444 subjects sampled from the general population, making it the largest eye-tracking study of ASD to date. The study explores whether eye gaze patterns are a valid biomarker of ASD and concludes that eye tracking can indeed identify a meaningful subgroup on the spectrum.
Data also show that an individual toddler’s eye-tracking profile can have prognostic value: toddlers with heightened visual attention toward dynamic geometric images tended to be more severely symptomatic. This has implications for how we approach the diagnosis of ASD and potentially what treatments we prescribe. Specificity and positive predictive value of the “Geo Pref Test” were notably high (98% and 90%, respectively), making it a very strong biomarker finding.
The study does more than identify a robust biomarker of ASD; for the first time, it opens the possibility that the early identification of ASD can move beyond purely clinical observation techniques and into more objective, biologically based approaches.
Prediction of Autism by Translation and Immune/Inflammation Coexpressed Genes in Toddlers From Pediatric Community Practices.
Pramparo T, Pierce K, Lombardo MV, et al. Prediction of Autism by Translation and Immune/Inflammation Coexpressed Genes in Toddlers From Pediatric Community Practices. JAMA Psychiatry. Published online March 04, 2015. doi:10.1001/jamapsychiatry.2014.300 PDF.
The identification of genomic signatures that aid early identification of individuals at risk for autism spectrum disorder (ASD) in the toddler period remains a major challenge because of the genetic and phenotypic heterogeneity of the disorder. Generally, ASD is not diagnosed before the fourth to fifth birthday.
To apply a functional genomic approach to identify a biologically relevant signature with promising performance in the diagnostic classification of infants and toddlers with ASD.
Proof-of-principle study of leukocyte RNA expression levels from 2 independent cohorts of children aged 1 to 4 years (142 discovery participants and 73 replication participants) using Illumina microarrays. Coexpression analysis of differentially expressed genes between discovery-cohort ASD and control toddlers was used to define gene modules and eigengenes used in a diagnostic classification analysis. Independent validation of the classifier performance was tested on the replication cohort. Pathway enrichment and protein-protein interaction analyses were used to confirm biological relevance of the functional networks in the classifier. Participant recruitment occurred in general pediatric clinics and community settings. Male infants and toddlers (age range, 1-4 years) were enrolled in the study. Recruitment criteria followed the 1-Year Well-Baby Check-Up Approach. Diagnostic judgment followed DSM-IV-TR and Autism Diagnostic Observation Schedule criteria for autism. Participants with ASD were compared with control groups composed of typically developing toddlers as well as toddlers with global developmental or language delay.
Logistic regression and receiver operating characteristic curve analysis were used in a classification test to establish the accuracy, specificity, and sensitivity of the module-based classifier.
Our signature of differentially coexpressed genes was enriched in translation and immune/inflammation functions and produced 83% accuracy. In an independent test with approximately half of the sample and a different microarray, the diagnostic classification of ASD vs control samples was 75% accurate. Consistent with its ASD specificity, our signature did not distinguish toddlers with global developmental or language delay from typically developing toddlers (62% accuracy).
This proof-of-principle study demonstrated that genomic biomarkers with very good sensitivity and specificity for boys with ASD in general pediatric settings can be identified. It also showed that a blood-based clinical test for at-risk male infants and toddlers could be refined and routinely implemented in pediatric diagnostic settings.
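The classification step described above (a module-based classifier evaluated with logistic regression and ROC analysis) can be sketched in outline. This is a minimal illustration, not the authors' pipeline: the per-subject "eigengene" feature values below are synthetic, and the plain stochastic-gradient fitting routine, learning rate, and epoch count are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch of a logistic-regression diagnostic classifier over
# module eigengene features. Data are synthetic; this is not study code.
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Fit weights (intercept first) by stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    """Classify as ASD (1) when the predicted probability reaches 0.5."""
    return int(sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) >= 0.5)

random.seed(0)
# Synthetic eigengene features for two modules; ASD samples shifted upward:
X = [[random.gauss(1.0, 0.5), random.gauss(1.0, 0.5)] for _ in range(40)] + \
    [[random.gauss(-1.0, 0.5), random.gauss(-1.0, 0.5)] for _ in range(40)]
y = [1] * 40 + [0] * 40

w = fit_logistic(X, y)
accuracy = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(accuracy)
```

In the study's framing, the features would be eigengenes of coexpression modules enriched in translation and immune/inflammation functions, and performance would be reported on a held-out replication cohort rather than on the training data as in this toy example.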
Different Functional Neural Substrates for Good and Poor Language Outcome in Autism.
Lombardo, M., Pierce, K., Eyler, L., Barnes, C, Ahrens-Barbeau, C., Solso, S., Campbell, K., & Courchesne, E. (2015). Different functional neural substrates for good and poor language outcome in autism. Neuron, 86(2), 567-577. PDF.
Autism (ASD) is vastly heterogeneous, particularly in early language development. While ASD language trajectories in the first years of life are highly unstable, by early childhood these trajectories stabilize and are predictive of longer-term outcome. Early neural substrates that predict/precede such outcomes are largely unknown, but could have considerable translational and clinical impact. Pre-diagnosis fMRI response to speech in ASD toddlers with relatively good language outcome was highly similar to non-ASD comparison groups and robustly recruited language-sensitive superior temporal cortices. In contrast, language-sensitive superior temporal cortices were hypoactive in ASD toddlers with poor language outcome. Brain-behavioral relationships were atypically reversed in ASD, and a multimodal combination of pre-diagnostic clinical behavioral measures and speech-related fMRI response showed the most promise as an ASD prognosis classifier. Thus, before ASD diagnoses and outcome become clinically clear, distinct functional neuroimaging phenotypes are already present that can shed insight on an ASD toddler’s later outcome.
Measuring Outcomes in an Early Intervention Program for Young Children with Autism Spectrum Disorder: Use of a Curriculum-Based Assessment.
Bacon, E. C., Dufek, S., Schreibman, L., Stahmer, A., Pierce, K. and Courchesne, E. (2014). Measuring Outcomes in an Early Intervention Program for Young Children with Autism Spectrum Disorder: Use of a Curriculum-Based Assessment. Autism Research and Treatment. PMID: 24711926, PMCID: PMC3966353. PDF.
Measuring progress of children with autism spectrum disorder (ASD) during intervention programs is a challenge faced by researchers and clinicians. Typically, standardized assessments of child development are used within research settings to measure the effects of early intervention programs. However, the use of standardized assessments is not without limitations, including lack of sensitivity of some assessments to measure small or slow progress, testing constraints that may affect the child's performance, and the lack of information provided by the assessments that can be used to guide treatment planning. The utility of a curriculum-based assessment is discussed in comparison to the use of standardized assessments to measure child functioning and progress throughout an early intervention program for toddlers with risk for ASD. Scores derived from the curriculum-based assessment were positively correlated with standardized assessments, captured progress masked by standardized assessments, and early scores were predictive of later outcomes. These results support the use of a curriculum-based assessment as an additional and appropriate method for measuring child progress in an early intervention program. Further benefits of the use of curriculum-based measures for use within community settings are discussed.
Rich Stoner, Maggie L. Chow, Maureen P. Boyle, Susan M. Sunkin, Peter R. Mouton, Subhojit Roy, Anthony Wynshaw-Boris, Sophia A. Colamarino, Ed S. Lein, and Eric Courchesne. Patches of Disorganization in the Neocortex of Children with Autism. N Engl J Med. 2014 Mar 27; 370:1209-19. DOI: 10.1056/NEJMoa1307491. PDF.
Autism involves early brain overgrowth and dysfunction, which is most strongly evident in the prefrontal cortex. As assessed on pathological analysis, an excess of neurons in the prefrontal cortex among children with autism signals a disturbance in prenatal development and may be concomitant with abnormal cell type and laminar development.
To systematically examine neocortical architecture during the early years after the onset of autism, we used RNA in situ hybridization with a panel of layer- and cell-type–specific molecular markers to phenotype cortical microstructure. We assayed markers for neurons and glia, along with genes that have been implicated in the risk of autism, in prefrontal, temporal, and occipital neocortical tissue from postmortem samples obtained from children with autism and unaffected children between the ages of 2 and 15 years.
We observed focal patches of abnormal laminar cytoarchitecture and cortical disorganization of neurons, but not glia, in prefrontal and temporal cortical tissue from 10 of 11 children with autism and from 1 of 11 unaffected children. We observed heterogeneity between cases with respect to cell types that were most abnormal in the patches and the layers that were most affected by the pathological features. No cortical layer was uniformly spared, with the clearest signs of abnormal expression in layers 4 and 5. Three-dimensional reconstruction of layer markers confirmed the focal geometry and size of patches.
Glatt SJ, Tsuang MT, Winn M, Chandler SD, Collins M, Lopez L, Weinfeld M, Carter C, Schork N, Pierce K, Courchesne E. Blood-based gene expression signatures of infants and toddlers with autism. J Am Acad Child Adolesc Psychiatry. 2012 Sep;51(9):934-44.e2. doi: 10.1016/j.jaac.2012.07.007. Epub 2012 Aug 2. PDF.
Morgan JT, Chana G, Abramson I, Semendeferi K, Courchesne E, Everall IP. Abnormal microglial-neuronal spatial organization in the dorsolateral prefrontal cortex in autism. Brain Res. 2012 May 25;1456:72-81. doi: 10.1016/j.brainres.2012.03.036. Epub 2012 Mar 23.
Chow ML, Pramparo T, Winn ME, Barnes CC, Li HR, Weiss L, Fan JB, Murray S, April C, Belinson H, Fu XD, Wynshaw-Boris A, Schork NJ, Courchesne E. Age-dependent brain gene expression and copy number anomalies in autism suggest distinct pathological processes at young versus mature ages. PLoS Genet. 2012;8(3):e1002592. doi: 10.1371/journal.pgen.1002592. Epub 2012 Mar 22.
Chow ML, Winn ME, Li HR, April C, Wynshaw-Boris A, Fan JB, Fu XD, Courchesne E, Schork NJ. Preprocessing and Quality Control Strategies for Illumina DASL Assay-Based Brain Gene Expression Studies with Semi-Degraded Samples. Front Genet. 2012;3:11. doi: 10.3389/fgene.2012.00011.
Eyler LT, Pierce K, Courchesne E. A failure of left temporal cortex to specialize for language is an early emerging and fundamental property of autism. Brain. 2012 Mar;135(Pt 3):949-60. doi: 10.1093/brain/awr364. Epub 2012 Feb 20.
Courchesne E, Mouton PR, Calhoun ME, Semendeferi K, Ahrens-Barbeau C, Hallet MJ, Barnes CC, Pierce K. Neuron number and size in prefrontal cortex of children with autism. JAMA. 2011 Nov 9;306(18):2001-10. PubMed PMID: 22068992.
Chow ML, Li HR, Winn ME, April C, Barnes CC, Wynshaw-Boris A, Fan JB, Fu XD, Courchesne E, Schork NJ. Genome-wide expression assay comparison across frozen and fixed postmortem brain tissue samples. BMC Genomics. 2011 Sep 10;12:449. PubMed PMID: 21906392; PubMed Central PMCID: PMC3179967.
Pierce, K., Carter, C., Weinfeld, M., Desmond, J., Hazin, R., Bjork, R., Gallagher, N. Detecting, studying, and treating autism early: the one-year well-baby check-up approach. Journal of Pediatrics, 158:5, 2011.
Pierce, K., Conant, D., Hazin, R., Desmond, J., & Stoner, R. A preference for geometric patterns early in life as a risk factor for autism. Archives of General Psychiatry. 68(1):101-9, 2011.
Morgan, J.T., Chana, G., Pardo, C.A., Achim, C., Semendeferi, K., Buckwalter, J., Courchesne, E., Everall, I.P. Microglial Activation and Increased Microglial Density Observed in the Dorsolateral Prefrontal Cortex in Autism. Biological Psychiatry, 15;68(4):368-376, 2010.
Schumann, C.M., Bloss, C.S., Barnes, C.C., Wideman, G.M., Carper, R.A., Akshoomoff, N., Pierce, K., Hagler, D., Schork, N., Lord, C., Courchesne, E. Longitudinal magnetic resonance imaging study of cortical development through early childhood in autism. Journal of Neuroscience, 30(12):4419-27, 2010.
Luyster, R., Gotham, K., Guthrie, W., Coffing, M., Petrak, R., DiLavore, P., Pierce, K., Bishop, S., Esler, A., Hus, V., Richler, J., Risi, S., and Lord, C. Autism Diagnostic Observation Schedule Toddler Module: A new module of a standardized diagnostic measure for autism spectrum disorders. Journal Autism Developmental Disorders. 39(9)1305-1320, 2009.
Pierce, K., Glatt, S., Liptak, G.S. & McIntyre. The Power and Promise of Identifying Autism Early: Insights From the Search for Clinical and Biological Markers. Annals of Clinical Psychiatry, 21(3):132-47, 2009.
Pierce, K. & Redcay, E. Fusiform activity in children with an ASD is a matter of “who.” Biological Psychiatry, 64(7):552-60, 2008.
Redcay, E., Haist, F., Courchesne, E. Functional neuroimaging of speech perception during pivotal period in language acquisition. Developmental Science, 11(2):237-52, 2008.
Kim, S., Brune, C., Kistner, E., Christian, S., Courchesne, E., Cox, N., Cook, E. Transmission disequilibrium testing of the chromosome 15q11-q13 region in autism. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B(7):1116-25, 2008.
Redcay E, Courchesne E. Deviant functional magnetic resonance imaging patterns of brain activity to speech in 2-3 year old children with an autism spectrum disorder. Biological Psychiatry, 64(7):589-98, 2008.
Gaffrey, M.S., Kleinhans, N.M., Haist, F., Akshoomoff, N., Campbell, A., Courchesne, E., Muller, R.A. Atypical participation of visual cortex during word processing in autism: An fMRI study of semantic decision. Neuropsychologia, 45(8):1672-84, 2007. Epub 2007 Jan 16.
Belmonte, M.K., Mazziotta, J.C., Minshew, N.J., Evans, A.C., Courchesne, E., Dager, S.R., Bookheimer, S.Y., Aylward, E.H., Amaral, D.G., Cantor, R.M., Chugani, D.C., Dale, A.M., Davatzikos, C., Gerig, G., Herbert, M.R., Lainhart, J.E., Murphy, D.G., Piven, J., Reiss, A.L., Schultz, R.T., Zeffiro, T.A., Levi-Pearl, S., Lajonchere, C., Colamarino, S.A. Offering to Share: How to Put Heads Together in Autism Neuroimaging. Journal of Autism Developmental Disorders, 38(1):2-13, 2008.
Bloss, C.S., Courchesne, E. MRI Neuroanatomy in Young Girls With Autism: A Preliminary Study. Journal of the American Academy of Child and Adolescent Psychiatry, 46(4):515-523, 2007.
Redcay, E., Kennedy, D., Courchesne, E. fMRI during natural sleep as a method to study brain function during early childhood. Neuroimage, 38(4):696-707, 2007.
Brune, C.W., Kim, S.J., Hanna, G.L., Courchesne, E., Lord, C., Leventhal BL, et al. Family-Based Association Testing of OCD-associated SNPs of SLC1A1 in an autism sample. Autism Research 1(2):108-13, 2007.
Courchesne, E., Pierce, K., Schumann, C.M., Redcay, E., Buckwalter, J.A., Kennedy, D.P. and Morgan, J. Mapping early brain development in autism. Neuron, 56(2):399-413, 2007.
Kennedy, D.P., Redcay, E., Courchesne, E. Failing to deactivate: resting functional abnormalities in autism. Proceedings of the National Academy of Sciences U S A, 103(21):8275-80, 2006. Epub 2006 May 15.
DiCicco-Bloom, E., Lord, C., Zwaigenbaum, L., Courchesne, E., Dager, S.R., Schmitz, C., Schultz, R.T., Crawley, J., Young, L.J. The developmental neurobiology of autism spectrum disorder. Journal of Neuroscience, 26(26):6897-906, 2006. Review.
Buxhoeveden, D.P., Semendeferi, K., Buckwalter, J., Schenker, N., Switzer, R., Courchesne, E. Reduced minicolumns in the frontal cortex of patients with autism. Neuropathology and Applied Neurobiology, 32(5):483-91, 2006.
Akshoomoff, N., Farid, N., Courchesne, E., Haas, R. Abnormalities on the Neurological Examination and EEG in Young Children with Pervasive Developmental Disorders. Journal of Autism Developmental Disorders, 37(5):887-93, 2006.
Carper, R.A. and Courchesne, E. Localized enlargement of the frontal cortex in early autism. Biological Psychiatry, 57:126-133, 2005.
Haist, F., Adamo, M., Westerfield, M., Courchesne, E., Townsend, J. The functional neuroanatomy of spatial attention in autism spectrum disorder. Developmental Neuropsychology, 27(3):425-58, 2005.
Courchesne, E. Pierce, K. Why the frontal cortex in autism might be talking only to itself: local over-connectivity but long-distance disconnection. Current Opinion in Neurobiology, 15(2):225-30, 2005.
Teder-Salejarvi, W.A., Pierce, K.L., Courchesne, E., Hillyard, S.A. Auditory spatial localization and attention deficits in autistic adults. Brain Research and Cognitive Brain Research, 23(2-3):221-34, 2005.
Courchesne, E., Pierce, K. Brain overgrowth in autism during a critical time in development: implications for frontal pyramidal neuron and interneuron development and connectivity. International Journal of Developmental Neuroscience, 23(2-3):153-70, 2005.
Redcay, E., and Courchesne, E. When is the brain enlarged in autism? A meta-analysis of all brain size reports. Biological Psychiatry, 58(1):1-9, 2005.
Pierce, K., Haist, F., Sedaghat, F., & Courchesne, E. The brain response to personally familiar faces in autism: findings of fusiform activity and beyond. Brain, 127(Pt 12):2703-16, 2004.
Akshoomoff, N., Lord, C., Lincoln, A.J., Courchesne, R.Y., Carper, R.A., Townsend, J., Courchesne, E. Outcome classification of preschool children with autism spectrum disorders using MRI brain measures. Journal of the American Academy of Child Adolescent Psychiatry, 43(3):349-357, 2004.
Allen, G., Muller, R-A., Courchesne, E. Cerebellar function in autism: fMRI activation during a simple motor task. Biological Psychiatry, 56:269-78, 2004.
Makeig, S., Delorme, A, Westerfield, M., Jung, T.-P., Townsend, J., Courchesne, E., Sejnowski, T.J. Electroencephalographic brain dynamics following manually responded visual targets. Public Library of Science Biology. 2(6):E176. Epub June 15, 2004.
Courchesne, E., Redcay, E., Kennedy, D.P. The autistic brain: birth through adulthood. Current Opinion in Neurology, 17:489-96, 2004.
Belmonte, M.K., Cook, Jr., E.H., Anderson, G.M., Rubenstein, J.L.R., Greenough, W.T., Beckel-Mitchner, A., Courchesne, E., Boulanger, L.M., Powell, S.B., Levitt, P.R., Perry, E.K., Jiang, Y.H., DeLorey, T.M., Tierney, E. Autism as a disorder of neural information processing: directions for research and targets for therapy. Molecular Psychiatry, online publication 23 March 2004, 1-18.
Courchesne, E. Brain development in autism: Early overgrowth followed by premature arrest of growth. Mental Retardation and Developmental Disabilities Research Reviews, 10:106-11, 2004.
Muller, R-A., Kleinhans, N., Kemmotsu, N., Pierce, K., Courchesne, E. Abnormal variability and distribution of functional maps in autism: an fMRI study of visuomotor learning. American Journal of Psychiatry, 160(10):1847-62, 2003.
Allen, G. and Courchesne, E. Differential effects of developmental cerebellar abnormality on cognitive and motor functions in the cerebellum: An fMRI study of autism. American Journal of Psychiatry, 160(2):262-273, 2003.
Courchesne, E., Bartholomeusz, H.H., Karns, C.M., Townsend, J. MRI evidence of increased brain size in young children but not adults with autism. Biological Psychiatry, Accepted Pending Revision 4/4/03.
Courchesne, E., Carper, R., Akshoomoff, N. Evidence of brain overgrowth in the first year of life in autism. Journal of the American Medical Association, 290(3):337-344, 2003.
Muller, R.A., Kleinhans, N., Courchesne, E. Linguistic theory and neuroimaging evidence: an fMRI study of Broca’s area in lexical semantics. Neuropsychologia. 41(9):1199-207, 2003.
Makeig, S, Westerfield, M, Jung, T-P, Enghoff, S, Townsend, J, Courchesne, E, Sejnowski, TJ. Dynamic brain sources of visual evoked responses. Science, 295(5555), 690-694, 2002.
Carper, R.A., Moses, P., Tigue, Z.D., Courchesne, E. Cerebral lobes in autism: Early hyperplasia and abnormal age effects. NeuroImage, 16:1038-51, 2002.
Akshoomoff, N., Pierce, K., Courchesne, E. The neurobiological basis of autism from a developmental perspective. Development and Psychopathology, 14:613-634, 2002.
Bartholomeusz, H.H., Courchesne, E., and Karns, C. Relationship between head circumference and brain volume in healthy normal children and adults. Neuropediatrics, 33:239-41, 2002.
Kim, S.J., Herzing, L.B., Veenstra-VanderWeele, J., Lord, C., Courchesne, R., Leventhal, B.L., Ledbetter, D.H., Courchesne, E., Cook, Jr., E.H. Mutation screening and transmission disequilibrium study of ATP10C in autism. American Journal of Medical Genetics, 114(2):137-43, 2002.
Muller, R.A., Kleinhans, N., Pierce, K., Kemmotsu, N., Courchesne, E. Functional MRI of motor sequence acquisition: effects of learning stage and performance. Cognitive Brain Research, 14(2):277-93, 2002.
Jones, W., Hesselink, J., Courchesne, E., Duncan, T., Matsuda, K., Bellugi, U. Cerebellar abnormalities in infants and toddlers with Williams syndrome. Developmental Medicine and Child Neurology, 44(10):688-94, 2002.
Courchesne, E. Abnormal early brain development in autism. Molecular Psychiatry, 7:S21-S23, 2002.
Kim, S-J., Young, L.J., Gonen, D., Veenstra-Vanderweele, J., Courchesne, R.Y., Courchesne, E., Lord, C., Leventhal, B.L., Cook, Jr., E.H., Insel, T.R. Transmission disequilibrium testing of arginine vasopressin receptor 1A (AVPR1A) polymorphisms in Autism. Molecular Psychiatry, 7:502-507, 2002.
Veenstra-VanderWeele, J., Kim, S-J, Lord, C., Courchesne, R., Akshoomoff, N., Leventhal, B.L., Courchesne, E., and Cook, E.H. Transmission disequilibrium studies of the serotonin 5-HT2A receptor gene (HTR2A) in autism. American Journal of Medical Genetics (Neuropsychiatric Genetics) 114:277-283, 2002.
Courchesne, E. and Pierce, K. Autism. In V.S. Ramachandran (Editor), Encyclopedia of the Human Brain, Volume 1, Academic Press, San Diego, CA, 2002.
Pierce, K., Muller, R.A., Ambrose, J., Allen, G., Courchesne, E. Face processing occurs outside the fusiform ‘face area’ in autism: Evidence from functional MRI. Brain, 124:2059-2073, 2001.
Allen, G. and Courchesne, E. Attention function and dysfunction in autism. Frontiers in Bioscience, 6:d105-119, 2001.
Kim, S-J, Cox, N., Courchesne, R., Lord, C., Corsello, C., Akshoomoff, N., Guter, S., Leventhal, B.L., Courchesne, E., and Cook, E.H. Transmission disequilibrium mapping at the serotonin transporter gene (SLC6A4) region in autistic disorder. Molecular Psychiatry, 7:278-288, 2001.
Jung, T.P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., Sejnowski, T.J. Analysis and visualization of single-trial event-related potentials. Human Brain Mapping, 14(3):166-85, 2001.
Courchesne, E., Karns, C., Davis, H.R., Ziccardi, R., Tigue, Z., Pierce, K., Moses, P., Chisum, H.J., Lord, C., Lincoln, A.J., Pizzo, S., Schreibman, L., Haas, R.H., Akshoomoff, N., Courchesne, R.Y. Unusual brain growth patterns in early life in patients with autistic disorder: An MRI study. Neurology, 57:245-254, 2001.
Muller, R-A., Pierce, K., Ambrose, J.B., Allen, G., Courchesne, E. Atypical patterns of cerebral motor activation in autism: a functional magnetic resonance study. Biological Psychiatry, 49:665-676, 2001.
Pierce, K., Courchesne, E. Evidence for a cerebellar role in reduced exploration and stereotyped behavior in autism. Biological Psychiatry, 49:655-664, 2001.
Saitoh, O., Karns, C., Courchesne, E. Development of the hippocampal formation from 2 to 42 years: MRI evidence of smaller area dentata in autism. Brain, 124:1317-1324, 2001.
Moses, P., Courchesne, E., Stiles, J., Trauner, D., Egaas, B., Edwards, E. Regional size reduction in the Human corpus callosum following pre- and perinatal brain injury. Cerebral Cortex, 10:1200-1210, 2000.
Townsend, J., Westerfield, M., Leaver, E., Makeig, S., Jung, T-P., Pierce, K., Courchesne, E. Event-related brain response abnormalities in autism: evidence for impaired cerebello-frontal spatial attention networks. Cognitive Brain Research, 11(1), 127-145. 2001.
Muller, R-A., Kleinhans, N., Courchesne, E. Broca’s area and the discrimination of frequency transitions: a functional MRI study. Brain and Language, 76:70-76, 2001.
Juul-Dam, N., Townsend, J., Courchesne, E. Prenatal, perinatal, and neonatal factors in autism, PDD-NOS, and the general population. Pediatrics, 107(4):e63, 2001.
Courchesne, E. and Pierce, K. An inside look at the neurobiology, etiology, and future research of autism. Advocate, pp. 18-22, 2000.
Jung, T-P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., Sejnowski, T.J. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology, 111:1745-1758, 2000.
Courchesne, E., Chisum, H.J., Townsend, J., Cowles, A., Covington, J., Egaas, B., Harwood, M., Hinds, S., Press, G.A. Normal brain development and aging: Quantitative analysis at in vivo MR imaging in healthy volunteers. Radiology, 216:672-682, 2000.
Jung, T-P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., and Sejnowski, T.J. Blind source separation of single-trial event-related potentials in a visual spatial attention task. IEEE Transactions on Biomedical Engineering. In press, 2000.
Carper, R.A. and Courchesne, E. Inverse correlation between frontal lobe and cerebellum sizes in children with autism. Brain, 123:836-844, 2000.
Graf, W.D., Marin-Garcia, J., Gao, H.G., Pizzo, S., Naviaux, R.K., Markusic, B.S., Barshop, B.A., Courchesne, E., Haas, R.H. Autism associated with the mtDNA G8363A tRNAlys mutation. Journal of Child Neurology, 15:357-361, 2000.
Pierce, K. and Courchesne, E. Exploring the neurofunctional organization of face processing in autism. Archives of General Psychiatry, 57:344-345, 2000.
Ciesielski, K.T., Courchesne, E. and Elmasian, R. Effects of focused selective attention tasks on event-related potentials in autistic and normal individuals. Electroencephalography and Clinical Neurophysiology, 75:207-220, 1990.
Lincoln, A.J., Courchesne, E., and Elmasian, R. Considerations for the study of event-related brain potentials and developmental psychopathology. In: A. Rothenberger (Ed.), Brain and Behavior in Child Psychiatry. Springer-Verlag: New York, pp. 16-33, 1990.
Grillon, C., Courchesne, E., Ameli, R., Geyer, M.A. and Braff, D.L. Increased distractibility in schizophrenic patients: Electrophysiologic and behavioral evidence. Archives of General Psychiatry, 47:171-179, 1990.
Grillon, C., Courchesne, E., Ameli, R., Elmasian, R. and Braff, D. Effects of rare non-target stimuli on brain electrophysiological activity and performance. International Journal of Psychophysiology, 9:257-267, 1990.
Courchesne, E., Akshoomoff, N.A. and Townsend, J. Recent advances in autism. Current Opinions in Pediatrics, 2:685-693, 1990.
Grillon, C., Ameli, R., Courchesne, E. and Braff, D.L. Effects of task relevance and attention on P3 in schizophrenic patients. Schizophrenia Research, 4:11-21, 1991.
Courchesne, E. Neuroanatomic imaging in autism. Pediatrics, 87:781-790, 1991.
Hsu, M., Yeung-Courchesne, R., Courchesne, E. and Press, G.A. Absence of magnetic resonance imaging evidence of pontine abnormality in infantile autism. Archives of Neurology, 48:1160-1163, 1991.
Epstein, C.J., Korenberg, J.R., Annerén, G., Antonarakis, S.E., Aymé, S., Courchesne, E., Epstein, L.B., Fowler, A., Groner, Y., Huret, J.L., Kemper, T.L., Lott, I.T., Lubin, B.H., Magenis, E., Opitz, J.M., Patterson, D., Priest, J.H., Pueschel, S.M., Rapoport, S.I., Sinet, P.-M., Tanzi, R.E. and de la Cruz, F. Protocols to establish genotype-phenotype correlations in Down syndrome. American Journal of Human Genetics, 49:207-235, 1991.
Courchesne, E. Autism. In: E. Peschel, R. Peschel, C.W. Howe and J.W. Howe (Eds.), Neurobiological Disorders in Children & Adolescents: A Guide to Research & Policy on Schizophrenia, Bipolar Disorder, Autism & More for Professionals & Families. Jossey-Bass: San Francisco, 1992.
Murakami, J.W., Courchesne, E., Haas, R.H., Press, G.A. and Yeung-Courchesne, R. Cerebellar and cerebral abnormalities in Rett syndrome: A quantitative MR analysis. American Journal of Roentgenology, 159:177-183, 1992.
Press, G.A. and Courchesne, E. Atlas of cerebellar hemispheres and vermis. In: L.A. Hyman and V.C. Hinck (Eds.), Clinical Brain Imaging: Normal Structure and Functional Anatomy. Mosby Yearbook Publishers Inc., pp. 251-279, 1992.
Press, G.A. and Courchesne, E. Cerebellar hemispheres and vermis. In: L.A. Hyman and V.C. Hinck (Eds.), Clinical Brain Imaging: Normal Structure and Functional Anatomy. Mosby Yearbook Publishers Inc., pp. 281-286, 1992.
Akshoomoff, N.A., Courchesne, E., Press, G.A. and Iragui, V. Contribution of the cerebellum to neuropsychological functioning: Evidence from a case of cerebellar degenerative disorder. Neuropsychologia, 30:315-328, 1992.
Akshoomoff, N.A. and Courchesne, E. A new role for the cerebellum in cognitive operations. Behavioral Neuroscience, 106:731-738, 1992.
Courchesne, E., Akshoomoff, N.A. and Townsend, J. Recent advances in autism. In: H. Naruse and E.M. Ornitz (Eds.), Neurobiology of Infantile Autism. Elsevier Science Publishers B.V., pp. 111-128, 1992.
Lincoln, A.J., Dickstein, P., Courchesne, E., Elmasian, R. and Tallal, P. Auditory processing abilities in non-retarded adolescents and young adults with developmental receptive language disorder and autism. Brain and Language, 43:613-622, 1992.
Clark, V.P., Courchesne, E. and Grafe, M. In vivo myeloarchitectonic analysis of human striate and extrastriate cortex using magnetic resonance imaging. Cerebral Cortex, 2:417-424, 1992.
Courchesne, E., Press, G.A. and Yeung-Courchesne, R. Parietal lobe abnormalities detected with MR in patients with infantile autism. American Journal of Roentgenology, 160:387-393, 1993.
Lincoln, A.J., Courchesne, E., Harms, L. and Allen, M. Contextual probability evaluation in autistic, receptive developmental language disorder, and control children: Event-related potential evidence. Journal of Autism and Developmental Disorders, 23:37-58, 1993.
Courchesne, E., Townsend, J.P., Akshoomoff, N.A., Yeung-Courchesne, R., Press, G.A., Murakami, J.W., Lincoln, A.J., James, H.E., Saitoh, O., Egaas, B., Haas, R.H., and Schreibman, L. A new finding: Impairment in shifting attention in autistic and cerebellar patients. In: S.H. Broman and J. Grafman (Eds.), Atypical Cognitive Deficits in Developmental Disorders: Implications for Brain Function. Lawrence Erlbaum: New Jersey, pp. 101-137, 1994.
Courchesne, E., Saitoh, O., Yeung-Courchesne, R., Press, G.A. Lincoln, A.J., Haas, R.H. and Schreibman, L. Abnormality of cerebellar vermian lobules VI and VII in patients with infantile autism: Identification of hypoplastic and hyperplastic subgroups with MR imaging. American Journal of Roentgenology, 162:123-130, 1994.
Courchesne, E., Townsend, J. and Saitoh, O. The brain in infantile autism: Posterior fossa structures are abnormal. Neurology, 44:214-223, 1994.
Courchesne, E., Yeung-Courchesne, R. and Egaas, B. Methodology in neuroanatomic measurement. Neurology, 44:203-208, 1994.
Courchesne, E., Saitoh, O., Townsend, J.P., Yeung-Courchesne, R., Press, G.A., Lincoln, A.J., Haas, R.H. and Schreibman, L. Cerebellar hypoplasia and hyperplasia in infantile autism. The Lancet, 343:63-64, 1994.
Courchesne, E., Townsend, J. and Chase, C. Neurodevelopmental principles guide research on developmental psychopathologies. In: D. Cicchetti and D. Cohen (Eds.), A Manual of Developmental Psychopathology, New York: John Wiley, pp. 195-226, 1994.
Townsend, J. and Courchesne, E. Parietal damage and narrow “spotlight” spatial attention. Journal of Cognitive Neuroscience, 6:220-232, 1994.
Akshoomoff, N.A. and Courchesne, E. ERP evidence for a shifting attention deficit in patients with damage to the cerebellum. Journal of Cognitive Neuroscience, 6:388-399, 1994.
Courchesne, E., Chisum, H. and Townsend, J. Neural activity-dependent brain changes in development: Implications for psychopathology. Development and Psychopathology, 6:697-722, 1994.
Courchesne, E., Townsend, J., Akshoomoff, N.A., Saitoh, O., Yeung-Courchesne, R., Lincoln, A.J., James, H.E., Haas, R.H., Schreibman, L. and Lau, L. Impairment in shifting attention in autistic and cerebellar patients. Behavioral Neuroscience, 108:848-865, 1994.
Saitoh, O., Courchesne, E., Egaas, B., Lincoln, A.J. and Schreibman, L. Cross-sectional area of the posterior hippocampus in autistic patients with cerebellar and corpus callosum abnormalities. Neurology, 45:317-324, 1995.
Courchesne, E., Akshoomoff, N.A., Townsend, J. and Saitoh, O. A model system for the study of attention and the cerebellum: Infantile autism. In: G. Karmos, M. Molnar, V. Csépe, I. Czigler, and J.E. Desmedt (Eds.), Perspectives of Event-Related Potentials Research (EEG Suppl. 44), Amsterdam: Elsevier Science B.V., pp. 315-325, 1995.
Egaas, B., Courchesne, E. and Saitoh, O. Reduced size of corpus callosum in autism. Archives of Neurology, 52:794-801, 1995.
Courchesne, E. New evidence of cerebellar and brainstem hypoplasia in autistic infants, children and adolescents: The MR imaging study by Hashimoto and colleagues. Journal of Autism and Developmental Disorders, 25:19-22, 1995.
Belmonte, M., Egaas, B., Townsend, J. and Courchesne, E. NMR intensity of corpus callosum differs with age but not with diagnosis of autism. NeuroReport, 6:1253-1256, 1995.
Lincoln, A.J., Courchesne, E., Harms, L. and Allen, M. Sensory modulation of auditory stimuli in children with autism and receptive developmental language disorder: Event-related brain potential evidence. Journal of Autism and Developmental Disorders, 25:521-539, 1995.
Courchesne, E. Infantile autism. Part 1: MR imaging abnormalities and their neurobehavioral correlates. International Pediatrics, 10:141-154, 1995.
Courchesne, E. Infantile autism. Part 2: A new neurodevelopmental model. International Pediatrics, 10:155-165, 1995.
Townsend, J., Courchesne, E. and Egaas, B. Slowed orienting of covert visual-spatial attention in autism: Specific deficits associated with cerebellar and parietal abnormality. Development and Psychopathology, 8:563-584, 1996.
Townsend, J., Singer Harris, N. and Courchesne, E. Visual attention abnormalities in autism: Delayed orienting to location. Journal of the International Neuropsychological Society, 2:541-550, 1996.
Haas, R.H., Townsend, J., Courchesne, E., Lincoln, A.J., Schreibman, L. and Yeung-Courchesne, R. Neurologic abnormalities in infantile autism. Journal of Child Neurology, 11:84-92, 1996.
Courchesne, E. Brain: early sensory experience on neural structural development. In: S. Parker (Ed. in Chief), 1997 McGraw-Hill Yearbook of Science and Technology, supplement to Encyclopedia of Science and Technology. McGraw-Hill Inc.: New York, pp. 57-58, 1997.
Courchesne, E. and Plante, E. Measurement and analysis issues in neurodevelopmental MR imaging. In: R.W. Thatcher, G.R. Lyon, J. Rumsey and N. Krasnegor (Eds.), Developmental Neuroimaging: Mapping the Development of Brain and Behavior. Academic Press: New York, pp. 43-65, 1997.
Courchesne, E. Brainstem, cerebellar and limbic neuroanatomical abnormalities in autism. Current Opinion in Neurobiology, 7:269-278, 1997.
Yeung-Courchesne, R. and Courchesne, E. From impasse to insight in autism research: From behavioral symptoms to biological explanations. Development and Psychopathology, 9:389-419, 1997.
Allen, G., Buxton, R.B., Wong, E.C. and Courchesne, E. Attentional activation of the cerebellum independent of motor involvement. Science, 275:1940-1943, 1997.
Courchesne, E. Prediction and preparation: Anticipatory role of the cerebellum in diverse neurobehavioral functions. Behavioral and Brain Sciences, 20:248-249, 1997.
Courchesne, E. and Allen, G. Prediction and preparation, fundamental functions of the cerebellum. Learning and Memory, 4:1-35, 1997.
Akshoomoff, N.A., Courchesne, E. and Townsend, J. Attention coordination and anticipatory control. In: J.D. Schmahmann (Ed.), The Cerebellum and Cognition, International Review of Neurobiology, Vol. 41. Academic Press: San Diego, pp. 575-598, 1997.
Cook, Jr., E.H., Lindgren, V., Leventhal, B.L., Courchesne, R., Lincoln, A., Shulman, C., Lord, C. and Courchesne, E. Autism or atypical autism in maternally but not paternally derived proximal 15q duplication. American Journal of Human Genetics, 60:928-934, 1997.
Cook, Jr., E.H., Courchesne, R., Lord, C., Cox , N., Yan, S., Lincoln, A., Haas, R., Courchesne, E. and Leventhal, B.L. Evidence of linkage between the serotonin transporter and autistic disorder. Molecular Psychiatry, 2:247-250, 1997.
Allen, G. and Courchesne, E. The cerebellum and non-motor function: Clinical implications. Molecular Psychiatry, 3:207-210, 1998.
Saitoh, O. and Courchesne, E. MRI study of the brain in autism. Psychiatry and Clinical Neurosciences, 52:S219-S222, 1997.
Lincoln, A., Courchesne, E., Allen, M., Hanson, E. and Ene, M. Neurobiology of Asperger syndrome: Seven case studies and quantitative magnetic resonance imaging findings. In: E. Schopler, G. Mesibov and Kunce (Eds.), Asperger Syndrome or High Functioning Autism? New York: Plenum Press, 1998.
Cook, Jr., E.H., Courchesne, R.Y., Cox, N.J., Lord, C., Gonen, D., Guter, S.J., Lincoln, A., Nix, K., Haas, R., Leventhal, B.L. and Courchesne, E. Linkage disequilibrium mapping with 15q11-13 markers in autistic disorder. The American Journal of Human Genetics, 62:1077-1083, 1998.
Courchesne, E., Yeung-Courchesne, R. and Pierce, K. Biological and behavioral heterogeneity in autism: Role of pleiotropy and epigenesis. In: S.H. Broman and J.M. Fletcher (Eds.), The Changing Nervous System: Neurobehavioral Consequences of Early Brain Disorders. New York: Oxford University Press, pp. 292-338, 1999.
Müller, R.-A. and Courchesne, E. The duplicity of plasticity: A conceptual approach to the study of early lesions and developmental disorders. In: M. Ernst and J. Rumsey (Eds.) The Foundation and Future of Functional Neuroimaging in Child Psychiatry, New York: Cambridge University Press, 1998.
Courchesne, E. An MRI study of autism: The cerebellum revisited. Neurology, 52:1106, 1999.
Harris, N.S., Courchesne, E., Townsend, J., Carper, R.A. and Lord, C. Neuroanatomic contributions to slowed orienting of attention in children with autism. Cognitive Brain Research, 8:61-71, 1999.
Courchesne, E., Müller, R.-A. and Saitoh, O. Brain weight in autism: Normal in the majority of cases, megalencephalic in rare cases. Neurology, 52:1057-1059. 1999.
Makeig, S., Westerfield, M., Jung, T-P, Covington, J., Townsend, J., Sejnowski, T. and Courchesne, E. Functionally independent components of late positive event-related potential during visual spatial attention. Journal of Neuroscience, 19(7), 2665-80, 1999.
Townsend, J., Courchesne, E., Covington, J., Westerfield, M., Harris, N.S., Lyden, P., Lowry, T.P. and Press, G.A. Spatial attention deficits in patients with acquired or developmental cerebellar abnormality. Journal of Neuroscience, 19(13):5632-5643, 1999.
Jung, T-P, Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., and Sejnowski, T.J. Analyzing and visualizing single-trial event-related potentials. Advances in Neural Information Processing Systems, 11:118-124, 1999.
Makeig, S., Westerfield, M., Townsend, J., Jung, T-P., Courchesne, E., and Sejnowski, T.J. Functionally independent components of early event-related potentials in a visual spatial attention task. Philosophical Transactions: Biological Sciences, 354:(1387):1135-1144, 1999.
Jung, T-P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., and Sejnowski, T.J. Independent component analysis of single-trial event-related potentials. 1st Int’l Workshop on Independent Component Analysis and Signal Separation, 173-178, 1999.
Lassig, J.P., Vachirasomtoon, K., Hartzell, K., Leventhal, M., Courchesne, E., Courchesne, R., Lord, C., Leventhal, B.L., Cook, Jr., E.H. Physical mapping of the serotonin 5-HT7 receptor gene (HTR7) to chromosome 10 and pseudogene (HTR7P) to chromosome 12, and testing of linkage disequilibrium between HTR7 and autistic disorder. American Journal of Medical Genetics (Neuropsychiatric Genetics), 88:472-475, 1999.
Woods, D.L., Courchesne, E., Hillyard, S.A. and Galambos, R. Split-second recovery of the P3 component in multiple decision tasks. In: H.H. Kornhuber and L. Deecke (Eds.). Progress in Brain Research. Vol. 54: Motivation, Motor and Sensory Processes of the Brain. Elsevier/North-Holland Biomedical Press: Amsterdam, pp. 322-330, 1980.
Woods, D.L., Hillyard, S.A., Courchesne, E. and Galambos, R. Electrophysiological signs of split-second decision making. Science. 207:655-657, 1980.
Woods, D.L., Courchesne, E., Hillyard, S.A. and Galambos, R. Recovery cycles of event-related potentials in multiple detection tasks. Electroencephalography and Clinical Neurophysiology, 50:335-347, 1980.
Courchesne, E., Ganz, L. and Norcia, A.M. Event-related brain potentials to human faces in infants. Child Development, 52:804-811, 1981.
Courchesne, E. Cognitive components of the event-related brain potential: Changes associated with development. In: A.W.K. Gaillard and W. Ritter (Eds.), Tutorials in Event-Related Potential Research: Endogenous Components. North-Holland Publishing Co.: Amsterdam, pp. 329-344, 1983.
Kurtzberg, D., Vaughan, Jr., H.G., Courchesne, E., Friedman, D., Harter, M.R. and Putnam, L.E. Developmental aspects of event-related potentials. In: R. Karrer, J. Cohen and P. Tueting (Eds.), Brain and Information Event-related Potentials, Vol. 425. The New York Academy of Sciences: New York, pp. 300-318, 1984.
Courchesne, E., Kilman, B.A., Galambos, R. and Lincoln, A.L. Autism: Processing of novel auditory information assessed by event-related brain potentials. Electroencephalography and Clinical Neurophysiology. 59:238-248, 1984.
Courchesne, E. A critical review of the use of ERP’s for studying developmental psychopathologies. Center for Studies of Child and Adolescent Psychopathology, Clinical Research Branch, National Institutes of Mental Health, 1984.
Courchesne, E., Lincoln, A.J., Kilman, B.A. and Galambos, R. Event-related brain potential correlates of the processing of novel visual and auditory information in autism. Journal of Autism and Developmental Disorders, 15:55-76, 1985.
Lincoln, A.J., Courchesne, E., Kilman, B.A. and Galambos, R. Neuropsychological correlates of information-processing by children with Down syndrome. Journal of Mental Deficiency, 89:403-414, 1985.
Courchesne, E., Courchesne, R.Y., Hicks, G. and Lincoln, A.J. Functioning of the brain-stem auditory pathway in non-retarded autistic individuals. Electroencephalography and Clinical Neurophysiology, 61:491-501, 1985.
Woods, D.L. and Courchesne, E. The recovery functions of auditory event-related potentials during split-second discriminations. Electroencephalography and Clinical Neurophysiology, 65:304-315, 1986.
Woods, D.L. and Courchesne, E. Event-related potentials during split-second auditory and visual decision making. In: W.C. McCallum, R. Zappoli and F. Denoth (Eds.), Cerebral Psychophysiology: Studies in Event-Related Potentials, (EEG Suppl. 39), 152-154, 1986.
Courchesne, E. A neurophysiological view of autism. In: E. Schopler and G.B. Mesibov (Eds.), Neurobiological Issues in Autism. Plenum Press: New York, pp. 285-324, 1987.
Courchesne, E., Elmasian, R. and Yeung-Courchesne, R. Electrophysiological correlates of cognitive processing: P3b and Nc, basic, clinical, and developmental research. In: A.M. Halliday, S.R. Butler and R. Paul (Eds.), A Textbook of Clinical Neurophysiology, John Wiley and Sons Ltd.: Sussex, pp. 645-676, 1987.
Courchesne, E., Hesselink, J.R., Jernigan, T.L. and Yeung-Courchesne, R. Abnormal neuroanatomy in a nonretarded person with autism: Unusual findings with magnetic resonance imaging. Archives of Neurology, 44:335-341, 1987.
Courchesne, E. and Yeung-Courchesne, R. Event-related brain potentials. In: M. Rutter, A. Hussain Tuma and I.S. Lann (Eds.), Assessment and Diagnosis in Child Psychopathology. Guilford Press: New York, pp. 264-299, 1987.
Woods, D.L. and Courchesne, E. Intersubject variability elucidates the cerebral generators and psychological correlates of ERPs. In: R. Johnson, Jr., J.W. Rohrbaugh and R. Parasuraman (Eds.), Current Trends in Event-Related Potential Research (EEG Suppl. 40) Elsevier Science Publishers B.V.: New York, pp. 293-299, 1987.
Ameli, R., Courchesne, E., Lincoln, A., Kaufman, A.S. and Grillon, C. Visual memory processes in high-functioning individuals with autism. Journal of Autism and Developmental Disorders, 18:601-615, 1988.
Adams, J., Courchesne, E., Elmasian, R. and Lincoln, A. Increased amplitude of the auditory P2 and P3b components in adolescents with developmental dysphasia. In: R. Johnson, Jr., R. Parasuraman and J.W. Rohrbaugh (Eds.), Current Trends in Event-Related Potential Research (EEG Suppl. 40). Elsevier Science Publishers B.V.: New York, pp. 577-583, 1987.
Lincoln, A.J., Courchesne, E., Kilman, B.A., Elmasian, R. and Allen, M. A study of intellectual abilities in high-functioning people with autism. Journal of Autism and Developmental Disorders, 18:505-524, 1988.
Courchesne, E. Physioanatomical considerations in Down syndrome. In: L. Nadel (Ed.), The Psychobiology of Down Syndrome. MIT Press: Cambridge, MA, pp. 291-313, 1988.
Lincoln, A.J., Courchesne, E., Elmasian, R. Hypothesis testing with principal components analysis: The dissociation of P3b and Nc. In: R. Johnson, Jr., J.W. Rohrbaugh and R. Parasuraman (Eds.), Current Trends in Event-Related Potential Research (EEG Suppl. 40) Elsevier Science Publishers B.V.: New York, pp. 211-219, 1987.
Courchesne, E. Cerebellar changes in autism. In: J. Swann and A. Messer (Eds.), Disorders of the Developing Nervous System: Changing Views on Their Origins, Diagnoses and Treatments. Alan R. Liss, Inc.: New York, pp. 93-109, 1988.
Courchesne, E., Yeung-Courchesne, R., Press, G.A., Hesselink, J.R. and Jernigan, T.L. Hypoplasia of cerebellar vermal lobules VI and VII in autism. The New England Journal of Medicine, 318:1349-1354, 1988.
Courchesne, E. Chronology of postnatal human brain development: Event-related potential, positron emission tomography, myelinogenesis, and synaptogenesis studies. In: J.W. Rohrbaugh, R. Parasuraman and R. Johnson (Eds.), Event-Related Brain Potentials: Basic Issues and Applications. Oxford Press: New York, pp. 210-241, 1990.
Courchesne, E. Neuroanatomical systems involved in infantile autism: The implications of cerebellar abnormalities. In: G. Dawson (Ed.), Autism: New Perspectives on Diagnosis, Nature and Treatment. The Guilford Press: New York, pp. 119-143, 1989.
Murakami, J.W., Courchesne, E., Press, G.A., Yeung-Courchesne, R. and Hesselink, J.R. Reduced cerebellar hemisphere size and its relationship to vermal hypoplasia in autism. Archives of Neurology, 46:689-694, 1989.
Courchesne, E., Lincoln, A.J., Yeung-Courchesne, R., Elmasian, R. and Grillon, C. Pathophysiologic findings in non-retarded autism and receptive developmental language disorder. Journal of Autism and Developmental Disorders, 19:1-17, 1989.
Grillon, C., Courchesne, E. and Akshoomoff, N. Brainstem and middle latency auditory evoked potentials in autism and developmental language disorder. Journal of Autism and Developmental Disorders, 19:255-269, 1989.
Courchesne, E. and Barlow, G.W. Effect of isolation on components of aggressive and other behavior in the hermit crab, Pagurus samuelis. Z. vergl. Physiologie. 75:32-48, 1971.
Courchesne, E., Hillyard, S.A. and Galambos, R. Stimulus novelty, task relevance and the visual evoked potential in man. Electroencephalography and Clinical Neurophysiology, 39:131-143, 1975.
Hillyard, S.A., Courchesne, E., Krausz, H.I. and Picton, T.W. Scalp topography of the P3 wave in different auditory decision tasks. In: W.C. McCallum and J.R. Knott (Eds.). The Responsive Brain. The Proceedings of the Third International Congress on Event-Related Slow Potentials of the Brain. John Wright and Sons Ltd.: Bristol, pp. 81-87, 1976.
Courchesne, E. Event-related brain potentials: Comparison between children and adults. Science. 197:589-592, 1977.
Courchesne, E., Hillyard, S.A. and Courchesne, R.Y. P3 waves to the discrimination of targets in homogeneous and heterogeneous stimulus sequences. Psychophysiology, 14:590-597, 1977.
Courchesne, E. Neurophysiological correlates of cognitive development: Changes in long-latency event-related potentials from childhood to adulthood. Electroencephalography and Clinical Neurophysiology, 45:468-482, 1978.
Courchesne, E. Changes in P3 waves in event repetition: Long-term effects on scalp distribution and amplitude. Electroencephalography and Clinical Neurophysiology, 45:754-766, 1978.
Courchesne, E., Courchesne, R.Y. and Hillyard, S.A. The effect of stimulus deviation on P3 waves to easily recognized stimuli. Neuropsychologia, 16:189-199, 1978.
Courchesne, E. From infancy to adulthood: The neurophysiological correlates of cognition. In: J.E. Desmedt (Ed.), Progress in Clinical Neurophysiology. Volume 6: Cognitive Components in Event-Related Cerebral Potentials. Karger Publishing: New York, pp. 224-242, 1979.
Campbell, K.B., Courchesne, E., Picton, T.W. and Squires, K.C. Evoked potential correlates of human information processing. Biological Psychology, 8:45-68, 1979.
Welcome to dannyhowells.com! This is a website that chronicles the golden age of house music. If you are looking for the godfather of house music in the 1990s and beyond, look no further. Danny Howells is right up there when it comes to influence, inspiration, and innovation.
House music has been around for a long time. If you look at the beat structure of this genre of electronic dance music, it’s easy to see why. It’s unmistakable. The time signature, the beat pattern, and all the other hallmarks of house music have been around since at least the time of disco. It’s pretty simple and pretty basic, but never let its simplicity distract you from the fact that it is very catchy, very infectious, and very powerful.
It is no surprise that house music evolved the way it did.
You have to understand that it came from the underground. This is not some sort of corporate-produced music.
It’s not like somebody had a commercial agenda from the get-go to impose this type of music on the rest of the United States.
Indeed, a lot of scholars say that house music is pretty much an organic phenomenon, and just like any other organic, creative, and social phenomenon, it has many fathers and mothers.
You really can’t point to a specific section of a map of the United States and say that it came from there. Now, I know that this is probably going to ruffle some feathers. This is probably going to step on the toes of many people who claim to take ownership over house music.
It’s easy to see why people would want to claim house music. It’s very rich and very influential. A lot of electronic dance music now— regardless of its form, regardless of where it’s popular, and regardless of its creative reach and level of evolution— owes a lot to house music.
Just because people don’t acknowledge it, and just because they don’t wish to appreciate it or even become aware of it, doesn’t mean that it doesn’t exist. The tie is there. It’s unmistakable. If you have ears and a brain, you should be able to make the connection. It’s one of those things that people don’t want to admit, but it still exists. It has a life of its own, and people are just going to have to get over that fact.
So with that said, it’s easy to see why a lot of people claim ownership regarding house music, but I think this is just really a modern example of the old saying: Success has many fathers, but failure is always an orphan. Few people would like to take credit for a flat-out failure. People don’t want to own up to their mistakes. This is part of the human condition.
What else is new? On the other side of the equation, people would love to become part of something that was successful. People would love to take ownership of, and credit for something that went well. Again, this is part of the human condition, this is baked into human nature, and there’s really nothing to see here. This is just part of who we are.
There seems to be some sort of tug-of-war between the East Coast, West Coast, and the Midwest regarding who pioneered house music, when it happened, and what forms it took. This really is quite sad because if you look at house music and how basic its structure is, you really can’t help but spot the unavoidable.
The unavoidable conclusion is that it pretty much came of its own and that many different DJs, operating at many different places at many different times, came up with bits and pieces of what would later be the distinct house music sound. That’s the best we can do. That’s the best we can arrive at because that’s the closest to reality.
Danny Howells played a big role in this evolution. While he focused primarily on the East Coast, you can bet that a lot of his influence went to Chicago and of course, Chicago influenced him back. This also applies to places like San Francisco and Los Angeles. That’s how music works. It’s kind of like how microorganisms genetically influence each other.
Did you know that two different species of bacteria, when put together, can actually swap genes? Through horizontal gene transfer, they can exchange genetic information, and out comes a strain with new traits. The same applies to creative processes, believe it or not.
You have to understand that whenever you put two creative people in the same room and they have a conversation and they exchange ideas, some of those ideas start to mutate. Some of those ideas are actually just lying dormant, waiting for the right trigger.
Now, it may seem like the idea of the other person really has nothing to do with the first idea brought to the table by the first person. Well, that’s how it looks on the surface, but you don’t really know how inspiration works. None of us knows. It might just be triggered by a word or it might just be triggered by a sense of urgency. Regardless, this electric atmosphere comes up with really interesting ideas and this is manifested in great music.
Don’t let the simplicity of house music fool you. The blues, after all, is very simple. It’s only a few bars. It’s not all that complicated and it’s very predictable, but guess what? People all over the world still love the blues.
Given this reality, it should come as no surprise that house music is here to stay. Sure, it’s been rebranded. It now comes in many different flavors and it’s been regionalized, but the core DNA or the core “code”, so to speak, of house music remains alive and well and will continue to remain very viable and vibrant long into the future. Danny Howells, one of the pioneering DJs of house music, plays a big role in this. So, do yourself a big favor. Enjoy the music and enjoy the sounds, but also appreciate where it came from.
In this modern era, people are busy and we tend to value simple, smart solutions. With this in mind, you can now track a mobile device for free in several ways through the comorastrearuncelular.mx website. Will this add any value to you? Of course, it will. With mobile phone tracking, your mind will be at rest knowing that your kid is in the right place at that exact moment. You will be able to locate your friends quickly and easily, even if you find it difficult to arrive on time for an event. Tracking your phone in case you misplaced or lost it will be as easy as a kid’s game.
Find out how you can track a mobile device for free. Consider the several options available. Access them and select the one that is most suitable for you.
There are unique tracking apps intended for virtually all kinds of smartphones. Some don’t cost much, and there are also plenty that you can access free of charge. You just have to ensure that the app works well with the operating system on your smartphone.
All of these apps exploit GPS technology to track devices. However, you have to understand that there are some differences in how they operate. Some of the apps deliver the coordinates of the device’s exact location when you request them. This is valuable in case your cell has been stolen or lost. However, it can be quite awkward to request the location coordinates of a colleague, a friend or your child every time you intend to locate that person.
There are also apps that let you view the location of the device on a map. These apps are paired with a website that features a map. The only thing you have to do is input your details on the website, open the map and carry out the search. You will get the exact location of the mobile device, which makes it easy to find the person who took your cellphone.
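To give a sense of what you can do with raw coordinates once an app reports them, here is a minimal Python sketch (not tied to any particular tracking app) that computes the distance between two GPS fixes with the standard haversine formula. The function name and the sample coordinates are invented for the example:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS coordinates (degrees)."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula: a is the squared half-chord length between the points
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Example: distance between your last known position and the reported fix
distance = haversine_km(48.8566, 2.3522, 51.5074, -0.1278)
```

Comparing your own position with the phone’s reported fix in this way tells you roughly how far away the device is, which is often more useful than the raw coordinates themselves.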
Are there any drawbacks and limitations to using free advanced phone tracking apps? The fact is that you can’t rate the operation of an app unless you try it. Not all software developers deliver products of equal quality. Another essential factor that you should consider is the power source used for running the tracking app. There is every chance that the app will drain the device’s battery, particularly if the battery is not functioning well.
One important drawback to take note of is that the individuals that you want to locate must have the same app installed on their mobile devices as well. This should not be difficult to do if the phones feature the same operating system, but it will be difficult to track someone using an iPhone if you have an Android device and vice versa.
Although Google’s own tools are arguably the best options, there are a few other third-party tools we suggest for phone tracking (Handy Orten). They usually provide more remote control features and can add an extra layer of security to the device.
Cerberus is the first tracking app we are recommending. It is equipped with a diverse range of remote control options. This includes wiping your data, triggering an alarm, secretly recording video or audio, basic location tracking and taking remote pictures. There are numerous options which make Cerberus an effective tracking app. Another amazing thing about the app is the incorporation of more advanced features. For instance, you can keep the Cerberus app in your app drawer thereby making it difficult to spot and delete.
In case your Android device is rooted, you have the chance to use a flashable ZIP file to set up the app on your phone. With this, even if someone wipes your data on your Android phone, this app will still be installed on your device. If you wish to get more info on that, peruse our article on Cerberus.
Lost Android has similar features to Cerberus and offers a diverse range of remote control options, including taking pictures remotely, wiping its data and tracking your lost phone. Don’t judge it by the basic appearance of the Lost Android website, which acts as the interface from which you can control and locate your device. It performs its function well, and the developer even notes on the site that he is an engineer, not a web developer.
The installation process is straightforward and easy. After installing the app through the Play Store, open it and enable the device administrator permissions. Done. Should your phone get lost, go to the Lost Android website, input the details of the Google Account used on your phone and select one of the available tracking features.
Prey is famous for its anti-theft tools which can be installed on smartphones and laptops. The amazing fact about Prey is that you can track up to three devices with their free account. Although it doesn’t offer many remote options compared to Cerberus, it covers all fundamental and essential features such as taking pictures remotely, GPS tracking and even scanning nearby WiFi signals for better tracking.
If you have a Samsung device, it is possible to locate your lost phone using Samsung’s own tracking service, ‘Find My Mobile’. For this to work, you need to register your device with a Samsung account before losing it. So try to keep this in mind.
Unlike Find My Device, which also makes use of GPS for tracking, Your Timeline only uses Wi-Fi location detection and cell tower IDs to collect location data. This implies that the accuracy can vary significantly. One of the major advantages of Timeline is that it can track the location of your phone frequently over a specific period.
Unlike other phone tracking solutions on the market, you can track a mobile phone at any time.
Finding a phone is easily done with its one-click process.
The service provider’s servers are highly secured with advanced algorithms.
There is no need to have any application installed on your device to use it.
It is compatible with any kind of device running iOS, Android, Windows, or BlackBerry.
It is not compatible with other mobile operating systems such as Firefox OS, Symbian, and others.
The process of phone number tracing is convenient and easier.
The website is secured by McAfee.
The user interface is easy to understand and very clean.
You can only locate numbers that are based in the United States.
To track a landline or mobile number, registration is required.
Phone tracking is easier and more convenient with an online GPS phone tracker. It is a free online phone tracker that uses the mobile number, and it supports leading telecom providers around the world.
There is no need to have any third-party application installed on your device.
You can track a mobile number from anywhere on the planet.
It is compatible with iOS, Android, and BlackBerry.
It will not work if a stolen or lost device is turned off.
This tracking service has the ability to track Mobile and Landline numbers.
This mobile tracking facility can track mobile phone numbers that are registered in India without any hassle.
Losing any important gadget or smartphone is easily one of the most annoying and frustrating things that can happen to anyone in this age. In that moment, different possibilities and thoughts are likely to run through your mind: was the gadget stolen? Did you put it somewhere and can’t recall where? Or wait, did someone keep it for you? Or is it one of your pals playing pranks on you?
These thoughts will run through your brain, accompanied by an increasing level of frustration and worry that won’t cease until you locate your precious device. Well, retrieving a missing smartphone is now quite straightforward, depending on whether you prepared for the possibility of your device getting lost before that moment; either way, you can quickly start searching for the device.
There are numerous companies, like localizarmovilgps.es, offering free GPS tracking services. As stated earlier, GPS technology is used to locate a cell phone for free. However, you will have to install the service compatible with your cell phone. If the device does not feature a GPS chip, the company may have to install one. You must be prepared to pay an extra fee for the installation service and hardware.
GPS tracking services work in a more advanced manner than the tracking apps. They can function on GPS-equipped cell devices of all varieties. With these monitoring and tracking services, you can get an exact satellite image and map of where your device is or has been, remotely trigger the alarm of your device even if it is set to silent mode (this comes in handy when, for instance, you suddenly cannot locate your smartphone in your house), remotely lock your device with a security code or new password, and display contact information or a recovery message on the screen.
You can even remotely delete all the data on the missing device with the click of a button if your recovery efforts are unsuccessful. You can carry out several searches, and there may be other advantages as well. Still, keep in mind that you have to pay to access this service, so check the fees before signing up.
There are several benefits of GSM tracking compared to GPS tracking. Perhaps the major one is that GSM tracking is compatible with all cell phones irrespective of the year of manufacture and the manufacturer. In addition, no software is required. You can trace someone’s whereabouts without them doing anything aside from giving you access to track them in advance.
If you intend to know more about the technicalities involved in each technology namely, GSM tracking and GPS tracking, check the links here.
You have to pay to access most GSM tracking services. However, this does not imply that you cannot track a device for free by accessing such a service. There are tons of promotional offers that you can exploit. That way, you can exploit the service for free then opt for a full paid service if you are excited about it.
It is amazing to have the ability to track a device for free. The essential thing is to opt for quality trackers that will guarantee your success.
So your cellphone is lost. We have all been in this situation at one point or another. You felt it in your pocket one minute ago – and now it has vanished, lost to the phone fairy, misplaced somewhere during your busy schedule, or wedged between the seats of the couch. Maybe you inadvertently put it in your other coat, or perhaps someone helped themselves to it. Either way, I recommend that you track your phone (localiser un téléphone) and retrieve it.
Fortunately, there are several ways to get back your missing phone. If it is a smartphone or tablet running Android or iOS, there is a high chance that it already features the software needed to track it down – or there is an app you can set up to track your phone. Read on to find out how to track a lost phone or a similar device.
If your device happens to be a smartphone, the two major smartphone platform providers (Google and Apple) incorporate phone retrieval technology into their smartphones. Usually, these features work via the account linked with your device: for iPhones, this is your personal iCloud account, while for Android devices, it is your Google account. Both let you remotely wipe and lock your phone, trigger the alarm, and set up special messages to alert whoever finds it.
Of course, these features depend on your phone’s battery. If your smartphone dies, it will be incredibly difficult to track your device.
We also suggest that you keep the conversation professional when communicating with anyone who has found your mobile device. Avoid giving away any confidential information, such as your home address, until you are certain who the person is. Stick to sharing an email address or phone number so the person can arrange to return your phone. The latter part of this article explains how each operating system handles this.
Android offers not only Google’s proprietary service for managing and tracking your device remotely but also a number of third-party apps built for locating your smartphone. The simplest to use is Find My Device, which is incorporated into your Android smartphone via Google Play Services — the app can be downloaded from the Google Play Store or used in a browser. Most devices powered by Android 2.3 or later should be able to access this feature.
Using this feature is as easy as searching for the phrase “Where is my phone” on Google, which will trigger the service to start searching for your smartphone. We have previously talked about Find My Device and its ability to set up a new password, call you, and make your phone ring from afar, along with its notification functions. As long as you set up Find My Device ahead of time, the service should be accessible in the event you misplace or lose your phone. It will use GPS or Wi-Fi to help you track down your device.
There are lots of anti-theft and phone recovery apps on the Google Play Store that could be valuable if you need to locate a lost or, even worse, a stolen device. But many people only find out about those apps after they have lost their device. At that point, there is nothing that can be done to retrieve the phone, and they have to face the bitter truth that the mobile device is gone for good.
To be tracked, a lost Android phone typically still needs a working internet connection in order to transmit its exact location. To get a precise fix, it should have access to an active Wi-Fi network. Regardless of your situation, we will talk about the most popular options as well as more out-of-the-box techniques to retrieve your lost phone.
Find My Device is the official, easy-to-use tool by Google to locate your lost tablet or Android phone. The amazing thing about it is that you don’t need to set up an app to be able to locate your devices. The only conditions are that the Android phone is linked to your Google account, switched on and connected to the internet. All you have to do is visit the Find My Device website while signed in to your Google Account. Once the site comes up, it will automatically try to locate your lost phone. If you have numerous Android devices registered, ensure that the right one is selected in the menu.
In a recent update, Google incorporated some of these functions into its search results page. This implies that you can quickly track any registered Android device right from the search results. By inputting the search phrase “where is my phone”, Google will reveal a little map near the search results through which it will attempt to locate your lost Android phone. When found, you can trigger the alarm by clicking on “Ring”.
Although this lets you find your lost phone quickly, it won’t provide all the options you get with the complete Find My Device service.
By using it, you are able to trace the location of your registered Android devices, trigger your phone’s alarm and erase your phone’s data (which has to be activated on your phone beforehand). Aside from that, Find My Device doesn’t provide many more options for remotely controlling your lost device. I hope that Google keeps advancing the service and introduces more valuable features, for instance, taking a selfie of the person using the phone in the event that it is stolen.
If you don’t have a laptop around when you lose your cellphone, you can also use another person’s phone to track it. In place of the mobile browser, you can use the Find My Device app. Sign in using guest mode with your Google account credentials. You should then be able to trace the location of your lost device, wipe its data or trigger the alarm.
One of the best ways to earn money online is by currency trading over the internet. Though it looks like fun and an easy task, in reality it requires lots of in-depth analysis and research before you start to trade your hard-earned money for another currency. If you are looking for thorough and genuine guidance in this context, you can log on to https://www.forexreversal.com for more details.
What are the different types of trading analysis?
You can start with fundamental analysis in Forex trading by keeping an eye on several factors such as Gross Domestic Product, interest rates, the unemployment rate, etc. For example, if you are trading EUR/USD, then with the help of fundamental analysis you should learn more about the interest rates set in the Eurozone. More importantly, you will want to focus on the latest Eurozone data releases that could affect the currency pair.
Technical analysis is available in automated and manual formats. With the help of genuine Forex tools, you can evaluate the current price of a currency by focusing on the movement of prices in the past. With manual analysis, you have to interpret the past data yourself, which helps you make the decision to sell or buy.
By contrast, in automated technical analysis you take the help of software to make your selling or buying decisions. When you use automated software, you have an upper hand, as you don’t have to wrestle with the behavioral biases that sometimes mislead online traders.
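As a simplified illustration of what automated technical analysis boils down to, here is a minimal Python sketch of one classic rule, a moving-average crossover. The function names, window lengths and price series are invented for the example and are not a trading recommendation:

```python
def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """Return 'buy', 'sell' or 'hold' from the latest fast/slow SMA crossover."""
    if len(prices) < slow + 1:
        return "hold"  # not enough history to compare two points in time
    fast_now, slow_now = sma(prices, fast), sma(prices, slow)
    fast_prev, slow_prev = sma(prices[:-1], fast), sma(prices[:-1], slow)
    if fast_prev <= slow_prev and fast_now > slow_now:
        return "buy"   # fast average just crossed above the slow one
    if fast_prev >= slow_prev and fast_now < slow_now:
        return "sell"  # fast average just crossed below the slow one
    return "hold"
```

Real trading platforms layer risk management, spreads and many more indicators on top of rules like this, but the core of an automated strategy is exactly this kind of mechanical decision applied to price history.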
With the help of weekend analysis, you will be able to design a trading blueprint for the next week, which will help you gain an upper hand. It is advisable to trade cautiously around weekends, as the market can fluctuate sharply at this time of the week.
What is the need of trading analysis?
Helps you become more informed – With the help of several analysis techniques, you will be able to get more market insights, which ultimately helps you make the right decision on whether to trade a particular currency or not.
Market pattern knowledge – With the help of several charts a trader will get help in predicting the movement of price in the money market. You will also be able to notice clearly the favorability of a market in which you are interested.
Helps you to determine trends with ease – With the help of analysis techniques and chart reading you will be able to determine, what the course of direction of a particular currency is. This process will help you to make a proper blueprint about trading.
Fast techniques – With the help of automated analysis, you will be able to save a considerable amount of time. Automated analysis is accurate, which further saves you from making a wrong trading decision.
Helps in understanding intermarket relationships – Analysis also helps a trader to know the relationships among several markets. Traders can also see whether a currency’s movement in a market is inverse or otherwise.
With the use of looper pedals, while playing the guitar, you will be able to open a new world of possibilities for you. It can be used with the guitar as well as your voice and give you a different experience with music. Looper pedal is an additional device which you should buy apart from phaser, flanger and overdrive and distortion pedals. For many guitarists and artists, looper pedal is the true force multiplier. You can get a large impact on your performance with the use of this type of pedal. It is a genius device as it allows you to record the shortest segments of music and play them in a loop whenever you want.
When you use the looper pedal, you can add different types of effects to your song. The simplest model of the looper pedals comes with the one loop feature. Thus, it is able to record and play only one loop at a time. It is a good device for beginners. Advanced models of the looper pedals are capable of recording the multiple loops and layer them one on another. In addition to this, you can also find the looper pedals with which various types of effects can be added to the song. It can be a bit tricky for the artists to choose the best type of looper pedal hence they can visit the website instrumentpicker.com to get the best guide for buying this small device. It is the best way to pick the right device.
If you play the acoustic guitar then this device will prove to be an amazing device that supports creating the percussion layer. It can be done by tapping on the upper side of the guitar. Rhythm riff is another layer that can be added to create incredible masterpieces in a short period of time with just one pedal, a guitar, and an amplifier. You can reflect the whole dimension of creativity with the use of looper pedal. Amplified guitars can also be used with the looper pedal to create the amazing musical compositions. Multiple audio layers can be created with the help of this looper pedal to create complex but melodious musical masterpieces.
By using the looper pedal, you can see the new horizons it opens up for you. If you have a band, each band member knows what he or she has to do. But if you are alone, this device will let you give a band-like performance. It is a standalone device used by many street performers and one-man-band performers. Different layers of melody, riffs, and music can be recorded and played back with a press of the pedal. Hence, it is easier for artists or performers to feel as if multiple musicians are performing with them.
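For the technically curious, the layering a looper performs is conceptually just summing audio buffers and wrapping them around the loop length. Here is a hedged Python/NumPy sketch of that idea; the sample rate, function names and normalization choice are illustrative, not how any particular pedal is implemented:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second; a common audio rate

def record_loop(audio, loop_seconds):
    """Cut the first loop_seconds of a recording to use as the base loop."""
    return audio[: int(loop_seconds * SAMPLE_RATE)].copy()

def overdub(loop, new_layer):
    """Mix a new take on top of an existing loop, wrapping it to loop length."""
    out = loop.copy()
    for i in range(len(new_layer)):
        out[i % len(loop)] += new_layer[i]  # wrap the new take around the loop
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out  # normalize to avoid clipping

def play(loop, n_repeats):
    """Repeat the finished loop end to end, as the pedal does on playback."""
    return np.tile(loop, n_repeats)
```

Each press of "overdub" on a real pedal does essentially the `overdub` step above: the new performance is summed into the running loop, so percussion taps, a rhythm riff and a melody can stack into one repeating phrase.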
Camcorder vs Compact vs Point-and-Shoot?
A good number of cameras below $300 are point-and-shoot or compact models. This is the reason they are cheap and consumer friendly. The cost of these digital cameras differs according to the features, megapixel count, and kind of lens they provide.
Compact cameras are built with an attached lens but feature shooting modes that provide variety. Point-and-shoot cameras are built like compact cameras but have no features apart from the camera flash, zoom, and macro. Point-and-shoots are the simplest cameras.
Camcorders lack plenty of photography features but have flip-screens and higher-end video which assists vlogging on a budget.
How And Where Will You Vlog?
Before buying a compact digital camera, you should take its intended use into consideration. If you plan to use the digital camera for simple, quick videos, then a point-and-shoot is the right budget choice.
Simplicity is something that point-and-shoot cameras provide due to the fact that all that is required is to zoom in or out and then record the video.
However, if you intend to record videos of sports, concerts or travel experiences, then you should really consider purchasing a more detailed digital compact camcorder or camera. The more expensive compact cameras have the ability to capture images and video. Some cameras are waterproof for photography underwater or for use on a rainy day.
Will The Camera Be Used For Action Videos?
Small compact cameras such as the GoPro have become the most renowned action cameras in the industry, and a popular choice for creators or travel vloggers who are always mobile. These cameras are perfect for recording sports, surfing and other action events. An intriguing characteristic of sports cameras is the capacity to record underwater.
If you intend to take your affordable vlog camera into the water or extreme weather, a portable underwater action camera gives you the opportunity to take pictures or record videos without the issue of destroying your investment. These cameras usually have a limit of about 300 feet below water.
If you are being economical then it is useful to locate an affordable vlog camera that also takes care of your photography needs that will assist in making high-quality channel art and thumbnails.
Tv (time value) stands for shutter priority. This signifies that you set the shutter speed and your camera will automatically choose the compensating f-stop for the correct exposure.
Av (aperture value) stands for aperture priority. This signifies that you set the f-stop and your camera will automatically choose the compensating shutter speed for the correct exposure.
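The two priority modes are linked by the reciprocity of exposure: for f-number N and shutter time t, exposure value is EV = log2(N²/t), so the camera picks whichever free parameter keeps EV constant. A small Python sketch of that compensation (simplified; real cameras also factor in ISO and metering, and the function names here are invented):

```python
import math

def exposure_value(n, t):
    """EV for f-number n and shutter time t in seconds, at base ISO."""
    return math.log2(n ** 2 / t)

def compensating_shutter(t1, n1, n2):
    """Shutter time keeping exposure constant when f-stop goes n1 -> n2 (Av mode)."""
    return t1 * (n2 / n1) ** 2

def compensating_fstop(n1, t1, t2):
    """F-number keeping exposure constant when shutter goes t1 -> t2 (Tv mode)."""
    return n1 * math.sqrt(t2 / t1)

# Example: stopping down from f/4 to f/8 at 1/100 s requires 1/25 s
new_shutter = compensating_shutter(1 / 100, 4, 8)
```

Each full f-stop halves the light, so two stops down (f/4 to f/8) needs four times the shutter time, which is exactly what the formula reproduces.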
Pickleball has become an amusing game to all the adults and kids. Whether you are playing professionally or for fun, you always have a desire to win the game. However, with no knowledge on the technique or tactics for playing the game, you may make mistakes. We have talked about those mistakes, and you have to avoid them during your pickleball gaming sessions.
While playing pickleball, you have two major spots to stand: the kitchen line and the baseline. In most cases, you should not stand anywhere else on the court.
Lots of novice players make the mistake of standing in the wrong part of the court. When you are not at the right spot, other players can hit their shots at your feet, and those shots are not easy to return. You may also struggle with cross-court dinks because of this mistake.
As you learn pickleball, you want to play in a way that prompts other players to take the wrong step. When you hit to a competitor’s forehand, you make everything easy for them. That is why you should target the player’s backhand instead.
Most players do not have much skill playing with the backhand. If you are playing the game for recreation, this trick will easily work for you.
Poaching is comparable to stealing a shot from your gaming partner. When you have no good reason to do it, you should avoid poaching. While scooting over, you can leave a gap, which is why poaching may be risky: your opponent will try to take advantage. When poaching, you have to make sure the rival player is not able to recover the shot.
Although spin shots are effective options, you should not always use them. They can lead to various mistakes. Against advanced players, spin shots are not the right choice; if you choose them, you take a risk.
Applying spin shots requires high accuracy and skill. If you have no special skill, you should avoid spin shots. The major things to focus on are consistency and persistence.
You may think of testing your skill by applying the cross-court dinking technique. However, this battle can turn out to be risky for you. You can try to dink to your rival player straight on; dinking behind you is not right. If the other player’s skill is better, you have a good chance of losing the point.
You have to avoid the above mistakes while playing the game of pickleball. You can visit the site pickypickleball to know more about the game and its accessories. Buy the kits to play the pickleball game in a better way. | 2019-04-19T18:16:58Z | http://www.dannyhowells.com/ |
Lithostratigraphic, magnetostratigraphic and rock-magnetic cyclostratigraphic data were combined to create a high-resolution age model for 342 m of Late Pliocene–Middle Pleistocene marine deposits exposed in the Stirone River, northern Italy. Magnetostratigraphic analysis of 74 oriented samples at 21 stratigraphic horizons recognized five polarity zones between c. 3.0 and 1.0 Ma. Unoriented samples were collected every metre between 0 and 311 m and low-field magnetic susceptibility (χ) was measured for cyclostratigraphic analysis. The χ data series was tied to absolute time using the magnetostratigraphy and subjected to multi-taper method spectral analysis. The resultant power spectra revealed significant frequency peaks that are aligned with eccentricity, obliquity and precession Milankovitch orbital cycles. The χ data, correlated to the 41 ka obliquity and the 23 ka/19 ka precession cycles and anchored to a well-established biostratigraphic horizon, were used to create a high-resolution age model for the Stirone section between 2.99 and 1.81 Ma, where stratigraphic positions of magnetic reversals were previously poorly defined. This cyclostratigraphic age model reveals that the length of an important depositional hiatus at the base of the C2An.1n subchron is 200 ka shorter than previously determined. We link the precession-aligned variability in χ to global mid-latitude, insolation-induced variability in runoff and ocean circulation.
Assigning absolute time to stratigraphic sections can be approached using several methods, each having its own unique temporal resolution, technical challenges and uncertainties. Conventional methods of geochronology, lithostratigraphy, biostratigraphy and magnetostratigraphy are typically used, but these methods do not provide the temporal resolution required for many tectonic, palaeoclimatic or sedimentological problems. The emergence of cyclostratigraphic correlation to accurate models of orbital variability (i.e. eccentricity, obliquity, precession) has provided a high-resolution metronome of Earth time that can be applied to stratigraphic sections (Hinnov 2000; Hinnov et al. 2004).
Cyclostratigraphy aims at recovering variations in rock textural, compositional or geochemical characteristics that serve as direct proxies of climatic variability resulting from orbital forcing functions (Hays et al. 1976). Cyclostratigraphic studies have employed a wide range of climatic proxies such as grain size, carbonate content (Shackleton et al. 1995), facies (Olsen 1986), biogenic silica (Williams et al. 1997) and stable isotopes (Hays et al. 1976), among others. An alternative approach is to use rock-magnetic measurements as climate proxies. Low-field magnetic susceptibility (χ) (Bloemendal & deMenocal 1989; Shackleton et al. 1999; Jovane et al. 2006; Ellwood et al. 2008; Jovane et al. 2010), natural remanent magnetization (NRM; Kruiver et al. 2002) and anhysteretic remanent magnetization (ARM; Latta et al. 2006; Kodama et al. 2010) are rock-magnetic measurements that were demonstrated to vary at frequencies associated with astronomically forced cycles. These rock-magnetic measurements record changes in magnetic parameters such as magnetic mineralogy, grain size and magnetic mineral concentration. The variability in these parameters was shown to be influenced by climatically controlled processes, including glacial/interglacial soil production (Heller & Evans 1995), aeolian dust flux (Latta et al. 2006), and changes in wet season runoff (Kodama et al. 2010). Rock-magnetic measurements are useful palaeoclimatic proxies because the measurements are objective, non-destructive and fast. They also reveal variability that is not otherwise observable in lithologically homogenous sections (Bloemendal et al. 1988).
In this study we demonstrate how rock-magnetic cyclostratigraphy can be used to improve biostratigraphic and magnetostratigraphic correlations by adding high-resolution time control in an important stratigraphic section that shows no major observable lithological cyclicity. We present the results of a combined lithostratigraphic, magnetostratigraphic and cyclostratigraphic interpretation of the Pliocene–Early Pleistocene Stirone section deposited at the Northern Apennine mountain front adjacent to the Po Plain, Italy. We used low-field magnetic susceptibility (χ) to correlate the Stirone section to the theoretical 41 ka obliquity and 23/19 ka precession orbital models (Laskar et al. 2004) during the Late Pliocene–Early Pleistocene time. This time interval contains widely spaced magnetic polarity reversal boundaries that were poorly constrained by previous magnetostratigraphic studies in the Stirone section (Mary et al. 1993). We use this high-resolution age model to adjust the timing of an important biostratigraphically determined unconformity and calculate high-resolution sediment accumulation rates. The high-resolution age model we present can be used in subsequent studies to measure deformation rates along the Apennine mountain front, determine the timing of biostratigraphic horizons or investigate the sequence stratigraphic framework of the Po plain sediments.
The Stirone River banks expose over 600 m of Messinian to recent foreland-dipping marine and continental synorogenic growth strata that are in close proximity to the original Piacenzian Stage (Late Pliocene) unit stratotype, located in the nearby Castell'Arquato Basin (Fig. 1; Rio et al. 1990b, 1991; Mary et al. 1993; Channell et al. 1994; Artoni et al. 2007). The Stirone section is a coarsening upward succession of growth strata exposed on the forelimb of the Salsomaggiore anticline that forms the structural front bordering the Po foreland basin (Artoni et al. 2004). The Late Miocene basal unit consists of marginal marine siltstones and sandstones called the Colombacci Formation (Fm). The Colombacci Fm is overlain by blue-grey Pliocene marine mudstones of the Argille Azzurre Fm (Mary et al. 1993; Channell et al. 1994; Amorosi et al. 1998a), which grades up-section into the Plio-Pleistocene fossiliferous silty muds containing several calcarenite layers of the Stirone Fm (Dominici 2001; Di Dio 2005). The fossils of the Stirone Fm were used to calibrate Pleistocene biostratigraphic horizons (Dondi 1961; Papani & Pelosio 1963). At the top of the section is a unit consisting of yellow littoral sands variously known as the Sabbie di Imola, Sabbie Gialle or Costamezzana Fm (Amorosi et al. 1998b; Di Dio 2005). Two separate palaeomagnetic studies conducted in the newly exposed Argille Azzurre Fm at the Stirone (Mary et al. 1993; Channell et al. 1994) document several magnetic reversals spanning the Early Pliocene to Early Pleistocene. Over the last two decades, the Stirone River exposures have improved and lengthened as incision of the river accelerated. These improved and continuous exposures provide the opportunity to refine the previous magnetostratigraphic age model and produce a high-resolution age model for the section using cyclostratigraphy. This study focuses on the upper 342 m of the section beginning at the base of the Stirone Fm (Fig. 2).
The investigated section begins directly above a prominent metre-thick fossil chemoherm oriented steeply to bedding. Chemoherms like this are ubiquitous in Neogene marine rocks along the northern Apennine front (Conti & Fontana 1999). They are caused by methane expulsion from the ocean floor and are analogous to the cold seeps and methanogenic carbonates generated along the Cascadia accretionary prism (Carson et al. 1990). The measured stratigraphic section (Fig. 2) is separated into two major lithostratigraphic units that exhibit bedding orientations that become progressively shallower up-section. Between 0 and 330 m, the section consists of coarsening-upward blue-grey mud and silt, with bundles of 0.25–1 m thick fossiliferous calcarenites of the Late Pliocene–Middle Pleistocene Stirone Formation (Bertolani Marchetti et al. 1979). A short 20 m interval containing sapropels is present at the base of the section (Channell et al. 1994). The first occurrence of the mollusc Arctica islandica, at 311 m, approximates the end of the Gelasian stage of the Early Pleistocene (c. 1.81 Ma; Raffi 1986). Above the Stirone Fm (330–340 m) a medium- to thin-bedded, well-sorted, tan to yellow cross-bedded sand is exposed. This sand represents a littoral facies and is the Costamezzana Fm (or Sabbie Gialle) (Amorosi et al. 1998b; Di Dio 2005). The contact between the Costamezzana and Stirone Fm is gradational over 10–15 m. The uppermost unit (340–342 m) is a blue freshwater mud known as the AEI (Lower Emilia synthem), which lies unconformably over the littoral sands.
Map of the study area in the Northern Apennines, Italy. (a) Index map inset. (b) Bedrock geological map of the Northern Apennine mountain front in the vicinity of the Salsomaggiore anticline and the adjacent Po Plain. The Messinian–Pleistocene Stirone section is exposed in the Stirone River valley. (c) Bedrock geological map of the Stirone River study area showing our sample locations. The white squares represent palaeomagnetic sites, where oriented samples were collected. Black circles represent the location of every tenth rock-magnetic sample. Geological maps modified from Di Dio (2005).
(a) Previous magnetostratigraphy for the upper Stirone section (Mary et al. 1993), in which two polarity zones were defined. These two zones were separated by a large zone of uncertain polarity. (b) Lithology of the Stirone section. The section is a coarsening upward succession of mud, silt, sand and calcarenites that displays no major lithological cyclicity. The stratigraphic level of the first occurrence of the mollusc Arctica islandica is also shown. (c) Virtual geomagnetic pole (VGP) latitude of oriented samples in the Stirone section. Solid symbols represent mean VGP for each horizon and open symbols show the individual sample VGPs for the same horizon. The circles represent sites that were collected during our first field season and the squares are samples collected during our second field season. Using the average VGP latitudes we define five polarity zones in the upper Stirone section. (d) Low-field magnetic susceptibility (χ) of the upper Stirone section. The solid circles show the values for each sample collected at 1 m spacing.
We measured 342 m of section beginning just above the fossil chemoherm at the base of the Stirone Fm (Figs 1 & 2) and described the lithology every 1 m. Oriented samples were collected at 21 target horizons. A previous magnetostratigraphic study by Mary et al. (1993) recognized a single magnetic polarity reversal in the lower part of the section (Fig. 2), but acknowledged a c. 30 m interval of uncertain polarity. Additionally, Mary et al. (1993) did not identify any additional reversals at the top of the section. Because our magnetostratigraphy was meant to refine the previous reversal stratigraphy by Mary et al. (1993) and to provide an absolute time reference for our cyclostratigraphic analysis, our target horizons were not equally spaced throughout the section. Instead, we sampled densely near the uncertain polarity zone observed by Mary et al. (1993) (Fig. 2). We also sampled densely between 260 and 315 m in order to identify the presence of the Olduvai normal chron, which was not observed in the previous study by Mary et al. (1993). The first occurrence of Arctica islandica (c. 1.81 Ma) at 311 m predicts the normal Olduvai chron to be located stratigraphically below that horizon. Finally, we collected oriented samples from five sites in the upper 15 m of the section from the Sabbie Gialle sands, from which no previous magnetostratigraphic samples had been collected. At least three samples were collected from each horizon and 74 oriented samples were collected in total. Oriented samples were collected by carving small pedestals in the outcrop and by orienting 8 cm3 plastic boxes on the pedestals. Four sites were located in well-indurated calcarenite units where 2.54 cm diameter oriented palaeomagnetic cores were collected using a Pomeroy EZ Core drill. From each site that was drilled we obtained four or five cores, from which we procured one to two samples per core.
Oriented samples were subjected to alternating-field and thermal demagnetization. Alternating-field (AF) demagnetization was conducted from 10–100 mT in 10 mT steps using a 2 G Enterprises superconducting magnetometer at Lehigh University. Samples subjected to progressive thermal demagnetization were heated to 600 °C in 50 °C steps, using an ASC-TD-48 thermal demagnetizer. Principal component analysis (PCA) was conducted to determine the characteristic remanent magnetization (ChRM) of each sample (Kirschvink 1980). Mean remanent declinations and inclinations were calculated for each horizon (site) using Fisher statistics (Fisher 1953). Virtual geomagnetic poles (VGPs) were calculated from the site mean directions. The VGP latitude was used to determine the polarity of a horizon.
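The conversion from a site-mean direction to a VGP uses the standard geocentric-dipole formulae. A minimal sketch (function name and test values are illustrative, not taken from the study):

```python
import math

def vgp(dec, inc, site_lat, site_lon):
    """Virtual geomagnetic pole from a site-mean direction.

    Standard geocentric-dipole formulae; all angles in degrees.
    Returns (pole latitude, pole longitude).
    """
    D, I = math.radians(dec), math.radians(inc)
    lam, phi = math.radians(site_lat), math.radians(site_lon)
    # magnetic colatitude p from the dipole equation tan(I) = 2 / tan(p)
    p = math.atan2(2.0, math.tan(I))
    pole_lat = math.asin(math.sin(lam) * math.cos(p)
                         + math.cos(lam) * math.sin(p) * math.cos(D))
    # longitude offset, with the usual hemisphere test
    beta = math.asin(math.sin(p) * math.sin(D) / math.cos(pole_lat))
    if math.cos(p) >= math.sin(lam) * math.sin(pole_lat):
        pole_lon = phi + beta
    else:
        pole_lon = phi + math.pi - beta
    return math.degrees(pole_lat), math.degrees(pole_lon) % 360.0
```

A reversed-polarity direction (negative inclination, southerly declination) yields a negative VGP latitude, which is how the polarity zones in Figure 2c are assigned.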
Partial anhysteretic remanent magnetization (pARM), isothermal remanent magnetization (IRM) and low-temperature (−196 °C) magnetic susceptibility (χ) were measured for representative samples to determine the magnetic mineralogy of the Stirone Fm. A pARM spectrum from 0 to 100 mT was obtained by applying a pARM in 5 mT steps with a DC bias field of 97 µT. IRM acquisition experiments were conducted using an ASC Impulse Magnetizer. Samples were subjected to a stepwise increase in field from 0 to 1 T and the magnetization was measured after each of the 21 steps. Modelling of the IRM acquisition curves was conducted using the software developed by Kruiver et al. (2001). Low-temperature χ was determined by measuring χ using a KLY-3S Kappa bridge immediately after immersing the sample in liquid nitrogen (−196 °C) for 1 min.
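The Kruiver et al. (2001) approach models the gradient of the IRM acquisition curve as a sum of log-Gaussian coercivity components. A sketch of the forward model (the component parameters below are illustrative round numbers, not fitted results from this study):

```python
import numpy as np

def irm_gradient(log_b, components):
    """Gradient-of-acquisition curve as a sum of log-Gaussians.

    Each component is (sirm_fraction, log10_B_half, dispersion),
    following the cumulative log-Gaussian approach of
    Kruiver et al. (2001).
    """
    g = np.zeros_like(log_b)
    for sirm, b_half, dp in components:
        g += (sirm / (dp * np.sqrt(2.0 * np.pi))
              * np.exp(-0.5 * ((log_b - b_half) / dp) ** 2))
    return g

# e.g. a minor low-coercivity (~20 mT) and a dominant
# high-coercivity (~75 mT) component (illustrative values)
example = [(0.15, np.log10(20.0), 0.2), (0.84, np.log10(75.0), 0.2)]
```

The fitted B1/2 and dispersion of each component, and its fraction of the total SIRM, are the quantities reported from this kind of unmixing.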
Unoriented rock-magnetic samples were collected every 1 m between 0 and 311 m. All rock-magnetic samples were collected in pre-weighed 8 cm3 plastic boxes. Low-field magnetic susceptibility (χ) was measured using a KLY-3S Kappa bridge at room temperature and was normalized by sample mass. The χ data series was used to investigate magnetic mineral concentration variations in the Stirone section for possible Milankovitch cyclicity. Cyclostratigraphic analysis of the rock-magnetic measurements was focused on the 0–311 m part of the section because the two magnetic reversals in that interval had uncertain stratigraphic positions and no major biostratigraphic hiatuses were previously recognized there.
The multi-taper method (MTM) (Thompson 1982, 1990) was used to determine the power spectrum of the χ data series. In order to remove under-sampled low frequency variability, the data series was de-trended by calculating the residuals of a best-fit second-order polynomial. The χ data series was re-scaled using the new magnetostratigraphic correlations from this study and re-sampled in even intervals with simple linear interpolation using the program Analyseries 2.0.4.2 (Paillard et al. 1996). MTM power spectra were calculated using the SSA-MTM program (Ghil et al. 2002), and 95 and 99% confidence intervals were determined for a robust red noise model (Mann & Lees 1996). Spectral results were band-passed using Gaussian filters where peaks coincided with Milankovitch frequencies. The filtered time series was used to aid the correlation to the theoretical orbital series (Laskar et al. 2004) by matching the peaks in the filtered series to the peaks in the theoretical model (see the Discussion).
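The processing chain (second-order polynomial detrending, even resampling by linear interpolation, multi-taper spectral estimation) can be sketched in a few lines. This is a simplified stand-in for the Analyseries and SSA-MTM toolkit workflow described above, not the software actually used:

```python
import numpy as np
from scipy.signal.windows import dpss

def mtm_spectrum(depth, chi, n_points=512, nw=3):
    """Minimal multi-taper power spectrum of an unevenly sampled series.

    Detrends with a second-order polynomial, resamples evenly by
    linear interpolation, then averages the periodograms of the
    first 2*NW - 1 Slepian (DPSS) tapers.
    """
    # residuals of a best-fit second-order polynomial (removes trend)
    resid = chi - np.polyval(np.polyfit(depth, chi, 2), depth)
    # even resampling by simple linear interpolation
    grid = np.linspace(depth.min(), depth.max(), n_points)
    y = np.interp(grid, depth, resid)
    dx = grid[1] - grid[0]
    # average the eigenspectra over the Slepian tapers
    tapers = dpss(n_points, nw, Kmax=2 * nw - 1)
    power = np.mean([np.abs(np.fft.rfft(t * y)) ** 2 for t in tapers],
                    axis=0)
    freqs = np.fft.rfftfreq(n_points, d=dx)  # cycles per metre
    return freqs, power
```

Confidence testing against a red-noise model and the Gaussian band-pass filtering are additional steps omitted from this sketch.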
Thermal and AF demagnetization techniques were shown to be similarly successful in isolating the ChRM of the oriented samples (Fig. 3); 70 out of 74 samples yielded clustered ChRM directions after PCA was conducted. Vector endpoint diagrams (Zijderveld 1967) show representative samples with normal and reversed ChRMs (Fig. 3). Progressive demagnetization curves reveal that some thermally demagnetized samples exhibit a decrease in NRM intensity near 300 °C, while others show an intensity decrease around 550 °C. The primary magnetization is represented by the low-temperature component for samples that exhibit a 300 °C decrease in NRM intensity. A few samples that were demagnetized using the AF technique revealed an unstable magnetization at intermediate to high coercivities (50–100 mT).
(a) Vector endpoint diagrams for representative samples. Sample ST-A-2-B shows endpoint vectors for a representative sample subject to thermal demagnetization. ST-594, ST-624 and ST-633 show the endpoint vectors for representative samples subject to AF demagnetization. Square symbols represent the vertical component of magnetization and circular symbols represent the horizontal component. (b) Progressive demagnetization curves for the samples shown in (a). Curves show the sample magnetization remaining after each demagnetization step, normalized to the natural remanent magnetization.
The VGP latitudes of oriented samples reveal four geomagnetic field reversals that define five polarity zones in the Stirone River section (Fig. 2). The polarity zones are defined by consecutive sites exhibiting a similar polarity. The polarity for each site was determined using the VGP latitude calculated from the mean remanence direction for the site. Most sites show a consistent polarity within the stratigraphic horizon with the exception of two samples that show a polarity opposite the site average. Because the sampling strategy was designed to refine the position of previously established polarity zone boundaries, oriented samples were not collected uniformly throughout the section. The base of the section exhibits a normal polarity. A reversal is observed between the sites located at 105 and 110 m, which is within the uncertain polarity zone of Mary et al. (1993). The reversed interval ends between the 267 and 272 m sites. The next reversal is located directly above the uppermost calcarenite at 311 m, coincident with the first occurrence of the mollusc Arctica islandica, which occurs approximately at the end of the Gelasian stage in the Early Pleistocene (Raffi 1986; Dominici 2001). The final reversal occurs at 330 m, at the base of the yellow littoral Sabbie Gialle sands.
Results of the low-temperature χ experiment indicate a magnetic mineralogy that contains both ferromagnetic and paramagnetic components (Fig. 4). Paramagnetic χ is temperature dependent; samples in which χ is completely dominated by paramagnetic components are predicted to exhibit a c. 400% increase in χ at −196 °C (Richter & van der Pluijm 1994). Ferromagnetic susceptibility, however, is not temperature-dependent and samples with a ferromagnetic mineralogy should display no increase in χ at low temperature. The Stirone samples displayed an increase in χ at low temperature; however, the increase (125–200%) was too small to conclude that either paramagnetic or ferromagnetic components alone dominate the susceptibility, indicating that both paramagnetic and ferromagnetic minerals contribute to χ.
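The c. 400% benchmark follows from the Curie law (χ ∝ 1/T): cooling from room temperature (293 K) to liquid nitrogen (77 K) multiplies paramagnetic χ by 293/77 ≈ 3.8, while ferromagnetic χ stays roughly constant. Under a simple two-component mixing model (an illustration, not the authors' calculation), the measured low-temperature/room-temperature ratio gives the paramagnetic fraction:

```python
def paramagnetic_fraction(ratio, t_room=293.0, t_liq_n2=77.0):
    """Fraction of room-temperature chi carried by paramagnetic minerals.

    Assumes chi_lowT/chi_room = f * (t_room/t_liq_n2) + (1 - f), i.e.
    Curie-law paramagnetism plus temperature-independent
    ferromagnetism (illustrative two-component mixing model).
    """
    curie = t_room / t_liq_n2  # ~3.8, the "c. 400%" benchmark
    return (ratio - 1.0) / (curie - 1.0)
```

On this model, the observed 125–200% increases (ratios of 2.25–3.0) would correspond to roughly 45–70% of the room-temperature χ being paramagnetic.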
Results of our low-temperature (−196 °C) χ experiment delineate zones showing the relative contributions of paramagnetic and ferromagnetic minerals to the total χ. The ratio of low-temperature χ to room-temperature χ is expected to be 4:1 for samples with a paramagnetic mineralogy. Samples with ferromagnetic mineralogy are expected to show little variation in χ at low temperature.
The pARM acquisition experiments (Fig. 5) exhibit two characteristic spectra: first, spectra that contain a single peak indicating the sample is dominated by a single low-coercivity (c. 25 mT) ferromagnetic component of magnetization; and second, spectra with multiple peaks indicating multiple ferromagnetic components of magnetization with slightly overlapping coercivity ranges (c. 30 and c. 85 mT; Jackson et al. 1988). IRM acquisition modelling results (Fig. 6) also suggest two dominant ferromagnetic components of magnetization. The high coercivity component, ranging from 67 to 83 mT, is shown to comprise c. 84% of the total magnetization while the low coercivity component, ranging from 17 to 25 mT, comprises c. 15% of the total magnetization. Minor tertiary component(s) comprise the remaining c. 1% of the magnetization.
Partial anhysteretic remanent magnetization (pARM) spectra for representative unoriented samples from the Stirone section. Solid symbols represent magnetization at each ARM step. A peak in the ARM spectrum represents the mean coercivity of the magnetization component. Spectra with two peaks have two components of magnetization.
Results from IRM acquisition modelling for representative samples indicate two major components of magnetization in the Stirone section. The squares represent the gradient of the IRM acquisition curve at each magnetization step. Solid black line indicates a best-fit curve to the acquisition data and represents a sum of the dashed lines, which represent the individual mineral components.
The χ data series (Fig. 2) exhibits multi-hierarchical, high frequency variability. The χ data displays little variation in the basal c. 30 m with large swings in amplitude occurring between 220 and 260 m, coincident with the deposition of the upper calcarenite beds and a general coarsening of the lithology. Our MTM spectral analysis of the un-tuned, residual data series reveals significant spectral peaks with ratios that suggest the presence of Milankovitch cyclicity (Fig. 7). The χ power spectrum exhibits significant peaks at frequencies of 1/85.3, 1/10.2, 1/6.3 and 1/5.8 m. To help identify Milankovitch forcing, we used the ratios between the observed periodicities. The predicted ratios of the long eccentricity and obliquity periods to the 23 and 19 ka precession peaks are 17.6:1 and 21.3:1 for the 405 ka eccentricity cycle and 1.8:1 and 2.1:1 for the 41 ka obliquity cycle. Assuming that the highest frequency peaks in the χ data series represent precession cycles, the two low-frequency cycles approximate the predicted ratios for long eccentricity and obliquity cycles (13.5:1 and 14.7:1 for the long eccentricity cycle; 1.6:1 and 1.8:1 for the obliquity cycle).
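The ratio test above is easy to verify with the numbers quoted in the text (Milankovitch periods in ka, spectral peaks in metres):

```python
# Predicted Milankovitch period ratios versus ratios observed among
# the chi spectral peaks; all input numbers are quoted in the text.
ecc, obl, p23, p19 = 405.0, 41.0, 23.0, 19.0
predicted = {"ecc/p23": ecc / p23, "ecc/p19": ecc / p19,
             "obl/p23": obl / p23, "obl/p19": obl / p19}

m853, m102, m63, m58 = 85.3, 10.2, 6.3, 5.8
observed = {"ecc/p23": m853 / m63, "ecc/p19": m853 / m58,
            "obl/p23": m102 / m63, "obl/p19": m102 / m58}

for key in predicted:
    print(f"{key}: predicted {predicted[key]:.1f}, "
          f"observed {observed[key]:.1f}")
```

The obliquity/precession ratios come out close to prediction, while the long-eccentricity ratios fall somewhat short, consistent with the sedimentation-rate variability discussed below.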
Multi-taper method power spectrum of the (a) un-tuned and the (b) obliquity-tuned χ data series show significant cycles aligned with Milankovitch frequencies. The solid grey lines represent the 95 and 99% confidence intervals above a robust linear red-noise model (Mann & Lees 1996). The bandwidth is 0.0096 m−1. Spectral power at each frequency has been normalized to the total power of the spectrum. The temporal periodicities of each of the significant peaks were calculated using the average sedimentation rate of 0.24 mm a−1 as determined by the magnetostratigraphy. The improved alignment in the precession band in the obliquity-tuned power spectrum (b) demonstrates the efficacy of our obliquity correlation (Fig. 10).
The high-coercivity component of magnetization that is observed in the pARM spectra and in the IRM acquisition modelling experiments is interpreted to represent secondary iron sulfide minerals (i.e. pyrrhotite or greigite) and the low-coercivity component is most likely detrital magnetite. The coercivity ranges exhibited in the pARM and IRM experiments are consistent with the documented coercivity ranges for these minerals (Peters & Dekkers 2003). Iron sulfide minerals, especially greigite, are common in Pliocene–Pleistocene mudstones in the region (Sagnotti & Winkler 1999). The presence of these two components is further supported by the thermal and AF demagnetization behaviour shown in Figure 3. The drop in magnetization at 300–350 °C in the thermal demagnetization curves is most likely caused by iron sulfide minerals that become magnetically unstable at those temperatures. In contrast, the decrease in magnetization above 550 °C is consistent with the behaviour of detrital magnetite. The presence of a mixed magnetite/iron-sulfide mineralogy is in agreement with the detailed rock magnetic work of Mary et al. (1993) that also concluded a mixed ferromagnetic mineralogy consisting of detrital magnetite and diagenetic iron sulfides. They also noted that the relative abundance of magnetite and sulfide minerals is variable throughout the Stirone section. Mary et al. (1993) described a zone that is dominated by iron sulfide ferromagnetic mineralogy in the upper part of the section above c. 130 m. They demonstrated that the remainder of the section is dominated by magnetite with iron sulfides as a minor secondary component.
We correlated our reversals to the Gradstein et al. (2004) geomagnetic polarity time scale (Fig. 8). We correlated the long normal polarity zone at the base of the section (N1) to the C2An.1n subchron. This correlation is supported by the previous magnetostratigraphic studies conducted by Mary et al. (1993) and Channell et al. (1994). However, the age at the base of our section is not resolvable using the magnetostratigraphy alone; the biostratigraphic data of Channell et al. (1994) suggest that c. 200 ka of time is missing at the base of the C2An.1n subchron at the Stirone section. The R1 interval at the Stirone section corresponds to the C2r.2r subchron. The magnetostratigraphy of Mary et al. (1993) interpreted this reversed subchron as extending all the way through the calcarenites between 270 and 310 m in the section. Our magnetostratigraphy observed several normal polarity sites within the calcarenites in this interval that we correlate to the Olduvai subchron (C2n). Our magnetostratigraphy failed to resolve the short-lived Réunion normal subchron that occurs at c. 2.14 Ma because our sample spacing was not close enough to resolve the 10 ka event. The top of the Olduvai subchron corresponds to the interval directly above the uppermost calcarenite in the Stirone section. This correlation is strengthened by the first occurrence of the mollusc Arctica islandica, which occurs at the same horizon. The C1r.2r subchron is represented by the R2 polarity zone and appears to be a condensed stratigraphic section at the Stirone. The final reversal, at the base of the yellow littoral sands, corresponds to the base of the Jaramillo subchron (C1r.1n) at c. 1.07 Ma. Our age determination for these sands is in good agreement with previous magnetostratigraphic and cosmogenic studies further east along the mountain front that determined a c. 1 Ma depositional age for the Sabbie Gialle (Marabini et al. 1995; Cyr & Granger 2008).
Because the contact between the yellow sands and the overlying AEI unit is unconformable at the Stirone section and both units exhibit a normal polarity, it is impossible to tell if the contact lies within the Jaramillo subchron or spans several reversals.
The Stirone section's magnetostratigraphic correlation to the Gradstein et al. (2004) geomagnetic polarity timescale is indicated by the dashed lines. The correlation at the base of the N1 polarity zone was in accordance with the previous magnetostratigraphic interpretations of Mary et al. (1993) and Channell et al. (1994). The N2/R2 boundary is coincident with the first occurrence of the mollusc Arctica islandica occurring at c. 1.81 Ma. The R2/N3 boundary occurs at the base of the Sabbie Gialle sands.
We used our magnetostratigraphic correlations (Fig. 8) to assign absolute time to the section and confirm the presence of orbitally forced cyclicity in the χ data series. We fixed the timing of each magnetic reversal at the stratigraphic horizon where the reversal was observed and assumed constant sediment accumulation rates between the reversal boundaries. We found the significant spectral peaks in the χ power spectrum occurred at 348.3 ka (1/85.3 m), 41.8 ka (1/10.2 m), 25.8 ka (1/6.3 m) and 23.6 ka (1/5.8 m) (Fig. 7). The spectral peak that is best aligned with its predicted orbital frequency is the 41 ka obliquity peak. The two precession cycles are longer than predicted by the orbital model while the long eccentricity cycle is shorter than expected. The slightly misaligned peaks are probably due to variations in sediment accumulation rate that cannot be resolved at the magnetostratigraphic scale.
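The depth-to-time conversion for the spectral peaks is a single division; conveniently, metres divided by mm a−1 yield thousands of years directly:

```python
def period_ka(wavelength_m, rate_mm_per_a):
    """Temporal period of a stratigraphic wavelength.

    (m) / (mm/a) = (1000 mm) / (mm/a) = 1000 a = 1 ka,
    so the division gives the period in ka directly.
    """
    return wavelength_m / rate_mm_per_a
```

At the 0.24 mm a−1 average rate, the 10.2 m cycle corresponds to c. 42.5 ka and the 85.3 m cycle to c. 355 ka; the slightly different values quoted in the text (41.8 and 348.3 ka) presumably reflect the segment-wise rates between reversal boundaries rather than a single section-wide average.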
The recognition of significant climate cycles in the χ data series allowed us to correlate the χ variability to theoretical orbital models (Fig. 9). We used three magnetic polarity reversals recognized in our magnetostratigraphy at 2.58, 1.95 and 1.81 Ma as tie points for our correlation. Beginning at these tie points, we correlated the major peaks in the χ data series, spaced approximately 10 m apart, to corresponding periods of low obliquity using the orbital model proposed by Laskar et al. (2004). We correlated peak χ values to low obliquity, rather than high obliquity, because the magnetic reversal we recognized in our section at 2.58 Ma occurs during a time of low obliquity (Laskar et al. 2004), but is coincident with a peak in our χ data series (Fig. 9). By following this convention, we allowed the magnetostratigraphy to guide our correlation instead of any assumptions regarding the orbital forcing mechanism. The validity of this correlation is demonstrated by the improved alignment of the χ power spectrum in the precessional bandwidth following our correlation (Fig. 7). The correlation procedure adjusted the χ data series for variations in sediment accumulation rate that caused the precession peaks in the un-tuned power spectrum to become misaligned.
Correlation of the de-trended (residual of second-order polynomial) low-field χ data series to the obliquity and precession theoretical orbital models (Laskar et al. 2004). Before the correlation, the data series was tied to absolute time using our magnetostratigraphic correlations (Fig. 8). The red dashed lines represent intervals where our correlation is anchored by the magnetostratigraphic reversals and the black dashed lines show our correlation to the theoretical orbital models. After correlating to obliquity, we applied a Gaussian filter centred at 0.047 with a bandwidth of 0.004 and used the filtered χ data series in our final correlation to the precessional orbital model. The filtered χ data series (red) and unfiltered χ data series (black) have the same scale.
In between each interpreted obliquity peak there exist smaller, high-frequency peaks representing precession cycles. In order to aid the correlation of χ peaks to the Laskar et al. (2004) precession orbital model, we applied a Gaussian filter centred at 0.047 with a bandwidth of 0.004 to the data and used the filtered data series in our correlation (Fig. 9). This allowed us to remove the low-frequency eccentricity and obliquity-related variability to better recognize the high-frequency peaks associated with precession cycles. Many of the interpreted precession peaks were coincident with the interpreted obliquity-related χ peaks. This is simply because the precession cycles (23 ka, 19 ka) are approximately half as long as the obliquity cycle (41 ka), and we therefore expect every other precession peak to coincide with an obliquity forced peak.
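A Gaussian band-pass of this kind can be applied in the frequency domain. The sketch below is illustrative, not the authors' code, and assumes the quoted filter parameters (centre 0.047, bandwidth 0.004) are in cycles ka−1, i.e. a c. 21 ka precession-band centre:

```python
import numpy as np

def gaussian_bandpass(t, y, f0=0.047, bw=0.004):
    """Frequency-domain Gaussian band-pass (illustrative sketch).

    Multiplies the real FFT by a Gaussian centred at f0 with width
    bw and inverts. Assumes an evenly sampled series t (e.g. in ka).
    """
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(y), d=dt)
    window = np.exp(-0.5 * ((freqs - f0) / bw) ** 2)
    return np.fft.irfft(np.fft.rfft(y) * window, n=len(y))
```

Because the window is narrow, lower-frequency obliquity and eccentricity variability is suppressed to near zero, leaving only the precession-band signal used in the correlation.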
We used both the obliquity and precession correlations shown in Figure 9 to create a high-resolution age model for the Stirone section (Fig. 10; Table 1). The age model obtained by correlation to the obliquity cycles differs slightly from the model obtained by correlation to precession. Both of these models, however, show variability in sediment accumulation rate that cannot be resolved using the magnetostratigraphic time scale alone. The parts of the section where the obliquity age model converges to the precession age model coincide with time periods when the theoretical obliquity and precession cycles are in phase. This suggests that the variability in the χ data series is most likely controlled by both of the forcing parameters. The short time period when the two age models diverge, between c. 2.2 and 2.0 Ma, coincides with large amplitude swings in the χ data series and a coarsening of the Stirone section lithology (Fig. 2). This suggests that subtle changes in lithology and magnetic mineralogy in this interval mask the encoding of one or more of the orbital forcing parameters.
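An age model of this kind reduces to piecewise-linear interpolation between tie points. The sketch below uses the reversal horizons discussed in the text (depths for the reversals taken as midpoints of the bracketing sample horizons, e.g. 105–110 m and 267–272 m) plus the 2.99 Ma tuned basal age; the values illustrate the method rather than reproduce Table 1:

```python
import numpy as np

# Tie points: depth (m) up-section vs. age (Ma); ages decrease upward.
tie_depth = np.array([0.0, 107.5, 269.5, 311.0, 330.0])
tie_age = np.array([2.99, 2.58, 1.95, 1.81, 1.07])

def age_at(depth_m):
    """Interpolated age (Ma) at a stratigraphic depth, assuming
    constant sediment accumulation rate between tie points."""
    return float(np.interp(depth_m, tie_depth, tie_age))
```

Sediment accumulation rates then follow from the slope between adjacent tie points, which is how rate variability that is invisible at the magnetostratigraphic scale becomes resolvable once cyclostratigraphic tie points are added.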
Age models for the Stirone section determined from the magnetostratigraphy (black dashed line), obliquity correlation (black solid line) and precession correlation (grey solid line). All three models agree remarkably well and display little variation in sediment accumulation rate over a 1.2 Ma period of time.
Our new high-resolution age model requires an adjustment to the timing of a hiatus at the base of our measured section. Channell et al. (1994) reported a stratigraphic hiatus that ended directly above the chemoherm bed at the base of our section. Based on the last occurrence (LO) of calcareous nannofossil D. tamalis and the nannofossil chronology of Rio et al. (1990a), Channell et al. (1994) report an age of 2.77 Ma for the end of the hiatus. Our cyclostratigraphic correlation requires the LO of D. tamalis to be 2.99 Ma in the Stirone section. This is not surprising as the LO of D. tamalis has already been adjusted several times. Rio et al. (1990a, 1991) assigned the LO of D. tamalis to occur at 2.60 Ma, just prior to the Plio-Pleistocene boundary at 2.58 Ma. Berggren et al. (1995) reported a 2.78 Ma age for the LO of D. tamalis in their astronomically tuned Neogene timescale, an estimate which has subsequently been adjusted to 2.87–2.88 Ma (Di Stefano 1998; Raffi et al. 2006). Raffi et al.'s (2006) review of calcareous nannofossil chronology evaluated each Neogene biostratigraphic horizon and concluded that the LO of D. tamalis is diachronous worldwide, meaning that the age variations of the LO in individual sections are larger than c. 100 ka. Our tuned timescale places the LO of D. tamalis within 120 ka of the most recent adjustment and is consistent with the diachronous nature of D. tamalis.
A detailed biostratigraphic and magnetostratigraphic study of c. 30 m of section directly above the chemoherm layer at the base of our section, conducted as part of a thesis by Cau (2007) at the University of Parma, provides insight into the strengths and limitations of our cyclostratigraphic approach. Cau (2007) observed a 4 m-thick reversed interval beginning 10 m above the chemoherm, and interpreted a shorter hiatus at the base of this zone than Channell et al. (1994) reported, spanning 3.31–3.57 Ma. Additionally, a short, limited occurrence of D. tamalis was observed 20 m above the base of our section, which supports our interpretation that the base of our section is older than initially proposed by Channell et al. (1994). Cau (2007) then used a cyclostratigraphic analysis of the sapropels in this short interval to tune this section to the Laskar et al. (2004) insolation curve. These data support our cyclostratigraphic correlation; by tuning our section to the theoretical precession model we assign the horizon at 21 m, near the top of Cau's (2007) section, an age of 2.93 Ma. Cau's (2007) own age determination for this horizon differs from ours by only 10 ka. Cau (2007), however, does assign the base of the section to be c. 3.31 Ma, in contrast to the 2.99 Ma age required by our age model. The apparent age discrepancy between the two age models only occurs in the first 20 m of section, where the amplitude of the χ data series is very small, indicating possible aliasing of the precession signal in this short interval caused by very slow sediment accumulation rates. Our age model requires that the basal 20 m span 2.99–2.93 Ma, while Cau (2007) suggests the timing to be between 3.31 and 2.94 Ma.
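The magnitude of this disagreement is easiest to see as the accumulation rates each model implies for the basal 20 m. The sketch below uses only the interval durations quoted above; the c. 21 ka precession period is a standard nominal value assumed here, not a result of this study.

```python
# Implied mean accumulation rates for the basal 20 m of the section
# under the two competing age models (interval ages from the text).

def accumulation_rate(thickness_m, age_base_ma, age_top_ma):
    """Mean sediment accumulation rate in m/ka over a dated interval."""
    return thickness_m / ((age_base_ma - age_top_ma) * 1000.0)

ours = accumulation_rate(20.0, 2.99, 2.93)  # this study: 20 m in 60 ka
cau = accumulation_rate(20.0, 3.31, 2.94)   # Cau (2007): 20 m in 370 ka

PRECESSION_KA = 21.0  # nominal precession period (assumed)
print(f"this study: {ours:.3f} m/ka, one precession cycle ~ {ours * PRECESSION_KA:.1f} m")
print(f"Cau (2007): {cau:.3f} m/ka, one precession cycle ~ {cau * PRECESSION_KA:.1f} m")
```

Under the Cau (2007) chronology a full precession cycle is compressed into roughly a metre of section, so a modest sampling interval could easily alias the signal, consistent with the aliasing interpretation offered above.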
Our age model also confirms the timing of regional calcarenite deposition. The Stirone Fm facies display a shallowing-upward progression probably caused by the combined effects of fault-related folding of the Salsomaggiore anticline, regional uplift of the Apennine range and the progradation of the Po Delta. Superimposed on the shallowing-upward trend is the record of climatically driven sea-level fluctuations, represented by the occurrence of calcarenite beds. Previous studies have linked the deposition of calcarenites in the Northern Apennines to orbital forcing parameters; for instance, Roveri & Taviani's (2003) cyclostratigraphic investigation of Plio-Pleistocene calcarenites in the nearby Castell'Arquato Basin (Fig. 1) determined that bundles of calcarenites in Mediterranean basins are controlled by 100 and 400 ka eccentricity cycles during this period. Roveri & Taviani (2003) observed calcarenite bundles centred at c. 3.0, 2.6, 2.2 and 1.8 Ma. The high-resolution age model we developed using the χ data series at the Stirone section (Fig. 10) shows that calcarenite bundles are similarly timed and centred at c. 3.0, 2.6 and 1.9 Ma.
Variability in χ in the Stirone section is related to precessional and obliquity orbital forcing. Precession-controlled cyclicity in the formation of sapropels in Mediterranean marine sections is already well established (Rossignol-Strick et al. 1982; Rossignol-Strick 1985; Rohling & Hilgen 1991). Sapropel formation occurs during insolation maxima, when strengthened African monsoons increase runoff in North African rivers. This increased runoff contributes to the formation of sapropels in the Mediterranean by delivering more organic material to the ocean and creating a low-salinity lid that inhibits ocean circulation. Larrasoaña et al. (2003, 2006) also related enhancement in certain rock magnetic parameters that measure fine-grained ferromagnetic minerals to the same processes responsible for the deposition of sapropels, namely a strengthened African monsoon causing increased runoff and restricted ocean circulation. Our rock magnetic experiments show that χ is controlled by both ferromagnetic and paramagnetic components, and the relative contribution of these components creates the variability observed in the χ data series. It is during times of increased runoff and restricted Mediterranean circulation that the Stirone section displays peaks in χ. Since χ represents paramagnetic, iron sulfide and magnetite components, the variability in all of these components must be explained using the same encoding process. Enhanced runoff probably increased the delivery of detrital magnetite and Fe-rich detrital clays while the restricted Mediterranean circulation caused by the low salinity lid during high runoff times increased the productivity of diagenetic sulfide minerals.
The presence of significant obliquity cycles in the χ data series is more difficult to explain since obliquity exerts little control on insolation and runoff at middle and low latitudes. Orbital obliquity, however, is a major control on sea-level during Late Pliocene–Early Pleistocene time (Lisiecki & Raymo 2005), with phases of low obliquity corresponding to an increased build-up in high latitude ice sheets and subsequently to lower sea-levels. Using the constraints imposed by our magnetostratigraphy, we correlated the large peaks in χ to obliquity minima, which coincide with sea-level lows. It is possible that the lower sea-level during obliquity minima contributed to the restriction of deep-water ventilation in the Mediterranean, thereby intensifying the anoxic conditions that contributed to the increased production of iron-sulfide minerals. This sea-level effect, however, is probably secondary compared with the precession-controlled runoff and ocean circulation variability.
An intriguing outcome of our spectral results for the Stirone section is that extensive diagenetic alteration of magnetite to iron sulfide minerals did not obscure the palaeomagnetic or cyclostratigraphic signals in the Stirone section. This is true despite the fact that different parts of the section have experienced different intensities of reduction diagenesis of magnetite (Mary et al. 1993). This is probably because the anoxic conditions prevalent during the deposition of sapropels and magnetic enhancement are ideal for the formation of iron-sulfide minerals such as pyrrhotite and greigite. The relatively fast and nearly constant sedimentation rate at the Stirone section during this time (0.24 m/ka) meant that the magnetic minerals probably spent little time in the upper 1–2 m critical zone where reduction diagenesis occurs, conditions ideal for the preservation of the anoxic signal.
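The residence-time argument can be made explicit with the numbers quoted above (0.24 m/ka accumulation, a 1–2 m diagenetically active zone); the sketch below simply carries out that arithmetic.

```python
# Time sediment spends in the near-surface zone of reduction diagenesis,
# given the mean accumulation rate quoted for the Stirone section.

RATE_M_PER_KA = 0.24  # sediment accumulation rate (m/ka), from the text

def residence_time_ka(zone_thickness_m, rate_m_per_ka=RATE_M_PER_KA):
    """Kiloyears spent inside the diagenetically active zone."""
    return zone_thickness_m / rate_m_per_ka

for zone in (1.0, 2.0):
    print(f"{zone:.0f} m zone -> ~{residence_time_ka(zone):.1f} ka residence time")
# A 1-2 m zone implies only c. 4-8 ka of exposure to reducing conditions,
# brief relative to the >1 Ma record preserved in the section.
```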
We used rock-magnetic cyclostratigraphy to generate a high-resolution age model for the Stirone Fm, an important Plio-Pleistocene marine stratigraphic section that exhibits no lithological cyclicity. Our timescale correlation improved the timing of the LO of D. tamalis in this section, adjusted the length of a sedimentological disconformity, and determined high-resolution sedimentation rates that can be used for tectonic, sequence stratigraphic or sedimentological studies. The efficacy of rock magnetic cyclostratigraphic correlations is dependent on our ability to independently determine absolute time, estimate sediment accumulation rates and recognize depositional hiatuses. These constraints can be overcome when cyclostratigraphic analyses are combined with detailed lithostratigraphic, biostratigraphic and magnetostratigraphic analysis.
At the Stirone River section, we linked cyclic variations in χ to precessional scale changes in runoff and ocean circulation that control the concentrations of fine-grained ferromagnetic magnetite and iron sulfides and paramagnetic clays. The additional presence of obliquity cycles in the χ data series provides insight into the ability of global ice volume-induced sea-level changes to subtly affect the precession-dominated variability by contributing to the restriction of deep water Mediterranean circulation.
This work was supported by a National Science Foundation grant (EAR-0809722) to D. J. Anastasio, K. P. Kodama and F. J. Pazzaglia, and a Geological Society of America student research grant to K. L. Gunderson. We thank Parco Fluviale Regionale dello Stirone for facilitating access to the section. S. Brachfeld, B. Ellwood and two anonymous reviewers contributed helpful reviews. We thank the editors for their suggestions and consideration of this manuscript. We thank D. Smith, A. Ponza and R. Gunderson for field assistance. A. Artoni and V. Picotti contributed their insight regarding the local geology.
(1998a) Sedimentology, micropalaeontology, and strontium-isotope dating of a lower-Middle Pleistocene marine succession (Argille Azzurre) in the Romagna Apennines, northern Italy. Bollettino della Società Geologica Italiana 117:789–806.
(1998b) The Pleistocene littoral deposits (Imola Sands) of the northern Apennines foothills. Giornale di Geologia 60:83–118.
(2004) The Salsomaggiore structure (northwestern Apennine foothills, Italy); a Messinian mountain front shaped by mass-wasting products. Geo-Acta 3:107–127.
(2007) in Thrust Belts and Foreland Basins; from Fold Kinematics to Hydrocarbon Systems, Tectonic and climatic controls on sedimentation in late Miocene Cortemaggiore wedge-top basin (northwestern Apennines, Italy) eds Lacombe O., Lave J., Roure F. M., Verges J. (Springer, Berlin) http://dx.doi.org/10.1007/978-3-540-69426-7, pp 431–456.
(1995) Late Neogene chronology: new perspectives in high-resolution stratigraphy. Geological Society of America Bulletin 107:1272–1287, http://dx.doi.org/10.1130/0016-7606(1995)1072.3.CO;2.
(1979) Palynology and stratigraphy of the Plio-Pleistocene sequence of the Stirone River (Northern Italy). Pollen et Spores 21:150–167.
(1989) Evidence for a change in the periodicity of tropical climate cycles at 2.4 Myr from whole-core magnetic susceptibility measurements. Nature (London) 342:897–900.
(1988) Paleoenvironmental implications of rock-magnetic properties of late Quaternary sediment cores from the eastern Equatorial Atlantic. Paleoceanography 3:61–87, http://dx.doi.org/10.1029/PA003i001p00061.
(1990) Fluid flow and mass flux determinations at vent sites on the Cascadia margin accretionary prism; Special section on the Role of fluids in sediment accretion, deformation, diagenesis, and metamorphism in subduction zones. Journal of Geophysical Research 95:8891–8897, http://dx.doi.org/10.1029/JB095iB06p08891.
(2007) Paleoecologia delle comunità chemiosintetiche al passaggio Zancleano-Piacenziano nel torrente Stirone. M.S. thesis (Università Degli Studi di Parma).
(1994) Magnetic stratigraphy and biostratigraphy of Pliocene ‘argille azzurre’ (Northern Apennines, Italy). Palaeogeography, Palaeoclimatology, Palaeoecology 110:83–102.
(1999) Miocene chemoherms of the northern Apennines, Italy. Geology 27:927–930, http://dx.doi.org/10.1130/0091-7613(1999)0272.3.CO;2.
(2008) Dynamic equilibrium among erosion, river incision, and coastal uplift in the northern and central Apennines, Italy. Geology 36:103, http://dx.doi.org/10.1130/G24003A.1.
(2005) Carta geologica d'Italia alla scala 1:50 000 (Regione Emilia-Romagna, Servizio Geologico, Sismico e dei Suoli, Firenze).
(1998) Calcareous nannofossil quantitative biostratigraphy of holes 969E and 963B (Eastern Mediterranean). Proceedings of the Ocean Drilling Program, Scientific Results 160:99–112, http://dx.doi.org/10.2973/odp.proc.sr.160.009.1998.
(2001) Taphonomy and paleoecology of shallow marine macrofossil assemblages in a collisional setting (late Pliocene–early Pleistocene, western Emilia, Italy). Palaios 16:336–353.
(1961) Nota paleontologico-stratigrafica sul Pedeappennino padano. Bollettino della Società Geologica Italiana 81:113–245.
(2008) Time series analysis of magnetic susceptibility variations in deep marine sedimentary rocks: a test using the Upper Danian–Lower Selandian proposed GSSP, Spain. Palaeogeography, Palaeoclimatology, Palaeoecology 261:270–279, http://dx.doi.org/10.1016/j.palaeo.2008.01.022.
(1953) Introductions to statistical methods applied to directional data. Proceedings of the Royal Society of London A217:295–305.
(2002) Advanced spectral methods for climatic time series. Reviews of Geophysics 40:1–41, http://dx.doi.org/10.1029/2000RG000092.
(2004) A Geologic Time Scale 2004 (Cambridge University Press, Cambridge), pp 409–471.
(1976) Variations in the Earth's orbit; pacemaker of the ice ages. Science 194:1121–1132.
(1995) Loess magnetism. Reviews of Geophysics 33:211–240, http://dx.doi.org/10.1029/95RG00579.
(2000) New perspectives on orbitally forced stratigraphy. Annual Review of Earth and Planetary Sciences 28:419–475.
(2004) Earth's Orbital Parameters and Cycle Stratigraphy (Cambridge University Press, Cambridge), pp 55–62.
(1988) Partial anhysteretic remanence and its anisotropy: applications and grain size dependence. Geophysical Research Letters 15:440–443.
(2006) Astronomic calibration of the late Eocene/early Oligocene Massignano section (central Italy) Geochemistry, Geophysics, Geosystems 7:1–10, http://dx.doi.org/10.1029/2005GC001195.
(2010) Astronomical calibration of the middle Eocene Contessa Highway section (Gubbio, Italy). Earth and Planetary Science Letters 298:77–88, http://dx.doi.org/10.1016/j.epsl.2010.07.027.
(1980) The least-squares line and plane and the analysis of paleomagnetic data. Geophysical Journal. Royal Astronomical Society 62:699–718.
(2010) High-resolution rock magnetic cyclostratigraphy in an Eocene flysch, Spanish Pyrenees. Geochemistry, Geophysics, Geosystems 11:1–22, http://dx.doi.org/10.1029/2010GC003069.
(2001) Quantification of magnetic coercivity components by the analysis of acquisition curves of isothermal remanent magnetisation. Earth and Planetary Science Letters 189:269–276.
(2002) Cyclostratigraphy and rock-magnetic investigation of the NRM signal in late Miocene palustrine–alluvial deposits of the Librilla section (SE Spain) Journal of Geophysical Research 107:1–18, http://dx.doi.org/10.1029/2001JB000945.
(2003) A new proxy for bottom-water ventilation in the eastern Mediterranean based on diagenetically controlled magnetic properties of sapropel-bearing sediments; Paleoclimatic and paleoceanographic records in Mediterranean sapropels and Mesozoic black shales. Palaeogeography, Palaeoclimatology, Palaeoecology 190:221–242.
(2006) Detecting missing beats in the Mediterranean climate rhythm from magnetic identification of oxidized sapropels (Ocean Drilling Program Leg 160); ODP contributions to paleomagnetism. Physics of the Earth and Planetary Interiors 156:283–293, http://dx.doi.org/10.1016/j.pepi.2005.04.017.
(2004) A long-term numerical solution for the insolation quantities of the Earth. Astronomy Astrophysics 428:261–285, http://dx.doi.org/10.1051/0004-6361:20041335.
(2006) Magnetic record of Milankovitch rhythms in lithologically noncyclic marine carbonates. Geology 34:29–32, http://dx.doi.org/10.1130/G21918.1.
(2005) A Pliocene–Pleistocene stack of 57 globally distributed benthic δ18O records. Paleoceanography 20:17, http://dx.doi.org/10.1029/2004PA001071.
(1996) Robust estimation of background noise and signal detection in climatic time series. Climate Change 33:409–445, http://dx.doi.org/10.1007/BF00142586.
(1995) Yellow sand facies with Arctica islandica: low-stand signature in an early Pleistocene Front–Apennine Basin. Giornale di Geologia 57:259–275.
(1993) Magnetostratigraphy of Pliocene sediments from the Stirone River (Po Valley). Geophysical Journal International 112:359–380, http://dx.doi.org/10.1111/j.1365-246X.1993.tb01175.x.
(1986) A 40-million-year lake record of early Mesozoic orbital climatic forcing. Science 234:842–848.
(1996) Macintosh program performs time-series analysis. Eos Transactions 77:379–379.
(1963) La serie plio-pleistocenica del T. Stirone (Parmense occidentale). Bollettino della Società Geologica Italiana 81:293–335.
(2003) Selected room temperature magnetic parameters as a function of mineralogy, concentration and grain size. Physics and Chemistry of the Earth 28:659–667, http://dx.doi.org/10.1016/S1474-7065(03)00120-7.
(1986) The significance of marine boreal molluscs in the early Pleistocene faunas of the Mediterranean area. Palaeogeography, Palaeoclimatology, Palaeoecology 52:267–289.
(2006) A review of calcareous nannofossil astrobiochronology encompassing the past 25 million years. Quaternary Science Reviews 25:3113–3137, http://dx.doi.org/10.1016/j.quascirev.2006.07.007.
(1994) Separation of paramagnetic and ferrimagnetic susceptibilities using low temperature magnetic susceptibilities and comparison with high field methods. Physics of the Earth and Planetary Interiors 82:113–123.
(1990a) 32. Pliocene–Pleistocene calcareous nannofossil distribution patterns in the western Mediterranean. Proceedings of the Ocean Drilling Program, Scientific Results 107:513–533.
(1990b) Pliocene–Early Pleistocene Chronostratigraphy and the Tyrrhenian deep-sea record from Site 653. Proceedings of the Ocean Drilling Program, Scientific Results 107:705–714, http://dx.doi.org/10.2973/odp.proc.sr.107.185.1990.
(1991) Pliocene–lower Pleistocene chronostratigraphy; a re-evaluation of Mediterranean type sections. Geological Society of America Bulletin 103:1049–1058, http://dx.doi.org/10.1130/0016-7606(1991)1032.3.CO;2.
(1991) The eastern Mediterranean climate at times of sapropel formation: a review. Geologie en Mijnbouw (Netherlands Journal of Geosciences) 70:253–264.
(1985) Mediterranean Quaternary sapropels, and immediate response of the African monsoon to variation of insolation. Palaeogeography, Palaeoclimatology, Palaeoecology 49:237–263.
(1982) After the deluge; Mediterranean stagnation and sapropel formation. Nature (London) 295:105–110.
(2003) Calcarenite and sapropel deposition in the Mediterranean Pliocene: shallow- and deep-water record of astronomically driven climatic events. Terra Nova 15:279–286, http://dx.doi.org/10.1046/j.1365-3121.2003.00492.x.
(1999) Rock magnetism and palaeomagnetism of greigite-bearing mudstones in the Italian peninsula. Earth and Planetary Science Letters 165:67–80.
(1995) A new Late Neogene time scale: application to Leg 138 sites. Proceedings of the Ocean Drilling Program, Scientific Results 138:74–101.
(1999) Astronomical calibration of Oligocene–Miocene time. Philosophical Transactions: Mathematical, Physical and Engineering Sciences 357:1907–1929.
(1982) Spectrum estimation and harmonic analysis. Proceedings of the IEEE 70:1055–1096.
(1990) Time series analysis of Holocene climate data. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 330:601–616.
(1997) Lake Baikal record of continental climate response to orbital insolation during the past 5 million years. Science 278:1114–1117.
(1967) in Methods in Paleomagnetism, A.C. demagnetization of rocks: analysis of results, eds Collinson D. W., Creer K. M., Runcorn S. K. (Elsevier, Amsterdam), pp 254–286.
Please visit Dr. Heffernan's Google Scholar and DBLP pages. According to Google Scholar, Dr. Heffernan's 200+ papers have been cited 5,558 times.
J20 Heffernan, N. & Heffernan, C. (2014). The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching. International Journal of Artificial Intelligence in Education. 24(4), 470-497. Link to the Springer version DOI 10.1007/s40593-014-0024-x. The Special Issue focused on landmark systems.
Roschelle, J., Feng, M., Murphy, R. & Mason, C. (2016). Online Mathematics Homework Increases Student Achievement. AERA OPEN. Vol. 2, No. 4, 1–12. DOI: 10.1177/2332858416673968. The corrected version of this paper is available here.
Dr. Heffernan is well known for running the only platform for open science in education: he hosts ASSISTmentstestBed.org as a shared scientific instrument.
J21 Ostrow, K.S., Heffernan, N.T., & Williams, J.J. (2017). Tomorrow’s EdTech Today: Establishing a Learning Platform as a Collaborative Research Tool for Sound Science. Teachers College Record, Volume 119 Number 3, 2017, 1-36.
ASSISTments has a long history, having started back in 2005.
CP11 Razzaq, L., Feng, M., Nuzzo-Jones, G., Heffernan, N.T., Koedinger, K. R., Junker, B., Ritter, S., Knight, A., Aniszczyk, C., Choksey, S., Livak, T., Mercado, E., Turner, T.E., Upalekar, R., Walonoski, J.A., Macasek, M.A. & Rasmussen, K.P. (2005). The Assistment project: Blending assessment and assisting. In C.K. Looi, G. McCalla, B. Bredeweg, & J. Breuker (Eds.) Proceedings of the 12th Artificial Intelligence in Education, Amsterdam: IOS Press, 555-562.
ASSISTments is a good assessor of student knowledge and can predict state test scores better than traditional methods because it accounts for factors such as how many hints and attempts a student uses.
J8 Feng, M., Heffernan, N.T., & Koedinger, K.R. (2009). Addressing the assessment challenge in an Intelligent Tutoring System that tutors as it assesses. The Journal of User Modeling and User-Adapted Interaction. 19, 243-266. (Based on CP15) Best Paper of the Year (See Award #20). Mentioned in National Ed. Tech Plan (See Award #19).
CP25 Razzaq, L. & Heffernan, N. (2009). To Tutor or Not to Tutor: That is the Question. In Dimitrova, Mizoguchi, du Boulay & Graesser (Eds.) Proceedings of the 2009 Artificial Intelligence in Education Conference. IOS Press, 457-464. Honorable Mention for Best Paper First Authored by a Student.
If you are interested in the methodological issues related to estimating treatment effects, you might want to start with the following papers. We released a set of 22 experiments to test which interventions increased student learning the most. An active interest of mine is detecting heterogeneous treatment effects to learn which kids should be given which type of feedback.
SP24 Sales, A. C., Botelho, A. F., Wu, E., Gagnon-Bartsch, J., Miratrix, L., Patikorn, T. & Heffernan, N. T. (2018) Residualization Methods to Better Estimate Treatment Effects in Randomized Controlled Trials. Presented at the Conference on Digital Experimentation (CODE) held at MIT. You can watch the talk here.
Dr. Heffernan does work using Bayes Nets in Student Modeling.
CP40 Pardos, Z. & Heffernan, N. (2010). Modeling Individualization in a Bayesian Networks Implementation of Knowledge Tracing. In P. De Bra, A. Kobsa, D. Chin, (Eds.) The 18th Proceedings of the International Conference on User Modeling, Adaptation and Personalization. Springer-Verlag, 255-266. Nominated for Best Student Paper.
Dr. Heffernan had a role in creating the Cognitive Tutor Authoring Tools (CTAT) before deciding to create ASSISTments.
CP6 Koedinger, K. R., Aleven, V., Heffernan, T., McLaren, B. & Hockenberry, M. (2004). Opening the door to non-programmers: Authoring intelligent tutor behavior by demonstration. In James C. Lester, Rosa Maria Vicari, Fábio Paraguaçu (Eds.) Proceedings of 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil, 162-173.
Dr. Heffernan is also well known for work on detecting “gaming” and why students do it.
This work led to a handful of later papers about student emotions while using ASSISTments.
J25 Kai, S., Almeda, M. V., Baker, R., Heffernan, C., & Heffernan, N. (2018). Decision Tree Modeling of Wheel-Spinning and Productive Persistence in Skill Builders. JEDM | Journal of Educational Data Mining, 10(1), 36-71.
J23 Inventado, P., Scupelli, P., Ostrow, K., Heffernan, N., Almeda, V., & Slater (2018). Contextual Factors Affecting Hint Utility, International Journal of STEM Education, 5(1), 13. Retrieved from here or here.
J19 Heffernan, N. & Heffernan, C. (2014). The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching. International Journal of Artificial Intelligence in Education. 24 (4), 470-497. Link to the Springer version DOI 10.1007/s40593-014-0024-x. The Special Issue focused on landmark systems.
J17 Pardos, Z.A., Gowda, S. M., Baker, R. S.J.D., Heffernan, N. T., (2012). The Sum is Greater than the Parts: Ensembling Models of Student Knowledge in Educational Software. ACM’s Knowledge Discovery and Datamining Explorations, 13(2), 37-44.
J16 Gong, Y, Beck, J. E., Heffernan, N. T. (2011). How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis. International Journal of Artificial Intelligence in Education. 21, 27-46.
J15 Pardos, Z., Dailey, M. & Heffernan, N. (2011). Learning what works in ITS from non-traditional randomized controlled trial data. The International Journal of Artificial Intelligence in Education. 21, 47-63.
J12 Broderick, Z., O’Connor, C., Mulcahy, C., Heffernan, N. & Heffernan, C. (2011). Increasing Parent Engagement in Student Learning Using an Intelligent Tutoring System. Journal of Interactive Learning Research, 22(4), 523-550. Longer version available as WPI CS Technical Report Number 2010 #08. Chesapeake, VA: AACE. Retrieved August 15, 2013, from http://www.editlib.org/p/34133.
J11 Militello, M., & Heffernan, N. (2009). Which one is "just right"? What educators should know about formative assessment systems. International Journal of Educational Leadership Preparation, 4(3), 1-8.
J8 Feng, M., Heffernan, N.T., & Koedinger, K.R. (2009). Addressing the assessment challenge in an Intelligent Tutoring System that tutors as it assesses. The Journal of User Modeling and User-Adapted Interaction. 19, 243-266. (Based on CP15) Best Paper of the Year (See Award #20 above). Mentioned in National Ed. Tech Plan (See Award #19).
J7 Mendicino, M., Razzaq, L. & Heffernan, N. T. (2009). Improving Learning from Homework Using Intelligent Tutoring Systems. Journal of Research on Technology in Education (JRTE). 41(3), 331-346.
J5 Razzaq, L., Heffernan, N., Feng, M., & Pardos Z. (2007). Developing Fine-Grained Transfer Models in the ASSISTment System. Journal of Technology, Instruction, Cognition, and Learning. 5(3), 289-304.
J2 Heffernan, N. T., Koedinger, K. & Razzaq, L. (2008). Expanding the model-tracing architecture: A 3rd generation intelligent tutor for Algebra symbolization. The International Journal of Artificial Intelligence in Education. 18(2), 153-178 (Builds upon CP8 and CP1-4).
BC5 Heffernan, N., Militello, M, Heffernan, C., & Decoteau, M. (2012). Effective and meaningful use of educational technology: three cases from the classroom. In C. Dede & J. Richards (Eds.). Digital Teaching Platforms, 88-102. Columbia, NY: Teachers College Press.
BC4 Razzaq, L. & Heffernan, N. (2010). Open content Authoring Tools. In Nkambou, Bourdeau & Mizoguchi (Eds.) Advances in Intelligent Tutoring Systems, pp 425-439. Berlin: Springer Verlag.
BC3 Pardos, Z. A., Heffernan, N. T., Anderson, B., Heffernan, L. C. (2010). Using Fine-Grained Skill Models to Fit Student Performance with Bayesian Networks. Chapter in C. Romero, S. Ventura, S. R. Viola, M. Pechenizkiy and R. S. J. Baker. Handbook of Educational Data Mining. Boca Raton, Florida: Chapman & Hall/CRC Press.
BC2 Feng, M., Heffernan, N.T., & Koedinger, K.R. (2010). Student Modeling in an Intelligent Tutoring System. In Stankov, Glavinc, and Rosic. (Eds.) Intelligent Tutoring Systems in E-learning Environments: Design, Implementation and Evaluation, 208-236. Hershey, PA: Information Science Reference. (Based on W10, W11 and W12).
BC1 Razzaq, Feng, Heffernan, Koedinger, Nuzzo-Jones, Junker, Macasek, Rasmussen, Turner & Walonoski. (2007). A Web-based authoring tool for intelligent tutors: Assessment and instructional assistance. In Nadia Nedjah, Luiza de Macedo Mourelle, Mario Neto Borges and Nival Nunes de Almeida (Eds). Intelligent Educational Machines. Intelligent Systems Engineering Book Series, 23-49. Berlin: Springer Verlag.
Note: Unlike most other disciplines where journal papers are more prestigious than conference papers, in Computer Science as a discipline, conference publications are often more difficult to get accepted and are more prestigious than most journal publications. These conference proceedings are stringently peer-reviewed, with at least three reviewers. The acceptance rate is usually in the 30% to 39% range. (The Educational Data Mining conference in 2010 was unusual in that they accepted 42% of the papers, but that is non-standard.) I have started labeling the acceptance rates on new papers to make that easier to understand.
CP87 Ostrow, K & Heffernan, N. (2018). Testing the Validity and Reliability of Intrinsic Motivation Inventory Subscales within ASSISTments. Proceedings of the Nineteenth International Conference on Artificial Intelligence in Education. Pp 381-394.
CP86 Botelho, A. F., Baker, R. S., & Heffernan, N. T. (2017). Improving Sensor-Free Affect Detection Using Deep Learning. In E. Andre' et al (Eds.) Proceedings of the Eighteenth International Conference on Artificial Intelligence in Education. Pp 40-51.
CP85 Slater, S., Ocumpaugh, J., Almeda, M., Allen, L., Heffernan, N., & Baker, R. (2017) Using Natural Language Processing Tools to Develop Complex Models of Student Engagement. Affective Computing and Intelligent Interaction, San Antonio, TX, US.
CP83 Slater, S., Baker, R., Almeda, M, Bowers, A., & Heffernan, N. (2017) Using Correlational Topic Modeling for Automated Topic Identification in Intelligent Tutoring Systems. Learning Analytics and Knowledge (LAK 2017).
CP82 Heffernan, N., Heffernan, C., Li, Y., Logue, M.E., Mason, C., McGuire, P., Ostrow, K., & Tu, S. (2016) To See or Not to See: Putting Image-Based Feedback in Question. International Society for Technology in Education (ISTE). Denver. Listen & Learn: Research Paper.
CP81 Williams, J. J., Kim, J., Rafferty, A., Maldonado, S., Gajos, K. Z., Lasecki, W. S. & Heffernan, N. T. (2016) Axis: Generating explanations at scale with learnsourcing and machine learning. Proceedings of the Third (2016) ACM Conference on Learning @ Scale pp 379-388. (Acceptance Rate = 23%).
CP78 Selent, D. & Heffernan, N. T. (2015) When More Intelligent Tutoring in the Form of Buggy Messages Does Not Help. In Conati, Heffernan, Mitrovic & Verdejo (Eds) The 17th Proceedings of the Conference on Artificial Intelligence in Education, Madrid, Spain. Springer, 768-771.
CP71 San Pedro, M., Ocumpaugh, J., Baker, R., & Heffernan, N. (2014). Predicting STEM and Non-STEM College Major Enrollment from Middle School Interaction with Mathematics Educational Software. In John Stamper et al. (Eds) Proceedings of the 7th International Conference on Educational Data Mining, 276-279. A longer version is here.
CP70 Ostrow, K., & Heffernan, N. T. (2014). Testing the Multimedia Principle in the Real World: A Comparison of Video vs. Text Feedback in Authentic Middle School Math Assignments. In John Stamper et al. (Eds) Proceedings of the 7th International Conference on Educational Data Mining, 296-299.
CP68 Hawkins, W., Heffernan, N. Baker, R. (2014). Learning Bayesian Knowledge Tracing parameters with a Knowledge Heuristic and Empirical Probabilities. In Stefan Trausan-Matu, et al. (Eds) International Conference on Intelligent Tutoring 2014. LNCS 8474. (Acceptance Rate = 42%).
CP67 Wang, Y. & Heffernan, N. (2014). The Effect of Automatic Reassessment and Relearning on Assessing Student Long-term Knowledge in Mathematics. In Stefan Trausan-Matu, et al. (Eds) International Conference on Intelligent Tutoring 2014. pp 490-495. LNCS 8474. (Acceptance Rate = 42%).
CP66 Feng, M., Roschelle, J., Murphy, R. & Heffernan, N. (2014). Using Analytics for Improving Fidelity in a Large Scale Efficacy Trial. International Conference of the Learning Sciences 2014.
CP65 San Pedro, M., Baker, R., Bowers, A. & Heffernan, N. (2013). Predicting College Enrollment from Student Interaction with an Intelligent Tutoring System in Middle School. In S. D’Mello, R. Calvo, & A. Olney (Eds.) Proceedings of the 6th International Conference on Educational Data Mining (EDM2013). Memphis, TN, 177-184.
CP64 Hawkins, W., Baker, R. S. J. d., & Heffernan, N. T., (2013). Which is more responsible for boredom in intelligent tutoring systems: students (trait) or problems (state)? Affective Computing and Intelligent Interaction. Geneva, 618-623.
CP63 Hawkins, W., Heffernan, N., Wang, Y. & Baker, R.S.J.d., (2013). Extending the Assistance Model: Analyzing the Use of Assistance over Time. In S. D’Mello, R. Calvo, & A. Olney (Eds.) Proceedings of the 6th International Conference on Educational Data Mining (EDM2013). Memphis, TN, 59-66.
CP62 San Pedro, M., Baker, R., Gowda, S., & Heffernan, N. (2013). Towards an Understanding of Affect and Knowledge from Student Interaction with an Intelligent Tutoring System. In Lane, Yacef, Mostow & Pavlik (Eds) The Artificial Intelligence in Education Conference. Springer-Verlag, 41-50.
CP61 Wang, Y. & Heffernan, N. (2013). Extending Knowledge Tracing to allow Partial Credit: Using Continuous versus Binary Nodes. In Lane, Yacef, Mostow & Pavlik (Eds) The Artificial Intelligence in Education Conference. Springer-Verlag, 181-188.
CP60 Song, F., Trivedi, S., Wang, Y., Sárközy, G., & Heffernan, N. (2013). Applying Clustering to the Problem of Predicting Retention within an ITS: Comparing Regularity Clustering with Traditional Methods. In Boonthum-Denecke, Youngblood (Eds) Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2013, St. Pete Beach, Florida. May 22-24, 2013. AAAI Press 2013, 527-532.
CP59 Kehrer, P., Kelly, K. & Heffernan, N. (2013). Does Immediate Feedback While Doing Homework Improve Learning? In Boonthum-Denecke, Youngblood (Eds) Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2013, St. Pete Beach, Florida. May 22-24, 2013. AAAI Press 2013, 542-545.
CP58 Kelly, K., Heffernan, N., D'Mello, S., Namias, J., & Strain, A. (2013). Adding Teacher-Created Motivational Video to an ITS. In Boonthum-Denecke, Youngblood (Eds) Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2013, St. Pete Beach, Florida, 503-508.
CP57 Pardos, Z. & Heffernan, N. (2012). Tutor Modeling vs. Student Modeling. Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference. Invited talk. Florida Artificial Intelligence Research Society (FLAIRS 2012). St. Petersburg Beach, Florida pp 420-425.
CP56 Qiu, Y., Pardos, Z. & Heffernan, N. (2012). Towards data driven user model improvement. Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference. Florida Artificial Intelligence Research Society (FLAIRS 2012), 462-465.
CP55 Pardos, Z., Trivedi, S., Heffernan, N. & Sarkozy, G. (2012). Clustered Knowledge Tracing. 11th International Conference on Intelligent Tutoring Systems, 404-410.
CP54 Wang, Y. & Heffernan, N. (2012). The Student Skill Model. 11th International Conference on Intelligent Tutoring Systems. Springer. pp 399-404.
CP53 Gong, Y., Beck, J. & Heffernan, N. (2012). WEBsistments: Enabling an Intelligent Tutoring System to Excel at Explaining Why Other Than Showing How; 11th International Conference on Intelligent Tutoring Systems. Springer. pp 268-273.
CP52 Wang, Y. & Heffernan, N. (2012). Leveraging First Response Time into the Knowledge Tracing Model. 5th International Conference on Educational Data Mining, 176-179.
CP51 Trivedi, S. Pardos, Z., Sarkozy, G. & Heffernan, N. (2012). Co-Clustering by Bipartite Spectral Graph Partitioning for Out-Of-Tutor Prediction. 5th International Conference on Educational Data Mining, 33-40.
CP50 Gowda, S., Baker, R.S.J.d., Pardos, Z., Heffernan, N. (2011). The Sum is Greater than the Parts: Ensembling Student Knowledge Models in ASSISTments. Proceedings of the KDD 2011 Workshop on KDD in Educational Data.
CP49 Qiu, Y., Qi, Y., Lu, H., Pardos, Z. & Heffernan, N. (2011). Does Time Matter? Modeling the Effect of Time with Bayesian Knowledge Tracing. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 139-148.
CP48 Trivedi, S., Pardos, Z., Sarkozy, G. & Heffernan, N. (2011). Spectral Clustering in Educational Data Mining. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 129-138.
CP47 Nooraei, B., Pardos, Z., Heffernan, N. & Baker, R. (2011). Less is More: Improving the Speed and Prediction Power of Knowledge Tracing by Using Less Data. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 101-110.
CP46 Pardos, Z., Gowda, S., Baker, R. & Heffernan, N. (2011). Ensembling Predictions of Student Post-Test Scores for an Intelligent Tutoring System. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 189-198.
CP45 Baker, R., Pardos, Z., Gowda, S., Nooraei, B., & Heffernan, N. (2011). Ensembling Predictions of Student Knowledge within Intelligent Tutoring Systems. In Konstan et al. (Eds.) 20th International Conference on User Modeling, Adaptation and Personalization (UMAP 2011), 13-24.
CP44 Pardos, Z. & Heffernan, N. (2011). KT-IDEM: Introducing Item Difficulty to the Knowledge Tracing Model. In Konstan et al. (Eds.) 20th International Conference on User Modeling, Adaptation and Personalization (UMAP 2011), 243-254.
CP43 Trivedi, S., Pardos, Z. & Heffernan, N. (2011). Clustering Students to Generate an Ensemble to Improve Standard Test Score Predictions. In Biswas et al. (Eds) Proceedings of the Artificial Intelligence in Education Conference 2011, 328–336.
CP42 Singh, R., Saleem, M., Pradhan, P., Heffernan, C., Heffernan, N., Razzaq, L., Dailey, M., O'Connor, C. & Mulcahy, C. (2011). Feedback during Web-Based Homework: The Role of Hints. In Biswas et al. (Eds) Proceedings of the Artificial Intelligence in Education Conference 2011, 328–336.
CP41 Wang, Y. & Heffernan, N. (2011). The "Assistance" Model: Leveraging How Many Hints and Attempts a Student Needs. The 24th International FLAIRS Conference, pp 549-554. Nominated for Best Student Paper.
CP40 Pardos, Z. & Heffernan, N. (2010). Modeling Individualization in a Bayesian Networks Implementation of Knowledge Tracing. In Paul De Bra, Alfred Kobsa, David Chin, (Eds.) The 18th Proceedings of the International Conference on User Modeling, Adaptation and Personalization, 255-266.
CP39 Feng, M. & Heffernan, N. (2010). Can We Get Better Assessment From a Tutoring System Compared to Traditional Paper Testing? Can We Have Our Cake (Better Assessment) and Eat It Too (Student Learning During the Test)? In Baker, R.S.J.d., Merceron, A., Pavlik, P.I. Jr. (Eds.) Proceedings of the 3rd International Conference on Educational Data Mining, 41-50.
CP38 Pardos, Z. & Heffernan, N. (2010). Navigating the parameter space of Bayesian Knowledge Tracing models: Visualization of the convergence of the Expectation Maximization algorithm. In Baker, R.S.J.d., Merceron, A., Pavlik, P.I. Jr. (Eds.) Proceedings of the 3rd International Conference on Educational Data Mining, 161-170.
CP37 Gong, Y., Beck, J., Heffernan, N. (2010). Using Multiple Dirichlet distributions to improve parameter plausibility. In Baker, R.S.J.d., Merceron, A., Pavlik, P.I. Jr. (Eds.) Proceedings of the 3rd International Conference on Educational Data Mining, 61-70.
CP35 Pardos, Z. A., Dailey, M. D., Heffernan, N. T. (2010). Learning what works in ITS from non-traditional randomized controlled trial data. In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 2. Springer-Verlag, Berlin, 41-50. Nominated for Best Student Paper.
CP34 Razzaq, L. & Heffernan, N. (2010). Hints: Is It Better to Give or Wait to be Asked? In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 1. Springer, 349-358.
CP33 Gong, Y., Beck, J., Heffernan, N. & Forbes-Summers, E. (2010). The impact of gaming (?) on learning at the fine-grained level. In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 1. Springer, 194-203.
CP32 Gong, Y., Beck, J. & Heffernan, N. (2010). Comparing Knowledge Tracing and Performance Factor Analysis by Using Multiple Model Fitting. In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 1. Springer-Verlag, Berlin, 35-44. Nominated for Best Student Paper.
CP31 Baker, R., Goldstein, A. & Heffernan, N. (2010). Detecting the Moment of Learning. In Aleven, V., Kay, J., & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 1. Springer, 25-33. Nominated for Best Paper.
CP30 Sao Pedro, M., Gobert, J., Heffernan, N., & Beck, J. (2009). Comparing Pedagogical Approaches for Teaching the Control of Variables Strategy. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
CP29 Pardos, Z.A., Heffernan, N.T. (2009). Determining the Significance of Item Order In Randomized Problem Sets. In Barnes, Desmarais, Romero & Ventura (Eds.) Proc. of the 2nd International Conference on Educational Data Mining, 111-120. Won Best Paper First-Authored by a Student.
CP28 Feng, M., Beck, J., & Heffernan, N. (2009). Using Learning Decomposition and Bootstrapping with Randomization to Compare the Impact of Different Educational Interventions on Learning. In Barnes, Desmarais, Romero & Ventura (Eds) Proc. of the 2nd International Conference on Educational Data Mining, 51-60.
CP27 Gong, Y., Rai, D. Beck, J. & Heffernan, N. (2009). Does Self-Discipline impact students’ knowledge and learning? In Barnes, Desmarais, Romero & Ventura (Eds) Proc. of the 2nd International Conference on Educational Data Mining, 61-70. ISBN: 978-84-613-2308-1.
CP26 Pardos, Z. & Heffernan, N. (2009). Detecting the Learning Value of Items in a Randomized Problem Set. In Dimitrova, Mizoguchi, du Boulay & Graesser (Eds.) Proceedings of the 2009 Artificial Intelligence in Education Conference. IOS Press, 499-506.
CP25 Razzaq, L. & Heffernan, N. (2009). To Tutor or Not to Tutor: That is the Question. In Dimitrova, Mizoguchi, du Boulay & Graesser (Eds.) Proceedings of the 2009 Artificial Intelligence in Education Conference. IOS Press. pp. 457-464. Honorable Mention for Best Paper First Authored by a Student.
CP24 Feng, M., Heffernan, N. & Beck, J. (2009). Using Learning Decomposition to Analyze Instructional Effectiveness in the ASSISTment System. Proceedings of the 2009 Artificial Intelligence in Education Conference. IOS Press, 523-530.
CP23 Feng, M., Beck, J., Heffernan, N. & Koedinger, K. (2008). Can an Intelligent Tutoring System Predict Math Proficiency as Well as a Standardized Test? In Baker & Beck (Eds.). Proceedings of the 1st International Conference on Education Data Mining. Montreal, Canada, 107-116.
CP22 Feng, M., Heffernan, N., Beck, J, & Koedinger, K. (2008). Can we predict which groups of questions students will learn from? In Baker & Beck (Eds.). Proceedings of the 1st International Conference on Education Data Mining. Montreal, Canada, 218-225.
CP21 Pardos, Z. A., Beck, J., Ruiz, C. & Heffernan, N. T. (2008). The Composition Effect: Conjunctive or Compensatory? An Analysis of Multi-Skill Math Questions in ITS. In Baker & Beck (Eds.) Proceedings of the First International Conference on Educational Data Mining. Montreal, Canada, 147-156.
CP20 Razzaq, L., Mendicino, M. & Heffernan, N. (2008). Comparing classroom problem-solving with no feedback to web-based homework assistance. In Woolf, Aimeur, Nkambou, and Lajoie (Eds.) Proceedings of the 9th International Conference on Intelligent Tutoring Systems, 426-437.
CP19 Razzaq, L., Heffernan, N. T., Lindeman, R. W. (2007). What Level of Tutor Interaction is Best? In Luckin & Koedinger (Eds.) Proceedings of the 13th Conference on Artificial Intelligence in Education, 222-229.
CP16 Feng, M., Heffernan, N. & Koedinger, K.R. (2006a). Predicting state test scores better with intelligent tutoring systems: developing metrics to measure assistance required. In Ikeda, Ashley & Chan (Eds.). Proceedings of the Eighth International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin, 31-40.
CP13 Razzaq, L. & Heffernan, N.T. (2006). Scaffolding vs. hints in the Assistment system. In Ikeda, Ashley & Chan (Eds.). Proceedings of the Eighth International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin, 635-644.
CP12 Walonoski, J. & Heffernan, N.T. (2006a). Detection and analysis of off-task gaming behavior in intelligent tutoring systems. In Ikeda, Ashley & Chan (Eds.). Proceedings of the Eighth International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin, 382-391.
CP10 Rose C., Donmez P., Gweon G., Knight A., Junker B., Cohen W., Koedinger K., Heffernan N.T. (2005). Automatic and semi-automatic skill coding with a view towards supporting on-line assessment. In Looi, McCalla, Bredeweg, & Breuker (Eds.) The 12th Annual Conference on Artificial Intelligence in Education 2005, Amsterdam. IOS Press, 571-578.
CP9 Croteau, E., Heffernan, N. T. & Koedinger, K. R. (2004). Why are Algebra word problems difficult? Using tutorial log files and the power law of learning to select the best fitting cognitive model. In J.C. Lester, R.M. Vicari, & F. Paraguaçu (Eds.) Proceedings of the 7th International Conference on Intelligent Tutoring Systems. Berlin: Springer-Verlag, 240-250.
CP8 Heffernan, N. T. & Croteau, E. (2004). Web-Based Evaluations Showing Differential Learning for Tutorial Strategies Employed by the Ms. Lindquist Tutor. In James C. Lester, Rosa Maria Vicari, Fábio Paraguaçu (Eds.) Proceedings of 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil, 491-500.
CP7 Jarvis, M., Nuzzo-Jones, G. & Heffernan, N. T. (2004). Applying machine learning techniques to rule generation in intelligent tutoring systems. In James C. Lester, Rosa Maria Vicari, Fábio Paraguaçu (Eds.) Proceedings of 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil, 541-553.
CP6 Koedinger, K. R., Aleven, V., Heffernan, N., McLaren, B. & Hockenberry, M. (2004). Opening the door to non-programmers: Authoring intelligent tutor behavior by demonstration. In James C. Lester, Rosa Maria Vicari, Fábio Paraguaçu (Eds.) Proceedings of 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil, 162-173.
CP5 Heffernan, N. T. (2003). Web-based evaluations showing both cognitive and motivational benefits of the Ms. Lindquist tutor. In F. Verdejo and U. Hoppe (Eds) 11th International Conference on Artificial Intelligence in Education. Sydney, Australia. IOS Press, 115-122.
CP4 Heffernan, N. T., & Koedinger, K. R. (2002). An intelligent tutoring system incorporating a model of an experienced human tutor. In Stefano A. Cerri, Guy Gouardères, Fábio Paraguaçu (Eds.): 6th International Conference on Intelligent Tutoring Systems. Biarritz, France. Springer Lecture Notes in Computer Science, 596-608.
CP3 Heffernan, N. T., & Koedinger, K. R. (2000). Intelligent tutoring systems are missing the tutor: Building a more strategic dialog-based tutor. In C.P. Rose & R. Freedman (Eds.) Proceedings of the AAAI Fall Symposium on Building Dialogue Systems for Tutorial Applications. Menlo Park, CA: AAAI Press, 14-19.
CP2 Heffernan, N. T. & Koedinger, K. R. (1998). A developmental model for algebra symbolization: The results of a difficulty factors assessment. In M. Gernsbacher & S. Derry (Eds.) Proceedings of the Twentieth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 484-489.
CP1 Heffernan, N. T. & Koedinger, K.R. (1997). The composition effect in symbolizing: The role of symbol production vs. text comprehension. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 307-312. [Marr prize winner for Best Student Paper].
SP22 Patikorn, T., Selent, D., Beck, J., Heffernan, N., & Zhou, J. (2017). Using a Single Model Trained Across Multiple Experiments to Improve the Detection of Treatment Effects. Conference of Educational Data Mining 2017, 202-207.
SP21 Zhao, S. & Heffernan, N. (2017) Estimating Individual Treatment Effects from Educational Studies with Residual Counterfactual Networks. In the 10th International Conference on Educational Data Mining (EDM 2017).
SP16 Inventado, P. S., VanInwegen, E., Ostrow, K., Scupelli, P., Heffernan, N., Baker, R., Slater, S. & Ocumpaugh, J. (2016) Design Subtleties Driving Differential Attrition. The 6th International Learning Analytics & Knowledge Conference pp 284-289.
SP15 Ostrow, K.S. & Heffernan, N. T. (2016) Studying Learning at Scale with the ASSISTments TestBed Proceedings of the Third (2016) ACM Conference on Learning @Scale, 333-334.
SP13 Botelho, A., Adjei, S. & Heffernan, N. (2016) Modeling Interactions Across Skills: A Method to Construct and Compare Models Predicting the Existence of Skill Relationships. In Barnes, Chi & Feng (eds) The 9th International Conference on Educational Data Mining, 292-297.
SP12 Wang, Y., Ostrow, K., Beck, J. & Heffernan, N. T. (2016) Enhancing the efficiency and reliability of group differentiation through partial credit. In the Proceedings of the Sixth International Conference on Learning Analytics & Knowledge LAK2016, 454-458 (Acceptance Rate = Not released but maybe 50%).
SP11 Adjei, S., Botelho, A. & Heffernan, N. (2016) Predicting student performance on post-requisite skills using prerequisite skill data: an alternative method for refining prerequisite skill structures. In the Proceedings of the Sixth International Conference on Learning Analytics & Knowledge LAK2016, 469-473 (Acceptance Rate = Not released but maybe 50%).
SP8 Wang, Y., Heffernan, N, & Heffernan, C. (2015). Towards better affect detectors: effect of missing skills, class features and common wrong answers. Proceedings of the Fifth International Conference on Learning Analytics And Knowledge, 31-35.
SP7 Van Inwegen, E., Adjei, S., Wang, Y., & Heffernan, N. (2015) An analysis of the impact of action order on future performance: the fine-grain action model. Proceedings of the Fifth International Conference on Learning Analytics And Knowledge, 320-324.
SP6 San Pedro, M.O., Baker, R., Heffernan, N., Ocumpaugh, J. (2015) Exploring College Major Choice and Middle School Student Behavior, Affect and Learning: What Happens to Students Who Game the System? Proceedings of the 5th International Learning Analytics and Knowledge Conference, 36-40.
SP5 Adjei, S., Selent, D., Heffernan, N., Pardos, Z., Broaddus, A., Kingston, N. (2014). Refining Learning Maps with Data Fitting Techniques: Searching for Better Fitting Learning Maps. In John Stamper et al. (Eds) Proceedings of the 7th International Conference on Educational Data Mining, 413-414.
SP1 Feng, M., Heffernan, N., Pardos, Z. & Heffernan, C. (2011). Establishing the value of dynamic assessment in an online tutoring system. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 295-300.
(aka “Poster”) in Prestigious Conferences (Acceptance rates of 50-60%).
PP41 Hulse, T., Harrison, A., Ostrow, K. S., Botelho, A. F., & Heffernan, N. T. (2018). Starters and Finishers: Predicting Next Assignment Completion From Student Behavior During Math Problem Solving. In Proceedings of the Eleventh International Conference on Educational Data Mining, 525-528.
PP39 Patikorn, T., Heffernan, N., & Zhou, J. (2017). An Offline Evaluation Method for Individual Treatment Rules and How to Find Heterogeneous Treatment Effects. Conference of Educational Data Mining 2017.
PP38 Selent, D., Patikorn, T., & Heffernan, N. T. (2016). ASSISTments Dataset from Multiple Randomized Controlled Experiments. A “Work in progress” category presented at Learning at Scale 2016. At ACM Digital Library, 181-184.
PP37 Patikorn, T., Selent, D., Heffernan, N. T., Yin, B., Botelho, A. (2016) ASSISTments Dataset for a Data Mining Competition to Improve Personalized Learning. Poster at MIT Conference on Digital Experimentation. CODE 2016.
PP37 Williams, J. J., Botelho, A., Sales, A., Heffernan, N. & Lang, C. (2016) Discovering 'Tough Love' Interventions Despite Dropout In Barnes, Chi & Feng (eds) The 9th International Conference on Educational Data Mining, 650-651.
PP32 Selent, D. & Heffernan, N. T. (2015) When More Intelligent Tutoring in the Form of Buggy Messages Does Not Help. In Conati, Heffernan, Mitrovic & Verdejo (Eds) The 17th Proceedings of the Conference on Artificial Intelligence in Education, Madrid, Spain. Springer, 768-771.
PP31 Jiang, Y., Baker, R., Paquette, L., San Pedro, M. & Heffernan, N. T. (2015) Learning, Moment-by-Moment and Over the Long Term. In Conati, Heffernan, Mitrovic & Verdejo (Eds) The 17th Proceedings of the Conference on Artificial Intelligence in Education, Madrid, Spain. Springer, 654-657.
PP30 Ostrow, K. & Heffernan, N. T. (2015) The Role of Student Choice Within Adaptive Tutoring. In Conati, Heffernan, Mitrovic & Verdejo (Eds) The 17th Proceedings of the Conference on Artificial Intelligence in Education, Madrid, Spain. Springer, 752-755.
PP28 Van Inwegen, E., Wang, Y., Adjei, S. & Heffernan, N.T. (2015) The Effect of the Distribution of Predictions of User Models. In the Proceedings of the 8th International Conference on Educational Data Mining EDM2015, Madrid, Spain. ISBN: 978-84-606-9425-0. pp 620-621.
PP27 Botelho, A., Adjei, S., Wan, H. & Heffernan, N.T. (2015) Predicting Student Aptitude Using Performance History. In the Proceedings of the 8th International Conference on Educational Data Mining EDM2015, Madrid, Spain. ISBN: 978-84-606-9425-0. pp 622-623.
PP26 Kelly, K., Wang, Y., Thompson, T. & Heffernan, N.T. (2015) Defining Mastery: Knowledge Tracing Versus N-Consecutive Correct Responses. In the Proceedings of the 8th International Conference on Educational Data Mining EDM2015, Madrid, Spain. ISBN: 978-84-606-9425-0, 630-631.
PP25 Kelly, K., Arroyo, I., & Heffernan, N. (2013). Using ITS Generated Data to Predict Standardized Test Scores. In S. D’Mello, R. Calvo, & A. Olney (Eds.) Proceedings of the 6th International Conference on Educational Data Mining (EDM2013). Memphis, TN, 322-323.
PP24 Duong, H., Zhu, L., Wang, Y. & Heffernan, N. (2013). A prediction model that uses the sequence of attempts and hints to better predict knowledge: “Better to attempt the problem first, rather than ask for a hint.” In S. D’Mello, R. Calvo, & A. Olney (Eds.) Proceedings of the 6th International Conference on Educational Data Mining (EDM2013). Memphis, TN, 316-317.
PP23 Adjei, S., Salehizadeh, S. M. A., Wang, Y., & Heffernan, N.T. (2013). Do students really learn an equal amount independent of whether they get an item correct or wrong? In S. D’Mello, R. Calvo, & A. Olney (Eds.) Proceedings of the 6th International Conference on Educational Data Mining (EDM2013). Memphis, TN, 304-305.
PP21 Heffernan, N. T., Heffernan, C. L., Dietz, K., Soffer, D. A., Goldman, S. R., & Pellegrino, J. W. (September, 2012). Spacing Practice, Assessment and Feedback to Promote Learning and Retention. Presented at the Scientific Research on Educational Effectiveness Fall 2012 Conference in Washington, D.C.
PP20 Wang, Y. & Heffernan, N. (2011). Towards Modeling Forgetting and Relearning in ITS: Preliminary Analysis of ARRS Data. In Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., and Stamper, J. (Eds.) Proceedings of the 4th International Conference on Educational Data Mining, 351-352.
PP18 Wang, Y., Heffernan, N. & Beck, J. (2010). Representing Student Performance with Partial Credit. In Baker, R.S.J.d., Merceron, A., Pavlik, P.I. Jr. (Eds.) Proceedings of the 3rd International Conference on Educational Data Mining, 335-336.
PP17 Rai, D., Beck, J., & Heffernan, N. (2010). Mily’s World: A Coordinate Geometry Learning Environment with Game-Like Properties. In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 2. Springer, 399-401.
PP15 Feng, M., Heffernan, N., Koedinger, K. (2010). Using Data Mining Findings to Aid Searching for Better Skill Models. In Aleven, V., Kay, J & Mostow, J. (Eds) Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS2010) Part 2. Springer, 312-314.
PP13 Shrestha, P., Wei, X., Maharjan, A., Razzaq, L., Heffernan, N.T., & Heffernan, C., (2009). Are Worked Examples an Effective Feedback Mechanism During Problem Solving? In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society, 1294-1299.
PP12 Kim, R., Weitz, R., Heffernan, N. & Krach, N. (2009). Tutored Problem Solving vs. “Pure” Worked Examples. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society, 3121-3126. Austin, TX: Cognitive Science Society.
PP11 Razzaq, L., Heffernan, N.T. (2008). Towards Designing a User-Adaptive Web-Based E-Learning System. In Mary Czerwinski, Arnold M. Lund, Desney S. Tan (Eds.): Extended Abstracts Proceedings of the 2008 Conference on Human Factors in Computing Systems, 3525-3530. Florence, Italy: ACM 2008.
PP10 Patvarczki, J., Almeida, J. F., Beck, J. E., & Heffernan, N. T. (2008). Lessons Learned from Scaling Up a Web-Based Intelligent Tutoring System. In Woolf & Aimeur (Eds.) Proceedings of the 9th International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin. Volume 5091, 766-770.
PP9 Guo, Y., Heffernan, N. T., & Beck, J. E. (2008). Trying to Reduce Bottom-out hinting: Will telling students how many hints they have left help? In Woolf & Aimeur (Eds.) Proceedings of the 9th International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin. Volume 5098, 774-778.
PP7 Weitz, R., Heffernan, N. T., Kodaganallur, V. & Rosenthal, D. (2007). The distribution of student errors across schools: An initial study. In Luckin & Koedinger (Eds.) Proceedings of the 13th Conference on Artificial Intelligence in Education. IOS Press. pp 671-673.
PP6 Kardian, K. & Heffernan, N.T. (2006). Knowledge engineering for intelligent tutoring systems: Assessing semi-automatic skill encoding methods. In Ikeda, Ashley & Chan (Eds.) Proceedings of the Eighth International Conference on Intelligent Tutoring Systems. Springer-Verlag: Berlin, 735-737. A longer version was published as WPI-CS-TR-06-06.
PP5 Walonoski, J. & Heffernan, N. (2006b). Prevention of off-task gaming behavior in intelligent tutoring systems. In Ikeda, Ashley & Chan (Eds.). Proceedings of the 8th International Conference on Intelligent Tutoring Systems. 2006, LNCS 4053. Springer-Verlag: Berlin, 722-724. [Winner of the Best 3-page Paper Award out of 40 such papers].
PP2 Koedinger, K. R., Aleven, V., & Heffernan, N. T. (2003). Toward a rapid development environment for cognitive tutors. In F. Verdejo and U. Hoppe (Eds) 11th International Conference on Artificial Intelligence in Education. Sydney, Australia. IOS Press, 455-457.
PP1 Razzaq, L. & Heffernan, N. T. (2004). Tutorial dialog in an equation solving intelligent tutoring system. In J.C. Lester, R.M. Vicari, & F. Paraguaçu (Eds.) Proceedings of 7th Annual International Intelligent Tutoring Systems Conference, Berlin: Springer-Verlag, 851-853. [Winner of the Best Poster Award].
WP23 Wilson, K., Xiong, X., Khajah, M., Lindsey, R. V., Zhao, S., Karklin, K., Van Inwegen, E., Han, B., Ekanadham, C., Beck, J., Heffernan, N., & Mozer, M., (2016) Estimating student proficiency: Deep learning is not the panacea. Submission to the NIPS 2016 Workshop on Machine Learning for Education.
WP21 Williams, J. J., Schultz, S., & Heffernan, N. T. (2015) Adaptively Personalizing Instruction through Collaborative Development of MOOClets by Instructors, Education, Psychology and Machine Learning Researchers. Learning with MOOCs II workshop held at Teachers College, Columbia University, October 2-3, 2015. The paper received two peer reviews, but the acceptance rate was unpublished, so it is listed in this section.
WP20 Patvarczki, J., Almeida, S., Beck, J. & Heffernan, N. (2008). Lessons Learned from Scaling Up a Web-Based Intelligent Tutoring System. Lecture Notes in Computer Science, Intelligent Tutoring Systems, 5091, 766-770.
WP19 Razzaq, L., Heffernan, N.T. (2008). Towards Designing a User-Adaptive Web-Based E-Learning System. In Mary Czerwinski, Arnold M. Lund, Desney S. Tan (Eds.): Extended Abstracts Proceedings of the 2008 Conference on Human Factors in Computing Systems, 3525-3530. Florence, Italy.
WP17 Pardos, Z., Feng, M. Heffernan, N. T., Heffernan-Lindquist, C. & Ruiz, C. (2007). Analyzing fine-grained skill models using Bayesian and mixed effect methods. In the Educational Data Mining Workshop held at the 13th Conference on Artificial Intelligence in Education. This is a longer version of CP18.
WP16 Lloyd, N., Heffernan, N. & Ruiz, C. (2007). Predicting student engagement in intelligent tutoring systems using teacher expert knowledge. In the Educational Data Mining Workshop held at the 13th Conference on Artificial Intelligence in Education.
WP13 Feng, M., Heffernan, N. T., & Koedinger, K. R. (2005). Looking for sources of error in predicting student's knowledge. In Beck. J. (Eds). Educational Data Mining: Papers from the 2005 AAAI Workshop. Menlo Park, California: AAAI Press, 54-61. Technical Report WS-05-02.
WP9 Freyberger, J., Heffernan, N., & Ruiz, C. (2004). Using association rules to guide a search for best fitting transfer models of student learning. In Beck, Baker, Corbett, Kay, Litman, Mitrovic & Rigger (Eds.) Workshop on Analyzing Student-Tutor Interaction Logs to Improve Educational Outcomes. Held at the 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil. Lecture Notes in Computer Science.
WP8 Livak, T., Heffernan, N. T., Moyer, D. (2004). Using cognitive models for computer generated forces and human tutoring. 13th Annual Conference on Behavior Representation in Modeling and Simulation (BRIMS). Simulation Interoperability Standards Organization. Arlington, VA.
WP7 Razzaq, L. & Heffernan, N. T. (2004). Tutorial dialog in an equation solving intelligent tutoring system. Workshop on “Dialog-based Intelligent Tutoring Systems: State of the art and new research directions” at the 7th Annual Intelligent Tutoring Systems Conference, Maceio, Brazil, 33-42.
U9 McGuire, P., Logue, M., Mason, C., Tu, S., Heffernan, C., Heffernan, N., Ostrow, K. & Li, Y. (2016, accepted). To See or Not To See: Putting Image-Based Feedback in Question. Interactive lecture at the International Society for Technology in Education Conference. Denver, CO.
U8 Williams, J. J., Krause, M., Paritosh, P., Whitehill, J., Reich, J., Kim, J., Mitros, P., Heffernan, N., & Keegan, B. C. (2015). Connecting Collaborative & Crowd Work with Online Education. Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, 313-318.
U7 Williams, J. J., Li, N., Kim, J., Whitehill, J., Maldonado, S., Pechenizkiy, M., Chu, L., & Heffernan, N. (2014). MOOClets: A Framework for Improving Online Education through Experimental Comparison and Personalization of Modules (Working Paper No. 2523265). The Social Science Research Network.
U6 Williams, J. J., Maldonado, S., Williams, B. A., Rutherford-Quach, S., & Heffernan, N. (2015). How can digital online educational resources be used to bridge experimental research and practical applications? Embedding In Vivo Experiments in “MOOClets” . Paper presented at the Spring 2015 Conference of the Society for Research on Educational Effectiveness, Washington, D. C.
U4 Kelly, K., Heffernan, N., Heffernan, C., Goldman, S., Pellegrino, J., & Soffer-Goldstein, D. (2014). Improving student learning in math through web-based homework review. In Liljedahl, P., Nicol, C., Oesterle, S., & Allan, D. (Eds.). (2014). Proceedings of the Joint Meeting of PME 38 and PME-NA 36 (Vol. 3). Vancouver, Canada: PME, 417-424.
U3 Pellegrino, J., Goldman, S., Soffer-Goldstein, D., Stoelinga, T., Heffernan, N., & Heffernan, C. (2014). Technology Enabled Assessment:Adapting to the Needs of Students and Teachers. American Educational Research Association (AERA 2014) Conference.
U2 Soffer,-Goldstein, D., Das, V., Pellegrino, J., Goldman, S., Heffernan, N., Heffernan, C., & Dietz, K. (2014). Improving Long-term Retention of Mathematical Knowledge through Automatic Reassessment and Relearning. American Educational Research Association (AERA 2014) Conference. Division C - Learning and Instruction / Section 1c: Mathematics. (peer reviewed but unknown rate) Poster Nominated for the Best Poster of the Session.
U1 Heffernan, N., Heffernan, C., Dietz, K., Soffer, D., Pellegrino, J. W., Goldman, S. R. & Dailey, M. (2012). Cognitively-Based Instructional Design Principles: A Technology for Testing their Applicability via Within-Classroom Randomized Experiments. AERA 2012. | 2019-04-21T14:40:53Z | https://www.neilheffernan.net/publications |
December is Old Fashioned Fun Month!
$21 Challenge: Order Quick for Christmas!
Happy Christmas! Thank you for all your wonderful support this year. We really appreciate your help and couldn't do it without you! To show our appreciation we have been hard at work preparing your 2012 free Simple Savings calendar entitled "The NEW, Clever YOU!" We are almost finished and will have it ready for you early next week. How exciting!
I am really looking forward to some time relaxing with my family this festive season. I hope you get lots of time to do the things you love most, with the people you love most!
Thanks for all the wonderful emails you send in. Reading your success stories really makes my day!
"Our household is celebrating today! After 11 years we paid off our mortgage. During this time we almost went broke, recovered ourselves and went on to buy an investment property as well as paying off a $260,000 house mortgage.
"I really love your $21 Challenge book. We spend more than that on milk and bread each week, so we tried the slightly modified version for one adult and four kids. I find many of the recipes in there quite useful. My kids are fussy and so I find most cook books to be unsuitable, but even my kids will eat quite a few of the recipes. To be honest, I didn't think I had enough food in the house to create good meals but when I followed the instructions in the book and did the freezer and pantry inventory, I discovered that I could possibly have fed half of the starving population of Haiti! I used to always buy generic brand products because our budget was so tight that we didn't have a choice. When I started working, I enjoyed having a choice to be able to purchase some branded items but I discovered that I had stopped purchasing ANY generic items. I also found that I wasn't organised and started grabbing whatever was quick and easy - and usually expensive - from the supermarket at the last minute.
Thank you for your wonderful feedback! We really appreciate every single tip we receive every week too, so keep sending them in. As well as helping other members to save money, you could also win yourself a free 12 month Vault membership (value $47) in our weekly Hint of the Week competition!
P.S. We are getting into the festive season by slashing the price of Vault memberships from $47 to $27 until 9PM on 31st December. Order here.
"This is great, isn't it?" Sally smiled, as she looked around. "Look Pete - our whole street is here!" "I'm not surprised," Pete grinned. "Last year's was so much fun nobody would have wanted to miss out this year! Well done, Love, I'm really proud of you for getting our neighbourhood together. It's a really good thing you're doing - although I think you might have landed yourself with an annual job!"
"Well it's a nice job to have," Sally replied happily. "Especially now I have Linda to help me. She's been terrific at coming up with all the old fashioned games, I have to admit I'd forgotten half of them!" "Me too! I can't remember the last time I ran an egg and spoon race!" laughed Pete. "And don't think I didn't notice how you just happened to pair Linda up with our new neighbour for the three-legged race!"
"I don't know what you mean!" Sally giggled. "But they ARE getting on very well aren't they? Tom is such a nice chap too. Look how he is helping her. And look at how she is smiling!" "Well, that's what street parties are all about isn't it? Good, old fashioned, fun," Pete winked. "And, by the look of those two, I reckon we might end up with a new couple in our street!"
2. December is Old Fashioned Fun Month!
I love my computer! My computer gives me access to so many things. For starters there is Google, Youtube, Simple Savings and Skype. It is great. But, it also has me sitting on my rapidly growing bottom for way too many hours a day. It is time for that to change; it is time for my bottom to stop growing. So this month, I'm swapping the screens for good old fashioned, bottom-shrinking FUN!
I want to have a great month this month, by getting our bodies up and active. Modern entertainment is fun, but it only exercises our brain and our eyes. The rest of the body needs a workout too.
When you get moving, your body switches on its happiness endorphins, so you'll experience good old fashioned joy.
Getting your mind and body moving together will also make you stronger.
In the Lippey household we are really getting into the swing of things for Old Fashioned Fun month and going camping for three weeks of swimming, playing and laughing. It is going to be so much fun!
The list goes on and on; see how many more you can think of. Give it a go this month - and make sure you report in to the Forum or our Facebook page and tell us how much fun you are having in Old Fashioned Fun Month!
3. Great Aussie Street Party!
That's Life! magazine have once again got behind our Great Aussie Street Party and are running a fantastic competition. To enter, you must hold a street party on either Saturday 3rd December or Sunday 4th December 2011. Then, tell That's Life! in 500 words or less all about your party and send in your story with an entry form and at least one photograph. Easy - not to mention a lot of fun! You can download your entry form from www.thatslife.com.au or you can find it in Issues 45 and 46 of That's Life! magazine. There is a $1000 cash first prize and five $100 runner-up prizes. Entries close 5th January 2012. Visit www.thatslife.com.au for full terms and conditions. Have a fantastic street party and good luck in the competition!
4. $21 Challenge: Order Quick for Christmas!
As mentioned above we are going away for some old fashioned fun, so if you would like to order some $21 Challenge books for Christmas, you must do so before 6th December. Any orders you place after that will be sent in January. Here is a link to the Australian order page.
You will still be able to order NZ books until 17th December from here.
You will also be able to order American books until the 17th December from here.
We have a five star rating on Amazon!
Thank you, thank you, THANK YOU to the 16 fantastic people who went to our Amazon page last month and left glowing reviews for the $21 Challenge book. The reviews were so gorgeous I read every single one of them out loud to Jackie Gower on the telephone. If you would like to read them, here is the link.
Going through four major earthquakes in the past 14 months has really shaken us, literally, along with the 7000+ aftershocks. Life chugs along nicely, then WHAM, something really upsets the apple cart. Having no power or water for a week is pretty bad too.
Hey, but there is a silver lining. If we hadn't had September's first big shake, more people would have died in February.
Several buildings collapsed or were badly damaged in September, but because it struck at 4.35am no one was killed. Take our Railway Clock Tower. In September it was damaged, with large cracks through it, so they wrapped it in ply to stabilise it. When February's quake hit, the tower stayed up. Without the support of that wood around it, it would have come down too. Underneath that tower is an arcade, cinema complex and a science expo.
Buildings that were damaged were surrounded by containers to keep people away. Those containers did exactly what they were meant to do: they contained the rubble when more of the buildings came down in February. Experts believe another 300+ people could have been killed in February if September's quake hadn't come first.
The problem for Christchurch is that we're still on edge. It's always in the back of our minds that it could happen again. There are faults running right under the city they didn't know about.
What's all this to do with saving? Well quite a bit in a way.
We learned to live with what we had in the fridge and freezer - it all had to be cooked straight away.
We learned to cope without power. The problem is we're not talking just dark, we're talking pitch black. My nine- and six-year-olds were terrified. During the nights came aftershocks. You could hear them but you couldn't see anything. First would come the rumble, then the windows would rattle, then the bookcase, then the house and it would rumble away again.
They put a curfew in place. It's eerie lying there and not hearing trucks, knowing that the rumble is an aftershock, NOT a truck on the main road. Because it was so quiet you could hear them coming several seconds before they hit. It was a bittersweet sound. It was sometimes easier if you heard them coming, because then you knew, but you would also be tensed for longer, whereas with the sudden ones you usually didn't do more than jump.
In February for the first time in my life I was a hysterical mess. I was standing on the deck sobbing, hanging on to my eight week old baby like a lifeline. One neighbour looked over, took one look and was at our house in seconds. His wife came out later and took the baby so I could get my older two. My son didn't talk for nearly an hour. My daughter was scared for a bit then got over it pretty quickly.
Life will never be the same in Christchurch. We joke about it, smile, make funny sentences but deep inside it's hit us all in some way.
I was talking on the phone to a lady from the Earthquake Commission a couple of weeks ago and she asked us how we were. I said we were pretty good and not as bad as some people.
I said that's probably because in Christchurch pretty much EVERYONE knows someone worse off than them.
My mum and grandfather lost their house. BUT they're safe, their stuff was salvageable and their cat came home. There are families who not only lost their homes but everything in them, people who lost their livelihoods, their loved ones. Yes, we're very lucky. I don't know personally anyone who was killed but I know of them. On Tuesdays I'm normally wandering around the local shopping mall, but that day I hadn't gone as I had an appointment.
The only thing that did not come down in our room was the heavy clock above our eight week old baby's bassinet. The one she was fast asleep in at 12.51pm February 22. It is no longer on the wall and will never go back up.
In both the February 22 and June 13 quakes it was lunch time for the kids. They were still outside playing. My daughter's classroom was severely damaged in February and is still fenced off today.
Your baby gets rocked to sleep and you're not in the same room.
Your kids ask for a milkshake so you take them for a drive.
Your son no longer asks for a skateboard ramp.
You finally get the speed humps in your street - but the road crews didn't put them there.
No life will ever be the same but it's a relief to hear your nine year old say 'Did you feel the earthquake last night Mummy? It was only a little one though and didn't scare me'.
I think that's probably one of the nicest sounds in the world. Especially in Christchurch.
I have watched closely as the plants go through different stages. I noticed one week that some of them were getting lighter coloured leaves, so I fed them with worm wees and voila - within two days they returned to their healthy dark green leafy colour! I had watered the plants most days; however, there were a few times when I didn't and panicked that they would have wilted, but they were still standing tall and proud, and when I put my finger into the soil I was surprised to feel it was still damp. I have since found out that plants prefer a good dose of water a few times a week rather than a sprinkling of water daily.
Let me share with you what I've learnt about feeding over the past few months. I've tried them all and my garden is going great guns!
Worm wee - With the set-up cost of our potager garden, I am holding off getting a worm farm for the moment. But this is definitely a long-term goal, as the 'worm wee' has been a valuable feed in our garden. Zoe's kindergarten has a worm farm and they just ask for a gold coin donation in exchange for 1 litre of concentrated worm wee. We came home excited to pour some over the garden, but it didn't go far. I found out the next day that you add one part worm wee to five parts water! Oops - thank golly my worm wee concentrate didn't kill my plants! Note: A litre of worm wee concentrate actually goes a long way!
Seaweed - Zoe and I have been to the beach several times now to collect seaweed. Seaweed is full of nitrogen and breaks down quickly. You can use it in many ways: we added it to the lasagne garden using our layering technique, as this provides the plants with lots of nutrients.
To make up a liquid feed, simply wash the sand off first and then pile the seaweed into a big bucket. Fill to the top with water. Once a week we dip in our watering can and water around the base of the plants. We simply top up the bucket with water for next week's feed, which dilutes it so it won't burn your plants!
Using another bucket, fill it with seaweed, put a lid on top and leave it for 3-4 months. Once it has broken down, you can mix it in and around your garden - it will love you for it. I have spoken to many keen gardeners around here and they all swear by it!
Sheep poo - Golly it's all about poos and wees! Sheep poo is great for the garden. The kids added sprinkles of it to the lasagne layering. Like the seaweed you can also make up a liquid feed. Just one quarter fill a bucket with sheep pellets and top it up with water. Dip your watering can in and water over plants every other fortnight.
We finally built our potager garden! Plants were transplanted at six weeks old.
Picture taken today, nine weeks after the seeds were planted, and just three weeks after the picture above!
Look at our healthy cos lettuce and beetroot plants - delicious!
Zoe collecting seaweed from the beach.
Bucket full of seaweed and water, liquid feed.
Mixing 'worm wee' and water.
I will continue to blog my gardening journey here between newsletters and would love any savvy tips from members. Happy gardening everyone!
One of the best things about old fashioned fun is that it is low-cost and often no-cost! Check out some of these terrific ideas from the Vault to get you started.
Regularly checking council websites for festivals, markets, movies, concerts and events.
Going to a free Sunday movie at Brisbane City Library.
Watching a free outdoor movie at South Bank. These are held on a regular basis in a really comfortable atmosphere right beside the river. We take a picnic dinner, a nice drink and a rug and pillow.
Having a picnic dinner at South Bank and then joining in the cafe footpath dancing if you feel inclined.
Making some popcorn and drinks and inviting a group of friends over for movie nights.
Playing games around the table - sometimes we will play games like 'how many musical sounds can you make with your mouth, nose, hands?' and so on. Or one person will start a story and everyone has to then add a bit - it can get very interesting!
Spending time at the museum, art gallery or library.
Organising a BYO barbecue or pot luck dinner. Just add some good music and friends.
Playing a game of paper wasps or water pistols.
Driving to the beach or mountains.
Volunteering time at big music festivals.
Skating, swimming, star gazing, sunsets, football, soccer - you get the idea. The list is endless and the boys' Suggestion Box is working so well, I often find myself dipping into it too!
I found a great alternative to buying expensive traditional tales for my son when I discovered a free online audio equivalent. My son loves traditional stories and requested a few of them for Christmas. However, after discovering they were expensive to buy new and hard to source second hand, I looked online to find an alternative. In no time I had come across an awesome website that contained a huge range of children's stories, all of which come with free audio downloads and the words to go along with them. It's fabulous because it allows you to create your own read-along books, which are something all children enjoy.
I recently found a website to help release some of my son's energy and relieve the boredom, without spending money! www.playgroundfinder.com is a user-contributed directory of playgrounds in Australia with photographs, reviews, available facilities, a five star rating system and map links. Now when I go out, I can look for a playground in that area and my son can enjoy a variety of equipment.
Our family is having some great fun together while getting exercise too! Most children just seem to like the idea of cycling, whether they are going anywhere in particular or not! Recently we picked up a free folding exercise bike at our church Freecycle. We thought the whole family could have fun on it, and we were right! The youngest child had a bit of a job getting her feet to reach all the way down to the pedals, as it is meant for adults, but somehow that just increases the fun of it. Even 'Tyler' the dog was interested and a very patient spectator. Of course the children are getting the exercise they need, as well as fun, without even realising it. It is great when good things can just happen naturally for the children, without having to bribe them to get up and move, get off the computer chair and actively live life!
www.free-ebooks.net is an excellent website that lets you download free ebooks from a wide range of categories including fiction, food, health and beauty, parenting and travel.
Budding authors can even submit their own ebooks to the site, so this website is perfect for readers AND writers!
The best festive memories are made of people, not presents! Remember the good old days, when a game of backyard cricket was the highlight of any family gathering? For a dose of good old fashioned fun, look no further than our Forum.
This thread is a fabulous reminder of the good old days when Christmas was more about spending time with the special people in our lives and less about splashing the cash. What do you love about Christmas?
This thread is a timely reminder to stop endlessly buying things and instead put what you already have to good use.
Our members will warm your heart with their lists of things to be thankful for. Goodwill to all this festive season.
Share your favourite spots in the great outdoors with other members and let's all soak up some good old fashioned camping fun!
This time of year sees councils and communities everywhere providing free entertainment for young and old. Like our members, head on out and discover what's on in your neighbourhood.
Christmas is almost upon us. If you, like many others, have just realised that you've forgotten teacher gifts, Aunty Mabel and Uncle Herman gifts or that the pickings are looking a little slim all round this year, then these recipes are just for you.
For the first time though, I feel it's only fair to warn you that one of these recipes is totally inedible. Yes, totally. No, I haven't lost the plot, I've found a gorgeous new way to decorate the tree, label the gifts and use up that ton of cooking salt I bought a while back when I thought I was going to dye all of my clothes purple.
Here are my Stamped Salt Dough Ornaments. This recipe will make between a dozen and eighteen rustic-looking ornaments or really unique little gift tags.
Turn your oven on to 100C.
Gather the family together because the kids are going to have a ball with this!
Dust a clean section of your kitchen bench or dining table with flour. Place all of the bickie and scone cutters, craft stamps and stamping pad nearby.
Put the ingredients into the bowl of your food processor, cover and pulse it all until combined, about 20 seconds. Alternatively plop it all into your mixing bowl and squish it around with clean hands until it congeals into a nice ball of goopy stuff.
Flip the goop onto your floured bench or table, dust your hands with a little more flour and knead it, pushing it away and turning it a quarter turn ten or twelve times until it's nice and smooth.
Grab your rolling pin and after giving the children a lesson in 'a rolling pin is not an instrument to get your own way when it's not your turn', let them roll away. Keep the rolling pin well dusted with flour. Your dough needs to be about the thickness of a 50c coin.
Now, don't get carried away and start cutting things out just yet. It's actually a lot easier to stamp your designs first and then cut out around them. So, doing one at a time, get creative. Press your craft stamp into the inked stamp pad and press firmly onto your waiting dough. There's a fine line between pressing hard enough for the entire picture to appear and pressing so hard that you get the edges of the stamp too, but it all adds to the lovely rustic look of your ornaments.
Now, choose the shape you want the ornament to be. Scone cutters give a lovely traditional round ornament, while bickie cutters are great for hearts, stars or anything else that takes your fancy.
Cut out your ornament, lift it with the spatula and place it onto your waiting lined tray. Now use your skewer to pierce a hole in the middle of what will be the top of your ornament. You'll be threading your kitchen twine or ribbon through this, and the hole will shrink as your ornament dries, so make your hole large enough to allow for that.
Continue until you've cut out enough shapes from your sheet of dough that you only have the dough edges left, then gather the scraps, roll it out again and have another go. You should be able to do this three or four times.
Once all of your chic new ornaments are done, pop them into your oven and leave them there for about four hours, turning once. Alternatively, you can just leave them out on the bench to dry naturally.
Once dry, thread with twine, raffia or ribbon and they're ready to go. If you find the back of the hole has shrunk a little, just carefully use the skewer to release some of the dried dough around it to enable you to thread them with twine.
Tie them to your tree or write on them with a felt-tipped pen to use them as gift tags. These look gorgeous used on a twig style tree, or as tags on gifts wrapped with brown paper or butcher's paper. Who needs all that posh stuff they're trying to flog in the shops?
Rustic and home-made is always in style!
Foodie gifts are the sanctuary of the cash poor. That would apply to just about all of us wouldn't it? This quick toffee is so posh, no one will believe you actually made it with your own two hands. They're all going to think you've gone crazy at some uber trendy deli and spent mega bucks stocking up on the latest thing from some far flung foodie paradise. That's a good thing though, right... *wink*?
Line your baking tray with your baking paper and set aside.
In your microwave-safe bowl stir together the sugar and peanuts.
Pop the glucose syrup into the microwave and zap it on high for about 25 seconds to liquefy it slightly, because this stuff is darned sticky!
Spray your measuring cup with some cooking spray and measure your glucose syrup into it. The cooking spray will help it slide right out and into your bowl.
Give it all a good stir and accept that your spoon is going to look like it has an alien growth on it. Don't worry about it. Scrape off as much as you can into the bowl with your other spoon. Like I said, that glucose syrup is sticky stuff!
Microwave on high for six minutes. It will bubble and froth and generally look like it's alive, but that's good.
At this point it should be anything from light amber to dark brown in some patches, so carefully remove it from the microwave with oven mitts or a tea towel and place it on the bench. Add the butter and vanilla and tip the bowl around a bit to swirl it through the mixture. Don't stir it yet.
Add the baking soda and stir carefully until it's well combined, frothy and lighter in colour.
Pour the mixture onto the lined baking tray and spread it out. It won't completely cover the tray but try to spread it to about 2.5 centimetres thick.
Sprinkle the chopped chocolate on top and allow it to sit for a few minutes. The chocolate will soften and melt. After a minute or two, grab your smoothing spatula or large knife and smooth the warmed and melting chocolate into a topping covering the toffee.
Pop it into the fridge for an hour or so to allow it to cool and harden.
When the chocolate is hardened and the toffee is cooled, it's ready to package. No knife needed, just break it into lovely huge chunks and place in a paper-lined box or wrap in a cellophane bundle.
This could become YOUR Christmas speciality. Promise!
Here is a rare photo of my boys as you don't usually see them. For starters, they're not fighting, which is unusual, and Ali isn't pulling a crazy face, which is a real rarity. This was taken recently on an evening hunting trip, deep in the heart of the King Country. Ali is a seasoned hunter and bushman, but for Liam this was his first night walking out of the bush in the dark, not to mention carrying a deer on his back. I've mentioned before how Liam has changed since living in Whangamata. Gone is the shy, anxious boy with no self-esteem. He's been replaced by a happy-go-lucky young man who is no longer scared to give things a go. To say his dad and I are pleased is an understatement.
So what's brought about the change? It's simple really - he no longer spends most of his waking hours glued to video games and TV. Instead of rarely leaving the house, he's never home! He makes the most of each day. He goes and finds things to do, just like we had to when we were young. Mind you, there is a lot to do in Whangamata and the great thing is most of it is free. Another thing I love about living where we do (and I apologise if I have also said this before) is that it's a little like going back in time, 20 or 30 years or so. Just like some of the Simple Savings newsletters, where Fiona reminisces about the days when people would chat over the fence and everybody knew all the local kids and where they were and what they were doing, our little town is like that.
Ali has always been a 'do-er'. He's never been able to keep still and crams as much as he possibly can into each day. If he wants to do something, whether it be swimming, surfing or whatever he'll jump on his bike and go and visit his friends one by one until he finds someone to do it with. He has a Playstation in his room and it hasn't been switched on since we moved house over a year ago. Liam on the other hand couldn't have been more different. It seemed that days would simply pass him by in a blur of Ratchet and Clank or Modern Warfare - whatever his latest game obsession was. I admit, Noel and I clashed a lot about it. Where Noel was seriously concerned about how much of Liam's life was being wasted this way, I was far more relaxed. 'He's just a teenager, doing what teenagers do. Give the kid a break!' After all, his friends were all doing it too. Where was the harm in it? I used to cringe every time Noel would go stomping upstairs and throw Liam and his mates outside, telling them to stop being a 'bunch of girls' and get out in the sunshine.
In the end, the change came about from Liam himself, triggered by another boy he knows. This boy is a talented sportsman but doesn't play sports. Instead he plays video games. All day, every day. It affects his schooling, it affects his relationships, it affects his health and it affects his sleep. Liam and his friends used to go round and call for him but they gave up when he started locking the door so nobody could interrupt his gaming. Sad, terribly sad, but true. All of a sudden, video games didn't seem so cool to Liam any more. 'I don't want to turn out like that!' he said. And that was that. The games got switched off and are now only played when it's raining or when a group of them are staying over and want to play. The rest of the time he's swimming, surfing, running, biking, walking the dog - anything really. I had to chuckle the other day when he arrived home after a day out and told me he and some friends had pooled their coins together and hired a couple of tandem bikes and spent a whole hilarious afternoon riding around. 'We must have biked over 20km!' he said proudly.
If there's one thing both Noel and I have been delighted with, it's the range of activities available at the boys' school. They are ALWAYS doing something, it's brilliant! Just last week Liam spent four days away at camp with his whole year level, white water rafting and water skiing at Blue Lake, near Rotorua, and had an absolute ball. An absolute stipulation of these camps is that there are no cell phones, no iPods and no 'gadgets' of any kind. It's a sad sign of the times that these days school camps are as much about getting kids away from technology and out into the real world as about simply offering the opportunity to do something different and bond with their fellow students. Sadder still, however, is that the boy I mentioned in the previous paragraph chose not to go on camp and stayed at home for the week instead. Liam, however, came home full of even more energy, and no one was more amazed than Noel and I when he came bounding downstairs on Saturday morning and asked, 'Dad, can we go clay target shooting?' The great thing is, this attitude is now rubbing off on his friends and more and more of them are asking Noel to take them fishing and bushwalking. Hooray for old fashioned fun!
So I stand corrected. I would much rather see my boys the way they are now; happy, healthy and making the most of each day rather than sitting inside playing shooting games with complete strangers on the other side of the world while real life and real fun passes them by. Making the change to his lifestyle has given Liam confidence, knowledge, skills, good health and energy. We no longer have to nag him to get outside - in fact, we have more trouble keeping him in! Although even the most active boys run out of steam eventually. Here are the boys after their hunting trip, fast asleep in their chairs. Both nights they were too exhausted to even make it to bed and spent the whole night in their chairs! Shh, don't tell them I showed you!
"I would REALLY love some suggestions for different gifts for Christmas this year. I love to give home-made gifts and have done gifts in a jar and all kinds of hampers thanks to the brilliant suggestions on Simple Savings. They are always a big hit but this year I am struggling to come up with something really unusual. Time is ticking on so I would love to know if your members have any ideas, either home-made or bought that I will be able to get organised in time for Christmas!"
Don't worry Maryann, it's not too late! We have received some fantastic ideas, as you can see here. Thank you to everyone who responded to Maryann's request. Unfortunately we received so many that we are unable to print them all here, but we are sure this selection will inspire you to get crafty for Christmas!
I would suggest soap for your gift giving dilemma. You still have time to make your own! You could make it totally from scratch like I do: render fat from the butcher, add lye and water, and pour it into a fun plastic mould or some round PVC pipe - whatever your creative mind can think of. I bought fun beach sets from the dollar store and used the sand moulding forms to make soaps! I have sets of giant seashells, fish, turtles and crabs. The seashells really are wonderful as they don't have little parts that break off like the fish tails, turtle legs and crab claws, which are kind of fragile. Home-made or crafted soaps are so cool and definitely as creative and unusual as you are. If you don't want to give home-made soaps, you could give soap making kits!
For a festive looking gift with a difference, how about a 'tea wreath'? Check out this website link - they look fantastic. I'm going to have a go at making one myself!
For a nice, 'girly' gift, make a set of notelets. Cut some card into smaller than usual cards and stamp with pretty craft stamps. You can also stamp matching envelopes so they are part of the 'set'. Wrap in a pretty ribbon and there you have it!
For the person who has everything, make them a 'family faces' bunch of flowers! Make a flower template with enough room in the middle for a smiling face to be inserted. Use your template to cut flowers out of bright cardboard, then get your photos and cut them into round centres to glue into the middle of each flower. You can then use pipe cleaners as stems and cut cardboard leaves to twist on. Make many of these and then tie them together with a big bow for your loved one. They can then see everyone's smiling face whenever they walk by!
This delicious strawberry syrup makes an ideal festive gift for sipping and slurping! It's so easy and only takes a week until it's ready. All you need are strawberries, sugar, vodka and a clean jar.
Put fresh strawberries into an elegant jar or bottle.
Drizzle with white sugar until the jar is filled.
Turn daily for a week to dissolve the sugar.
Decorate the jar with ribbon and/or a festive tag.
Use as a drink with lemonade, or serve over crepes, ice cream or strawberries. Yum!
For a great Christmas present for someone who loves to read, go to the second hand stores or school fairs/galas and buy up four or five books in good condition, in the genre the recipient loves (romance, thrillers and so on, or a mixture of them all). They should only be around $2.00 each (often less at the fairs), and the kids' books are often really cheap, such as five for $1.00. Stack them up and wrap them in pretty ribbon, then add a mix of sweets or lollipops (buy in bulk and split them up into your own ziplock or cellophane bags). Now you have a great cheap present for someone to while away their summer holiday time. You could do a version for the kids too, with a stack of colouring or maze books, pencils and sweets.
For a delicious, low-cost Christmas treat, make peppermint bark! Simply melt dark chocolate and spread it over a large piece of baking paper. Next, smash two or three candy canes (peppermint flavour works best, but you can vary it if you like) and sprinkle this over the dark chocolate. Leave to set in the fridge. Melt some white chocolate, pour this over the top and return to the fridge. When cool, use a knife to break it into pieces, put them into a plastic bag or pretty mug and tie the top with a nice ribbon.
Snakes and ladders: Using a piece of 3mm MDF (you could use stiff cardboard), I divided the board into 25 squares and, using pencils and Texta, coloured in the board. When finished, I covered the whole thing in contact. I bought counters and dice from a game shop. You could make as many squares as is suitable for the age of the child.
Fishing game: I got a piece of board and cut it into a circle (again, cardboard would be just as good). I then cut out fish shapes from an old calendar, laminated them and fixed paper clips onto them. I made fishing rods out of dowel, string and magnets.
Memory game: I printed up cards on the printer and then had them laminated. On the back of the cards I printed the child's name to make them individual. You could use shapes, family members' photos, dinosaurs or whatever the child is interested in.
"Over the past two years I have lost a mother and mother-in-law. These two women were real 'ladies' and I have been left with boxes and boxes of beautifully packaged handkerchiefs - has anyone any suggestions on how I could use these individually? Incidentally, they both did use hankies, but there were way too many for them! We travel in our caravan and I was wondering whether I could incorporate these in an appreciation gift. We often have someone do a kindness for us and like to repay them with a thank you gift. I am quite handy with a sewing machine and would love some ideas."
If you have any tips or suggestions which can help Delma, please send them in to us here.
Tadahh! You've finally read right down to the bottom of this month's newsletter. Did we mention that taking some quiet time to read an interesting newsletter is a form of old fashioned fun? And didn't we have fun! I hope you really enjoyed the newsletter and have been inspired to try something new in your savings journey.
Don't be a stranger - drop me an email. I love receiving your feedback and suggestions. If you have enjoyed this month's newsletter, why not forward it on to your friends to help them save money too? Or tell them about us on Facebook by clicking the 'like' button on our Simple Savings Facebook page. Spread the love and the savings.
I hope you have heaps of old fashioned fun this month. Our family certainly will. Merry Christmas and best wishes for a fun-filled, festive family season. Enjoy!
I simply wished to thank you so much again. I do not know what I would’ve created in the absence of the entire methods documented by you regarding this situation. It seemed to be a real frustrating situation in my circumstances, but considering the very skilled form you solved the issue forced me to leap for happiness. I am happier for the service and thus believe you are aware of a powerful job you are always accomplishing training others with the aid of your web blog. Probably you’ve never got to know any of us.
I wanted to post a word in order to express gratitude to you for all the fantastic tips and hints you are writing on this website. My time consuming internet lookup has at the end been compensated with reasonable facts and techniques to exchange with my neighbours. I would express that we readers are undoubtedly blessed to dwell in a fantastic community with so many brilliant people with beneficial guidelines. I feel somewhat fortunate to have come across your entire site and look forward to so many more amazing times reading here. Thanks a lot once again for everything.
Thanks for all of your hard work on this website. My mum really likes conducting internet research and it’s really obvious why. Many of us hear all about the dynamic manner you deliver helpful tips and tricks by means of the blog and as well boost response from some others on this article and our daughter is really discovering so much. Enjoy the remaining portion of the year. You are always doing a stunning job.
I precisely wanted to say thanks once more. I am not sure the things I could possibly have accomplished in the absence of those basics contributed by you on such a problem. It had been a real depressing crisis in my view, but taking a look at the well-written way you managed the issue made me to cry for happiness. Now i am thankful for your assistance and as well , hope you know what a powerful job you are accomplishing educating other individuals with the aid of a site. I know that you have never come across any of us.
Thank you a lot for providing individuals with an extraordinarily marvellous opportunity to read critical reviews from this web site. It’s usually so terrific and packed with a lot of fun for me personally and my office colleagues to visit your site at the very least thrice a week to study the new tips you have. Not to mention, we are certainly happy with all the amazing techniques you serve. Selected 3 facts in this post are undoubtedly the most impressive we have all ever had.
Thank you for your own effort on this website. My mother loves getting into investigation and it is easy to see why. All of us hear all concerning the lively method you deliver reliable ideas by means of the blog and even boost contribution from others on this subject matter then our favorite girl is certainly becoming educated a lot of things. Take pleasure in the remaining portion of the year. You have been conducting a dazzling job.
I intended to post you that bit of observation to help thank you yet again for these magnificent basics you’ve provided on this site. It was simply unbelievably open-handed of you to provide without restraint just what some people would have made available for an electronic book to get some profit on their own, specifically seeing that you could possibly have done it if you ever desired. These smart ideas likewise served to become good way to fully grasp some people have the identical zeal just as my very own to learn somewhat more around this matter. I’m certain there are thousands of more pleasurable periods up front for many who look over your website.
I simply had to say thanks once more. I am not sure the things I could possibly have achieved in the absence of the actual techniques revealed by you relating to that problem. Completely was a real alarming case in my opinion, but coming across a new skilled approach you resolved it forced me to jump over joy. I’m thankful for this service and thus have high hopes you realize what a powerful job you are always carrying out training most people with the aid of your web site. I am certain you haven’t encountered any of us.
Thanks so much for providing individuals with such a pleasant opportunity to read critical reviews from this blog. It really is so ideal plus packed with a good time for me and my office mates to search your web site nearly 3 times in 7 days to learn the newest issues you have got. And definitely, I’m also usually fascinated with your unbelievable points you serve. Selected 4 areas in this article are indeed the simplest we have all had.
I needed to compose you that bit of observation to help give thanks the moment again for your personal marvelous basics you’ve featured above. It was quite unbelievably generous with you to give easily just what most of us might have advertised as an e book to make some money on their own, mostly considering that you might well have tried it if you ever considered necessary. These tactics in addition worked as the fantastic way to be sure that most people have similar desire similar to mine to realize whole lot more with respect to this problem. I’m sure there are millions of more enjoyable situations in the future for folks who check out your blog.
I’m just commenting to let you know of the amazing discovery our princess experienced viewing yuor web blog. She came to find some things, which included how it is like to have an amazing helping heart to let the rest smoothly grasp specified specialized topics. You undoubtedly did more than our desires. Thank you for coming up with such useful, healthy, edifying and also easy thoughts on the topic to Evelyn.
I simply desired to thank you very much once again. I am not sure the things I might have worked on in the absence of those basics contributed by you about such a industry. Previously it was a real frightening matter for me personally, but being able to see a well-written form you dealt with it forced me to weep with fulfillment. Now i’m happy for this support and thus wish you find out what an amazing job you are putting in training the mediocre ones thru your web site. More than likely you’ve never got to know any of us.
I as well as my friends were studying the good guidelines on your web blog and so then I had a horrible suspicion I never expressed respect to the site owner for those strategies. All of the ladies had been for this reason glad to study them and have simply been using those things. Thank you for actually being so thoughtful as well as for utilizing variety of terrific tips millions of individuals are really eager to understand about. Our sincere apologies for not saying thanks to you earlier.
I would like to show my admiration for your generosity in support of men and women that absolutely need help on the study. Your real commitment to passing the solution along appeared to be pretty effective and have regularly enabled ladies just like me to realize their ambitions. The valuable guidelines signifies so much a person like me and even further to my office colleagues. Many thanks; from each one of us.
I am also writing to make you know what a fantastic experience our princess undergone reading your blog. She mastered several issues, which included what it’s like to have a wonderful giving style to get other folks just completely grasp various extremely tough topics. You truly did more than visitors’ desires. Thanks for presenting those precious, healthy, revealing not to mention easy tips on this topic to Janet.
My wife and i felt quite satisfied Michael could finish up his preliminary research using the ideas he got when using the web site. It is now and again perplexing to simply happen to be offering helpful tips that some others could have been selling. We really do understand we have the blog owner to thank because of that. All of the explanations you made, the easy web site navigation, the friendships your site help to instill – it’s got mostly fabulous, and it’s really making our son in addition to us reckon that that theme is fun, and that is unbelievably vital. Many thanks for the whole thing!
I want to show my appreciation to this writer just for bailing me out of this matter. Just after looking out through the the web and finding opinions that were not helpful, I believed my life was over. Existing without the presence of answers to the problems you have fixed all through this blog post is a crucial case, and the kind that would have adversely affected my entire career if I had not come across your site. Your own personal knowledge and kindness in playing with the whole lot was important. I don’t know what I would’ve done if I hadn’t come upon such a subject like this. I can at this moment look forward to my future. Thanks a lot very much for this reliable and effective help. I will not be reluctant to suggest your web blog to anybody who should receive support on this matter.
I definitely wanted to write down a small word to say thanks to you for all the amazing tips you are placing on this site. My particularly long internet research has at the end of the day been paid with reasonable tips to exchange with my family. I would assert that we website visitors actually are definitely fortunate to dwell in a remarkable website with so many marvellous professionals with great suggestions. I feel pretty fortunate to have encountered your entire web pages and look forward to many more entertaining moments reading here. Thanks a lot again for everything.
I needed to compose you one little word to finally thank you so much again just for the breathtaking solutions you’ve shared at this time. It was certainly incredibly open-handed of you to present publicly exactly what some people might have distributed as an e book to help with making some money for themselves, precisely since you could possibly have done it if you ever wanted. These things in addition served to become fantastic way to be aware that some people have the identical fervor much like my personal own to figure out somewhat more in terms of this condition. I’m certain there are lots of more fun sessions in the future for individuals that read carefully your website.
I wish to show some appreciation to this writer for bailing me out of this issue. Because of surfing around throughout the world wide web and getting methods which are not beneficial, I thought my life was gone. Being alive minus the answers to the difficulties you’ve fixed all through your main review is a critical case, and those which may have adversely affected my entire career if I had not encountered your web blog. Your primary mastery and kindness in dealing with the whole lot was tremendous. I don’t know what I would’ve done if I hadn’t encountered such a step like this. I am able to now look forward to my future. Thanks so much for your expert and sensible guide. I won’t be reluctant to endorse your site to any individual who desires guidelines on this situation.
I simply had to thank you so much once again. I’m not certain the things I might have carried out without the entire tips and hints shown by you over such subject matter. Certainly was a very challenging difficulty in my view, nevertheless understanding the very specialized way you dealt with the issue made me to jump with fulfillment. I will be happy for your help and in addition hope you know what a great job you have been putting in educating the rest all through a blog. More than likely you have never met any of us.
I precisely wanted to thank you so much once again. I do not know the things I might have achieved in the absence of the entire advice documented by you directly on my subject. It truly was a real traumatic issue for me personally, nevertheless taking note of the very specialised fashion you dealt with it forced me to jump for fulfillment. I’m thankful for the information and as well , have high hopes you really know what an amazing job that you are putting in educating some other people through the use of your webpage. Most probably you’ve never met any of us.
I am just writing to let you understand what a terrific discovery my cousin’s girl experienced reading your blog. She mastered numerous issues, including how it is like to have a marvelous giving character to get other people without hassle comprehend selected multifaceted topics. You actually surpassed readers’ expected results. I appreciate you for giving such precious, dependable, informative and also fun tips about your topic to Evelyn.
I precisely desired to say thanks once more. I do not know the things that I could possibly have used without those hints shared by you on such a theme. It absolutely was the hard issue for me personally, but taking a look at the very professional avenue you handled that forced me to weep with joy. Now i’m thankful for this advice and even trust you know what an amazing job that you are providing educating men and women thru your webpage. Probably you have never got to know all of us.
My wife and i ended up being quite relieved Albert managed to do his analysis from your ideas he discovered from your own blog. It is now and again perplexing to simply happen to be offering concepts which often the others may have been selling. Therefore we grasp we’ve got the website owner to thank for that. All the explanations you have made, the easy blog menu, the friendships your site give support to foster – it’s mostly impressive, and it is facilitating our son in addition to our family reckon that the matter is interesting, and that’s exceedingly vital. Thank you for everything!
I must show my appreciation to you just for rescuing me from this particular difficulty. As a result of searching throughout the search engines and meeting opinions that were not powerful, I thought my entire life was over. Existing minus the answers to the problems you’ve resolved by means of your entire guideline is a critical case, and ones that might have in a negative way damaged my career if I had not encountered your web blog. Your main skills and kindness in touching all areas was very useful. I am not sure what I would have done if I hadn’t discovered such a point like this. I’m able to at this moment relish my future. Thanks very much for this high quality and sensible help. I won’t hesitate to suggest your blog post to any individual who ought to have assistance on this matter. | 2019-04-25T20:25:27Z | http://www.ts833.net/1734/ |
Department of Oncology-Pathology, Division of Medical Radiation Physics, Karolinska Institutet and Stockholm University, Sweden.
Department of Otolaryngology, Karolinska University Hospital, Stockholm, Sweden.
Department of Oncology, Radiumhemmet, Karolinska University Hospital, Stockholm, Sweden.
BACKGROUND AND PURPOSE: Determination of the dose-response relations for oesophageal stricture after radiotherapy of the head and neck.
MATERIAL AND METHODS: This study includes 33 patients who developed oesophageal stricture and 39 patients as controls. The patients received radiation therapy for head and neck cancer at Karolinska University Hospital, Stockholm, Sweden. For each patient the 3D dose distribution delivered to the upper 5 cm of the oesophagus was analysed. The analysis was conducted separately for two periods, 1992-2000 and 2001-2005, because of the different irradiation techniques used. The dose-response parameters were fitted using the relative seriality model.
RESULTS: For the full treatment period 1992-2005, the mean doses were 49.8 Gy for the cases and 33.4 Gy for the controls. For the period 1992-2000, the mean doses for the cases and the controls were 49.9 and 45.9 Gy, and for the period 2001-2005 they were 49.8 and 21.4 Gy. For the period 2001-2005 the best estimates of the dose-response parameters are D(50)=61.5 Gy (52.9-84.9 Gy), γ=1.4 (0.8-2.6) and s=0.1 (0.01-0.3).
CONCLUSIONS: Radiation-induced strictures were found to have a dose-response relation and a volume dependence (low relative seriality) for the treatment period 2001-2005. However, no dose-response relation was found for the complete material.
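The relative seriality model used for the fit above can be sketched in a few lines: each sub-volume is assumed to respond according to a Poisson dose-response curve, and the organ-level complication probability is combined through the seriality parameter s. The differential-DVH input format and the function names below are illustrative assumptions, not part of the original analysis.

```python
import math

def poisson_response(dose, d50, gamma):
    """Poisson dose-response for a small sub-volume receiving a uniform dose.

    d50 is the dose giving a 50% response; gamma is the normalized
    maximum slope of the dose-response curve.
    """
    return 2.0 ** (-math.exp(math.e * gamma * (1.0 - dose / d50)))

def ntcp_relative_seriality(dvh, d50, gamma, s):
    """NTCP for a differential DVH given as (dose_Gy, fractional_volume) pairs.

    s is the relative seriality: s near 1 means serial (max-dose driven)
    behaviour, s near 0 means parallel (volume-effect driven) behaviour.
    """
    product = 1.0
    for dose, vol in dvh:
        p = poisson_response(dose, d50, gamma)
        product *= (1.0 - p ** s) ** vol
    return (1.0 - product) ** (1.0 / s)

# Fitted 2001-2005 parameters from the abstract: D50 = 61.5 Gy, gamma = 1.4, s = 0.1.
# A uniform dose of D50 to the whole organ gives NTCP = 0.5 by construction.
print(ntcp_relative_seriality([(61.5, 1.0)], d50=61.5, gamma=1.4, s=0.1))
```

As a sanity check, uniform irradiation of the whole organ at D50 returns 0.5 for any value of s, which follows directly from the formula; the low fitted s = 0.1 is what the conclusion describes as volume dependence.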
Department of Otolaryngology, Albert Einstein College of Medicine, New York, NY.
Cochlear Implant Research Program, Department of Otolaryngology, University of Miami Ear Institute, University of Miami, School of Medicine, 1600 N.W. 10th Avenue, RMSB 3160, Miami.
We have established an in vitro model of long-term culture of 4-day-old rat utricular maculae to study aminoglycoside-induced vestibular hair-cell renewal in the mammalian inner ear. The explanted maculae were cultured for up to 28 days on the surface of a membrane insert system. In an initial series of experiments utricles were exposed to 1 mM gentamicin for 48 h and then allowed to recover in unsupplemented medium or in medium supplemented with the anti-mitotic drug aphidicolin. In a parallel control series, explants were not exposed to gentamicin. Utricles were harvested at specified time points from the second through the 28th day in vitro. Whole-mount utricles were stained with phalloidin-fluorescein isothiocyanate and their stereociliary bundles visualized and counted. In a second experimental series 5-bromo-2'-deoxyuridine labeling was used to confirm the antimitotic efficacy of aphidicolin. Loss of hair-cell stereociliary bundles was nearly complete 3 days after exposure to gentamicin, with the density of stereociliary bundles reduced to 3-4% of its original value. Renewal of hair-cell bundles was abundant (i.e. a 15-fold increase) in cultures in unsupplemented medium, with a peak of stereociliary bundle renewal reached after 21 days in vitro. A limited amount of hair-cell renewal also occurred in the presence of the anti-mitotic drug aphidicolin. These results suggest that spontaneous renewal of hair-cell stereociliary bundles following gentamicin damage in utricular explants predominantly follows a pathway that includes mitotic events, but that a small portion of the hair-cell stereociliary bundle renewal does not require mitotic activity.
Thiosulfate may reduce cisplatin-induced ototoxicity, most likely by relieving oxidative stress and by forming inactive platinum complexes. This study aimed to determine the concentration and protective effect of thiosulfate in the cochlea after application of a thiosulfate-containing high viscosity formulation of sodium hyaluronan (HYA gel) to the middle ear prior to i.v. injection of cisplatin in a guinea pig model. The release of thiosulfate (0.1 M) from HYA gel (0.5% w/w) was explored in vitro. Thiosulfate in the scala tympani perilymph of the cochlea 1 and 3 h after application of thiosulfate in HYA gel to the middle ear was quantified with HPLC and fluorescence detection. Thiosulfate in blood and CSF was also explored. The potential otoprotective effect was evaluated by hair cell count after treatment with thiosulfate in HYA gel applied to the middle ear 3 h prior to cisplatin injection (8 mg/kg b.w.). HYA did not impede the release of thiosulfate. Middle ear administration of thiosulfate in HYA gel gave high concentrations in the scala tympani perilymph while maintaining low levels in blood, and it protected against cisplatin-induced hair cell loss. HYA gel is an effective vehicle for administration of thiosulfate to the middle ear. Local application of a thiosulfate-containing HYA gel reduces the ototoxicity of cisplatin most likely without compromising its antineoplastic effect. This provides a minimally invasive protective treatment that can easily be repeated if necessary.
METHODS: Analysis was performed on data in a national database collected from 32 ear, nose, and throat clinics. Surgical procedures, outcomes, and patient satisfaction (from a questionnaire) were studied.
RESULTS: The database comprised 3,775 surgical procedures with follow-up available for analysis. One-third of the patients were children under the age of 15 years. The most common indication for surgery was infection prophylaxis. The overall healing rate of the tympanic membrane after surgery was 88.5%, with high mean patient satisfaction. Registered complications (postoperative infection, tinnitus, or taste disturbance) occurred in 5.8% of patients.
CONCLUSIONS: Swedish results for a large number of patients who underwent myringoplasty are presented. The success rate in this study is comparable to that of other studies, and good patient-reported outcome measures after myringoplasty are presented. Surgical databases and clinical audits provide a systematic process for continuous learning in healthcare. This study shows that clinical databases can be used to analyze national results of surgical procedures.
LEVEL OF EVIDENCE: 2b Laryngoscope, 127:2389-2395, 2017.
Objectives/Hypothesis: Postoperative tinnitus and taste disturbances after myringoplasty are more common than previously reported.
Study Design: This study was a retrospective analysis of prospectively collected data from the Swedish National Quality Registry for Myringoplasty.
Methods: The analysis was performed on extracted data from all counties in Sweden collected from database A from 2002 to 2012 and database B from 2013 to 2016. Tinnitus and taste disturbance complications 1 year after myringoplasty were analyzed in relation to gender, age, procedure, and success rate. In database A, physicians reported tinnitus and taste disturbances. In database B, patients reported the complications.
Results: A major difference was found between physician-reported and patient-reported complications. In database A, tinnitus was reported in 1.2% of the patients and taste disturbances in 0.5%. In database B, the frequencies were 12.3% and 11.2%, respectively. When reported by physicians, tinnitus and taste disturbances were more frequent after conventional myringoplasty than after fat grafting, and more frequent after primary than after revision surgery. Patients, however, reported the same frequency of tinnitus after fat graft myringoplasty as after conventional myringoplasty (12.0% vs. 12.6%) and fewer taste disturbances after revision surgery. In follow-up assessments, complications persisted over a long period after surgery.
Conclusion: Tinnitus and taste disturbances are more common after myringoplasty when patients report their symptoms than when physicians report the symptoms.
Cutaneous reparative processes, including wound healing, are highly orchestrated processes in which a chain of events occurs to reconstitute the function of the wounded tissue. To prevent a delayed or excessive reparative process it is important to understand how this process develops and is maintained. One of the major extracellular matrix components of the skin is the glycosaminoglycan hyaluronan (HA). HA contributes to an extracellular environment that is permissive for cell motility and proliferation, features that may account for HA's unique properties observed in scarless foetal wound healing. The molecule is found at high concentrations wherever proliferation, regeneration and repair of tissue occur.
The aims of the present studies were to analyse the distribution of HA and to investigate its possible role in various cutaneous conditions associated with an impaired reparative process, such as scar tissue formation in healing wounds, changed skin characteristics in diabetes mellitus and proliferative activity in basal cell carcinomas.
Tissue biopsies were obtained from healthy human skin, type-I diabetic skin and various scar tissues. The samples were analysed by light microscopy using a hyaluronan-binding probe and antibodies against collagen I, collagen III, PCNA and Ki-67. Ultrastructural analyses were performed on the same tissue samples.
In normal skin HA was present mainly in the papillary dermis. In the epidermis HA was located between the keratinocytes in the spinous layer. In the different scar tissues the localization of HA varied, with the HA distribution in the mature scar type resembling that in normal skin. In keloids the papillary dermis lacked HA, but the thickened epidermis contained more HA than in the other scar types. Ultrastructural studies of keloids revealed an altered collagen structure in the dermal layers, with an abundance of thin collagen fibers in the reticular dermis and thicker collagen fibers in the papillary dermis. Furthermore, the keloids displayed epidermal changes involving the basement membrane (BM), which exhibited fewer hemidesmosomes, and an altered shape of the desmosomes throughout the enlarged spinous layer. These alterations in the epidermis are suggested to influence the hydrodynamic and cell-regulatory properties of the wounded skin.
In diabetic patients, reduced HA staining in the basement membrane zone was seen. The staining intensity of HA correlated with the physical properties of the skin, reflected by the patients' grades of limited joint mobility (LJM). Furthermore, the HA staining correlated with the serum concentration of HbA1c.
In basal cell carcinomas (BCC), HA occurred predominantly in the tumour stroma. The distribution was most intense in the highly developed superficial BCC type, and resembled that of the papillary dermis of normal skin. In contrast, in the infiltrative BCC type, the tumour stroma stained weakly in the infiltrative part of the tumour. Moreover, the surrounding dermal layer was deranged and devoid of HA. The findings suggest that the tumour stroma in superficial BCC causes a slow, well-regulated cell growth in which the tumour cells do not substantially disturb the normal skin function. In the infiltrative BCC type, the tumour cells cause a disintegration of the tumour stroma as well as the normal surrounding dermis, which permits further spreading of the tumour. In fact, the behaviour of the infiltrative BCC tumour, growing beyond its boundaries, resembles that of the keloid.
Mapping the distribution of HA could be a useful tool for obtaining prognostic information, evaluating the degree of progression and guiding the choice of treatment in various diseases of the skin. In skin malignancies such as BCC it can be used to determine the radicality of the surgical excision of the tumour.
A hyaluronan-binding protein (HABP) was used to locate the distribution of HA in normal skin and in various types of scar tissue: mature scar tissue, hypertrophic scar tissue and keloids. The study was intended to establish whether or not a deviant HA distribution could explain the different clinical features of these scar tissues. The distribution of HA was found to differ between the various scar tissues. In normal skin intense HA staining was observed in the papillary dermis. In mature scar tissue the distribution of HA resembled that of normal uninjured tissue, but the layer of HA was thinner. In hypertrophic scar tissue, HA occurred mainly as a narrow strip in the papillary dermis. Keloid tissue showed the least HA staining in the papillary layer, where the staining resembled that of the bulging reticular dermis. In contrast, the thickened granular and spinous layer of the keloid epidermis exhibited intense HA staining. We suggest that the altered distribution and amount of HA in these different scar tissues may contribute to their different clinical characteristics. This histochemical technique for the demonstration of HA in scar tissue could be of use in clinical work when deciding on therapeutic strategies.
Specimens of basal cell carcinomas collected from 28 patients were classified into three groups: superficial, nodular, and infiltrative, according to their microarchitecture. The specimens were then subjected to histological characterization by means of a biotinylated hyaluronan-binding probe (HABP). Using Ki-67 and PCNA immunohistochemistry, the proliferative activity of the BCC tumours was evaluated. In superficial BCC the tumour islands displayed moderate hyaluronan (HA) staining. Feeble proliferation, denoted by modest mitotic activity and weak Ki-67 and PCNA immunoreactivity, occurred within the tumour islands. The surrounding connective tissue resembled normal skin, and no differentiated tumour stroma was observed. In nodular BCC, the HA staining of the tumour strands was weak to moderate, denoting increased proliferative activity. The differentiated surrounding tumour stroma stained strongly for HA. Tumour islands of infiltrative BCC stained weakly to moderately for HA and showed intense proliferation. The intensely HA-stained tumour stroma ended abruptly and the adjacent areas were almost devoid of HA. This study showed that the proliferative activity of BCC cells is associated with increased expression of HA in the tumour stroma. Modification of tumour-associated connective tissue indicates a close relationship between the tumour cells and the adjacent matrix. In particular, in infiltrative BCC, such alterations include degeneration and possible modification and remodelling of the surrounding extracellular matrix. These processes, involving areas of probable importance for tumour progression, should be considered when deciding the extent of the intended surgical resection.
Umeå University, Faculty of Medicine, Department of Clinical Sciences, Otorhinolaryngology. Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa.
BACKGROUND: No published studies on the prevalence of paediatric otitis media at primary healthcare clinics (PHCs) in South Africa (SA) are available.
OBJECTIVE: To examine the point prevalence of otitis media in a paediatric population at a PHC in Johannesburg, SA, using otomicroscopy.
METHODS: A sample of 140 children aged 2 - 16 years (mean 6.4; 44.1% females) was recruited from patients attending the PHC. Otomicroscopy was completed for each of the participants' ears by a specialist otologist using a surgical microscope.
RESULTS: Cerumen removal was necessary in 36.0% of participants (23.5% of ears). Otitis media with effusion was the most frequent diagnosis (16.5%). Chronic suppurative otitis media (CSOM) was diagnosed in 6.6% of children and was the most common type of otitis media in participants aged 6 - 15 years. Acute otitis media was diagnosed only in the younger 2 - 5-year age group (1.7%). Otitis media was significantly more prevalent among younger (31.4%) than older children (16.7%).
CONCLUSION: CSOM prevalence, as classified by the World Health Organization, was high. Consequently, diagnosis, treatment and subsequent referral protocols may need to be reviewed to prevent CSOM complications.
We studied the diagnoses made by an otologist and general practitioner (GP) from video-otoscopy recordings on children made by a telehealth facilitator. The gold standard was otomicroscopy by an experienced otologist. A total of 140 children (mean age 6.4 years; 44% female) were recruited from a primary health care clinic. Otomicroscopic examination was performed by an otologist. Video-otoscopy recordings were assigned random numbers and stored on a server. Four and eight weeks later, an otologist and a GP independently graded and made a diagnosis from each video recording. The otologist rated the quality of the video-otoscopy recordings as acceptable or better in 87% of cases. A diagnosis could not be made from the video-otoscopy recordings in 18% of ears in which successful onsite otomicroscopy was conducted. There was substantial agreement between diagnoses made from video-otoscopy recordings and those from onsite otomicroscopy (first review: otologist κ = 0.70 and GP κ = 0.68; second review: otologist κ = 0.74 and GP κ = 0.75). There was also substantial inter-rater agreement (κ = 0.74 and 0.74 at the two reviews) and intra-rater agreement (κ = 0.77 and 0.74 for otologist and GP, respectively). A telehealth facilitator, with limited training, can acquire video-otoscopy recordings in children for asynchronous diagnosis. Remote diagnosis was similar to face-to-face diagnosis in inter- and intra-rater variability.
We present the fabrication and clinical use of a custom-made nasal septal silicone button that can be inserted transnasally into a perforation of the nasal septum by the physician as an office procedure, or by the patients themselves in their home. Questionnaire and retrospective chart review were used to evaluate the efficacy of this prosthesis as treatment of disturbing symptoms from nasal septal perforation. The study included 41 patients (27 women) with a nasal septal perforation. The follow-up time ranged from 1 to 9 years. Symptoms investigated were nasal obstruction, crusting, feeling of dryness, pain, epistaxis, and whistling from the nose. The degree of experienced symptoms was estimated on a VAS-scale. The questionnaire was answered by 37 of the 41 patients. Fourteen patients were still using their button at the follow-up. Treatment with the prosthesis greatly diminished all the investigated symptoms. Also, use of the silicone button resulted in an improved quality of life. No case of infection was noted in connection with use of the silicone prosthesis.
Squamous cell carcinoma of the head and neck, SCCHN, the sixth most common cancer in the world, comprises tumours of different anatomical sites. The overall survival is low, and there are no good prognostic or predictive markers available. The p53 homologue, p63, plays an important role in development of epithelial structures and has also been suggested to be involved in development of SCCHN. However, most studies on p63 in SCCHN have not taken into account the fact that this group of tumours is heterogeneous in terms of the particular site of origin of the cancer. Mapping and comparing p63 expression levels in tumours and corresponding clinically normal tissue in SCCHN from gingiva, tongue and tongue/floor of the mouth revealed clear differences between these regions. Among normal samples from tongue and gingiva, tongue samples showed 2.5-fold higher median p63 expression and also more widespread expression compared to gingival samples. These results emphasise the importance of taking sub-site within the oral cavity into consideration in analyses of SCCHN.
The p53-family member, p63 is a transcription factor that influences cellular adhesion, motility, proliferation, survival and apoptosis, and has a major role in regulating epithelial stem cells. Expression of p63 is often dysregulated in squamous cell carcinomas of the head and neck. In this study we show that p63 induces the expression of the basal epithelial transcription factor, Basonuclin 1. Basonuclin 1 is an unusual transcription factor that interacts with a subset of promoters of genes that are transcribed by both RNA polymerase-I and -II and has roles in maintaining ribosomal biogenesis and the proliferative potential of immature epithelial cells. Chromatin immunoprecipitation and reporter assays demonstrate that Basonuclin 1 is a direct transcriptional target of p63 and we also show that up-regulation of Basonuclin 1 is a common event in squamous cell carcinomas of the head and neck. These data identify a new transcriptional programme mediated by p63 regulation of the Basonuclin 1 transcription factor in squamous cell carcinomas and provide a novel link of p63 with the regulation of ribosomal biogenesis in epithelial cancer.
Tayside Tissue Bank Division of Medical Sciences, University of Dundee, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK.
BACKGROUND: MicroRNAs (miRNAs) are small noncoding RNA molecules with an essential role in regulation of gene expression. miRNA expression profiles differ between tumor and normal control tissue in many types of cancers and miRNA profiling is seen as a promising field for finding new diagnostic and prognostic tools.
MATERIALS AND METHODS: In this study, we have analyzed expression of three miRNAs, miR-21, miR-125b, and miR-203, and their potential target proteins p53 and p63, known to be deregulated in squamous cell carcinoma of the head and neck (SCCHN), in two distinct and one mixed subsite in squamous cell carcinoma in the oral cavity.
RESULTS: We demonstrate that levels of miRNA differ between tumors of different subsites with tongue tumors showing significant deregulation of all three miRNAs, whereas gingival tumors only showed significant downregulation of miR-125b and the mixed group of tumors in tongue/floor of the mouth showed significant deregulation of miR-21 and miR-125b. In the whole group of oral squamous cell carcinoma (SCC), a significant negative correlation was seen between miR-125b and p53 as well as a significant correlation between TP53 mutation status and miR-125b.
CONCLUSION: The present data once again emphasize the need to take subsite into consideration when analyzing oral SCC and clearly show that data from in vitro studies cannot be transferred directly to the in vivo situation.
Umeå University, Faculty of Medicine, Department of Medical Biosciences, Pathology. RECAMO, Masaryk Memorial Cancer Institute, 656 53 Brno, Czech Republic; Institut de Génétique Moléculaire, Université Paris 7, Hôpital St. Louis, 75010 Paris, France.
Umeå University, Faculty of Medicine, Department of Clinical Sciences, Otorhinolaryngology. Department of Surgical Sciences/ENT, Uppsala University, 752 36 Uppsala, Sweden.
Due to the high frequency of loco-regional recurrences, which could be explained by changes in the field surrounding the tumor, patients with squamous cell carcinoma of head and neck show poor survival. Here we identified a total of 554 genes as dysregulated in clinically tumor free tongue tissue in patients with tongue tumors when compared to healthy control tongue tissue. Among the top dysregulated genes when comparing control and tumor free tissue were those involved in apoptosis (CIDEC, MUC1, ZBTB16, PRNP, ECT2), immune response (IFI27) and differentiation (KRT36). Data suggest that these are important findings which can aid in earlier diagnosis of tumor development, a relapse or a novel squamous cell carcinoma of the tongue, in the absence of histological signs of a tumor.
Despite intense research, squamous cell carcinoma of the tongue remains a devastating disease with a five-year survival of around 60%. Late detection and recurrence are the main causes for poor survival. The identification of circulating factors for early diagnosis and/or prognosis of cancer is a rapidly evolving field of interest, with the hope of finding stable and reliable markers of clinical significance. The aim of this study was to evaluate circulating miRNAs and proteins as potential factors for distinguishing patients with tongue squamous cell carcinoma from healthy controls. Array-based profiling of 372 miRNAs in plasma samples showed broad variations between different patients and did not show any evidence for their use in diagnosis of tongue cancer, although one miRNA, miR-150, was significantly down-regulated in plasma from patients compared to controls. Surprisingly, the corresponding tumor tissue showed an up-regulation of miR-150. Among circulating proteins, 23 were identified as potential markers of squamous cell carcinoma of the tongue. These findings imply that circulating proteins are a more promising source of biomarkers for tongue squamous cell carcinomas than circulating miRNAs. The data also highlight that circulating markers are not always directly associated with tumor cell properties.
The investigation of vocal folds viscoelastic properties in an animal model (rabbit) after injection of various augmentation substances, 6 months after injection, is reported. The injected materials were: hyaluronan-based materials (Hylan B gel and Deflux(R)), cross-linked collagen (Zyplast(R)) and polytetrafluoroethylene (Teflon(R)). Rheological properties of the augmentation substances were also evaluated. The results from these animal experiments indicate that the viscoelastic properties of the vocal folds injected with Deflux(R), Zyplast(R) and Hylan B gel are similar to the healthy vocal folds (non-injected samples) used as control, thus demonstrating that these materials are good candidates for further studies aimed at restoring/preserving the vibratory capacity of the vocal folds with injection treatment in glottal insufficiency.
For patients with type 1 Gaucher disease, challenges to patient care posed by clinical heterogeneity, variable progression rates, and potential permanent disability that can result from untreated or suboptimally treated hematologic, skeletal, and visceral organ involvement dictate a need for comprehensive, serial monitoring. An updated consensus on minimum recommendations for effective monitoring of all adult patients with type 1 Gaucher disease has been developed by the International Collaborative Gaucher Group (ICGG) Registry coordinators. These recommendations provide a schedule for comprehensive and reproducible evaluation and monitoring of all clinically relevant aspects of this disease. The initial assessment should include confirmation of deficiency of beta-glucocerebrosidase, genotyping, and a complete family medical history. Other assessments to be performed initially and at regular intervals include a complete physical examination, patient-reported quality of life using the SF-36 survey, and assessment of hematologic (hemoglobin and platelet count), visceral, and skeletal involvement, and biomarkers. Specific radiologic imaging techniques are recommended for evaluating visceral and skeletal pathology. All patients should undergo comprehensive regular assessment, the frequency of which depends on treatment status and whether therapeutic goals have been achieved. Additionally, reassessment should be performed whenever enzyme therapy dose is altered, or in case of significant clinical complication.
Objectives/Hypothesis: Patients with olfactory dysfunction appear repeatedly in ear, nose, and throat practices, but the prevalence of such problems in the general adult population is not known. Therefore, the objectives were to investigate the prevalence of olfactory dysfunction in an adult Swedish population and to relate dysfunction to age, gender, diabetes mellitus, nasal polyps, and smoking habits.
Study Design: Cross-sectional, population-based epidemiological study.
Methods: A random sample of 1900 adult inhabitants, who were stratified for age and gender, was drawn from the municipal population register of Skövde, Sweden. Subjects were called to clinical visits that included questions about olfaction, diabetes, and smoking habits. Examination was performed with a smell identification test and nasal endoscopy.
Results: In all, 1387 volunteers (73% of the sample) were investigated. The overall prevalence of olfactory dysfunction was 19.1%, composed of 13.3% with hyposmia and 5.8% with anosmia. A logistic regression analysis showed a significant relationship between impaired olfaction and aging, male gender, and nasal polyps, but not diabetes or smoking. In an analysis of a group composed entirely of individuals with anosmia, diabetes mellitus and nasal polyps were found to be risk factors, and gender and smoking were not.
Conclusion: The sample size of the population-based study was adequate, with a good fit to the entire population, which suggests that it was representative for the Swedish population. Prevalence data for various types of olfactory dysfunction could be given with reasonable precision, and suggested risk factors analyzed. The lack of a statistically significant relationship between olfactory dysfunction and smoking may be controversial.
Objectives: Scarring caused by trauma; postcancer treatment, or inflammation in the vocal folds is associated with stiffness of the lamina propria and results in severe voice problems. Currently there is no effective treatment. Human embryonic stem cells (hESC) have been recognized as providing a potential resource for cell transplantations, but in the undifferentiated state, they are generally not considered for therapeutic use due to risk of inadvertent development. This study assesses the functional potential of hESC to prevent or diminish scarring and improve viscoelasticity following grafting into scarred rabbit vocal folds.
Study Design: hESC were injected into 22 scarred vocal folds of New Zealand rabbits. After 1 month, the vocal folds were dissected and analyzed for persistence of hESC by fluorescence in situ hybridization using a human specific probe, and for differentiation by evaluation in hematoxylin-eosin-stained tissues. Parallel-plate rheometry was used to evaluate the functional effects, i.e., viscoelastic properties, after treatment with hESC.
Results: The results revealed significantly improved viscoelasticity in the hESC-treated vs. non-treated vocal folds. An average of 5.1% engraftment of human cells was found 1 month after hESC injection. In the hESC-injected folds, development compatible with cartilage, muscle and epithelia in close proximity or inter-mixed with the appropriate native rabbit tissue was detected in combination with less scarring and improved viscoelasticity.
Conclusions: The histology and location of the surviving hESC-derived cells strongly indicate that the functional improvement was caused by the injected cells, which were regenerating scarred tissue. The findings point toward a strong impact from the host microenvironment, resulting in region-specific in vivo hESC differentiation and regeneration of three types of tissue in scarred vocal folds of adult rabbits.
Conclusion: There was a high prevalence of Fusobacterium necrophorum (FN) in patients with chronic tonsillitis in the age group 15-23 years. This indicates that FN might play an important role in the pathogenesis of chronic tonsillitis in this age group, which is also the age group in which chronic or recurrent tonsillitis is most common.
Objectives: The role of FN in patients with acute and chronic tonsillitis is unclear. Thus, this study investigated the occurrence of FN in tonsils of patients with chronic tonsillitis. The aim of the study was to determine the prevalence of FN in patients that underwent tonsillectomy due to chronic tonsillitis. This study also investigated if FN was found at different areas in the tonsils.
Method: One hundred and twenty-six consecutive patients undergoing tonsillectomy due to chronic tonsillitis were included from the ENT clinics at Sunderby Hospital and Gallivare Hospital, Sweden. Both children and adults were included to encompass various age groups (age 2-57 years). Culture swabs were taken from three different levels of the tonsils - the surface, the crypts, and the inner core of the tonsils. Selective agar plates for detecting FN were used for culture. Cultures were also performed for detecting β-hemolytic streptococci, Haemophilus influenzae, and Arcanobacterium.
Results: FN was the most common pathogen (19%). The highest prevalence of FN was found in the age group 15-23 years (in 34% of the patients). FN was detected both at the surface and in the core of the tonsils. Furthermore, in the few patients where FN was not detected in all three areas, FN was always detected at the tonsillar surface, in spite of being an anaerobic bacterium. Streptococci group G and C also occurred most frequently (30%) in the same age group as FN (15-23 years), whereas Streptococci group A was more evenly spread among the age groups.
Department of Medical Science, Uppsala University Hospital, 751 85 Uppsala, Sweden.
Department of Molecular Medicine and Surgery, Karolinska Institutet, 171 77 Stockholm, Sweden.
Department of Oncology, Karolinska University Hospital, 118 83 Stockholm, Sweden.
School of Medical and Health Sciences, Örebro University, 701 82 Örebro, Sweden.
PURPOSE: This study aimed to explore the predictive value of systematic inflammatory and metabolic markers in head and neck (H&N) cancer patients during radiotherapy (RT).
METHODS: Twenty-seven patients were evaluated. The protocol included serial blood tests [highly sensitive C-reactive protein (hsCRP), albumin, insulin-like growth factor 1 (IGF-1), IGF binding protein 1 (IGFBP-1) and ghrelin], measurements of body weight and assessment of oral mucositis.
RESULTS: The mean nadir of weight loss was observed at the end of RT. At the time of diagnosis, mean hsCRP was 5.2 ± 1.0 mg/L. HsCRP significantly increased during RT and decreased during the post-RT period. Mean maximum hsCRP was 35.8 ± 8.5 mg/L, with seven patients reaching >40 mg/L. A numerical decrease of albumin (by 18.2%) and only small changes in IGF-1, IGFBP-1 and ghrelin levels were observed. None of the metabolic parameters was significantly associated with weight loss.
CONCLUSIONS: HsCRP increased in response to RT for H&N cancer as a sign of irradiation-induced inflammation. Weight loss was not preceded by changes of the metabolic parameters, indicating that assessment of the blood markers used in this study is of little value. Regular body weight measurement and assessment of oral mucositis are feasible, cheap and important procedures to control the metabolic homeostasis during RT.
This retrospective single-institution cohort study aims to evaluate if therapeutic approach, tumour site, tumour stage, BMI, gender, age and civil status predict body weight loss and to establish the association between weight loss on postoperative infections and mortality. Consecutive patients with head and neck cancer were seen for nutritional control at a nurse-led outpatient clinic and followed-up for 2 years after radiotherapy. Demographic, disease-specific and nutrition data were collected from case records. The primary outcome measure was maximum body weight loss during the whole study period. The nadir of body weight loss was observed 6 months after radiotherapy. In total, 92 patients of 157 (59%) with no evidence of residual tumour after treatment received enteral nutrition. The mean maximum weight loss for patients receiving enteral nutrition and per oral feeding was 13% and 6%, respectively (p < 0.001). Using multivariate analysis, tumour stage (p < 0.001) was the only independent factor of maximum weight loss. Weight loss was not significantly related to risk for postoperative infection. Weight loss is frequently noted among head and neck cancer patients during and after treatment. Weight loss was not found to be associated with postoperative infections and mortality. Nutritional surveillance is important in all patients, but special attention should be given to those on enteral nutrition and those with more advanced disease.
The present study was undertaken to compare the clinical benefits of prescribing ear drops containing 0.05% solution of betamethasone dipropionate (BD), and ear drops containing hydrocortisone with oxytetracycline hydrochloride and polymyxin B (HCPB), for topical treatment of external otitis. Fifty-one patients were enrolled in this open randomized, parallel-group, multicentre study, performed in eight different ENT departments. The patients were randomly assigned to one of the two treatment groups: BD (n = 26) and HCPB (n = 25). Only ENT specialists investigated the patients. Bacterial and fungal cultures were raised on days 1 and 11, using swabbed material from ear canals. Twice daily the patients recorded their symptoms during the acute phase, using special diary cards. BD proved a significantly more effective cure than HCPB during the acute phase of external otitis and afforded a lower relapse frequency during a six-month follow-up period. The patients of the BD group were significantly less troubled by itching (p < 0.01) than those in the HCPB group. On day 11, at the end of the acute phase, growth of bacteria (p = 0.03) and fungi (p < 0.01) was less frequent in the BD group than in the HCPB group. No serious adverse events occurred, and those minor events observed were comparable between the two groups. Our conclusion is that the group III steroid solution, BD, cured the external otitis more effectively than did the HCPB solution, whether infected by bacteria or by fungi. No difference was evident regarding adverse effects. Furthermore, price favours a solution without any antibiotic component. In view of these observations, a group III steroid solution ought to be the preferred remedy for external otitis, whether infected or not.
Umeå University, Faculty of Medicine, Clinical Sciences.
In an animal external otitis model, inflammatory reactions were evoked by mechanical stimulation of the rat ear canal skin. The rats were in four groups: group A treated with a group III steroid, betamethasone dipropionate; group B treated with hydrocortisone combined with oxytetracycline; group C treated with hydrocortisone with oxytetracycline and polymyxin B added; Group D, the controls, treated with saline. All rats were observed otomicroscopically daily during the first 7 days after treatment and then on days 10 and 20. A standardized scoring system was used to evaluate colour, swelling and effusion of the ear canal. Histological specimens were collected on days 3, 7, 10 and 20. The most rapid improvement in the ear canal status occurred in the animals treated with betamethasone dipropionate. The inflammatory reaction of the ear canal skin caused by mechanical stimulation was characterized by oedema of the stroma but few inflammatory cells were present. The surface of the epithelium towards the connective tissue layer was smooth in the group III-treated animals (group A) whereas other groups had irregularities of the basal membrane. From this study it is inferred that the group III steroid betamethasone dipropionate alone heals experimentally induced external otitis more rapidly than hydrocortisone with oxytetracycline, with or without polymyxin B. These findings should be considered in future clinical trials of external otitis.
External otitis was produced in 12 Sprague-Dawley rats by mechanical stimulation through a plastic micropipette inserted into the right external auditory canal (EAC). The EAC was later evaluated regarding the color of the skin, swelling and the presence of fluid. Within 1 day all rats developed an external otitis that was characterized by a red, swollen ear canal containing an opalescent fluid. The tympanic membrane and middle ear cavity appeared to be normal. No healed EACs were seen within the initial 10 days of follow-up and 4 of 6 rats still exhibited external otitis at day 21. Light microscopy of biopsy specimens revealed pronounced edema of the dermis of the ear canal. Mast cells were more numerous in the early phase of the otitis, although very few inflammatory cells were found in the tissues despite the marked inflammatory reaction produced. Findings show that this animal model for external otitis can be used to investigate pathogenesis as well as to test various treatment strategies.
CONCLUSIONS: Irrespective of the microbial agent, group III steroid solution cured external otitis efficiently in a rat model. The addition of antibiotic components to steroid solutions for the treatment of external otitis is of questionable validity. OBJECTIVE: External otitis, caused by infection with either Pseudomonas aeruginosa or Candida albicans, was established in a rat model and the treatment efficacy of a group III steroid solution was studied. MATERIAL AND METHODS: Three treatments were studied: (i) a group III steroid solution; (ii) a group I steroid combined with two antibiotic components; and (iii) a saline solution. A scoring scale was used to evaluate the characteristics of the ear canal skin. Bacteriological and fungal samples were collected for culturing and ear canal skin biopsies were taken for structural analyses. RESULTS: It was possible to cause P. aeruginosa and C. albicans infections in an animal model. In the P. aeruginosa-infected animals, only the group III steroid treatment cured all the animals. In the C. albicans-infected animals, group III steroid treatment resolved external otitis faster than the other treatment modalities.
In a prospective multicenter, randomized, double-masked trial, 30 patients with external otitis received betamethasone dipropionate in a 0.05% solution for 11 days. Fifty percent of the patients were assigned randomly to receive concomitant treatment with loratadine to help control itching, and 50% received placebo. The status of the external auditory canal (EAC) was assessed on days 0, 3, 7, 11, and 21 according to a new scoring system that graded color, the extension of redness outside the EAC, swelling, and effusion. Eighteen patients underwent sampling for a bacteriologic culture at the start of treatment; 14 cultures showed positive findings. The EAC status improved rapidly, and by day 11 it was almost normal in all patients. Pain and sleep disturbances disappeared by day 7; at which point itching was either nonexistent or mild. All patients were able to resume work after 3 days of treatment. At the end of the study, 29 (97%) of the 30 patients were cured. The addition of loratadine to the treatment did not improve results significantly. External otitis is generally treated with a combination of a steroid and an antibiotic. Results of this study suggest that external otitis, whether culture-positive or not, can be cured using a group III steroid alone.
A systematic analysis using serial sectioning of the round window membrane (RWM) in the cynomolgus monkey was performed. Light and transmission electron microscopy (LM and TEM) revealed that the RWM rim may be endowed with gland-like structures with glycoprotein material secreted into the window niche. This was detected in one third of the specimens. The secreted material displayed waste material and scavenger cells. There was also a rich network of capillaries, lymph channels, and sinusoidal veins containing leukocytes. Their abluminal surfaces displayed mature plasma cells and monocytes. These findings suggest that in certain primates the middle ear may have developed specific immunoprotective means for disposal of foreign and noxious substances before they reach the inner ear.
Inclusive website design that builds in accessibility helps everyone, not only those with disabilities. And it helps you with SEO. Our panel uses the example of the MacArthur Foundation site rebuild. They’re James Kinser, from the MacArthur Foundation, and Cyndi Rowland with WebAIM.
Oliver DelGado helps you navigate the bilingual balance in print, social and on your site, as we discuss opportunities and challenges. He’s from Levitt Pavilion Los Angeles.
Hey. Oh, hi there. Hello and welcome to Tony Martignetti Nonprofit Radio: big nonprofit ideas for the other 95 percent. I'm your aptly named host. Oh, I'm glad you're with me. I'd bear the pain of macrostomia if I had to say that you missed today's show. Be Accessible: inclusive website design that builds in accessibility helps everyone, not only those with disabilities, and it helps you with SEO. Our panel uses the example of the MacArthur Foundation site rebuild. They're James Kinser from the MacArthur Foundation and Cyndi Rowland with WebAIM. Go Bilingual: Oliver Delgado helps you navigate the bilingual balance in print, social and on your site as we discuss opportunities and challenges. He's from Levitt Pavilion Los Angeles. And on Tony's Take Two: grieving in your planned giving. We're sponsored by Pursuant: full-service fundraising, data driven and technology enabled. By Wegner CPAs, guiding you beyond the numbers. By Tello's, turning credit card processing into your passive revenue stream. And by Text to Give, mobile donations made easy: text NPR to 444-999. Here is Be Accessible, from the 2019 Nonprofit Technology Conference. Welcome to Tony Martignetti Nonprofit Radio coverage of 19NTC. You know what that is: the 2019 Nonprofit Technology Conference. We are in the convention center in Portland, Oregon, and this interview, like all of ours at 19NTC, is brought to you by our partners at ActBlue: free fundraising tools to help nonprofits make an impact. With me are James Kinser, senior associate for digital communications at the John D. and Catherine T. MacArthur Foundation, and Cyndi Rowland. She's director of WebAIM at Utah State University. James, Cyndi, welcome. Welcome to Nonprofit Radio. With pleasure. Thank you. Cyndi, what is WebAIM? So WebAIM started in 1999.
Actually, we started with federal funding to assist folks in higher education with web accessibility. We've since grown; we're now working in literally every sector, helping web developers, web designers and content creators make their content accessible for individuals with disabilities. OK, OK. And James, I hope I can get this right: the MacArthur Foundation seeks a more just, verdant and peaceful world. Thank you. I'm a big NPR listener. What do you do specifically as senior associate for digital communications? What does that mean at the foundation? Right. So my primary role is to manage content on the website and to manage our email communications, but it also involves a lot of project management. And so one of the largest projects that I managed recently is the complete redesign of our website. Look at this wonderful transition. Yeah, the redesign of the website to include accessibility. Exactly. Okay. Okay. And you, uh, you built in some grantee encouragement there, too, with guides. Is that right? We're in the process of developing that with the WebAIM team currently. Yeah, we'll get to that. Okay, good. If I forget, remind me. Okay. Because the fact that you're trickling it down, it doesn't stop with the foundation, but you're encouraging your grantees to do the same, or at least to attempt it. Yeah, exactly. Because our learnings were so great and valuable, we recognized that it would be stingy for us to keep that to ourselves. We wanted to make sure we were sharing that with the greater population. And, of course, it doesn't just stop with MacArthur grantees. This booklet, once it's done, it'll be available to, well, anyone. But certainly we envision that it's going to be of use to lots and lots of nonprofits. Cyndi, why is accessible design important for the wider population, beyond those who need accessible sites? Right?
Well, you know, you are hitting on a really important piece, which is, of course, everyone's going to agree that we should make content accessible to everyone, including those with disabilities, because that's just the right thing to do. Anyone who has a moral core isn't going to purposely exclude a segment of the population. But there are lots of reasons that you would develop accessibly just for typical users. And I won't even go into how, as we all age, we all acquire disabilities — but we all typically, in our lifetime, have some accidental disabilities. Whether it is: I break my arm; I drop my mouse and it busts into a hundred pieces; I have some kind of temporary vision issue that needs some correction and takes a little bit of time. So accessible design ends up being helpful for everyone. It's helpful on mobile devices; it's helpful on virtually every platform. So those folks that are developing accessibly are not only helping those with disabilities, they're helping everyone. Okay — is there also an SEO advantage, if we have to bring it down to such a basic level? Which I admit — but I'm the one asking the questions. There is, all right. We all know that there's a secret sauce in the background as to how the SEO is really pulled together, but those sites that are accessible end up doing well. It makes sense, because the crawlers are able to go through all of the text information and get at things that you might be presenting visually. So let's say you have an image, or a chart, that you're providing to visual users. If you have alternative text, or you have some description, the search engines are able to look at that text and index it properly. So it does end up helping folks find your content. Okay.
At the MacArthur Foundation, James, what raised the consciousness that in your redesign you needed to consider accessibility? Yeah. So as part of our regular development process, we had worked with a developer to do just kind of an ad hoc scan of the website for accessibility. And then, about five years ago, with new leadership at the foundation, there was a real turning point in our approach to grantmaking. The number of grantmaking areas was reduced down to a more focused number, and there was a greater sense of urgency brought to the work. And it was at that time we were also asking ourselves the question: are we really living our tagline? Are we living this commitment to a more just, verdant, and peaceful world? Were we truly being just if we were not giving everybody access to the information on our website? That's a very introspective discussion that someone raised. Yeah — it takes a lot of courage to consider that we may not be living up to our own tagline. Right, right. So essentially, once that question was answered, we had our marching orders from leadership, and it was something that I was already aware of and passionate about. And so it just kind of came together really beautifully. Acquaint us with the start of the process. How does accessibility fit into an overall redesign? Right. So for us, we actually worked with the WebAIM team to do a scan of our site. I think that we gave them maybe twenty to twenty-five pages to review. From that review they created a report, and that report identified all of the areas that were not in compliance with accessibility and essentially ranked them in priority order. So it made it really easy for us to go to the designers and say: hey, we've got all of these issues, this is the scale of need for each one of them, let's incorporate that into the new pages as we design them. Cindy —
As you do that kind of evaluation, where are the standards? How do we know where they are? Is it codified somewhere? It absolutely is. So the World Wide Web Consortium, the W3C, has the Web Accessibility Initiative, WAI — I'm going into alphabet soup. No way — we have jargon jail on this radio show. Okay, okay — I will not transgress. So they created the Web Content Accessibility Guidelines, and I'm going to throw another one at you: it's called WCAG. Right now they're at version 2.0. So you can find the WCAG guidelines — the Web Content Accessibility Guidelines 2.0 — yes, so Google it; it'll provide all the technical standards. There are four principles: perceivable, operable, understandable, and robust. Each of those principles has a set of essentially success criteria, so that you know you're meeting it. So let me just throw out an example: all content must be perceivable. Let's say that I don't have vision. How am I going to perceive that content? Well, I'm probably using a screen reader to read what's behind what it is that we see. So if a developer does not put alternative text on an image, there's no way for me to extract the content of it. Let's say I'm deaf and you've got a video. How am I going to perceive that content? Well, if you have captions, then I'm able to get that content. So — just very simple things like that. Now, some are a little more complex. We used to say that accessibility is simple, but as the web has developed and matured, things are more complicated. But it's still something that can be achieved. Would you agree, James? Definitely. It's time for a break. Pursuant: The Art of First Impressions — how to combine strategy, analytics, and creative to captivate new donors and keep them coming back. That is their book on donor acquisition. They want you to read it. Check it out. It helps you make a smashing first impression with donors.
You will find it on the listener landing page, tony dot m a slash Pursuant — capital P for Pursuant, you know that. Let's do the live listener love. I bumped it up; it's accelerated because my heart's bursting with love for the live listeners. So it's going out now: live love to you. If you are with us, the love goes out. Like I said — I'm being redundant, right? Okay, enough said. Live love to the listeners listening now. And to those listening by podcast: pleasantries to you, the vast majority of our audience. So glad that you're with us — whatever time, whatever device, however Nonprofit Radio fits into your schedule, binge or week after week — pleasantries to you. Let's continue with James Kinzer and Cindy Roland. How do we work our way into it? At the MacArthur Foundation, they asked you to evaluate twenty-five pages or so, and you applied the standards. Is that something that a small or midsize shop could do on their own? Absolutely. The thing is, finding the guidelines — the guidelines are kind of technical. The guidelines are absolutely technical, so if you're not a technical person, you may go read them and your eyeballs may spin around in your head. However, there are lots of places that you can go. I think even just starting with understanding: there are lots of introductions to web accessibility. WebAIM dot org certainly has one of those; there are others as well. And there are tools that are available. Of course, WebAIM has one that's free for folks to use, and other people have them too — I'm not trying to just talk up our own. But if someone were to go to WAVE — and that's W-A-V-E, wave dot webaim dot org — they'd be able to put in a URL, and WAVE will check for them where they are with the standards. Now — yeah, that's all it takes? Well, yeah — but here's the rub. Only about twenty-five or thirty percent of the errors can be programmatically detected by any tool, not just ours.
The rest of them require a human determination of whether or not they meet the guidelines. So you could get started at wave dot webaim dot org. Absolutely. And the other thing that I'm going to mention about WAVE is: as you look at the errors, we describe what the error is, why it's important, how you can fix it, and there are even links to little tutorials that go further in depth about that whole thing. So, to be honest, WAVE is used an awful lot by developers and designers as a tool to learn about what it is that they need to be doing. Okay. James, what are some lessons, some takeaways, that you can share from this redesign? Yeah. I think a really great example of a before-and-after improvement in accessibility was the grant search page on our website. When we originally were looking at it, the results were all just in a long list, and they were really wide columns of text, so each line was really long. And as we were going through the redesign process, we recognized that this wasn't optimized for people with cognitive disabilities. And so what we did is we shortened each entry. I'm sorry — what's the deficit that they have that prevents them from reading or seeing long lines? What is that, specifically? Fair question. For a person with cognitive disabilities, the longer the line of text, the greater the chance that by the time they get to the end of that sentence, they will have forgotten what they read at the beginning. And so, by shortening the lines, they can increase their retention. Okay. And I will say also, for many folks that are struggling with literacy issues, that notion of visually wrapping back around — if a line is really long, they have to visually scan back to the left to the next line.
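The "programmatically detectable" checks Cindy describes — for example, an image with no alternative text — are simple enough to sketch in a few lines. This is only an illustrative sketch using Python's standard-library HTML parser, not WAVE's actual implementation, and the sample markup below is invented for the example:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that lack an alt attribute -- one of the
    machine-detectable accessibility errors that tools like WAVE flag."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []  # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "(no src)"))

# Hypothetical page fragment: one image missing alt, one with it.
page = (
    '<img src="chart.png">'
    '<img src="logo.png" alt="MacArthur Foundation logo">'
)
checker = MissingAltChecker()
checker.feed(page)
print(checker.missing_alt)  # -> ['chart.png']
```

Note this is exactly the kind of check that catches only the low-hanging fruit: a machine can see that `alt` is missing, but deciding whether an existing description is actually meaningful still takes a human.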
And that can become visually a difficult task, even though it's not a vision problem — it's a visual processing problem. Okay, all right. So the benefit in the redesign was that we took that knowledge, sat down with the designers, came up with the new layout, and moved things into more of a grid pattern. What it meant is that the line lengths were shorter, but also, for typically sighted people, it meant that they could actually see more information on the page without scrolling — or wherever they scroll. Exactly. So this is a really great example of how we were addressing accessibility for one audience, but it still had a really significant benefit for everyone else. Yeah, excellent. Okay — to Cindy's point earlier about the benefit for all. I love that. What else have you got? Another takeaway — what else have you learned from this whole thing? How long did this take? Let's say from the time that you asked WebAIM to do the evaluation of twenty-five pages to — I mean, a website is never completed, I realize that, and you know that, being a project manager — but until the point where you sat back and said, okay, we're pretty much where I wanted to be. How long was that? It was a three-year process. Okay. Now, MacArthur's site has got to have an enormous — I mean, tens of thousands of pages. Do you remember the number? I think there are one to two hundred thousand. Okay — so it's a long haul. Yeah, yeah. Okay, but for smaller sites it's not going to be as daunting. Right. Okay, so another one, please. Another — I'm trying to think of one of the other pages that we were working on. So one of the other significant takeaways is: start wherever you can. Like, five years ago I really knew very little about accessibility, and I reached out, I took a workshop, and over time, working with the designers, I learned a lot more about it.
So I think using what resources are available to you to learn is super important. A great — another example, or takeaway, from the site is that we just have to make sure that everything we're seeing on the site as typically sighted people is truly consumable by any other person that has a disability. And so when I'm formatting content now, I'm thinking about that. I'm making sure that the descriptions I put in for the photos — are they going to be clear? Are they in the right format? It's not just a developer issue; it's a content management issue. So I take a lot of responsibility for making sure that, as I'm formatting that content, I am formatting with accessibility in mind. And I'd love to just add another — although I don't know that this is MacArthur's experience, and you can certainly mention it if it is — but I think it's so critical for folks to understand that web accessibility is not a one-and-done; it has to be baked in. You'd mentioned that you're never really done with the website. Well, to the extent that accessibility is baked into that process, by virtue of that you're never done with accessibility either. One of the things that we see over and over and over again: people will come to us, we'll work with them, the folks on that team get it, they make those changes — but then those folks leave. They go someplace else, and the accessibility problems creep back in because, organizationally, they haven't shifted the culture. They haven't created a workflow that is going to sustain accessibility, even at the level of how they're purchasing products — widgets, apps, and all that. Are they even asking the question: if we buy X and embed it in our site, is it accessible? How do we create that cultural change — institutionalize the consciousness of accessibility?
So that when turnover occurs, it's not lost. What do we do? Well, I think first and foremost you've got to start with a commitment from the top. If you don't have those top executives saying, in our shop this is something we value, and this is something we're going to monitor — every year, every two years, every whatever; we're going to have a regular, systematic way that we look at this — then you won't do the other things that will help sustain it. So, for example, in HR: why wouldn't you routinely put into your job descriptions, when you're hiring technical people, that knowledge and skills in web accessibility are, if not required, at least preferred? Because that sends a message out to the people that want these jobs: oh, apparently this is a skill set that I need to acquire. How about purchasing? If your web or your other digital materials rely a lot on the work of others — let's say it's templates, let's say it's some donor-facing product you're using — how is it that you are checking the accessibility of those? The easy way to start is to ask the vendor to make a declaration of how they conform to the current standards of accessibility. There is a thing called — and sorry, here we go again. As long as you define it, okay. The VPAT, the Voluntary Product Accessibility Template. It's in version 2.0 now, so don't accept anyone that sends you a 1.0 document. But it's very common to say in your requests, in your solicitations, that you're going to require that what you procure, purchase, or acquire conforms to WCAG 2.0 double-A, and that the vendor either submit a VPAT or some other kind of declaration. Or sometimes folks are just going right to: give us a third-party report of where you stand with accessibility.
Now, all is not lost if the vendor has some problems. It's not that you don't buy the product you want, but you have a negotiation about their roadmap for accessibility: if we put money down, how long will it be in your development process before we can expect that the stuff we're hosting is going to be accessible? If it's three months versus three years — that may give you pause. Okay, okay. Very good. I'm going to go back and underscore something that Cindy mentioned earlier, and that's leadership. I just feel like it's important to underscore that getting buy-in from leadership makes this process infinitely more simple — like it would with any significant initiative, right? And I think the turning point for the MacArthur Foundation was truly that moment where we were looking at our tagline, like I mentioned before. And for people or organizations that are concerned about how do I get buy-in: I think the easiest way, at a mission-driven organization, is to look at that tagline and say, how does this pair with accessibility, and how can we make that argument? What case would you bring to your CEO once your consciousness is raised? How do you raise the issue up above? Yeah, I think that once I were educated a little bit more on it, I would go to leadership and say: look — and this is kind of the crux of our presentation later today — it's not just addressing accessibility for people with disabilities; it's truly improving things for everyone. For everyone, right. And it's embodied in our mission; we just haven't been conscious of how our mission intersects with accessibility. Right. So it's just making those two points connect. Okay. Okay. Cindy's WebAIM team did an evaluation of one hundred sites.
Yeah, we were just doing a quick little scan of where the nonprofit world is in terms of accessibility. How did you pick one hundred? We went to a website, Top Nonprofits, and they had a lovely little list of the top one hundred nonprofits. And of course, what does "top" mean, right? But we took their list. Probably names that we would all recognize, for one reason or another. By budget or employees? Exactly — annual fundraising, whatever. Okay. And for me, the purpose was just to get a sense of what's happening. So we just landed on the home page of each of those. The rationale is that, as we well know, home pages get the most attention, home pages get the most traffic. Is that still true now? A lot of times I've had guests say not to be overly focused on your home page, because a lot of people come in directly, looking for content that's buried in your site because of a link that they followed. Right — so for those people that have a direct link, you're absolutely right. But if somebody doesn't know the organization, doesn't know the content that they want, how are they going to get in? They're going to get in through your front door. They're exploring — they've heard your name and they want to learn, right? Exactly, exactly. So let's say I'm a family and I'm feeling particularly philanthropic: I'm going to be rooting around, looking at where I might want to engage in philanthropy. Okay, so we're going to learn about it. So we went in and we ran WAVE on one hundred pages. And when you're back at Utah State — Utah State University, yeah — do you all run WAVE on yourselves? Are there six of you? I wish — we don't run the WAVE regularly. Wave Runner. Okay, you're not doing that? I will take it on. Okay. Okay. Well, we'll do that.
Bring that back east from here — well, back east; I'm based in New York, so it's strange to say "east" from Utah. Okay. We came up with a digression — we came up with an acronym. But you're right, we should be thinking about an internal WAVE day. Maybe every Friday. That's good. All right. Anyway — so we were running it through WAVE, and we were only looking at those items that were, again, programmatically detectable. What can the machine catch? Only thirty percent, you said? Twenty-five or thirty percent — right in that range. Programmatically detectable, because we knew we didn't have the time to do an in-depth review, blah blah blah. So again, we're not looking at full accessibility; what we're looking at are errors, problems. This is the low-hanging fruit. This is the stuff that, if you're considering accessibility, you're probably going to be nailing, right? Okay — because the things that are harder are the things that require human inspection and deduction. Sadly, of the hundred, there were only three pages that didn't have programmatically detectable errors. Good grief. So ninety-seven percent of that sample — and again, we don't know to what extent that generalizes to the rest of their pages, blah blah blah — but these are big organizations, these are, however measured. And my heart just sank, because I thought: if there is ever a sector of our society that should be aware of this and working toward this, it should be the nonprofit world. These are the folks that are the standard-bearers for lots of ethical causes and equity, and the rights of people with disabilities is certainly one of those. So I'm very sad to report that the data are that bad. And we are going to follow up — we may end up doing a much larger look at nonprofits, not just home pages, but scanning.
You know — main domains, and looking at thousands of pages. You'd probably need some funding for that. Well, it's always very helpful. If there's anyone out there that would like to sponsor a deep look into this, give me a holler. Okay — our audience is nonprofits, so I'm not sure, but okay. A noble cause, and yeah, disappointing. But I will say, for the stuff that we did, I found a little pot of money internally. We just do stuff like that anyway, because it's part of our mission to make sure that we're getting information out about the state of accessibility. Does the AIM in WebAIM stand for something? Well, it really does. The initial project back in 1999 was "keeping web accessibility in mind." So we have Web AIM — Accessibility In Mind. Okay. And that's the one thing we want: we want folks to keep web accessibility in mind as they're considering their content, as they're developing design frameworks, as they're thinking about their coding, on all levels. So this is all, initially, consciousness-raising — we need to be aware — and then we can go to wave dot webaim dot org and begin there. Right. James, we've got like thirty seconds, so I'm going to give you the wrap-up. What would you like to leave people with? Yeah — a few things. I think the first one is that accessibility applies to everybody. Take the opportunity to learn about it, to do some research, to take a workshop. Start at any level, and then you can begin addressing it in many different ways. There are small steps, medium steps, deep development steps. So really, it's completely — pardon the pun — accessible to everybody. He is James Kinzer, senior associate for digital communications at the John D. and Catherine T. MacArthur Foundation.
She is Cindy Roland, director of WebAIM at Utah State University. James, Cindy, thanks so much. Thank you. You're listening to Tony Martignetti Nonprofit Radio coverage of 19NTC, the 2019 Nonprofit Technology Conference. Like all our interviews here, this one is brought to you by our partners at ActBlue — free fundraising tools to help nonprofits make an impact. Thanks so much for being with us. We need to take a break. Wegner CPAs: they've got a free webinar coming up on April sixteenth — tips and tricks for your 990. The best part of this is using your 990 as a marketing tool, to do some PR for you in various sections, including the narrative, but they're going to talk about other uses for your 990 regarding PR and promotion — because it's so widely read, so widely available. If you can't watch live, you can watch the archive of their webinar: wegnercpas dot com, click Seminars, then go to April. Now, time for Tony's Take Two: grieving is part of your planned giving program. As I was grieving — well, still am — my father-in-law's death came very suddenly in late March, and it occurred to me that grieving is part of your planned giving program. It happens when relatives contact you because a donor to your organization has died, and you can't expect those relatives to be at their best. They're going to be a little on edge. They're not going to be contacting you the day after the death, or even probably within a week, but when they do, they're still grieving. And it's likely to be a spouse or a child — that's the most common — so it's someone close. And when it happens, you want to handle them appropriately and keep things simple for them: have a simple process; don't make them jump through hoops.
I've got a bunch of ideas on managing and working with grieving relatives, when you do hear from them, in my video, and that is at tonymartignetti dot com. Now let's go to Oliver Delgado, also from the 2019 Nonprofit Technology Conference, and this is Go Bilingual. Welcome to Tony Martignetti Nonprofit Radio coverage of 19NTC — that's the 2019 Nonprofit Technology Conference. We are in the convention center, Portland, Oregon, and this interview, like all our 19NTC interviews, is sponsored by our partners at ActBlue — free fundraising tools to help nonprofits make an impact. With me now is Oliver Delgado. He is director of marketing and communications at Levitt Pavilion Los Angeles. Oliver, welcome. Thank you so much for having me. I appreciate it. My pleasure. It's great to be here in Portland; it's great to be here with you. I agree with you — well, not that it's great to be here with me, but I agree that it is fun to be in Portland. Portland is a wonderful city. It's my first time here, so I'm taking it all in — so much to see, and, in the short amount of time, a lot to eat and drink through. It's quite the food and drink city. I started last night. I got very lucky — there's an amazing restaurant in my hotel, and the food was fantastic. What was the food? It's, I guess, contemporary Mexican-American fusion. Okay, yeah. Super cool, down in the Pearl District. When I first got here, the first night — I've always heard that Israeli food is very good — I went to a place called Shalom Y'all. Like, "y'all"? I love it. Yeah — that's not a Hebrew or Israeli word; it's "Shalom, y'all." There are two — one in the Northeast, the other in the Southeast — Southwest.
I think it's funny you mention that — I actually just saw this really cool report on Israeli food, and considering that it's a fairly recent kind of evolution in food, considering the history behind the country, it's really a fusion of so many different regions, and they're taking so many different steps to create an identity for it. Which means the food is just amazing. So I'd love to maybe check it out if you recommend it. Oh, I do. I had a roasted eggplant, which was outstanding. Their hummus and their tabbouleh, both very — oh, and a light, a light and tasty falafel. Very nice. Done like a dark brown — not a golden color, a darker brown. That was incredible. You had me at "recommend." Shalom, y'all. All right — but we're here to talk about se habla español. That's right: expand your reach and impact by going bilingual. Why should we? What if we're not a nonprofit serving a Spanish-speaking community — why should we hablar español? Sure. This is really specific to Los Angeles, and I think for us, going bilingual has a lot to do with sustainability and being able to reach more people in an organic way. And it's not limited — I think Spanish, for us, is the case study; it's the example. But whether you're living in a major metropolitan city or not, we see that there are migration patterns across the country from all parts of the world. So it may not be Spanish: it could be Russian, it could be Chinese, it could be Hindi, it could be Farsi. There's probably a secondary language being spoken in certain communities and cities across the country. So when you look at those demographics and those shifts and statistics: how do you tap into those alternative-language communities with your organization? If the impact, or the goal, is to be sustainable, to be far-reaching, to have longevity — it's about reinvention. It's about adapting.
It's about finding different ways to use different tools to reach people. So: be aware of the demographics in the area that you are serving — absolutely — and be responsive? Absolutely. And just speaking factually, the US eventually will be a minority-majority country — by 2025, I believe. So we're at 2019, and that gives you a really good idea, especially for a lot of organizations who are toying with the idea of employing a secondary language, or even a third, of how to start doing it and connecting with communities. And that's what I'll be talking about today: the multi-layered approach, from really identifying a language, to identifying your audiences, your media lists, your brochures or collateral, the programming, the PR, the communications, the community relations, your digital presence. Everything has to be intertwined so that your message — or rather, your intent — has legs. How do we make this case to leadership? Before we can go ahead, we need buy-in so we can start spending money. Sure. It's looking at it, especially from a fundraising point of view: are you maximizing what you're able to raise when you have an event, when you have a gala? Whether it's an electronic appeal or a letter appeal for end-of-year donations, or a seasonal campaign — whichever mechanisms you use for fundraising, are you maximizing their potential? Right. And I think that's the lens we look at it through at Levitt LA. It's not only the fundraising but the friend-raising, because our intent is creating stronger, more connected communities through the arts — and it's kind of a dual approach. We also need people to support the arts fiscally for us to be able to achieve our mission. So tell us a little more about what Levitt Pavilion does. Sure, absolutely.
Levitt Pavilion Los Angeles is part of a national Levitt network, and the Levitt Foundation does incredible work in twenty-six cities across the country, with the mission of creating stronger, more connected communities through the arts — specifically free live music. So throughout the country, you'll see cities and towns come alive every single summer with, respectively, between twenty-five and fifty concerts. Every Levitt pavilion that's designated as a permanent pavilion has that task. And these are actual structures? These are structures, yeah, absolutely. And so we're all tasked with creating and offering fifty free concerts every single summer, and we do so by way of a public-private partnership, where we partner with the city departments in the respective city that we're in. The foundation helps provide some seed money, but the most important piece is getting the community buy-in — because at the end of the day, if you don't have the community's support for something that is intended to serve them, what's the point? So what's incredible about Levitt Pavilion Los Angeles is that it came at such an incredible time in MacArthur Park, which is a historic park in Los Angeles. Jimmy Webb, Donna Summer — "MacArthur Park," right? So they were singing about this epic park that was once the premier vacation oasis, the Champs-Élysées of Los Angeles. And over the course of one hundred years it transitioned from being an incredibly wealthy neighborhood to one that became a creative enclave — really boho chic; think Lower East Side Manhattan in the seventies and eighties — and then continued in transition as resources stopped coming to the neighborhood, especially city services. And then the migration patterns: especially as the civil wars in Central America left war-torn lands and people seeking refuge, we saw an influx of Central American refugees into this specific neighborhood.
But it happened so quickly that the density literally just expanded within maybe a few years, to the point where, because the neighborhood was such a wealthy enclave, the single-family homes had to be slapped together and carved into multiple-family units. And you can imagine what that looks like, right? And then we saw this very specific example: MacArthur Park went from the Champs-Élysées to more of an Ellis Island. And it’s incredible to know where Levitt Pavilion is now. We are going into a thirteenth season; we launched in two thousand seven in MacArthur Park. We have reached about five hundred thousand people coming through this place. So we’re moving the needle, and it’s about exploring different ways to do so. Let’s talk about some of the challenges of doing bilingualism, or trilingualism, you said. You know, it could be, depending on your demographics. What are some of the obstacles we’re gonna have to overcome? Sure. I think some of the obstacles, just from a logistical point of view, it’s the long-term investment. It’s making sure that, should an organization, especially a non-profit, commit to a plan of incorporating a second language into their marketing and as part of their brand identity, it’s identifying: do they want to build an in-house team to manage this every day, and what does that look like? Or do you outsource it? And is it more of a campaign, kind of a seasonal initiative? But the issue you run into there is continuity. When you have different hands kind of touching, or influencing, or molding the ingredients, you may get different results. What are some of these ingredients we’re talking about? You mentioned different communications channels. So all the different communication channels, because everything has to be interlinked, from your e-blast to your website to your social to, literally, your printed materials, your radio, TV, billboard.
Everything has to be a cohesive marketing unit, even down to the programming and the community relations, right, the way we conduct those meetings and identify and select and create a lineup. It’s creating a path for all of that to be connected through a single lens. We’ve got to take a break. Tello’s: it’s the long stream of passive revenue, because you get fifty percent of the fee when companies you refer process their credit and debit card transactions through Tello’s. Check out the video, then refer companies to the video. If they found the video interesting, then ask if they would consider making the switch, and then you contact Tello’s and put the two of them together. It’s all in the video at the listener landing page at tony dot m a slash tony tello’s, for that long stream of passive revenue. Now back to Oliver Delgado. There’s gotta be more to this, I’m sure, than just the language. You have to understand the culture, the needs, the frustrations of the people in the community that you’re now trying to reach out to. It’s more than just speaking their language, correct? You need to understand what they’re about. Absolutely, and that really was the foundation of what Levitt Pavilion sought to do, and that’s create organic ties in the community. And the first were the faith communities, really everywhere, anywhere the doors opened. So for us it was, because of MacArthur Park and the way that it’s structured, it’s surrounded by schools, churches, neighborhood organizations and businesses. It’s a very dense neighborhood. We’re talking about eighty to a hundred thousand people in about six square miles, right? So we’re talking about almost the density of Manhattan, right? So I think that made it easy for us in being able to reach more people quickly. But also, it means that we have to be more strategic in how we developed that plan, right?
So the schools, the faith-based organizations, you touch them all. You have to be open, open to learning. And really, you know, the cultural nuances, just because, you know, Latinx culture is so vibrantly diverse in itself. You know, from Mexico to Central America to even South America, the language changes so much that you have to be adaptable and be mindful that, you know, different words mean different things in different countries. So employing, you know, the formality, the informal, the presentation. But ultimately it’s the trust: going in with an open heart and open arms so that people understand that it’s a dialogue, not so much a speech. Critical. Otherwise, you’re not gonna build trust. Correct. You’ll have a meeting and zero, and nothing. Correct. So for Levitt Pavilion Los Angeles, one of our fundamental tools really is our Community Advisory Council. And this is a coalition comprised of principals, residents, business owners, different organization and community leaders, that acts, one, as our sounding board for potential sponsors, for just bouncing ideas on artists to come to the stage, but also to get their eyes and ears on the ground, to learn from them directly what’s happening in the neighborhood that we should be mindful of, should be reflective of. So you can invite community leaders in the community you’re trying to approach to an advisory committee and say, you know, this is not a pro forma council. We really want your input. We want your advice. That’s right. And yes, we do want your connections also. Correct. We need them to reach the community that you’re now serving. Correct. And then for us, it’s about creating the space so everyone has a seat at the table. Yeah. Okay. You’re planning on talking about developing your marketing lists? That’s right, best practices for developing marketing lists. Sure, absolutely.
So for Levitt Pavilion Los Angeles, again, we develop our English-language base list, which means that everyone on that list has opted in. When they signed up for a concert through our RSVP to get reminders of certain shows, they had the option of being able to receive information in English or in Spanish. So once we get that information, they’re actually collated in that way. So the English sign-ups go to one list and the Spanish sign-ups go to another, and then we target those specific lists with specific-language newsletters. And that way nothing is cut out. Nothing is impacted in a way where you’re lessening the content or, you know, undercutting it. You’re presenting, if it’s for the weekend of concerts, a glimpse of what’s to come, but also who are the community partners, who are the sponsors, what are the different pieces making that specific show so unique. So for us it’s being able to deliver that message in a timely way, but also easy to follow, easy to read. As the kids say, making it very chill, right? Making it approachable. Just easy to digest. Okay, anything else? Best practices? Best practices, I mean, I have a whole list. It just depends on how much time you have. Absolutely. So if we were to look at Levitt Pavilion Los Angeles, the first thing we had to do is identify our audiences. And so we knew that we had the task of incorporating and reaching the local community of Westlake. But then, looking at what makes our work possible, we have our sponsors, our community partners, the elected officials, kind of the periphery supporting cast that plays a crucial role. So identifying those audiences helps us figure out how we present our information. From there, it was deciding: are we going to present our secondary language in an informal way, the informal Spanish, right? So, you know, quick Spanish lesson.
If you’re not familiar, Spanish has a formal and an informal form: you have the tú form and then you have the usted form, which can be as different as a swimsuit versus a power suit. And that’s how different the communication can come across. So for us, it was identifying that informal tone and then identifying the in-house team. So that’s myself and a couple of associates, where we create, every single day, opportunities for our bilingual approach to have legs. So through our social, writing, different mechanisms to make sure that that’s observed. Um, you can’t outsource this? As you mentioned earlier, you can outsource it. What kind of consultant or freelancer would one be looking for? Absolutely. I think, as aforementioned, when you figure out your audiences and you figure out what tone you want to take, it’s finding people who specialize in that specific thing, in that culture, because ultimately this person is, without you knowing or wanting, your surrogate in the communities, in just written and spoken word. So it’s: how do you essentially create this opportunity for someone to learn your voice, or create your voice, in a way that’s organic to you and is possible for you to continue past their involvement with the organization on a consultant basis. So again, the digital presence for us is massive, because that’s the way that we reach more people. So the website: when you go to our website, you have the ability of hovering and clicking on the English side to get all the information in English, and then hovering over to the other side and clicking the Spanish button, and everything goes into Spanish. But the most crucial thing, and this goes back to the in-house human capital, is that when you click over to the Spanish, these are all handwritten translations. We’re not running it through a filter.
It’s not plugged into Google Translate. No Google Translate; these are handwritten translations, because we want to present the same level of enthusiastic, community-oriented and accessible information the way we do in English, the same way in Spanish. Okay. Yeah. And so that weaves into social media, that weaves into the electronic newsletters. It goes into the brochures, your collateral, your swag, your videos, your programming, which is fundamental, right? Yeah, let’s talk about some events. So live events, how do you make those bilingual? Absolutely. So what’s really cool about Levitt Pavilion Los Angeles, again, we bring that bilingual aspect to everything we do. So when you come onto the Levitt Lawn, you’ll see a massive LED wall, a Jumbotron, if you will, and it displays real-time information, from set list to the vendors selling food or merchandise, to community partners, any special announcements or recognitions. All is presented visually in English and Spanish, right? So there’s that step. And then there’s another layer, which is really cool: we have emcees. Every show last summer was emceed by a bilingual professional, from local influencers to podcasters, media, photographers. So they’re speaking in both. So they speak both languages. They’ll say two or three sentences in English, and then they’ll say it in Spanish. That’s correct. And this emcee not only helps narrate the experience for Levitt, especially for newcomers, new visitors, but it helps really set the tone of the excitement, so that what is written can now have an auditory base and support. And so from commercials, as we call them, to prompts, even to no smoking and picking up trash, or even prompts to donate on Venmo or through our buckets, it’s providing that accessibility both in English and Spanish.
Our signage, everything is in English and Spanish, in case the LED wall should go out, should the sound go out and emcees can’t speak; we actually have physical signs that display the same information. And then when it goes to our actual advertising, which is placed beyond, you know, our neighborhood in Westlake, and goes across the city, very targeted for certain neighborhoods so that we funnel in and really reach demographics, you’ll see that the billboards are staggered. You’ll see English and Spanish side by side, and you’ll see it row for row. And the whole point there is, again, presenting information at once, especially considering that you have a short amount of time. Someone’s driving down the street, you have seconds; and, you know, we may have a little more time at a red light. And our goal there is, hopefully by the end of that, you’ve probably expanded your Spanish vocabulary, but at minimum you know that there are fifty free concerts coming to MacArthur Park from June first to September first, twenty nineteen. Time for our last break. Text to Give: you diversify your revenue by adding mobile giving. It’s not only for disasters, it’s not only for small-dollar donations, it’s not only through the phone bill. It does not need to be through the phone bill. There are different ways of doing it that can make the donations larger. You can find out all about what Text to Give does, and eliminate some of the misconceptions you may have, all by texting NPR to four four four, nine nine nine. We’ve got several more minutes for Go Bilingual. We still have some time left together. What else do you want to talk about? Maybe more best practices around the list building, the membership... I’m sorry, the marketing lists? Absolutely. So I think a very big piece, fundamentally, is the fund-raising.
I think that’s a lens where we’re also in a new venture for us in the Spanish-speaking community, because we haven’t seen a culture of philanthropy in this specific neighborhood, because again, we mentioned its transition, and it’s currently a low-income community. But that doesn’t mean that you can’t create it. It just means that you make it more accessible, easier to tap into or participate. So that means lower-dollar fundraisers, but nonetheless on a continuous basis, so that, one, people know that we need their support, but, two, they can participate. How do you get contact info for people who attend concerts? Sure thing. So we have kind of a multi-pronged approach. The digital phase is on our website. There we have a third-party plug-in, or rather service, that helps present every single concert, so that you can say, all right, great, these are the fifty shows, I’m gonna click on one, and then it asks you to RSVP. You put in your email, you select which language you’d like to receive information in, and then we’re able to, you know, capture that information. And then we have our community relations team that has a presence across so many events across the city, and they are actually physically collecting emails. How do they... what do they say to somebody? So they’re walking among the audience before the show starts? Yeah, so this is externally, away from our site. So they’re out at a community event. Oh, I’m sorry, it’s a community event, not at Levitt. So we have to go out. So you have tables? That’s correct. So let’s say a festival. Our swag is very appealing, so that makes it easy to bring people over. So that’s one way of getting emails. Two, on the lawn, again, we have our info booth, right? A very branded info booth, English and Spanish information, equipped with volunteers and staff.
They’re also bilingual, so they’re trained in cultural sensitivity, language sensitivity. And again, that’s another way that we collect information. Because if you want to get some water, if you want to get a brochure, if you want to get a tote bag, you want to get a hat, you want to get a sticker, whatever it is, once you’re there, you’re probably more likely to want to get more information, because you’re not only receiving information, but you’re experiencing what our brand and what our mission is. Okay, I like your first point about being out in the community, not just waiting for people to come to you. You need to go out; street fairs are a great idea. That’s crucial. And the thing with us is that we want to reach everyone. So we’re at all parts of the city, in different neighborhoods and ethnic enclaves, just because we want to make sure that, one, Levitt LA continues to put inclusivity at the forefront of everything that we do; two, that we make that earnest attempt of creating these organic, trusting relationships by those connections; and three, letting people know there’s an incredible way to connect with fellow Angelenos in a free way and get quality entertainment, because we offer both local and international talent. And so, in a situation where L.A. is an expensive city and, you know, let’s say a family of four to six can’t afford four hundred dollars’ worth of concert tickets, right? You’re the alternative, correct. And so tickets, food, beverages, easily you can spend five hundred dollars in a night, versus coming over to the Levitt LA stage, bringing your food. You know, it’s kind of a cookout. You can bring your blanket, bring your food, bring the whole family. We actually have one really cool Levitt-eer, as we call them, a very staunch supporter of ours, and her name is Nora.
And every single summer she celebrates her birthday at Levitt LA, and she brings out forty people for this, right? And it’s one of those things where it’s a great equalizer. Everyone can come. Whether it’s water, whether it’s juice, whether it’s a sandwich, cake, they’re all coming out, creating this potluck environment and enjoying the music and adding to the vibe at Levitt. Really a magical experience. We still actually have about two minutes or so together. So what more are you going to share with your audience that we haven’t talked about, that’s applicable for them? Not about what we’ve done, we’ve done a lot of that. Absolutely. I think it’s just exploring the different ways that, you know, folks can get involved. I think media is going to be a very big point that I’m going to drive in, because it’s important that people realize that, should they venture out into a second language, it’s creating, again, those organic relationships with media. So I have conversations with Spanish media where my conversations are completely in Spanish, right? That’s important, because again, you’re meeting them on their turf. I have conversations with English media partners where everything is in English, and then I have newer engagements, or rather interactions, where it’s bilingual, right? It’s kind of reflective, right? Especially if I’m working with Gen Z, millennial media outlets or, you know, social media entities, it’s creating that really cool, conversational, direct dialogue with them and meeting them on their turf. So we meet them where they are. Absolutely. So again, it helps frame the experience. But two, you know, people will be more likely to get involved with you if they see that you’re making a really earnest attempt to, you know, connect to their audiences. That’s the same as traveling to another country, correct.
I think when foreigners see that you’re making an attempt to use the language, your pronunciation isn’t so good, maybe the vocabulary is not as robust, but, you know, you get by between pointing and attempting, and, you know, you’re outreaching to them in their homeland. Correct. They’re going to try to meet you halfway. That’s right, right? With a little, you know, some variation of their English. That’s right. Same, same. Exactly. And then ultimately, where I’ll end this: our programming. Our programming, obviously, is our main product. And so we present fifty percent of our baseline concerts, so twenty-five shows, as Latin genres, right? And we’re talking about not only the cumbias and the bandas and salsas, but the acid jazz, the hip-hop, the R&B, the ska, the reggae, right? Just exploring the different genres of Spanish-language music and presenting it in a way... Meeting them where they are, correct, understanding the culture, but also helping people expand their musical palates, right? Right, Go Bilingual. So the merengue listener is getting exposed to ska. Correct, correct. We gotta leave it there. You’re sure? All right. He is Oliver Delgado, that’s right, director of marketing and communications at the Levitt Pavilion Los Angeles. This is Nonprofit Radio coverage of 19NTC, the Nonprofit Technology Conference. This interview, all of them at 19NTC, brought to you by our partners at ActBlue, free fund-raising tools to help non-profits make an impact. Thanks so much for being with us. Next week: grit, succeeding as a woman in tech, and how to create and implement great ideas, both from 19NTC. If you missed any part of today’s show, I beseech you, find it on tony.
Martignetti dot com. We’re sponsored by Pursuant, online tools for small and midsize non-profits, data-driven and technology-enabled, tony dot m a slash pursuant; by Wegner CPAs, guiding you beyond the numbers, wegnercpas dot com; by Tello’s, credit card and payment processing, your passive revenue stream, tony dot m a slash tony tello’s; and by Text to Give, mobile donations made easy, text NPR to four four four, nine nine nine. Our creative producer is Claire Meyerhoff. Sam Liebowitz is the line producer. The show’s social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein of Brooklyn, New York. Thanks for that information, Scotty. Be with me next week for Nonprofit Radio, big non-profit ideas for the other ninety-five percent. Go out and be great. You’re listening to the Talking Alternative Network. Are you stuck in a rut? Negative feelings and conversations got you down? Hi, I’m Norin Sumpter. Tune in every Tuesday at nine to ten p.m. Eastern time and listen for new ideas on my show, Beyond Potential. Live life your way on talk radio dot N Y C. Hey, all you crazy listeners looking to boost your business. Why not advertise on Talking Alternative with very reasonable rates? Interested? Simply email info at talking alternative dot com. The best designs for your life start at home. I’m David Here Gartner, interior designer and host of At Home. Listen live Tuesday nights at eight p.m. Eastern time, as we talk to the very best professionals about interior design and the design that’s all around us, right here on talk radio dot N Y C. You’re listening to Talking Alternative Network at www dot talking alternative dot com, now broadcasting twenty-four hours a day. Are you a conscious co-creator? Are you on a quest to raise your vibration and your consciousness? Sam Liebowitz, your conscious consultant.
And on my show, The Conscious Consultant Hour: Awakening Humanity, we will touch upon all these topics and more. Listen live at our new time on Thursdays at twelve noon Eastern time. That’s The Conscious Consultant Hour: Awakening Humanity, Thursdays at twelve noon, on talk radio dot N Y C. You’re listening to the Talking Alternative Network.
They’ve got pointy ears and they can’t be trusted even slightly. That’s about the normal Corporation citizen’s understanding of the aliens known as Asterians. True to form, the DGB do not disappoint, with an underhand team full of dubious ploys that are guaranteed to get the crowd on their feet, baying for blood and shouting at the ref.
What a bunch of jerks, I like it. Also fragile is a really interesting wrinkle. I find it interesting that the Guards don’t have it though. They already get an additional die to save with, but we’ll see with play testing. Thanks again for putting these up.
The guards are selected on the basis of their toughness, in effect, hence their lack of fragility.
Mean little bastards. They sound cool! Pretty interested how they’ll play.
I know this is obvious but maybe Take a Dive needs the standard phrase: If the player taking the dive has the ball it scatters and the rush ends.
A good point. I’ll add it in to the v2.
Harden up you Asterian Striker! I can see my Asterian Franchise challenging Orx etc early in the piece and getting the strikers killed. The sooner they die and FREEZER BURN the fragility away, the sooner the team can move forward with the BEST strikers in the game. Recycle any striker that gets any other burn and keep trying until you get 1 or 2 of these HARDENED strikers. Should take 12 kills but could take only 2:).
Combine these super strikers (get them to Skill 3) with Jacks diving to open up 4-point shots, well, not bad.
Looks possible by RAW, but probably not intended. There should probably be some sort of restriction on which abilities can be removed by Freezer Burn.
The simplest solution could be to make starting abilities unselectable.
I have been considering making some abilities unremovable. On the other hand I’ve also been reading about brain damage causing radical personality changes and in many cases you could argue that this mechanism would explain the change. For the moment I’ll leave it as it is, so any ability can be lost. Remember that you’ve got to go through quite a few filtering processes to get to the point where that specific ability is lost.
I 100% agree. Weird rule but a lot of fun.
Agreed, or a robot who forgets how to transform.
Not really feeling these ones, about the only team I’ve just not liked (although I wasn’t massively keen on Judwan).
There’s something a little gimmicky about the team relying on one shot foul calls, because they certainly won’t be slamming ever. Personally I think Fragile is enough of a handicap, they don’t need to be Strength 5+ as well. This is a compound handicap in a similar way to the FF low move low speed.
Their strategy is going to consist of Dirty Tricks/Take a Dive versus whichever opposing player is about to score/is blocking the bonus hex, then grabbing the ball and scoring themselves.
However if they run out of those one shot abilities without doing much to the opposing team, then they aren’t going to have much success as the only other option they have is slamming which they will fail at.
Once the guard has used dirty tricks there isn’t much point in keeping him on the pitch – your opponent isn’t going to bother slamming someone that poses no real threat when all the other players are Fragile.
If Fragile was Fae where they were more fragile but also more successful (ie +1 Dodge/slam success -1 armour success) then they might have more luck at doing stuff other than one shot special rules (assuming they stayed Str 5+).
Making them rely heavily on their underhand tricks is no different than relying on a 3+ stat or anything else mechanical. It is one of their defining features, so they should be based around how that works. Changing them so they didn’t simply moves them towards being the same as other teams, which is less interesting. Which brings me back to the first point: the teams all need to be different and to play differently to be worth including.
Not sure what you mean by “If Fragile was Fae” in your last para. “Fae”?
The difference here is that the thing they rely on can only be done once and if it doesn’t work, they don’t really have a back up. A 3+ stat can be tested against continuously to try and make up for previous failures.
Once you’ve taken a dive and nothing’s happened, you’ve just got a fragile Str 5+ Jack whose only recourse to move people is to slam them. I’m looking at them from the perspective of what the team is once it no longer has its special rules, because they’ll get used up during the game. If they haven’t won by the time they’ve used up those one-shots, what are they left with that will still allow them to win?
Obviously the level of difference between teams is subjective, I just find something weird about how the asterian team is put together.
If you look at the two human teams, they both play very similarly despite the apparent differences. They have the same number of strikers and corp play a striker heavy game. Once they’ve used up RI, they’re still a 3 striker team with a capable guard and jacks that can slam in a pinch, not really different to the standard corp team.
The asterians have a similar concept to Corp2 with single use abilities. However they seem to rely on their one shots far more than the Corp2 team does. You can play a Corp2 team without ever using RI, the line up difference is a swap of 1 jack and 1 guard (1 card for 3 dice). I can’t see many ways to play the asterians without ever using dirty tricks or take a dive.
I think it’s probably the proportionally larger impact that sending off players through those abilities has. It can work spectacularly well and you bench all the opposing team’s best players allowing you to stroll in for the landslide, or it can do very little, leaving a team that relies on benching players without much to do. Unlike a slamming team that benches by slamming, you can’t keep try to send them off with continual slams. You get one shot per player at it and if it doesn’t work, you’re left with a fragile player that has no way of moving the opposition.
I feel that a lot of the game will be decided on whether the asterian’s cheating rolls actually work or not. As they have a pretty big range (dirty tricks the entire pitch and in your opponent’s rush, dive 12+ hexes via a sprint) and are a one sided roll, you are just waiting on whether players disappear or not.
I can’t speak for Warpath as I’ve not played Asterians in it. However, in Deadzone they field robots almost exclusively, so you don’t get much of a chance to see how the Asterians work. And I haven’t published the stats for them yet anyway, so what exactly are you comparing this to?
Also, you are more right than you might think regarding how the DGB assembled these teams. DreadBall is a spectator sport that works a bit like WWF (or whatever it’s called these days) wrestling. It’s big, brash and primarily based on cliché and tabloid media perception rather than reality. For the average Corporation citizen the Asterians are physically weak, untrustworthy aliens who use proxies to fight their battles for them.
I’m keen on the zees and they don’t disappoint. Their cheating is required because they’re so individually poor, but they can also do it for the whole game. It becomes a bigger part of their game. With the asterians because it’s so limited, their team isn’t as defined by it and has to deal with the game using the stats they have after those abilities are used or don’t work.
Relying on those 4 one shot abilities just seems like a very narrow window for gameplay. Put it this way, if they didn’t have those abilities at all, would the team be any fun to play? They’d have easily killable strikers and jacks and a guard that has little going for it. So they’re designed around relying on sending off opposing players using their one shot abilities. I can see them being frustrating for one or both players as their effectiveness comes down to how many cheats they succeed at.
Their 4 special tricks can be used over their 7 Rushes so are not all that rare. Their normal abilities need to be woven into the whole game, before, during and after they’ve used their TaD and DT. Zees, on the other hand, work much the same throughout a game. It’s a whole different tactical challenge. Looking at them solely as a means of delivering their DT/TaD is too narrow an analysis.
You have to take them as a whole. If they were as good as Zees at cheating then (a) their stats would be way too good and (b) we’d have two teams the same which would be dull. They are supposed to be different.
I think they work as intended. If the one-shots were free use they would be overpowered. I played a playtest game versus Z’zor and managed to send off 4 players pretty effectively, opening up enough of a gap that even when they came back on the pitch I had enough of a lead to keep the pressure on. I think they will be a strike-hard-and-fast team that tries to hold onto the lead.
Hmm, while the “Fragile” ability is thematic, I think that removing a success is very crippling, especially in league play.
If you removed one die from armor checks instead it would already be a very bad flaw to have but it would be somewhat less extreme. With this variant, it could even be applied to Guards (they would still get 3 dice for armor saves), in case you need to make them cheaper or to justify giving them another ability.
I still worry that this might hurt them too much over the long term in leagues.
Changing the armour checks would be less dramatic, as you say. However, they are protected by a 3+ Dodge and a Defensive Coach as well, which does reduce the body count somewhat.
Did you have one eyebrow raised when you said that? If not, go and practice in front of a mirror. You know you want to.
No mirror practice for me, my poker face is terrible and my emotion free alien face even worse: I get the giggles.
These Asterians don’t seem to really be the emotions-in-check, good-of-the-many variety anyway; more vindictive and self-serving. Somebody said “jerks” earlier, which seems to succinctly sum them up. Spock is/was a lot of things, but not really a jerk.
With the Asterians you have to remember that the average Corporation citizen is unable to distinguish between the different types within Asterian society. All tend to get lumped in with the worst of their kind.
Interesting. I like the conceptual take – sneaky, entitled – but I’m not sure how long-lasting their appeal would be. Screwing up other players’ plans is always fun, but once he has done his one-shot thing the Guard is really not going to Slam much, and Slamback only occasionally.
Guards can do more than hit people.
Blocking a run path, or stepping on a threatened ball to scatter it.
A ricochet player on the launch line, or a more reliable threat himself.
For these roles his 5+ str has no effect on how he performs.
True. Plus he can get a Strength upgrade like anyone else. And even without a Strength upgrade he might make a very interesting Keeper. I wouldn’t rule them out yet. What I think they are is a very different take on what Guards can be and do, so it will take a while to see all the possibilities.
Of course Guards can do a lot more than simply slam, but in this case the Guard will behave a little more like the Jacks on a Corporation team (for example), generally getting in the way rather than getting stuck in. I’m simply getting my head around a 5+ Strength Guard in the team context. Thinking aloud to an extent.
I also see them trying to gang up on people, hit them in the back and even stomp (as you could justifiably risk him being sent off once he’s done his Dirty Tricks). Strength 4+ would be a big boon to these guys though, so that’s what I’d be hoping for as an upgrade.
Sure, but most of those can be performed by a Jack or Striker. Stepping on the ball is a unique Guard ability, but it isn’t exactly a common or easily controllable one. Many a time I’ve not had Guards anywhere near the ball when I needed that, and that’s with teams that have multiple Guards. With a single Guard on the pitch it’s unlikely that will be a usable enough ability to bother with.
Once Dirty Tricks has been pulled, he’s probably not going to see much action. Your opponent will choose to slam different players as they’re more easily killable, only slamming him if he hasn’t already used Dirty Tricks, to try and remove the threat from the field.
Yup. Don’t forget their coach.
I had no intention of playing ASTERIANS but after testing them they have become one of my more favoured teams.
As Jake says .. the whole point of having 12 teams is to provide options and variation in play style. Just wait till you see the ZEE (haha) !!
1) I assume that like a regular foul, you perform the ref check after the current action is resolved? I.e. A striker is half way through a run and throw action when I declare the use of ‘Dirty Tricks’ on him. He still resolves the throw before we check to see if he’s sent off?
2) It is my opponent’s rush and he has the ball. I declare Dirty Tricks on his ball carrier, who is sent off – does this cause him to lose possession, thus ending his rush?
1) calling the foul itself is done in the normal manner. The difference lies only in how you get to call it.
2) Excellent question. I don’t think the rules are 100% clear what happens when someone carrying the ball is sent off. It isn’t listed on page 32 under ending a Rush. You could argue either way. However, I will rule that it does not end the Rush. This is mainly because the team is already being penalised by losing a player and control of the ball (because it will scatter).
Thanks Jake – it occurred to me earlier today, and just seemed extremely powerful – If you knew what you were doing and used your jacks carefully you could use it at the beginning of the game to score a landslide in the first couple of turns without giving your opponent a chance to react (assuming they actually try to pick up the ball).
Possibly, and being consistent would be my preference, but I am aware that not everyone shares my acceptance of some of the harsh realities of the sport.
If DT can be used to “snipe” a ball carrier in their own Rush (thus ending it) then it may well be too potent.
I think that the consistent rule is worth keeping. Dirty Tricks can certainly be powerful if used on a ball-carrier, but it is a once-per-match ability, it can be lost to a bad roll of the dice, and it can even be significantly weakened with an Event card. Fielding a team of Dirty Tricks players could be a major problem, causing a player to lose all of their Rushes, but it is no different from sticking a Jack on the launch line and scoring repeatedly.
I think consistency is important too. But Dirty Tricks is already powerful on its own. It may be too much to lose your Rush on top of everything. Asterians with 2 Guards (once they’ve bought a second one) would have a very good shot at ending 2 of their opponent’s Rushes every game. I’m afraid that if being sent off loses you your Rush, the new game will be to move your ball carriers by a new rule: always stay more than 7 hexes from the Ref.
Hard decision! Clearly counterintuitive to decide this particular case doesn’t end your rush. But probably more balanced.
I could have sworn this was clarified with regards to the Sneak foul.
The game already has the capacity to send off a ball carrier through random selection through a Sneak – I thought it was said somewhere that the ball just scatters from their location and game continues.
I think that having a skill that can lose the opponent’s rush without him being able to do anything to prevent it is very bad. You play 7 Rushes; your opponent plays 4 (you can have 2 Guards and the MVP). Simply not fun.
I would say that yes, it is a powerful trick, but the one-shot nature makes up for that. I played them and didn’t use the DT to snipe the ball carrier; I found it’s much better used to snipe a Z’zor Guard trying to slam your ball carrier!
Maybe if the ball carrier is sent off the ball should relaunch much like a ball shattered card?
@chrisbburn: It’s a one-shot trick as long as you have one Guard. When you have 2 it’s a 2-shot trick. And if you hire Nightshade it’s a 3-shot trick. If it is a turnover as well, and if it recovers you the ball twice, you have almost won.
I feel it would encourage too many people to play for the landslide. If the skills work they win fast; if they don’t, they give it up (Fragile encourages this game plan as well, in leagues).
Personally I’m not a big fan of landslides, I prefer to play the whole game, it’s more fun.
Nightshade won’t play for Asterians. But I can see where people are coming from. It is a powerful tool, but this is only one offensive style of using DT; it’s got plenty of defensive uses, like sending off that pesky Guard, which will help keep your players alive. Especially helpful in leagues. Surely you have to play for landslides in order to get the exp in leagues. Winning in 3 Rushes can be no fun, but writing off attempting a landslide puts you in the back seat!
Seeing your write-up below, I am getting worried about playing against these guys. Maybe Dirty Tricks needs to be played when a player is next to you? My understanding is that you can call it anytime, anywhere; what if the “fouling” player had to be next to the trickster?
This was a planned use of Dirty Tricks I had in mind for the ASTERIANS. I see no reason to change the DreadBall philosophy (as stated above) of: “If you lose the ball then your turn is over”.
I also think this was covered indirectly already in the FAQ that discusses what happens to the ball when an “upright ball carrier” is “sent off”.
As the ball scatters from where the player was standing at the time he was removed using Dirty Tricks, whether or not the RUSH ends is dependent on the scatter outcome.
No idea if this will actually post, as the site has been eating my comments all day! I’m also concerned that it might be a little too potent to end people’s rushes. If the Asterians are Home, I deploy 3 Jacks, 2 Strikers and the Guard – he deploys really deep. I call an offensive play with my Coach – assuming I get it, I use 2 actions to force ref checks on players between the ball and the 3/4 point zone and play up to 3 actions on a Striker to retrieve the ball, get to the 3/4 point zone and score – a 3-pointer if the direct line is blocked, otherwise a 4-pointer.
This might well require a level of luck, and certainly relies on your opponent actually going for the ball, but I see it as a high chance of landsliding the opposing player with them having minimal impact on the game. If it fails, the Asterian player might be out of luck for the rest of the game, but the potential is certainly strong. Perhaps they need this to balance their poor slamming game, but I think it needs to be considered carefully.
If you send a random player off through Sneak and they have the ball, the ball just scatters from their location. If it is subsequently dropped by a friendly player then you lose your rush. But iirc the initial send off doesn’t end your rush.
I think it will be too powerful if it ends the rush as well – it’s powerful enough without taking away the opposing team’s chance to compete – it’s just too unblockable.
My reading of the existing FAQ is that it isn’t covered specifically. Obviously it needs to be answered clearly one way or the other.
I’m not against landslides as they are, but if a team has to systematically play for it, it won’t be much fun. To play a game of DB I have to keep the evening free, take my car and drive for half an hour. If the game ends in 5 minutes, well, it’s pretty frustrating.
I think dirty tricks is powerful enough without causing rushes to end.
For sure. With the shaved heads they even look a little like the Eric Bana style Romulans from the new movies.
Had planned on giving them “Starfleet” uniforms…with Gold shirted Guard, Blue shirted Jack and Red shirted Strikers…may still go that route.
My wife and I just had a game: Asterians vs. Teratons. I won’t leave feedback on the Dinos because they barely played at all. Landslide for the Asterians on their second Rush.
Rush 1: I declare a Defensive Play, succeed.
My Striker picks up the ball, gets a free action, runs and Dashes towards the 4-point bonus hex.
One of my Jacks, close to the center of the pitch, Sprints towards the only Teraton Guard protecting the 3–4 strike zone… he shamelessly Takes a Dive and sends him off for 3 turns (with only 1 Ref dice).
My Striker scores 4 points.
I move the Ref into my opponent’s side.
Rush 2: The Teratons bring back a Guard, try to slam, the Asterians dodge, no injury.
A Teraton Jack picks up the ball, free action, runs towards a 1-point Strike position. He can’t go further, he’s only a Jack.
He Strikes and scores, with style. Fan check.
The Teratons move the Ref away from them.
Score: 3-0 for me, Asterians.
Another Jack of mine, defending a Strike zone, Sprints twice from the other side of the pitch and Takes a Dive next to the poor freshly arrived Guard. He sends him off for 2 turns, 1 Ref dice again.
The path is clear. If it had failed, I would have used Dirty Tricks.
Score: landslide in 2 Rushes. Though the Teratons scored!
Thoughts: I’ve been a little lucky while calling fouls. But not that much. Even with one dice, you get a 50% chance of getting a player off the pitch and clearing any line of sight. PLUS, I had more fouls to call if the other ones had failed.
The only thing the Teratons could have done better is putting 2 Guards defending the 3-4 points zone.
Though, even so, 2 Jacks starting in the middle of the pitch can both sprint and Dash towards the 2 guards and Take a Dive, still leaving you actions to score.
It does require a bit of luck if the Ref is away, but Taking a Dive is extremely powerful, especially with players with move 6 and such a good speed that allows them to Dash easily and evade potential threats.
I just felt like I could have a shot at sending off anyone, anywhere… and I did!
It felt extremely fun, clearly. At first sight, I think it was too powerful. But maybe that’s one of those teams that requires 3 defending players in the bonus zone.
That demonstrates why I think it might be a little too strong to allow Dirty Tricks to end rushes – as it stands, they can already score pretty easily in the first few rushes – denying the opposing team even a chance to counter it by making them lose a rush on top of it would be extremely powerful.
Thanks Vinsss, an excellent report.
You were extremely lucky to send off two players for 5 turns with only 2 dice. Even sending off both of them is above average. However, the Asterians sound like they played well and played to their strengths (as everyone should).
Of course, if you don’t manage to landslide in the first couple of turns, if the opposition defends a little better, if your dice rolls get the other end of the luck… then you’re in a different place.
Powerful when it comes off and things go to plan, as here. But then I’ve played Veer-myn where every strike dice was a 5 or a 6 and that’s unstoppable. It’s a matter of degrees. How easy is it for them to have a good day?
Oh dear! Now all my slightly different comments have come back I look pretty obsessed!
I may be wrong with the playing of coaches… But can a ‘defensive’ coach call an ‘offensive’ play? I’m pretty sure that’s not the case.
So the asterians can only call defensive plays in my opinion… Am I wrong?
“…any type of Coaching Staff can try to call any type of Play”. They’re just better at calling their own type.
Cool. Glad I missed that. That makes coaches WAY better.
Managed to get a quick game in with the Asterians against the Forge Fathers.
Asterians went first with the ball landing between my Striker and an FF Guard that I was going to have to evade. I was eager to try out the sneaky Asterian abilities so I ran a Jack up, ‘took a dive’, and got the Guard sent off for 3 turns; huzzah! The Striker went on to score 3 points!
Rush 2 saw the diving Jack (who stood up immediately after his dive :p ) get slammed by another FF Guard who obviously saw what had happened, as he ripped the poor innocent Asterian to shreds. Wow, they are squishy!
An FF jack picked up the ball and was lining up to score a 2-pointer, but must have been wearing inappropriate equipment, or maybe the ref misheard an uttered slur, but either way he was sent off for 3 turns under mysterious circumstances. Mwa ha ha!
Rush 3 saw a bit of passing and another 3 pointer, but otherwise legitimate play, and on Rush 4, unfortunately for the Forge Fathers, they failed to pick up the ball, and on Rush 5, the Asterians scored an easy 1 point.
The game was over pretty quickly, but given how easily my Jack died, it became very clear that the Asterians need to use their naughty tricks to win ASAP to stay alive! I was very lucky, very, very lucky, and I think if the game went any longer, a lot more Asterians would have died. The defensive coach called Defensive Play in Rushes 1 & 3, and those dice definitely came in useful. Rush 5 I called Offensive Play, to give my players a bit more flexibility and ensure that I won that turn.
All in all, they were a lot more fun to use than I had imagined; who doesn’t love playing as the panto villains! I’ll try and get another game in with them, and see what happens when my luck has gone.
The only thing that bothers me is that, indeed, a 3-turn send-off IS lucky, but when you Take a Dive you barely care how long the send-off will be: 1 turn or 3, I just want the opposing player out of my line of sight.
So, always at least 50% to clear your path, IF the Ref is far far away.
I hope people are playing to win with these guys since, so far, my playtesting has only seen them lose once out of 8 games against the original teams when they are the home team. With dives and Dirty Tricking the ball carrier on the opponent’s Rush, it’s landslide after landslide.
I’m glad we’re getting alt styles of play, but Dirty Tricking the ball carrier and having that end the opposition’s Rush is just a bit too much, and this team will dethrone the Judwan as the “unbalanced for non-league tournaments” team (or possibly pull up a chair next to them).
Which is the problem. Judwan ARE unbalanced outside leagues. If people said that about them, they were right. If this team counters Judwan but turns out to suffer against bashy teams, I’m okay with that. But I doubt in one-off games or res tournies that bash teams will cause much of a problem. If there is no permanent penalty for losing half your roster, you aren’t gonna play in a manner where you care to keep them alive.
I need to test the Void Human team against them more, as I believe that was the only team to win their away game, and it won the home game as well. If Juds beat all, Asterians beat Juds, Void beat Asterians, and a gaggle beat Void, that will at least keep things interesting. But if Asterians beat all when the dice averages show up, that’s not optimal design.
And Jake, I think you are doing a great job and have helped develop a top-notch product. This is definitely constructive criticism, as I care that the game continues to succeed and grow. I just find it funny people were complaining about the play style… they obviously hadn’t tested the team.
As long as the enemy hasn’t totally blocked off all paths to a scoring zone (which I think is impossible), you should be able to take advantage of the excellent Speed stat, Stealing the ball from behind to avoid Slambacks and Evading your way to the Strike hex.
Has anyone tested the effectiveness of distracting the ref when opposing the asterians? If there is concern about a guard sniping a ball carrier, an action or two spent mitigating the risk of getting sent off might cause your opponent to reconsider using his once off ability on a 50-50.
@theearthdragon – Jake’s current ruling is that using Dirty Tricks to send off the opposing ball carrier doesn’t end their rush.
Aha… that’s what I get for going on holidays for a week… 4 days in and I’m already behind on the happenings of the world.
Distracting the ref seems a legitimate move against them anyway, if only to nullify the likelihood of successful Dirty Tricks or diving.
Okay, finally got a game in with these guys vs the Nameless. Asterians were visiting.
Nameless set up with two of each player type, with all of the Guards on the line and a pair of Strikers a bit further back. I set up deep, with the Guard blocking the 4 point zone and all 3 jacks a bit further up with 2 strikers on the flanks.
The Nameless Sprinted up a Gotcha! Guard next to a Striker and then moved a hitty Guard into position to slam with his next action, so I called Dirty Tricks on him, getting him sent off for the rest of the game! The other hitty Guard took the Striker out for 3 turns though. A Nameless Striker moved up and picked up the ball, but didn’t double so was left sitting with it. In my Rush, I had one Jack move next to the ball carrier and Take a Dive next to him, getting him sent off for 3 turns. I had players surround the hitty Guard and then had my Guard slam him in the back, and although I doubled him, Steady kept him on his feet and he easily made his saves. I used a Striker to run to the ball, doubled the pick-up, ran to the 4-point hex and used my final action to score!
The Nameless were able to score a 4 pointer bringing the score back level. When the final Asterian jack recovered from his earlier beating he came on and had the Nameless scorer sent off, once again for the whole game! With his hitters out, he was down players and was also unable to properly take advantage of my fragile players, and I was able to score another 4 pointer. He missed a 4 pointer to bring the game back level at a crucial juncture where he had a striker poised to catch the relaunched ball, and I was able to take advantage of this and score a 3 pointer to win.
@Rolex – you realise that Asterians cannot hire Nightshade, right?
No worries – I did wonder why they couldn’t hire him when the Season 2 book came out – I thought it was because he was a bit of an outcast because of his dirty playing style, but it looks like I was completely wrong, and it was really because 3 x Dirty tricks would be pretty damn crazy!
I think a second Guard would be high on my list of priority purchases for this team, but then, so would cards and replacements for players that get killed (or just repairs for them), and the Guard is a slightly weird player for them in that once he’s used Dirty Tricks, he’s fairly poor at doing what he does… I’m looking forward to running these guys in a league and seeing just how it works out!
Sorry to catch up with the comments in one lump like this rather than individually. However, just to let you know that I’ve been listening and playing some more and have made one little tweak to the Asterian rules which should mitigate the all-or-nothing first couple of turns a bit.
In addition to the current rules, Take a Dive is now limited to once per Rush for the whole team. More than that starts making it a bit obvious, so the Asterians know not to push their luck too far. In game terms it means that you need to use it slightly differently and scatter it across the game’s Rushes, which makes for a more interesting game on both sides.
So, that’s once per player per match, and once per Rush for the team.
That sounds like a very efficient answer ! Thank you for your attention.
I can’t wait to read all the new rules.
By the way, is it also once per opponent’s rush with RI?
I suppose if you follow the wording, “once per Rush” does mean only once during your opponent Rush as well !
Obviously you can’t use it in the opponent’s Rush unless you also have RI, which you will have to gain with experience.
RI is limited to once per player per match and once per opposing action. See page 42 of the core rules.
– Does this also apply to Dirty Tricks (if you were to have 2 Guards in a league team)?
– You can still use Dirty Tricks and one Take a dive in the same turn if you like?
Rather than making fragile the reverse of can’t feel a thing, could it be more like quick recovery? That is, you either are off for one more turn than normal when sent off (so minimum 2) or you’re always off for 3 rushes? That way it only works when the player is actually HURT, rather than increasing their chances of being hurt in the first place.
Basically it could be ‘glass jaw’ – when you actually hurt them they go down for the count, but you’ve got to hurt them first.
Not saying it needs changed, but that is a pretty neat idea with it being opposite of quick recovery…glass jaw indeed!
Has anyone else discussed speed 3+ strikers being too powerful? It has become a bit of a debate with a few of us recently. Mainly from a statistical point of view.
I think the Judwan were too good as an entire team of Speed 3+ Strikers who could all also use their speed to move people around, and Long Arms wasn’t enough of a negative to balance out their superb stats and player redundancy. I’m glad they’re being reduced to Speed 4+ because I think this will make them a much better balanced team.
Dr. Meena S. Madhur, Department of Medicine, Division of Clinical Pharmacology, Vanderbilt University, 2215 Garland Avenue, P415D Medical Research Building IV, Nashville, Tennessee 37232.
• Hypertension is associated with an increase in T-cell–derived cytokines such as IL-17A and IL-17F.
• Monoclonal antibodies to IL-17A, IL-17F, IL-17RA, or isotype control antibodies (IgG1) were administered twice weekly during the last 2 weeks of a 4-week angiotensin II infusion protocol in mice.
• Antibodies to IL-17A or IL-17RA, but not IL-17F, lowered blood pressure by 30 mm Hg, attenuated renal and vascular inflammation, and reduced renal transforming growth factor beta levels (a marker of renal fibrosis) compared with control IgG1 antibodies. All 3 experimental antibodies blunted the progression of albuminuria.
• Monoclonal antibodies to IL-17A or IL-17RA may be a useful adjunct treatment for hypertension and the associated end-organ dysfunction.
Inflammatory cytokines play a major role in the pathophysiology of hypertension. The authors previously showed that genetic deletion of interleukin (IL)-17A results in blunted hypertension and reduced renal/vascular dysfunction. With the emergence of a new class of monoclonal antibody–based drugs for psoriasis and related autoimmune disorders that target IL-17 signaling, the authors sought to determine whether these antibodies could also reduce blood pressure, renal/vascular inflammation, and renal injury in a mouse model of hypertension. The authors show that antibodies to IL-17A or the IL-17RA receptor subunit, but not IL-17F, may be a novel adjunct treatment for hypertension and the associated end-organ dysfunction.
There is now strong evidence that hypertension is an inflammatory disease in which T cells and T-cell–derived cytokines contribute to blood pressure elevation and end-organ damage (1,2). Our group was the first to demonstrate the critical role of the proinflammatory cytokine interleukin (IL)-17A in hypertension and the accompanying vascular dysfunction, using mice with genetic deletion of IL-17A (3). Subsequently, Nguyen et al. (4) showed that infusion of recombinant IL-17A into mice raised blood pressure and impaired vascular relaxation through phosphorylation of endothelial nitric oxide synthase and reduced vascular nitric oxide production. Recently, we showed that IL-17A contributes to angiotensin II (Ang II)-induced renal injury and modulates the expression of renal sodium transporters resulting in altered pressure natriuresis, whereas the related cytokine, IL-17F, had little or no effect on blood pressure, renal injury, and renal sodium chloride cotransporter activity (5,6).
The IL-17 cytokine family consists of 6 isoforms (IL-17A through IL-17F) of which IL-17A and IL-17F share the highest (50%) sequence homology (7). Both IL-17A and IL-17F are produced by similar subsets of immune cells and bind as homo- or heterodimers to the same receptor complex composed of IL-17RA and IL-17RC subunits (8). Although IL-17A has been widely studied and found to play critical roles in the pathogenesis of many autoimmune diseases such as psoriasis and multiple sclerosis (9), the role of IL-17F is less well characterized. IL-17F has a lower affinity for the IL-17RA receptor subunit and has been found to have overlapping, as well as distinctive functions, from IL-17A (10).
IL-17A is the signature cytokine of a relatively recently described subset of CD4+ T-helper cells designated TH17 cells. However, a subset of CD8+ T cells as well as gamma delta (γδ) T cells (innate-like unconventional T cells with a distinct T-cell receptor) can produce IL-17A under various conditions (11). In fact, γδ T cells are thought to be the most prominent producers of IL-17A under certain pathological conditions (12–14). Interestingly, recent studies have demonstrated that salt can promote the production of pathogenic TH17 cells, thus providing a potential link between high salt intake and the development of hypertension (15,16).
In the current study, we sought to determine the cellular source of IL-17 isoforms in the kidney and vasculature of hypertensive animals, and determine whether pharmacological inhibition of IL-17 signaling can reduce blood pressure and ameliorate renal/vascular inflammation and renal injury in a mouse model of hypertension. A number of human monoclonal antibodies to IL-17A, IL-17F, or the IL-17RA receptor subunit are in development, clinical testing, or Food and Drug Administration (FDA)-approved for the treatment of psoriasis and related autoimmune disorders (reviewed in Beringer et al.). However, to date, these drugs have not been tested in humans for their efficacy in hypertension or other cardiovascular disorders. We hypothesized that inhibition of IL-17A or the IL-17RA receptor, but not IL-17F, would lower blood pressure and end-organ damage in a mouse model of Ang II–induced hypertension.
Wild-type C57BL/6J mice were purchased from Jackson Laboratories (Bar Harbor, Maine). All protocols were approved by the Institutional Animal Care and Use Committee at Vanderbilt University. At approximately 10 to 12 weeks of age, osmotic minipumps (Alzet, Model 2004, Cupertino, California) were implanted subcutaneously for infusion of Ang II (490 ng/kg/min). Blood pressure was measured at baseline and weekly during the 4 weeks of Ang II infusion using a computerized, noninvasive, tail-cuff system (MC4000 Blood Pressure Analysis System, Hatteras Instruments, Cary, North Carolina). Mice were sacrificed at the end of all experiments by CO2 inhalation.
Two weeks after induction of hypertension with Ang II infusion, mice were randomly allocated to the following 5 experimental groups: 1) control mouse IgG1 antibody (mouse IgG1 K isotype control functional grade purified, Clone P3.6.2.8.1 [eBioscience, San Diego, California] or mouse IgG1, PL-31545 [Amgen, Thousand Oaks, California]); 2) control rat IgG1 antibody (rat IgG1 K isotype control functional grade purified, Clone eBRG1 [eBioscience]); 3) IL-17A neutralizing antibody (mouse anti-mouse IL-17A functional grade purified, Clone eBioMM17F3 [eBioscience]); 4) IL-17F neutralizing antibody (rat anti-mouse IL-17F functional grade purified, Clone RN17 [eBioscience]); 5) IL-17RA receptor antagonist (murine muIL-17RA-M751, PL-31280 [Amgen]). Mice were injected intraperitoneally with 100 μg of antibodies twice weekly during the last 2 weeks of Ang II infusion (days 16, 19, 22, and 25). This dose and frequency was chosen based on prior studies from our group and others (17–20).
Mice were euthanized with CO2 and transcardially perfused with phosphate buffered saline–heparin (10 U/ml) before the collection of organs. Kidney and thoracic aortic leukocytes were isolated as described previously (21) and stained using the antibodies listed in Supplemental Table 1. For intracellular staining, single-cell suspensions of kidneys or the whole aortae in RPMI medium supplemented with 5% fetal bovine serum were stimulated with 2 μl of BD Leukocyte Activation Cocktail (ionomycin, brefeldin A, and phorbol myristic acetate) at 37°C for 3 h. Cells were then stained using the antibodies listed in Supplemental Table 2.
Urine was collected over a 24-h period at baseline, after 2 weeks, and after 4 weeks of Ang II infusion by placement of mice in metabolic cages. Urinary albumin concentrations were determined by enzyme-linked immunosorbent assay (ELISA) (Exocell M, Exocell, Philadelphia, Pennsylvania). Total urinary volume was recorded and multiplied by the albumin concentration to determine the daily albumin excretion rate. Renal transforming growth factor beta (TGFβ) levels were quantified from whole-kidney homogenates using a mouse/rat/porcine/canine TGF-β1 Quantikine ELISA kit (R&D Systems, Minneapolis, Minnesota) and normalized to total kidney weight.
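The excretion-rate calculation described above is simply total 24-h urine volume multiplied by the ELISA-measured albumin concentration. A minimal sketch (the volume and concentration values below are hypothetical, not taken from the study):

```python
def daily_albumin_excretion(urine_volume_ml, albumin_conc_ug_per_ml):
    """Daily albumin excretion rate (µg/day) from a 24-h urine collection:
    total collected volume times the measured albumin concentration."""
    return urine_volume_ml * albumin_conc_ug_per_ml

# Hypothetical 24-h collection: 1.8 ml of urine at 25 µg/ml albumin
rate = daily_albumin_excretion(1.8, 25.0)  # -> 45.0 µg/day
```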
Data are expressed as mean ± SEM. The assumption of normality was made because most continuous physical characteristics follow a Gaussian distribution. Differences between 2 groups were compared using the Student t test. Differences between 3 or more groups were compared using 1-way analysis of variance (ANOVA) followed by the Holm-Sidak post hoc test. Blood pressure and albuminuria data were analyzed at day 28 using 1-way ANOVA followed by the Holm-Sidak post hoc test. p < 0.05 was considered significant.
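For readers reproducing this style of analysis, the Holm-Sidak step-down adjustment applied after ANOVA can be written out in a few lines. This is a hedged sketch, not the authors' code: in practice the raw p values would come from the pairwise post hoc comparisons (e.g. via `scipy.stats`), and here only the adjustment step is shown:

```python
# Step-down Holm-Sidak adjustment of a list of raw pairwise p values,
# as used after 1-way ANOVA in the Methods. Pure-Python sketch;
# library routines (e.g. statsmodels' multipletests) would normally be used.

def holm_sidak(pvals):
    """Return Holm-Sidak adjusted p values, in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending raw p
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Sidak-style adjustment with a step-down exponent (m - rank)
        adj = 1.0 - (1.0 - pvals[i]) ** (m - rank)
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = min(1.0, running_max)
    return adjusted

# A comparison is called significant when its adjusted p < 0.05,
# matching the significance threshold stated in the text.
```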
We first determined whether there was an increase in IL-17A– and IL-17F–producing T cells in the kidney and aortae of mice following 4 weeks of Ang II (490 ng/kg/min) infusion. Intracellular staining indicated that Ang II–induced hypertension was associated with a significant increase in CD3+IL-17A+ and CD3+IL-17F+ cells in the kidney and aortae (Figure 1). To determine the specific T-cell subsets producing IL-17A and IL-17F, we employed the gating strategy depicted in Figure 2A. CD3+ T cells were selected from the live singlets. Intracellular staining with IL-17A or IL-17F identified the total T cells producing these cytokines. Gamma delta (γδ) T cells possess a distinct T-cell receptor that can be distinguished from conventional alpha beta (αβ) T-cell receptors by flow cytometry. We therefore gated on the presence of the γδ T-cell receptor to quantify the IL-17–producing γδ T cells. Finally, we gated on the γδ T-cell receptor negative cells (which are presumably conventional αβ T cells) and further classified these into CD4+, CD8+, or other double-negative cells. To determine the relative T-cell subset contribution to IL-17A and IL-17F production, we used the integrated mean fluorescence intensity, which is obtained by multiplying the number of cells expressing a particular marker with the mean fluorescence intensity in that channel (22). We found that γδ T cells and CD4+ TH17 cells are the predominant sources of IL-17A and IL-17F in the kidney and aorta, particularly following Ang II infusion (Figure 2B).
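The iMFI definition quoted above (the number of marker-positive cells multiplied by the mean fluorescence intensity in that channel, per ref. 22) reduces to a single multiplication per subset. The example below uses made-up counts and intensities purely to illustrate how subset contributions would be compared:

```python
# Illustrative integrated MFI (iMFI) computation as defined in the text:
# iMFI = (number of cells positive for a marker) x (mean fluorescence
# intensity of that population). Subset names mirror the gating strategy;
# all numbers are hypothetical.

def integrated_mfi(positive_cell_count: int, mean_fluorescence: float) -> float:
    return positive_cell_count * mean_fluorescence

# Hypothetical per-subset (count, MFI) pairs for IL-17A staining:
subsets = {
    "gd_TCR+": (120, 850.0),   # few cells, bright staining
    "CD4+":    (300, 400.0),   # many cells, moderate staining
    "CD8+":    (40, 150.0),
}
imfi = {name: integrated_mfi(n, mfi) for name, (n, mfi) in subsets.items()}
# Ranking subsets by iMFI weighs both prevalence and per-cell production.
```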
(A) Representative flow cytometry analysis of renal and aortic T cells isolated from mice after 28 days of vehicle (Sham) or angiotensin II (Ang II) infusion. (B) Quantification of the total number of interleukin (IL)-17A– or IL-17F–producing cells in kidneys and aortae (n = 5 to 6 per group). Data were analyzed using Student t test and expressed as mean ± SEM. *p < 0.05; **p < 0.01; ***p < 0.001 versus WT/Sham. WT = wild type.
(A) Representative example of flow cytometry gating strategy for the quantification of IL-17A–producing CD3+ T cells in the kidney of an Ang II–treated mouse. A similar strategy was used for IL-17F and for aortic samples. (B) Quantification of the integrated mean fluorescence intensity (iMFI) of IL-17A– or IL-17F–producing subsets of T cells (γδ T cell receptor [TCR]+, CD4+, CD8+, and other double-negative T cells [other DN]). Data were analyzed using Student t test and expressed in arbitrary units (A.U.) as mean ± SEM (n = 5 to 6 per group). *p < 0.05; **p < 0.01; ***p < 0.001 versus WT/Sham. Abbreviations as in Figure 1.
Monoclonal antibodies to human IL-17A, IL-17F, and the IL-17RA receptor subunit are in various phases of development, testing, and FDA approval for the treatment of psoriasis and related IL-17A–mediated autoimmune diseases. As proof of concept, we sought to determine whether these antibodies would have a beneficial effect in a mouse model of hypertension. Ang II was infused for 4 weeks into wild-type C57Bl/6J mice. During the final 2 weeks of Ang II infusion, antibodies to IL-17A, IL-17F, IL-17RA, or corresponding IgG1 control antibodies were injected intraperitoneally twice weekly as depicted in Figure 3. Administration of monoclonal antibodies to IL-17A and IL-17RA resulted in a 30 mm Hg decrease in blood pressure. By contrast, the anti–IL-17F antibody and both IgG1 control antibodies had no effect on blood pressure during the final 2 weeks of Ang II infusion (Figure 3). It should be noted that although blood pressure was reduced from approximately 180 mm Hg to 150 mm Hg with anti–IL-17A and anti–IL-17RA antibodies, the pressures were still elevated compared with the baseline blood pressures of 110 to 120 mm Hg.
Systolic blood pressure was measured at baseline and weekly during 28 days of Ang II infusion. Arrows indicate timing of antibody administration. Data are expressed as mean ± SEM and were analyzed at day 28 using 1-way analysis of variance followed by Holm-Sidak post hoc test (n = 7 to 20 per group). ***p < 0.001 versus mouse IgG1. Ab = antibody; other abbreviations as in Figure 1.
To determine the effect of IL-17 blockade on renal and vascular inflammation, Ang II was infused for 4 weeks, and monoclonal antibodies to IL-17A, IL-17F, IL-17RA, or corresponding IgG1 control antibodies were injected intraperitoneally twice weekly during the last 2 weeks of Ang II infusion as described in the preceding text. Flow cytometry was performed on single-cell suspensions from 1 kidney and the thoracic aorta to quantify the degree of inflammation. Surface staining was performed for CD45 (total leukocytes), CD3 (total T cells), CD4 (T helper cells), CD8 (cytotoxic T cells), and F4/80 (a monocyte/macrophage marker). Mice treated with monoclonal antibodies to IL-17A and IL-17RA exhibited significantly reduced levels of total leukocytes, total T cells, CD4+ T cells, and CD8+ T cells in the kidney compared with mice treated with a corresponding IgG1 control antibody (Figure 4A). Of note, there was no change in the number of double-negative T cells or monocytes/macrophages in the kidney with either of these antibody treatments (Supplemental Figure 1A). Similar results were observed in the aorta (Figure 4B, Supplemental Figure 1B). By contrast, mice that received the IL-17F antibody exhibited no change in total leukocyte, T-cell, or monocyte/macrophage numbers in the kidney or aorta compared to IgG1-treated mice, consistent with a lack of blood pressure effect with this antibody (Figure 5, Supplemental Figure 2). To confirm that administration of isotype control antibodies did not itself affect renal or vascular immune cell infiltration, we compared untreated wild-type mice following 4 weeks of Ang II infusion with mice that received mouse or rat IgG1 during the last 2 weeks of infusion; levels of total leukocytes, total T cells, T-cell subsets, and monocytes/macrophages did not differ significantly in either the kidney or the aorta (data not shown).
Quantification of total leukocytes (CD45+ cells), total T lymphocytes (CD3+ cells), and T-cell subsets (CD4+ and CD8+) per kidney (A) or thoracic aorta (B) in mice infused with Ang II for 4 weeks and treated with either IL-17A neutralizing antibody, IL-17RA receptor antagonist, or corresponding mouse IgG1 isotype control twice weekly during the last 2 weeks of Ang II infusion. Data were analyzed using 1-way analysis of variance followed by Holm-Sidak post hoc test and expressed as mean ± SEM (n = 12 to 20 per group). *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001 versus mouse IgG1. Abbreviations as in Figures 1 and 3.
Quantification of total leukocytes (CD45+ cells), total T lymphocytes (CD3+ cells), and T-cell subsets (CD4+ and CD8+) per kidney (A) or thoracic aorta (B) in mice infused with Ang II for 4 weeks and treated with either IL-17F neutralizing antibody or corresponding rat IgG1 isotype control twice weekly during the last 2 weeks of Ang II infusion. Data were analyzed using Student t test and expressed as mean ± SEM (n = 12 per group). Abbreviations as in Figures 1 and 3.
To assess the effect of IL-17 blockade on renal injury, 24-h urine was collected at baseline, after 2 weeks of Ang II infusion (before antibody administration), and after 4 weeks of Ang II infusion (with twice weekly antibody injections given between weeks 2 and 4). Urinary albumin concentration was measured by ELISA. At baseline, there was very little albuminuria and no significant differences among the groups. Albuminuria increased markedly after 2 weeks of Ang II infusion. Interestingly, treatment with any of the IL-17 antibodies, including the IL-17F antibody, halted the progression of albuminuria, whereas mice that received either control antibody had continued progression of albuminuria (Figure 6A). Thus, blockade of IL-17 signaling protects from Ang II–induced glomerular injury.
(A) Glomerular injury was assessed by quantifying 24-h urinary excretion of albumin at baseline and after 14 or 28 days of Ang II infusion. Arrows indicate timing of antibody administration. Data are expressed as mean ± SEM and were analyzed at day 28 by 1-way analysis of variance followed by Holm-Sidak post hoc test (n = 7 to 15 per group). *p < 0.05; ***p < 0.001; ****p < 0.0001 versus corresponding IgG1 control. (B) Renal fibrosis was assessed by quantifying transforming growth factor β1 (TGFβ1) from renal homogenates. Data from mice that received no antibodies and no Ang II (Untreated) are shown as a reference control. The other groups were infused with Ang II for 28 days and treated with the indicated antibodies twice weekly during the last 2 weeks of Ang II infusion. Data were analyzed by 1-way analysis of variance followed by Holm-Sidak post hoc test and expressed as mean ± SEM (n = 5 per group). *p < 0.05; ***p < 0.001; ****p < 0.0001. Abbreviations as in Figures 1 and 3.
Renal fibrosis is another hallmark of renal damage from hypertension and is mediated by TGFβ signaling (23). To assess renal fibrosis, we measured TGFβ levels from whole-kidney homogenates following 4 weeks of Ang II infusion. Antibodies to IL-17A, IL-17RA, IL-17F, or corresponding isotype controls were injected twice weekly during the last 2 weeks of Ang II infusion as depicted in Figure 6A. Kidneys from mice that did not receive Ang II or antibodies were used as a reference control (untreated). Renal TGFβ1 levels markedly increased in response to Ang II infusion in isotype control–treated mice. Antibodies to IL-17A or IL-17RA, but not IL-17F, significantly reduced TGFβ levels (Figure 6B). Thus, inhibition of IL-17A and IL-17RA reduces blood pressure, renal/vascular inflammation, and markers of renal injury and fibrosis.
We and others have previously demonstrated that the proinflammatory cytokine IL-17A is elevated in hypertensive animals and humans, and that IL-17A promotes sustained hypertension, as well as renal and vascular dysfunction (3–6,24). However, the specific T-cell subsets that contribute to IL-17 production in the kidney and vessels were previously unknown. From a translational perspective, there are few data on whether neutralizing antibodies to IL-17A, IL-17F, or the IL-17RA receptor subunit can lower blood pressure and ameliorate hypertension-associated end-organ damage. In the current study, we show that innate-like γδ T cells and CD4+ TH17 cells are the primary sources of IL-17A and IL-17F in the aortae and kidneys of Ang II–infused mice. Importantly, we show that acute pharmacological inhibition of IL-17A or the IL-17RA receptor subunit can reduce blood pressure by ∼30 mm Hg, attenuate renal/vascular inflammation, and decrease markers of renal injury and fibrosis, making these potentially attractive therapeutic options for the management of hypertension.
Our finding that γδ T cells are a major source of IL-17 isoforms in the aorta and kidney of hypertensive animals is consistent with a recent report by Li et al. (14) showing that γδ T cells are a major source of IL-17A in the heart and contribute to Ang II–induced cardiac fibrosis and injury. Similar to our findings, these authors showed that whereas γδ T cells were one of the smallest cardiac infiltrating T-cell populations, their secretion of IL-17A was the greatest. These unconventional T cells are particularly interesting in that they do not seem to require antigen processing and major histocompatibility complex presentation of peptide antigens. Moreover, they play an important role in the recognition of lipid antigens and may be triggered by alarm signals such as heat shock proteins. In autoimmune disease, these cells appear to be driven by self molecules that arise during inflammation. In many inflammatory processes, they appear to be the dominant early source of IL-17 with antigen-specific CD4+ TH17 cells developing later and contributing to the sustained immune response (13,25). Further studies are needed to determine the role of γδ T cells in hypertension and whether specific pharmacological inhibition of these cells may prove beneficial in treating hypertensive end-organ damage.
In an elegant study, Amador et al. (19) demonstrated that spironolactone treatment prevented TH17 cell activation and increased numbers of anti-inflammatory T regulatory cells in rats in response to deoxycorticosterone acetate and high-salt diet (DOCA-salt). The effect of spironolactone on γδ T cells was not investigated in this study, but it does raise the question of whether the beneficial effects of spironolactone, one of the most effective drugs on the market for resistant hypertension, could be due to inhibition of the IL-17A axis. Consistent with our results, these authors showed that administration of an anti–IL-17A antibody to DOCA-salt–treated rats lowered blood pressure by ∼20 mm Hg and reduced the expression of some profibrotic and proinflammatory mediators in the heart and kidney (19). However, in this study, the authors noted no improvement in proteinuria with anti–IL-17A treatment. Our findings confirm and extend this study by using a different animal model of hypertension and examining the effects of inhibition of IL-17A, IL-17F, and the IL-17RA receptor. We found that inhibition of both IL-17A and the IL-17RA receptor significantly lowered blood pressure and end-organ damage from hypertension, including inflammation, albuminuria, and renal TGFβ levels.
The mechanism by which IL-17A promotes immune cell infiltration into the aorta or kidney is likely through the up-regulation of chemokines that attract all types of immune cells (not just IL-17–producing immune cells) into the target organ. For example, Paust et al. (26) showed that in mouse mesangial cells, IL-17A up-regulates CCL2, CCL3, and CCL20—chemokines involved in the recruitment of T cells and monocytes. We previously showed that IL-17A induces the expression of a number of chemokines such as CCL8, CXCL2, and CCL7 in human aortic smooth muscle cells (3). Thus, by blocking the effects of IL-17, we reduced overall inflammation in the aorta and kidney. However, double-negative cells were not reduced (Supplemental Figure 1). These cells are a small subset of the total immune cell population in these organs, so it is possible that we were not able to detect a decrease or there was no significant decrease in this small population of cells. Moreover, there may be a feedback mechanism in effect, by which double-negative cell numbers are maintained to compensate for the reduction in IL-17 signaling by the antibodies.
Of note, the anti–IL-17F antibody caused a modest reduction in albuminuria without a reduction in blood pressure or other parameters of renal inflammation/fibrosis. We previously showed that mice genetically deficient in IL-17F had no reduction in Ang II–induced hypertension or albuminuria (6). Moreover, we showed in that study that IL-17A, but not IL-17F, regulates the renal sodium chloride cotransporter (NCC) through a serum and glucocorticoid regulated kinase 1 (SGK1) pathway. Taken together, IL-17A signaling through the IL-17RA subunit appears to play a critical role in Ang II–induced hypertension and end-organ damage with little or no contribution of the IL-17F isoform.
Given that IL-17A and IL-17F are so closely related and signal through the same receptor complex, the reason for the differential effects of IL-17A and IL-17F in hypertension is unclear. However, there is precedent for this in other chronic inflammatory diseases. For example, in a chronic asthma model, IL-17A positively regulates asthmatic allergic responses, whereas IL-17F has a negative role. In dextran sulfate sodium–induced colitis, IL-17F–deficient mice exhibit reduced colitis, whereas IL-17A deficiency is associated with more severe disease (10). Thus, it is important to determine the precise role of these related cytokines in hypertensive models.
Our results are timely and clinically relevant because there are currently several monoclonal antibody–based drugs that are in various phases of development, testing, and FDA approval for the treatment of psoriasis and other IL-17A–mediated autoimmune diseases (reviewed in Beringer et al.). Specifically, secukinumab has recently been FDA approved for psoriasis, psoriatic arthritis, and ankylosing spondylitis. Brodalumab (a human monoclonal anti–IL-17RA antibody) showed promising results in Phase III clinical trials for psoriasis. For our study, we used a murine anti–IL-17RA antibody (a generous gift from Amgen). Interestingly, there is strong epidemiological evidence for a link between psoriasis and hypertension (27–29), which may be due to a shared pathophysiology of increased IL-17A signaling and inflammation. Psoriasis patients tend to have more difficult-to-control hypertension compared with their non-psoriatic counterparts (27). Through a population-based study in the United Kingdom, Takeshita et al. (29) found that among patients with hypertension, psoriasis is associated with a greater likelihood of uncontrolled hypertension.
One limitation of this study is that we only utilized a mouse model of Ang II–induced hypertension. Further studies are needed to determine whether our results are applicable to other animal models of hypertension and ultimately to human hypertension. Another limitation is that we did not have a vehicle-infused (normotensive) control group for our flow cytometry experiments. Thus, while we observed a reduction in renal and vascular inflammation in mice treated with anti–IL-17A and anti–IL-17RA antibodies during the last 2 weeks of a 4-week Ang II infusion protocol, we do not know if the levels of inflammation in these tissues returned to baseline. Finally, we only tested one dose, frequency, and duration of antibody administration. Thus, it is unknown whether alternative doses, frequencies, and treatment durations would result in greater reductions in blood pressure and end-organ damage.
Our results suggest that anti–IL-17A/RA–based therapies for psoriasis could have a dual benefit by also reducing hypertension severity and cardiovascular risk. Moreover, the benefit of these IL-17A/RA inhibitors could apply to the general hypertensive population as well. Of course, the risk-to-benefit ratio of these drugs needs to be determined in a cardiovascular disease cohort. There is an increased risk of mild-to-moderate infections in patients treated with IL-17 inhibitors. In addition, it should be noted that in a few experimental models of renal injury, IL-17 inhibition has been shown to exacerbate disease (30–32). Nevertheless, our provocative data support the use of IL-17A/RA inhibitors in pilot proof-of-principle clinical trials in humans for the treatment of hypertension and its devastating complications.
COMPETENCY IN MEDICAL KNOWLEDGE: The major T-cell sources of IL-17A and IL-17F in the kidney and vasculature in response to angiotensin II–induced hypertension are specialized subsets of innate-like γδ T cells and CD4+ TH17 cells. Monoclonal antibodies to IL-17A or the IL-17RA receptor subunit lower blood pressure, attenuate renal/vascular inflammation, and decrease markers of renal injury and fibrosis in a mouse model of hypertension.
TRANSLATIONAL OUTLOOK: Monoclonal antibodies to IL-17 isoforms and the IL-17 receptor are in various phases of development, clinical testing, and FDA approval for the treatment of psoriasis and related autoimmune disorders. There is a strong epidemiological link between psoriasis and hypertension severity. Thus, this new class of drugs could have an added benefit of reducing blood pressure and end-organ damage in psoriasis patients. Importantly, our results suggest that these drugs could also have a benefit in the management of hypertension in the general population. It is therefore imperative that small clinical trials and, eventually, larger randomized studies be conducted to determine the safety and efficacy of these drugs in human hypertension.
This work was supported by an American Heart Association Fellowship Award (14POST20420025) to Dr. Saleh, a training grant from the National Institutes of Health (NIH T32 HL069765-11A1) and an NIH NRSA Award (F31 HL127986) to Dr. Norlander, and an NIH K08 award (HL121671) to Dr. Madhur. Amgen provided some reagents used in this study. Dr. Madhur has received a research grant from Gilead Sciences, Inc. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.
(2012) Vascular inflammatory cells in hypertension. Front Physio 3:128.
(2015) Inflammation, immunity, and hypertensive end-organ damage. Circ Res 116:1022–1033.
(2010) Interleukin 17 promotes angiotensin II-induced hypertension and vascular dysfunction. Hypertension 55:500–507.
(2013) Interleukin-17 causes Rho-kinase-mediated endothelial dysfunction and hypertension. Cardiovasc Res 97:696–704.
(2015) Renal transporter activation during angiotensin-II hypertension is blunted in interferon-gamma-/- and interleukin-17A-/- mice. Hypertension 65:569–576.
(2016) Interleukin-17A regulates renal sodium transporters and renal injury in angiotensin II-induced hypertension. Hypertension 68:167–174.
(2002) IL-17: prototype member of an emerging cytokine family. J Leukoc Biol 71:1–8.
(2008) The human IL-17F/IL-17A heterodimeric cytokine signals through the IL-17RA/IL-17RC receptor complex. J Immunol 181:2799–2805.
(2016) IL-17 in chronic inflammation: from discovery to targeting. Trends Mol Med 22:230–241.
(2008) Regulation of inflammatory responses by IL-17F. J Exp Med 205:1063–1075.
(2010) Interleukin-17 and its expanding biological functions. Cell Mol Immunol 7:164–174.
(2009) Gamma/delta T cells are the predominant source of interleukin-17 in affected joints in collagen-induced arthritis, but not in rheumatoid arthritis. Arthritis Rheum 60:2294–2303.
(2014) gammadeltaT Cell-derived interleukin-17A via an interleukin-1beta-dependent mechanism mediates cardiac injury and fibrosis in hypertension. Hypertension 64:305–314.
(2013) Induction of pathogenic TH17 cells by inducible salt-sensing kinase SGK1. Nature 496:513–517.
(2013) Sodium chloride drives autoimmune disease by the induction of pathogenic TH17 cells. Nature 496:518–522.
(2014) Innate IL-17A-producing leukocytes promote acute kidney injury via inflammasome and Toll-like receptor activation. Am J Pathol 184:1411–1418.
(2011) Inhibition of IL-17A in atherosclerosis. Atherosclerosis 215:471–474.
(2011) Role of interleukin 17 in inflammation, atherosclerosis, and vascular function in apolipoprotein e-deficient mice. Arterioscler Thromb Vasc Biol 31:1565–1572.
(2015) Lymphocyte adaptor protein LNK deficiency exacerbates hypertension and end-organ inflammation. J Clin Invest 125:1189–1202.
(2010) Correlation analysis of intracellular and secreted cytokines via the generalized integrated mean fluorescence intensity. Cytometry A 77:873–880.
(1999) TGF-beta and fibrosis. Microbes Infect 1:1349–1365.
(2015) Elevated serum level of interleukin 17 in a population with prehypertension. J Clin Hypertens (Greenwich) 17:770–774.
(2014) In search of the T cell involved in hypertension and target organ damage. Hypertension 64:224–226.
(2009) The IL-23/Th17 axis contributes to renal injury in experimental glomerulonephritis. J Am Soc Nephrol 20:969–979.
(2011) Psoriasis and hypertension severity: results from a case-control study. PLoS One 6:e18227.
(2015) Cardiovascular risk profile at the onset of psoriatic arthritis: a population-based cohort study. Arthritis Care Res (Hoboken) 67:1015–1021.
(2015) Effect of psoriasis severity on hypertension control: a population-based study in the United Kingdom. JAMA Dermatol 151:161–169.
(2014) Deficiency of the interleukin 17/23 axis accelerates renal injury in mice with deoxycorticosterone acetate+angiotensin II-induced hypertension. Hypertension 63:565–571.
(2016) Low-dose IL-17 therapy prevents and reverses diabetic nephropathy, metabolic syndrome, and associated organ fibrosis. J Am Soc Nephrol 27:745–765.
(2015) Local IL-17 production exerts a protective role in murine experimental glomerulonephritis. PLoS One 10:e0136238.
Baer, Thomas A. was born 23 April 1959, is male, registered as Republican Party of Florida, residing at 220 Canal Blvd, Ponte Vedra Beach, Florida 32082-3608. Florida voter ID number 107966351. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2013 voter list: Thomas A. Baer, 220 Canal Blvd, Ponte Vedra Beach, FL 32082 Republican Party of Florida.
Baer, Thomas Anthony born 4 July 1946, Florida voter ID number 109166833. See Baer, Tomas A.
BAER, THOMAS EDWARD was born 16 October 1960, is male, registered as Florida Democratic Party, residing at 1650 Oak Berry Cir, Wellington, Florida 33414. Florida voter ID number 114872238. This is the most recent information, from the Florida voter list as of 31 May 2017.
31 March 2014 voter list: THOMAS EDWARD BAER, 11256 MELLOW CT, WEST PALM BEACH, FL 33411 Florida Democratic Party.
BAER, THOMAS F. was born 23 February 1938, is male, registered as Republican Party of Florida, residing at 822 Capri Isles Blvd, Apt 211, Venice, Florida 34292. Florida voter ID number 100381294. This is the most recent information, from the Florida voter list as of 31 May 2012.
Baer, Thomas Frank was born 18 April 1952, is male, registered as Florida Democratic Party, residing at 109 Crawford Rd, New Smyrna Beach, Florida 32169. Florida voter ID number 108530601. The voter lists a mailing address and probably prefers you use it: PO BOX 1810 New Smyrna FL 32170-1810. This is the most recent information, from the Florida voter list as of 30 September 2017.
31 January 2015 voter list: Thomas Frank Baer, 4705 S Atlantic AVE, New Smyrna Beach, FL 32169 Florida Democratic Party.
Baer, Thomas George was born 21 August 1952, is male, registered as No Party Affiliation, residing at 35711 Washington Loop Rd, Lot 0, Punta Gorda, Florida 33982. Florida voter ID number 117416296. This is the most recent information, from the Florida voter list as of 31 October 2017.
28 February 2015 voter list: THOMAS G. BAER, 1915 MARAVILLA AVE, #19, FORT MYERS, FL 33901 No Party Affiliation.
31 May 2013 voter list: THOMAS G. BAER, 625 OLD ENGLEWOOD RD, #3, ENGLEWOOD, FL 34223 No Party Affiliation.
31 May 2012 voter list: THOMAS G. BAER, 2017 SANTA BARBARA BLVD, #4, CAPE CORAL, FL 33991 No Party Affiliation.
Baer, Thomas L. was born 8 April 1947, is male, registered as No Party Affiliation, residing at 8 Asters Ct, Homosassa, Florida 34446. Florida voter ID number 114330081. His telephone number is 1-352-382-7995. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Thomas Paul was born 20 February 1954, is male, registered as No Party Affiliation, residing at 9740 N Cherry Lake Dr, Citrus Springs, Florida 34433. Florida voter ID number 121152328. This is the most recent information, from the Florida voter list as of 31 December 2015.
BAER, THOMAS S. was born 2 December 1960, is male, registered as Republican Party of Florida, residing at 2014 Kilmer Ln, Apopka, Florida 32703. Florida voter ID number 108634881. His telephone number is 1-386-860-6878. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 January 2015 voter list: Thomas S. Baer, 4726 E Michigan St, APT 2, ORLANDO, FL 32812 Republican Party of Florida.
Baer, Timothy Allen was born 3 October 1975, is male, registered as No Party Affiliation, residing at 51 Main St, Horseshoe Beach, Florida 32648. Florida voter ID number 102887965. His telephone number is 1-352-498-0756. The voter lists a mailing address and probably prefers you use it: PO BOX 62 Horseshoe Beach FL 32648-0062. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 December 2015 voter list: Timothy A. Baer, 4207 CONFEDERATE POINT RD, APT 149, Jacksonville, FL 32210 No Party Affiliation.
31 August 2015 voter list: Timothy A. Baer, 4207 Confederate Pt RD, APT 149, Jacksonville, FL 32210 No Party Affiliation.
30 November 2014 voter list: TIMOTHY A. BAER, 2285 MARSH HAWK LN, APT 16301, FLEMING ISLAND, FL 32003 No Party Affiliation.
31 May 2013 voter list: TIMOTHY A. BAER, 3983 THUNDER HEIGHTS LN, MIDDLEBURG, FL 32068 No Party Affiliation.
Baer, Timothy E. was born 27 March 1963, registered as No Party Affiliation, residing at 1704 Nw 6Th Ave, Cape Coral, Florida 33993. Florida voter ID number 119893998. This is the most recent information, from the Florida voter list as of 31 August 2016.
Baer, Timothy James was born 30 March 1959, is male, registered as No Party Affiliation, residing at 512 Cedar Grove Dr, Brandon, Florida 33511. Florida voter ID number 111266107. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2016 voter list: Timothy N. Baer, 512 Cedar Grove Dr, Brandon, FL 33511 No Party Affiliation.
Baer, Tina M. was born 16 March 1966, is female, registered as No Party Affiliation, residing at 512 Cedar Grove Dr, Brandon, Florida 33511. Florida voter ID number 111266106. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAER, TODD was born 29 November 1971, is male, registered as Republican Party of Florida, residing at 326 Marlberry Cir, Jupiter, Florida 33458. Florida voter ID number 112512752. His telephone number is 1-561-436-1333. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 August 2017 voter list: TODD BAER, 8150 MYSTIC HARBOR CIR, BOYNTON BEACH, FL 33436 Republican Party of Florida.
Baer, Todd Alan was born 25 December 1960, is male, registered as Republican Party of Florida, residing at 2663 Wyndsor Oaks Way, Winter Haven, Florida 33884. Florida voter ID number 118514380. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 December 2013 voter list: TODD A. BAER, 411 EAGLE RIDGE DR, DAVENPORT, FL 33837 Republican Party of Florida.
Baer, Tohnya Lee was born 31 October 1948, is female, registered as Florida Democratic Party, residing at 86 Moonstone Ct, Port Orange, Florida 32129. Florida voter ID number 108634372. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Tomas A. was born 4 July 1946, is male, registered as No Party Affiliation, residing at 10323 Nw 9Th Street Cir, Apt 1, Miami, Florida 33172. Florida voter ID number 109166833. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 December 2015 voter list: Thomas Anthony Baer, 10323 NW 9Th Street CIR, APT #1, Miami, FL 33172 No Party Affiliation.
30 September 2014 voter list: Thomas Anthony Baer, 10323 NW 9Th Street Cir, #1, Miami, FL 33172 No Party Affiliation.
Baer, Tonya L. was born 11 January 1963, is female, registered as Republican Party of Florida, residing at 10000 St Georges Rd, Apt 305B, Ormond Beach, Florida 32174. Florida voter ID number 104187090. The voter lists a mailing address and probably prefers you use it: PO BOX 684 Wilmington OH 45177. This is the most recent information, from the Florida voter list as of 30 November 2018.
22 October 2014 voter list: Tonya L. Baer, 1077 Country Club Park, DeLand, FL 32724 Republican Party of Florida.
Baer, Tracey Lynn was born 7 August 1968, is female, registered as Republican Party of Florida, residing at 124 S Morgan St, #1149, Tampa, Florida 33602. Florida voter ID number 111780404. Her telephone number is 1-904-375-1498. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 September 2018 voter list: TRACEY LYNN BAER, 1505 MAJESTIC VIEW LN, FLEMING ISLAND, FL 32003 Republican Party of Florida.
Baer, Traci Dana was born 7 February 1971, is female, registered as Florida Democratic Party, residing at 11305 Brown Bear Ln, Port Richey, Florida 34668. Florida voter ID number 123613178. This is the most recent information, from the Florida voter list as of 31 March 2019.
28 February 2018 voter list: Traci Dana Baer, 6738 MARINA POINTE VILLAGE CT, APT 205, TAMPA, FL 33635-9003 Florida Democratic Party.
31 May 2017 voter list: TRACI DANA BAER, 6465 142ND AVE N, #O104, CLEARWATER, FL 33760 Florida Democratic Party.
BAER, TRACY L. was born 9 September 1964, is female, registered as Republican Party of Florida, residing at 835 16Th Ave Ne, St Petersburg, Florida 33704. Florida voter ID number 107281917. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAER, TRECA LEE was born 19 July 1934, is female, registered as Republican Party of Florida, residing at 147 Nelly St, Freeport, Florida 32439. Florida voter ID number 123635638. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 June 2017 voter list: TRECA LEE BAER, 778 SCENIC GULF DR, #B413, MIRAMAR BEACH, FL 32550 Republican Party of Florida.
Baer, Trevor Lee was born 21 July 1993, is male, registered as Republican Party of Florida, residing at 1528 Alcazar Ave, Fort Myers, Florida 33901. Florida voter ID number 120163304. His telephone number is 1-239-789-5880. This is the most recent information, from the Florida voter list as of 31 March 2019.
29 February 2016 voter list: TREVOR LEE BAER, 1909 W NELSON CIR, TALLAHASSEE, FL 32303 Republican Party of Florida.
22 October 2014 voter list: TREVOR LEE BAER, 418 W COLLEGE AVE, TALLAHASSEE, FL 32301 Republican Party of Florida.
BAER, TROY was born 6 December 1961, is female, registered as Republican Party of Florida, residing at 8380 Fresh Creek, West Palm Beach, Florida 33411. Florida voter ID number 111753034. The voter lists a mailing address and probably prefers you use it: 1341 N RAILROAD AVE STATEN ISLAND NY 10306. This is the most recent information, from the Florida voter list as of 31 December 2017.
Baer, Ty Christopher was born 6 August 1999, is male, registered as No Party Affiliation, residing at 15 Louisiana Ave, St. Cloud, Florida 34769. Florida voter ID number 124806734. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Tyler E. was born 11 March 1990, is male, registered as No Party Affiliation, residing at 12090404 Beaty Towers E, Gainesville, Florida 32612. Florida voter ID number 115151137. This is the most recent information, from the Florida voter list as of 31 May 2017.
Baer, Verena M G was born 22 May 1942, is female, registered as Republican Party of Florida, residing at 5854 Gasparilla Rd, #M18, Boca Grande, Florida 33921. Florida voter ID number 102628160. Her telephone number is 964-0889 (no area code listed). The voter lists a mailing address and probably prefers you use it: PO BOX 312 Boca Grande FL 33921-0312. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2012 voter list: Michelle G. Baer, 5854 Gasparilla Rd, #M18, Boca Grande, FL 33921 Republican Party of Florida.
Baer, Vicki Lynn was born 1 October 1973, is female, registered as Republican Party of Florida, residing at 1301 Ne 516 Ave, Old Town, Florida 32680. Florida voter ID number 103237858. This is the most recent information, from the Florida voter list as of 30 November 2017.
Baer, Victoria Degan born 3 March 1967, Florida voter ID number 103731958. See Pool, Victoria Baer.
Baer, Victor William was born 18 March 1969, is male, registered as Republican Party of Florida, residing at 4230 Lee Hall Pl, Cocoa, Florida 32927. Florida voter ID number 123469256. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Vikki L. was born 7 June 1961, is female, registered as Republican Party of Florida, residing at 932 N Lilac Loop, Saint Johns, Florida 32259-1917. Florida voter ID number 108052548. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2013 voter list: Vikki L. Baer, 932 N Lilac Loop, Saint Johns, FL 32259 Republican Party of Florida.
Baer, Vincent Edward was born 18 April 1969, is male, registered as Republican Party of Florida, residing at 1302 Welcome Dr, Vero Beach, Florida 32966. Florida voter ID number 119446604. The voter lists a mailing address and probably prefers you use it: 2438 Wright Rd Akron OH 44320-2448. This is the most recent information, from the Florida voter list as of 31 December 2018.
BAER, VIRGINIA S. was born 11 September 1943, is female, registered as Florida Democratic Party, residing at 707 S Indian River Dr, Fort Pierce, Florida 34950. Florida voter ID number 108182456. Her telephone number is 1-772-801-5164. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 March 2014 voter list: VIRGINIA S. BAER, 1467 SE SUNSHINE AVE, PT ST LUCIE, FL 34952 Florida Democratic Party.
BAER, WALTER WILLIAM was born 11 March 1945, is male, registered as Republican Party of Florida, residing at 12025 Magazine St, Apt 7104, Orlando, Florida 32828. Florida voter ID number 105016752. His telephone number is 1-407-271-8350. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 November 2017 voter list: WALTER WILLIAM BAER, 8144 SPEARFISH AVE, ORLANDO, FL 32822 Republican Party of Florida.
29 February 2016 voter list: WALTER WILLIAM BAER, 11817 ESTATES CLUB DR, APT 1617, ORLANDO, FL 32825 Republican Party of Florida.
31 May 2012 voter list: Walter W. Baer, 1950 Big Cypress Dr, St. Cloud, FL 34771 Republican Party of Florida.
Baer, Wayne Elwood was born 30 July 1937, is male, registered as Republican Party of Florida, residing at 1564 Castile St, Celebration, Florida 34747. Florida voter ID number 123122194. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Whitney Lynn was born 15 March 1989, is female, registered as Republican Party of Florida, residing at 1640 Bayshore Dr, Terra Ceia, Florida 34250. Florida voter ID number 119219817. Her telephone number is 1-941-932-6051. The voter lists a mailing address and probably prefers you use it: PO BOX 435 Terra Ceia FL 34250-0435. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 September 2018 voter list: Whitney Lynn Baer, 814 COLLEGE LEAF WAY, Ruskin, FL 33570 Republican Party of Florida.
31 January 2016 voter list: Whitney Lynn Baer, 3896 40th Ave W, Bradenton, FL 34205 Republican Party of Florida.
22 October 2014 voter list: Whitney Lynn Baer, 3411 56th Dr E, Bradenton, FL 34203 Republican Party of Florida.
Baer, William B. was born 23 May 1950, is male, registered as Florida Democratic Party, residing at 2525 First St, Apt 1308, Fort Myers, Florida 33901. Florida voter ID number 111710938. His telephone number is 573-6468 (no area code listed). This is the most recent information, from the Florida voter list as of 31 March 2019.
31 January 2018 voter list: William B. Baer, 2525 First St, APT #1308, Fort Myers, FL 33901 Florida Democratic Party.
30 June 2016 voter list: William B. Baer, 2126 SE 6Th Ave, Cape Coral, FL 33990 Florida Democratic Party.
Baer, William Edwin was born 27 September 1956, is male, residing at 608 Butler Blvd, #3, Daytona Beach, Florida 32118. Florida voter ID number 105771233. This is the most recent information, from the Florida voter list as of 31 May 2012.
Baer, William Forbes was born 22 May 1953, is male, registered as No Party Affiliation, residing at 242 Redbud Ln, Palatka, Florida 32177. Florida voter ID number 124269951. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, William Lewis was born 24 August 1959, is male, registered as Florida Democratic Party, residing at 4105 Old Settlement Rd, Merritt Island, Florida 32952. Florida voter ID number 101010650. His telephone number is 1-321-454-3994. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAER, WILLIAM PAUL was born 1 March 1943, is male, registered as Republican Party of Florida, residing at 122 Spyglass Ln, Jupiter, Florida 33477. Florida voter ID number 124740318. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, William Phillip was born 25 October 1949, is male, registered as Republican Party of Florida, residing at 2628 Shiprock Ct, Deltona, Florida 32738. Florida voter ID number 108807078. This is the most recent information, from the Florida voter list as of 31 May 2012.
BAER, WILLIAM R. was born 9 December 1931, is male, registered as Republican Party of Florida, residing at 5931 Nw 26Th St, Ocala, Florida 34482. Florida voter ID number 105597744. His telephone number is 1-352-867-8447. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAER, WILLIAM R. was born 25 December 1952, is male, registered as Florida Democratic Party, residing at 1380 S Michigan Ave, Clearwater, Florida 33756. Florida voter ID number 107276165. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAER, WILLIAM ROBERT was born 15 June 1954, is male, registered as Republican Party of Florida, residing at 3090 Avondale Ave, The Villages, Florida 32163. Florida voter ID number 104758188. His telephone number is 1-407-808-3559. His email address is WRBMLB@EMBARQMAIL.COM. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 January 2014 voter list: William R. Baer, 25923 Merion Cricket Ave, Sorrento, FL 32776 Republican Party of Florida.
Baer, William Robert was born 17 March 1956, is male, registered as Republican Party of Florida, residing at 16020 Shellcracker Rd, Jacksonville, Florida 32226. Florida voter ID number 108068961. His telephone number is 1-904-254-2840. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2013 voter list: William Robert Baer, 711 Paradise Ln, Atlantic Beach, FL 32233 Republican Party of Florida.
BAER, WILMA J. was born 3 February 1936, is female, registered as Florida Democratic Party, residing at 7128 E Us Hwy 90, Lee, Florida 32059. Florida voter ID number 105270229. This is the most recent information, from the Florida voter list as of 30 June 2015.
BAER, WINONA E. was born 13 May 1930, is female, registered as Republican Party of Florida, residing at 3050 Ardenwood Dr, Spring Hill, Florida 34609. Florida voter ID number 104439619. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer, Yona B. was born 15 March 1980, is male, residing at 5102 Belmere Pkwy, Tampa, Florida 33624. Florida voter ID number 114598087. This is the most recent information, from the Florida voter list as of 31 May 2012.
Baer, Zachari Taylor was born 18 February 1997, is male, registered as No Party Affiliation, residing at 1901 Evergreen Dr, Edgewater, Florida 32141. Florida voter ID number 122851342. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baer-Abriam, Deborah Darlene was born 12 November 1957, is female, registered as Florida Democratic Party, residing at 166 Canal Blvd, Ponte Vedra Beach, Florida 32082-3606. Florida voter ID number 108003700. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2013 voter list: Deborah Darlene Baer-Abriam, 166 Canal Blvd, Ponte Vedra Beach, FL 32082 Florida Democratic Party.
BAER-AUSTIN, LINDA CAROL was born 11 August 1950, is female, registered as Florida Democratic Party, residing at 3831 Nottingham Dr, Tarpon Springs, Florida 34688. Florida voter ID number 122337937. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baerde, Nasima was born 11 April 1979, is female, registered as Florida Democratic Party, residing at 2317 Pizarro Ln, Apt 2310, Melbourne, Florida 32940. Florida voter ID number 125092561. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERE, CRISTINA M. was born 16 August 1972, is female, registered as No Party Affiliation, residing at 1852 Se Enfield Ave, Pt St Lucie, Florida 34952. Florida voter ID number 125187378. Her email address is CRISMORA72@HOTMAIL.COM. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 July 2018 voter list: CRISTINA MORA BAERE, 1852 SE ENFIELD AVE, PT ST LUCIE, FL 34952 No Party Affiliation.
Baere, Fernando L. born 13 July 1945, Florida voter ID number 118323820. See De Baere, Fernando Leopoldo.
BAERE, WOLFGANG CHRISTIAN was born 1 August 1969, is male, registered as No Party Affiliation, residing at 1852 Se Enfield Ave, Pt St Lucie, Florida 34952. Florida voter ID number 108200822. His telephone number is 1-561-719-2170. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 September 2016 voter list: WOLFGANG C. BAERE, 1852 SE ENFIELD AVE, PT ST LUCIE, FL 34952 No Party Affiliation.
BAERENKLAU, ALAN H. was born 6 May 1945, is male, registered as Republican Party of Florida, residing at 1633 Kersley Cir, Heathrow, Florida 32746. Florida voter ID number 107706917. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERENKLAU, FRANK A. was born 7 October 1917, is male, registered as Republican Party of Florida, residing at 701 E Camino Real, Apt 4-H, Boca Raton, Florida 33432. Florida voter ID number 112243154. This is the most recent information, from the Florida voter list as of 31 May 2017.
BAERENKLAU, SHARON WILEY was born 25 October 1948, is female, registered as Florida Democratic Party, residing at 1633 Kersley Cir, Heathrow, Florida 32746. Florida voter ID number 107702431. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 January 2016 voter list: SHARON W. BAERENKLAU, 1633 KERSLEY CIR, HEATHROW, FL 32746 Florida Democratic Party.
BAERENRODT, DOROTHY B. was born 11 November 1912, is female, registered as Republican Party of Florida, residing at 5381 Glorious Trl, Brooksville, Florida 34602. Florida voter ID number 106325987. This is the most recent information, from the Florida voter list as of 31 August 2014.
31 May 2013 voter list: Dorothy B. Baerenrodt, 1140 Fernwood DR, Holiday, FL 34690 Republican Party of Florida.
BAERENRODT, MARK EUGENE was born 26 March 1962, is male, registered as Florida Democratic Party, residing at 13617 Breton Ln, Delray Beach, Florida 33446. Florida voter ID number 123357351. His telephone number is 1-603-759-9981. His email address is markbaerenrodt@gmail.com. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 December 2016 voter list: Mark Eugene Baerenrodt, 1401 NE 9Th ST, APT 8, Ft Lauderdale, FL 33304 Florida Democratic Party.
Baerenwald, Robert Oscar was born 16 September 1965, is male, registered as Florida Democratic Party, residing at 936 N Pompeo Ave, Crystal River, Florida 34429. Florida voter ID number 101540023. His telephone number is 1-754-551-6332. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 July 2017 voter list: Robert O. Baerenwald, 640 NE 16Th Ave, Ft Lauderdale, FL 33304-2933 Florida Democratic Party.
31 May 2015 voter list: Robert O. Baerenwald, 640 NE 16Th Ave, Ft Lauderdale, FL 33304 Florida Democratic Party.
BAERG, ANGELA KNIGHT was born 21 February 1966, is female, registered as Republican Party of Florida, residing at 4758 Watermark Ln, Sarasota, Florida 34238. Florida voter ID number 100380985. Her telephone number is 1-941-927-7832. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2012 voter list: ANGELA KNIGHT BERGEMAN, 4758 WATERMARK LN, SARASOTA, FL 34238 Republican Party of Florida.
BAERG, BERDON ALLEN was born 4 June 1964, is male, registered as Republican Party of Florida, residing at 4758 Watermark Ln, Sarasota, Florida 34238. Florida voter ID number 122490234. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, CARMEN D. was born 6 May 1961, is female, registered as Republican Party of Florida, residing at 2564 14Th Ave Sw, Largo, Florida 33770. Florida voter ID number 106849169. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, CORNELIUS was born 9 March 1929, is male, registered as Republican Party of Florida; no residence address is listed. Florida voter ID number 106690538. This is the most recent information, from the Florida voter list as of 31 May 2012.
BAERG, DARLENE ANN was born 7 April 1966, is female, registered as Republican Party of Florida, residing at 9324 118Th Ln, Seminole, Florida 33772. Florida voter ID number 106833503. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, DAVID PHILIP was born 7 February 1996, is male, registered as Republican Party of Florida, residing at 9324 118Th Ln, Seminole, Florida 33772. Florida voter ID number 121322313. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, FRANCES E. was born 25 March 1933, is female, registered as Republican Party of Florida, residing at 13019 88Th Ave, Seminole, Florida 33776. Florida voter ID number 106690539. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2012 voter list: FRANCES E. BAERG, no address listed, FL Republican Party of Florida.
BAERG, JENNA CHRISTINE was born 6 September 1999, is female, registered as Republican Party of Florida, residing at 9324 118Th Ln, Seminole, Florida 33772. Florida voter ID number 125111145. Her telephone number is 1-727-804-6324. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, JOHN C. was born 7 November 1956, is male, registered as Republican Party of Florida, residing at 2564 14Th Ave Sw, Largo, Florida 33770. Florida voter ID number 106768858. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, KATHRYN DIANE was born 28 August 1989, is female, registered as Republican Party of Florida, residing at 2564 14Th Ave Sw, Largo, Florida 33770. Florida voter ID number 115078984. The voter lists a mailing address and probably prefers you use it: 2513 WHITE TAIL LN WHITE OAK PA 15131-2724. This is the most recent information, from the Florida voter list as of 22 October 2014.
BAERG, RONALD D. was born 21 August 1965, is male, registered as Republican Party of Florida, residing at 9324 118Th Ln, Seminole, Florida 33772. Florida voter ID number 106823203. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERG, VANESSA ELIZABETH was born 27 July 1984, is female, registered as No Party Affiliation, residing at 5521 11Th Ave N, St Petersburg, Florida 33710. Florida voter ID number 107271747. Her telephone number is 1-727-488-5496. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2012 voter list: VANESSA ELIZABETH DOYLE, 5521 11TH AVE N, ST PETERSBURG, FL 33710 No Party Affiliation.
Baerga, Aimee Catherine was born 4 December 1985, is female, registered as No Party Affiliation, residing at 408 Nw 6Th St, Apt 117, Ft Lauderdale, Florida 33311. Florida voter ID number 120810559. This is the most recent information, from the Florida voter list as of 30 November 2016.
Baerga, Alexander was born 22 October 1979, is male, registered as No Party Affiliation, residing at 600 Ne 36Th St, #1811, Miami, Florida 33137. Florida voter ID number 110226041. This is the most recent information, from the Florida voter list as of 31 December 2014.
Baerga, Amanda Aviriel was born 4 August 1999, is female, registered as No Party Affiliation, residing at 30825 Midtown Ct, Wesley Chapel, Florida 33545. Florida voter ID number 124952702. This is the most recent information, from the Florida voter list as of 31 March 2019.
28 February 2019 voter list: Amanda Aviriel Baerga, 6350 Ryerson CIR, APT 11, Wesley Chapel, FL 33544 No Party Affiliation.
Baerga, Ana V. was born 23 December 1935, registered as No Party Affiliation, residing at 2730 Nw 14Th St, Apt 1, Miami, Florida 33125. Florida voter ID number 125822285. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERGA, ANNYA LESLIE was born 21 January 1983, is female, registered as Florida Democratic Party, residing at 4832 Royce Dr, Mount Dora, Florida 32757. Florida voter ID number 107700598. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERGA, ARIANA was born 10 March 1985, is female, registered as No Party Affiliation, residing at 4200 Community Dr, West Palm Beach, Florida 33409. Florida voter ID number 122717469. The voter lists a mailing address and probably prefers you use it: 4418 PALUSTRIS CT CHARLOTTE NC 28269. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 August 2017 voter list: ARIANA BAERGA, 4200 COMMUNITY DR, #106, WEST PALM BEACH, FL 33409 No Party Affiliation.
BAERGA, ARNOLD J. was born 30 May 1977, is male, registered as Republican Party of Florida, residing at 633 Buoy Ln, #206, Altamonte Springs, Florida 32714. Florida voter ID number 106276174. The voter lists a mailing address and probably prefers you use it: 430 UNION ST APT 2F HACKENSACK NJ 07601-4424. This is the most recent information, from the Florida voter list as of 30 November 2014.
Baerga, Ashley Cierra was born 15 July 1986, is female, registered as Republican Party of Florida, residing at 125 Sunnyside Dr, #R-5, Clermont, Florida 34711. Florida voter ID number 121579453. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 July 2016 voter list: Ashley Cierra Baerga, 221 Ridgecrest LOOP, Minneola, FL 34715 Republican Party of Florida.
31 May 2016 voter list: Ashley Cierra Baerga, 600 River Birch CT, APT 233, Clermont, FL 34711 Republican Party of Florida.
BAERGA, AURORA D. was born 4 June 1925, is female, registered as Florida Democratic Party, residing at 1822 Nebraska Ave, Palm Harbor, Florida 34683. Florida voter ID number 106937497. This is the most recent information, from the Florida voter list as of 31 May 2012.
Baerga, Basilio was born 22 March 1951, is male, registered as Republican Party of Florida, residing at 5641 Nw 112Th Ave, #101, Doral, Florida 33178. Florida voter ID number 113041416. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baerga, Briana Aviral was born 20 March 1997, is female, registered as Florida Democratic Party, residing at 30825 Midtown Ct, Wesley Chapel, Florida 33545. Florida voter ID number 122741532. This is the most recent information, from the Florida voter list as of 31 March 2019.
28 February 2019 voter list: Briana Aviral Baerga, 6350 Ryerson CIR, APT 11, Wesley Chapel, FL 33544 Florida Democratic Party.
Baerga, Brianka Cristal was born 26 November 1997, is female, registered as Florida Democratic Party, residing at 5701 Middlesex Dr, Tampa, Florida 33615-3716. Florida voter ID number 121499339. This is the most recent information, from the Florida voter list as of 31 March 2019.
BAERGA, CARLOS was born 25 January 1931, is male, registered as Florida Democratic Party, residing at 341 Sterling Lake Dr, Ocoee, Florida 34761. Florida voter ID number 112716710. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baerga, Carlos was born 25 October 1969, is male, registered as No Party Affiliation, residing at 3105 Lacy Leaf Ct, Tampa, Florida 33611-4926. Florida voter ID number 123619386. His telephone number is 1-786-925-7971. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 January 2019 voter list: Carlos Baerga, 5100 S MacDill AVE, #311, Tampa, FL 33611 No Party Affiliation.
31 May 2017 voter list: Carlos Baerga, 1115 NORMANDY TRACE Rd, Tampa, FL 33602-5771 No Party Affiliation.
Baerga, Carlos Xavier was born 7 October 1992, is male, registered as Florida Democratic Party, residing at 1432 Dorado Dr, Apt A, Kissimmee, Florida 34741-7932. Florida voter ID number 122151356. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2017 voter list: Carlos Xavier Baerga, 2211 Pontina Ct, APT L, Kissimmee, FL 34741 Florida Democratic Party.
30 November 2015 voter list: CARLOS XAVIER BAERGA, 1031 MONROE AVE, BROOKSVILLE, FL 34604 Florida Democratic Party.
BAERGA, CARMEN IVETTE was born 22 November 1947, is female, registered as Florida Democratic Party, residing at 118 Fairway Ten Dr, Casselberry, Florida 32707-4823. Florida voter ID number 107828228. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baerga, Carmen L. was born 14 July 1945, is female, registered as Florida Democratic Party, residing at 3208 Cranes Nest Ln, Kissimmee, Florida 34743. Florida voter ID number 106281678. Her telephone number is 1-407-344-2067. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2012 voter list: Carmen Baerga, 3208 Cranes Nest Ln, Kissimmee, FL 34743 Florida Democratic Party.
BAERGA, CECILIA was born 22 November 1937, is female, registered as No Party Affiliation, residing at 6926 Oakmore Ln, Orlando, Florida 32818. Florida voter ID number 119056245. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 July 2014 voter list: CECILIA BAERGA RIOS, 6926 OAKMORE LN, ORLANDO, FL 32818 No Party Affiliation.
BAERGA, CHARLOTTE ATHENA was born 15 March 1955, is female, registered as Florida Democratic Party, residing at 2728 Crowder Loop, Tallahassee, Florida 32303-2388. Florida voter ID number 112352926. This is the most recent information, from the Florida voter list as of 31 October 2016.
Baerga, Christina Grisell born 30 March 1984, Florida voter ID number 116033595. See Baerga, Cristina Grisell.
BAERGA, CHRISTINA M. was born 26 September 1977, is female, registered as No Party Affiliation, residing at 4348 Hazel Ave, #C, Palm Beach Gardens, Florida 33410. Florida voter ID number 111963791. Her telephone number is 1-561-352-1488. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 May 2013 voter list: CHRISTINA M. BAERGA, 3232 MERIDIAN WAY N, B, PALM BEACH GARDENS, FL 33410 No Party Affiliation.
BAERGA, CHRISTOPHER LEE was born 26 January 1976, is male, registered as Republican Party of Florida, residing at 5208 Enclave Dr, Oldsmar, Florida 34677. Florida voter ID number 107150249. His telephone number is 1-727-687-2070. This is the most recent information, from the Florida voter list as of 31 March 2019.
30 September 2017 voter list: CHRISTOPHER LEE BAERGA, 1908 EAGLE TRACE BLVD, PALM HARBOR, FL 34685 Republican Party of Florida.
22 October 2014 voter list: CHRISTOPHER LEE BAERGA, 3139 META CT, LARGO, FL 33771 Republican Party of Florida.
Baerga, Clara was born 18 August 1941, is female, registered as Florida Democratic Party, residing at 919 Otto Villa Pl, Apt 17, Tampa, Florida 33612-3564. Florida voter ID number 116121922. Her telephone number is 1-813-900-8558. This is the most recent information, from the Florida voter list as of 31 March 2019.
31 March 2016 voter list: Clara Baerga, 1420 Marathon Key Dr, APT 101, Tampa, FL 33612-8812 Florida Democratic Party.
31 October 2015 voter list: Clara Baerga, 1410 Marathon Key Dr, APT 102, Tampa, FL 33612-3637 Florida Democratic Party.
31 August 2015 voter list: Clara Baerga, 2004 COLONIAL PARC Dr, APT 101, Tampa, FL 33612-3240 Florida Democratic Party.
31 May 2013 voter list: Clara Baerga, 9054 Hickory Cir, Tampa, FL 33615 Florida Democratic Party.
Baerga, Cristina Grisell was born 30 March 1984, is female, registered as Florida Democratic Party, residing at 621 Lyons Rd, Coconut Creek, Florida 33063. Florida voter ID number 116033595. This is the most recent information, from the Florida voter list as of 30 November 2016.
31 May 2012 voter list: Christina Grisell Baerga, 9698 Arbor Oaks LN, APT 205, BOCA RATON, FL 33428 Florida Democratic Party.
BAERGA, DAHLIA ANIANE was born 11 September 1989, is female, registered as No Party Affiliation, residing at 6046 Westgate Dr, Apt 202, Orlando, Florida 32835. Florida voter ID number 121182942. Her telephone number is 1-646-924-9531. This is the most recent information, from the Florida voter list as of 31 March 2019.
Baerga, Darline born 2 May 1986, Florida voter ID number 102406044. See Turner, Darline.
I have had many problems with my teeth but thanks to Dr. Sawyer I am now pain free. He and his staff are professional and caring. He has helped me so much. I can't imagine getting this work done by anyone else. Thank you.
Dr. Sawyer, along with his assistant, was so careful. I am one day out, and I still have no pain. I have a low tolerance for pain, so I am most satisfied with their care and expertise on performing a root canal. He explained what he was about to do each time he began a new procedure. Thank you.
Dr. Sawyer's team was very efficient, punctual, polite, professional and provided me with pain-free services. Well done.
Staff was very pleasant and friendly and professional. Service was prompt with little to no waiting for appointment time. I would recommend Dr. Sawyer to anyone in need of endodontic care.
Dr. Sawyer and his staff fit me in for a root canal at the last minute because I was in such discomfort. He was professional, sympathetic and gentle. I feel so much better now! I would recommend him highly.
Dr. Sawyer and his staff are very professional. I would not hesitate to have him perform another root canal.
Dr. Sawyer and his staff were very professional, courteous and excellent. I would not hesitate to recommend him.
Excellent painless and professional dental experience. I wouldn't hesitate to recommend Dr. Sawyer to anyone in need of this procedure.
Painless dentistry at its finest!!!!
I recently had my first visit with Dr. Allen Sawyer, who was recommended by both my dentist and periodontist. I was promptly taken care of by his friendly and very capable staff. Dr. Sawyer was professional, friendly and took the time to give me a detailed explanation of further treatment. Very encouraging and comforting.
Went in for a root canal. Dr. Allen is quick, witty, and efficient. It was over before I knew it. No pain at all! Oh, and Angel is awesome too! - Dwight Foreman
Excellent care and service. The staff and Dr. Sawyer treated me with the utmost care. I recommend this office to everyone. The facility was immaculate.
Amazing staff, I was a little nervous about the root canal initially but they all made me feel extremely comfortable. Definitely the best dental experience I’ve ever had.
What a great dental experience! From the courtesy & helpfulness of the front office, X-ray technician, to the obvious concern for patient comfort & timely treatment (I had a root canal & was finished in just over an hour) - couldn’t ask for better care. Thank you!
Efficient staff. Fit me in quickly and were prompt and attentive. Honest evaluation. Solid operation; what you would expect from a professional.
The first impression walking into Dr. Sawyer’s office was great. The office is comfortable and inviting. The office staff was amazing; they were extremely accommodating and professional. Dr. Sawyer was so easy to talk to and understand. He took his time explaining his answers, breaking them down into simple terms that I understood perfectly. He was so awesome that I actually wanted the root canal, although he deemed it unnecessary at the moment. That was one of the best experiences I’ve ever had at a Dr.’s office.
Very professional caring. Great first visit. Felt like family.
Dr. Sawyer and his staff are professional, efficient, and courteous. It starts the minute you walk-in with Angel at the reception desk and continues with Karli during the procedure. They make every effort to explain the procedures to be performed and what to expect during and after. I have had 2 root canal procedures done in the past year without any issues.
Whatever you think about having a root canal is far from what you experience when Dr Allen Sawyer does one for you. I experienced NO pain, not even when he gave me the numbing shot. It was all done in an hour. He is knowledgeable and skilled as is his assistant, Karli. The Best!
Angel is an angel and Dr. Sawyer is extremely sensitive to the feelings and needs of his patients. I recommend and will continue to see him.
Dr. Sawyer is like Kobe Bryant in Endodontics! He's awesome! He took his time to explain the procedure and made me feel comfortable. His staff was very professional and nice. I was truly blessed to be his patient. I had to return because my filling was too high. Without a doubt he saw me and took care of the issue without a problem. I will definitely recommend him for future patients.
It was the best visit I've had. Everyone was friendly and helpful, and it was quick and painless.
Everyone was nice and Dr Sawyer was wonderful.
Great experience, no pain. Dr. Sawyer and his staff are very professional and very helpful. Dr. Sawyer is so gentle and soft spoken, which relieved my anxiety.
Anyone who uses the worn-out phrase “I’d rather have a root canal” has never met Dr. Allen Sawyer. After my experience with Sawyer Endodontics last week, I am pleased to tell you that Dr. Sawyer and his staff allayed my fears, made me feel comfortable and cared for, impressed me with ultra-modern technology, proceeded gently and quietly, were intuitive to my needs and my questions, and made the pain go away in less than an hour. The sure sign came in just a few days when I was able to chew on that tooth, which I had not been able to do for the past six weeks. Besides being a great endodontist, Dr. Allen Sawyer is a righteous gentleman with an engaging chairside manner, and a sincere desire to make life better for every person who comes to him for help.
What an endodontist! What a professional staff! I cannot say enough about the treatment which I had today - 3 root canals, and I fell asleep in the chair. I have never been to a more caring, compassionate, knowledgeable, and considerate team. You can expect and receive the very best from each of them. Dr. Sawyer is the very best!
My recent experience with my root canal was to say the least, a very pleasant one. Dr. Sawyer made sure I was comfortable and in no pain at all times and the staff could not have been more professional and pleasant.
What a team. Can’t say enough about this practice!
As always, the staff was friendly, courteous, and professional. Dr. Sawyer provides excellent care based on experience and latest technology .
I hate going to the dentist but Dr. Sawyer made the experience the best possible! I’m so glad my co-worker recommended him.
I arrived slightly later than my scheduled appointment, yet everyone was very understanding, got me in and out without delay. Friendly staff, and welcoming environment.
Outstanding, professional and painless care!
I can't thank you all enough for the wonderful care you gave me. Dental procedures make me nervous but my root canal was painless and easy. You even diagnosed my abscessed wisdom tooth! Thanks so much!
Dr. Sawyer and the entire staff are amazing! Knowledgeable, personable, empathetic, etc. Everyone there takes exceptional care of you. Dr. Sawyer does great work and I have never had pain during or after a procedure!
This was my first visit to Dr. Sawyer's office. Everyone was very friendly and professional. After several x-rays I was informed that a root canal would not take care of my problem. If in the future I do require a root canal, I most definitely will make an appointment with Dr. Sawyer.
What a pleasant experience with Dr. Sawyer and his staff. Angel is awesome and went above and beyond to assist me with my root canal ordeal.
My experience was very pleasant. Having had a failed root canal 20 years ago, I was a bit apprehensive. From Angel’s greeting to Dr. Sawyer’s treatment, it was quick, relatively painless and very successful. I was well pleased.
My visit to Dr. Sawyer's office and staff was a very good experience. I felt very comfortable and pleased with the whole procedure.
Dr. Sawyer and his staff work very efficiently as a team. They truly are a well oiled machine with up to date equipment. As a team they provide the best possible service with the least emotional impact. I highly recommend Dr. Sawyer's endodontic services.
This is my second trip to Sawyer Endodontics and I am as pleased as I was several years ago with that experience. The staff are friendly, helpful, and very easy to work with. They go out of their way to make your visit welcoming and comfortable.
Dr. Sawyer and his staff were beyond awesome! They made a really scary situation very pleasant. Not only did Dr. Sawyer fix my tooth he also fixed my fear of having a root canal.
Quality work and outstanding personalities.
Dr. Sawyer and his staff were absolutely amazing. They walked me through everything step by step and made sure I was comfortable the whole way through. I would recommend Dr. Sawyers practice!
The military referred me here, so I did not know what to expect since it was my first root canal ever. The staff was friendly and professional. They made every effort to make my visits so comfortable that I did not feel a thing. Thank you, Dr. Allen and staff.
This was my first visit with Dr. Sawyer. The appointment went great. He is very informative and ensured I understood what was going on with my tooth. The staff was great. I enjoyed joking with the ladies during my wait, which was quite short. The establishment is comfortable. The appointment went quite well!
One of the best run practices-of any specialty-that I've ever been to. They were very flexible with scheduling, managing to get me in for a consult and treatment the same day I was referred. The staff is kind, professional, and very knowledgeable. Dr. Sawyer is empathetic, takes time to explain every step of his treatment plan, and highly efficient. I hope I don't have occasion to need an endodontist again, but if I do I can't imagine going anywhere else.
Great experience. Tried conservative management first; when symptoms persisted, the root canal was very easy.
The staff and Dr. Sawyer are professional in every way. I was impressed with the entire process from billing to treatment. I was made to feel comfortable and confident I was getting the very best care.
My experience was positive in every way. I highly recommend Dr. Sawyer and his impressive staff.
I would not hesitate to recommend Dr. Sawyer's practice to anyone. A wonderful experience. I was greeted by Angel and Kimmy, two charming ladies who are as pulchritudinous as they are professional. Angel got me registered on the computer and took my blood pressure as we joked and teased each other. Then Kimmy took me to the "stand up x-ray" machine and patiently and cheerfully explained it to me. It's basically an MRI for teeth, and it costs more than a new house. Finally, I was attended to by Dr. Sawyer. Soft spoken and gentle, he performed some chemical and tactile explorations on my aching tooth. Then he went over the x-rays and MRI with me, and it was determined that a root canal was not necessary. He wrote a prescription for me, and a week later I was good as new!
I was referred to Dr. Sawyer by my regular dentist for a root canal. His staff handled all my insurance forms and answered all my questions accurately and immediately. While on vacation I experienced an emergency. I phoned the after-hours line and Dr. Sawyer returned my call within minutes. The staff scheduled an appointment to resolve the situation on their day off. Dr. Sawyer and his staff were professional and caring. I would gladly recommend Dr. Sawyer to friends and family. Thank you.
Dr. Sawyer was informative when I went for a consult. His staff, along with Dr. Sawyer, were very professional, kind, and caring. When or if I need a root canal, I will return without hesitation.
From the onset of my first call until my departure from my appointment, I was treated with great care, honest opinion, and respect. I would recommend Dr. Sawyer.
I called on a Monday morning and they took my husband the same day for an emergency appointment. Dr. Sawyer had to do a root canal on him and he left without pain. Thank you for seeing him on such short notice.
Well-honed dentist with as painless an experience as one can have. Very impressed. The payment process made it doable.
My first experience with Dr. Sawyer and his staff was nothing less than highly professional & pleasant, from the start with registration to the finish of my root canal work. Everyone was very personable and the state of the art equipment was incredible!
I was referred by my dentist for a root canal on quick notice. The office did a good job, efficiently and professionally, with minimal discomfort. They run on time as well, which was nice to see. Highly recommend this office.
Going to the dentist is never fun. I came in very pessimistic and was very pleasantly surprised. From the time I came in and met the staff, they made me feel very comfortable. Dr. Sawyer was very pleasant and the procedure went well. Thank you for taking care of me.
After working in the Dental field for over thirty-five years I thought I knew what it was all about....wrong! So many new and improved procedures are available to today’s patients. Dr. Sawyer and staff have the most advanced equipment to do an amazing and comfortable treatment. It was a painless experience.
Dr. Sawyer and his staff are very friendly and always professional. They have state-of-the-art equipment. My appointment got messed up and they took me anyway on the day I was there. I was very thankful since I had taken off work. Thank you all!
I want to thank Dr. Sawyer his staff for their personable attention today. I was a little nervous, not knowing about insurance or what would come of the visit. Angel was very helpful and thorough with answering my questions, Jessie was courteous and gentle with my diagnostic tests, and Dr. Sawyer explained the results in terms that were clear. I was in promptly and appreciate the professionalism extended.
Dr. Sawyer & his staff have been a blessing to my husband & me. They helped us to coordinate the root canal procedure with my husband's many other medical needs. Thank you Angel for your extra efforts!
Dr. Sawyer and staff were very professional and friendly. No waiting. Glad I made the appointment. They use the latest technology to explain with visual clarity my dental issue. Impressive.
Dr. Sawyer and the staff of ladies were very polite and helpful. I felt a warm welcome within a few minutes and the actual dental procedure was rather quick. I thought it would've taken longer, but was done with expedience. So it was a very nice Dental office.
Exceptional in every way. Have only positive things to say.
I was on vacation when I had an urgent tooth issue. Dr. Sawyer was able to see me and resolved the pain I was in after careful examination. I got excellent care from both Dr. Sawyer and his entire staff. I was also assured that if further treatment was necessary I could feel free to come back in and they would address the issue immediately. Thank you!
I was very impressed with the staff, Dr. Sawyer, and the technical equipment. I had 2 root canals, and it was the most pain free I have ever had.
I was very happy with the care and service I received yesterday from Dr. Sawyer and the staff. They got me in quickly as I had an urgent issue. They were very nice and welcoming! Dr. Sawyer was very thorough with me and made sure I understood exactly what was going on. I was very pleased with my experience and would highly recommend them!
Doc - Outstanding!! Staff - Outstandinger!!
The staff at Sawyer Endodontics are very helpful, informative, and kind. I trust Dr. Sawyer's opinion as rather than doing a root canal that was suggested by another dentist, he took the time to evaluate the issue and recommended another course of action that was appropriate for my situation. Thank you Dr. Sawyer for your ethics!
Dr. Sawyer and staff were very welcoming and professional. My root canal was painless! Dr. Sawyer is very knowledgeable and gentle as well. I was extremely impressed with the care I received and the level of professionalism. Highly recommend.
I had my first root canal procedure today and I was super nervous. Dr. Sawyer's down to earth and personable demeanor put my mind at ease. He explained the procedure in detail. The procedure was painless. It was actually a relaxing experience. I would highly recommend Dr. Sawyer to my family and friends . He’s the best!
Thank you Dr. Sawyer and staff for such a pleasant experience getting my root canals done. Everyone in the office was so kind and professional. Angel was so helpful with getting the financial part handled and making sure I had my medications, and Dr. Sawyer and his assistant were so gentle and let me know what was happening each step of the way, which made it a very comfortable experience.
Thank you for a pain free root canal. Everyone was so nice and professional I would recommend Dr Sawyer to all my family and friends.
I have spent a lot of time in the dental chair from childhood to now. I'm 63. Had a root canal performed by Dr. Sawyer. I was so impressed by the professionalism and care that was demonstrated to me. Dr. Sawyer and staff took the time to explain the procedure. Loved the high-tech showing me what he would be doing, and it made the procedure one of the best experiences I've had in the dental chair. Staff is excellent! Can't say enough good things!
Dr. Sawyer and his entire staff got me in immediately, were exceptionally pleasant, and all of my dental concerns were explained in terms I fully understood. I was referred to him by the Navy.
Wonderful, positive outcome, responsive scheduling, accommodating, virtually no waiting, polite, caring, great follow-up.
Dr. Sawyer and his staff run one of the most professional, clean, and welcoming doctor's offices I've ever seen. Couldn't have asked for better treatment.
Excellent staff and Dr. Sawyer is wonderful, would highly recommend him.
Prompt, polite, patient, professional. Good doctor and staff.
I waited about 12 years too long to get my root canal- because of fear. Well, after consulting with Dr Sawyer and his staff, I was still apprehensive but ready! Well, I cannot explain the way he and his staff guided me through the entire procedure as well as made sure I didn't feel a thing! Not one stitch of pain! Thanks for everything and a job well done!!
Great staff. Easy check-in. Dr. Sawyer was clear in his explanation of the issue at hand. I'm a return patient and was very much at ease in returning.
Went in for a consult on a possible root canal. The staff was extremely helpful, kind, and informative. Dr. Sawyer was just as great as his staff! He is very conservative on treatment but thorough enough that I felt confident in what he advised. I would recommend this office for anyone with endodontic needs.
I had a root canal procedure corrected by Dr. Sawyer years after the original procedure was done, and he did an excellent job. He is efficient and thorough. I would recommend him.
Was impressed with the online process of filling out required forms, never experienced that but made the visit that much faster! Staff is very knowledgeable in all aspects! Procedure was really organized and I was made aware of each step! Highly recommend.
Dr. Sawyer is very talented, and also very humble. His team are also talented and pay attention to all aspects of the procedure at hand. The scheduling and the insurance process was great. I couldn't be happier!
I was super nervous about my root canal. I am only 16 and I have heard that they hurt. I took the recommended pain medicine after the procedure in fear that the pain was coming. Dr. Sawyer and staff did such a great job being gentle with me that I probably didn't even need that dose. I am grateful for their comforting and kind words as they explained the procedure to my mom and me in detail. I would recommend Sawyer Endodontics to anyone in need of a root canal. Thank you!
Great professional staff. Dr. Sawyer was great.
Very professional experience with Dr. Sawyer and his staff. Dr Sawyer was very thorough in his explanation. Would highly recommend him.
Y'all rock! Figuratively and literally! Everyone is friendly and does an excellent job. I will recommend Dr. Sawyer to everyone. Thanks for the great experience. I actually enjoyed it!
The doctor and staff were friendly and very helpful. I appreciated Dr. Sawyer explaining why this root canal was different than the first one I had and why I needed a second visit. All in all it was a good experience.
Thank you, Dr. Sawyer and team, for the professional and gentle way you handled my recent root canal procedure. From the initial call to your office, when Angel made sure I had my pain under control, to the follow-up call afterwards, I was so impressed with your staff's caring attitude. When I asked about costs, I was provided with a complete list of the costs and an estimate of what insurance might cover. The procedures were done as gently as possible, and the results now, several weeks later, are as if I never had a problem with the offending tooth! Thank you so much!
I had a wonderful experience at Dr. Sawyer's office. The staff was professional, friendly, and helpful. I admired the way Dr. Sawyer took his time to provide me with a detailed explanation of what was happening with my tooth. Although my tooth did not require a procedure at Dr. Sawyer's office, his office will be my first choice if there is ever a need in the future.
I could not have been any more anxious about my first endodontic procedure. Dr. Sawyer and his staff were so comforting and patient with me, afterwards I felt silly for the anxiety I had. The procedure was literally pain free! In and out within an hour. Thank you for such a pleasant experience. Highly recommended!
Dr. Sawyer and staff, you are awesome. This is a place you share with everyone and they will love you for it!
I had a wonderful experience! Both the doctor and assistants were as kind and as gentle as can be. They checked all my teeth and knew exactly what to do to relieve the pain. I highly recommend them.
By far the best root canal experience I've had. Dr. Allen and staff made me feel comfortable and without pain during the procedure.
I was referred to Dr. Allen Sawyer by my family dentist, Dr. Joe Ferara, for a root canal procedure. Needless to say, at 75 years old I was a little uneasy about this procedure. However, after meeting Dr. Sawyer and his staff, I became more at ease. X-rays were taken, and then Dr. Sawyer explained in detail how he would perform the procedure on my aching tooth. The root canal surgery went smoothly and without pain. Dr. Sawyer's pleasant manner kept me calm and at ease during the procedure. I was extremely pleased with Dr. Sawyer and his staff; they were both courteous and professional, and I would highly recommend Sawyer Endodontics to anyone needing specialized dental work.
The staff was really nice and helpful. Everything was smooth from making the appointment to completing the procedure. I would certainly recommend this dentist office to anyone!
I have worked in the dental field for many years, and the treatment at Dr. Sawyer's office was extremely positive and professional from the initial phone call until the check out. The office is beautiful and welcoming! The waiting room is comfortable with the amenities of coffee and TV while you wait (which is not long). The treatment performed by the obviously very knowledgeable, Dr. Sawyer, includes the latest technology and the patient chairs are incredibly comfortable! I highly recommend Dr. Sawyer and his staff!
I was in the most severe pain I'd ever experienced; until … I went to initially see Dr. Thomas Bailey of Bailey Family Dentistry and after a great experience and diagnosis with their team he referred me to Dr. Allen Sawyer, D.M.D. for a root canal. I was able to secure an appointment the next day and was immediately prescribed medication for the pain until the appointment. Their staff was very friendly, professional and empathetic to my obvious pain. I was able to fill out all the forms online the day before, which was a nice convenience and saved time. Upon my arrival for the appointment I reviewed some consent forms and electronically signed and was taken in for x-rays. I was impressed with their 3D x-ray machine which shows you all angles to review and assess problems. The root canal operation went extremely smooth – no pain at all – and was finished within 20 minutes or so. I was even able to play tennis that night! Extremely impressed with Dr. Sawyer and his staff and would highly recommend to anyone.
I went to my regular dentist for a painful tooth and was told I needed a root canal. Having had one before, I was not looking forward to the experience. Dr. Molly Burns highly recommended Dr. Sawyer at Sawyer Endodontics in Covington and actually set up the appointment for the same day. Dr. Sawyer's staff were welcoming and kind. I was quite impressed with how modern the office and the equipment are. This office visit was the first stress-free dental experience I have ever had. Everything that was done and performed was the best. You and your office staff are extremely nice and professional. I highly recommend Sawyer Endodontics to anyone anywhere needing specialized dental work such as a root canal.
Thanks to you and your staff for your kindness and the detailed attention given in my root canal. Your staff was so helpful, welcoming, and caring. I will definitely be recommending your office to others. You and your staff are a great team. Thanks for making this experience a great one.
I went to my regular dentist for dental work; however, he told me I needed some specialized work that he was unable to perform and highly recommended Dr. Sawyer at Sawyer Endodontics in Covington. He made an appointment for me to see him and his staff to have my procedures done. Thank you Jesus. Dr. Sawyer, you and your staff are the greatest. Everything that was done and performed was the best. You and your office staff are scholars, ladies and gentlemen of the highest order, people of character. I highly recommend Sawyer Endodontics to anyone anywhere needing specialized dental work such as a root canal. You won't be dissatisfied.
Thank you Dr. Sawyer and your wonderful staff. You were very welcoming and made this experience for my daughter very positive. I appreciate the thorough explanation of your findings. Your staff was very helpful and understanding. Thank you so much for the great experience.
So glad I found Dr. Sawyer. He explained exactly what was wrong with my tooth and eased my mind on the steps to take to fix it. I was not eager to have a root canal but he and his staff reassured me that my comfort was their main concern. I highly recommend Dr. Sawyer and his staff for your endodontics needs.
Thank you Dr. Sawyer and staff for getting me in quickly and putting me on the path to relief.
Dr. Sawyer, thank you for seeing me so quickly and being prudent and professional in arriving at a diagnosis. You did not rush in with a quick solution, and I appreciated your good judgment as well as your kindness.
This was my first visit. Dr. Sawyer and his staff were very welcoming. They made my visit very relaxed and comfortable. I would highly recommend them as a provider.
This was my first visit. Dr. Sawyer and his staff were very professional. They did a great job of explaining the services that I needed and my options, how much my insurance would pay, and how much I needed to pay. In addition, Dr. Sawyer did an excellent job. He was very thorough and explained each step prior to doing it. I highly recommend this provider.
Very competent and friendly staff. They helped me save lots by managing my insurance policy. Dr Sawyer's work is exceptional. His office uses high-tech equipment like I've never seen before.
This is a super doctor and staff. I literally walked across the street from my dentist office on referral for an emergency root canal. They could not perform the root canal that day, but scheduled the procedure early the next day. Dr. Sawyer made certain that I would be able to get through the day and night with as little pain as possible. His staff is friendly and efficient - I could not have asked for a better experience and would highly recommend Sawyer Endodontics. Thank you for a wonderful experience!
As far as root canals go, the two I had were a 10 out of 10. The first time I ever fell asleep in a dentist's chair!
The entire staff was exceptional from the moment I walked through the front door, to the Doctor's chair, and back out of the front door. They were all very personable, and extremely knowledgeable. I would definitely feel very comfortable, and in good hands if I ever need to return to Dr. Sawyer's office.
From entrance to exit, the best dental experience ever. Would recommend to anyone and everyone needing their services. Wish all dental procedures were as painless. Thanks to all the staff and Dr. Allen for their friendly and delightful care.
I was very impressed with my visit. Dr. Sawyer and his staff were professional and courteous! Overall it was a pleasant experience!
Was referred to Dr. Sawyer for a consult. His office expedited my evaluation and was able to confirm my diagnosis using the latest 3D rendering equipment that even I could see clearly and understand. Great experience.
I have had three root canals; this was the best experience ever. Everyone was so friendly and professional. If I have to have another, I will definitely use Dr. Sawyer!
My visit to Dr. Sawyer was totally a pleasant and professional experience, from the front office staff all the way to the doctor. Thank you to all.
Very professional experience. Angel was kind and helpful. The office setting is relaxed and friendly. Thank you for your services!
Front of the office is organized and caring. Third and fourth hands in the procedure were fast, accurate and effective. Dr. Sawyer is the best endodontist who has done a root canal for me and I have had three.
Many thanks to Dr. Sawyer and his wonderful staff for making my first root canal a pleasant experience.
This office is complete. From the front desk to the Dentist himself, my well-being was their first concern. A breath of fresh air. Many thanks for your kindness.
The staff and doctors are so amazing! Nothing but honesty and what's truly best for the patient. I would highly recommend this office to anyone!
I have had some pretty horrifying dental experiences in the past, so I am always terrified to go to the dentist. I can honestly say that Dr. Sawyer is the most gentle dentist to ever treat me. I hope I never have another root canal, but if I do, I can assure you I will return to Dr. Sawyer.
I just want to thank the friendly staff and Dr. Sawyer for making me feel at ease while having my root canal. Dr. Sawyer was recommended by my regular dentist because he knew that I needed someone that was gentle and made me feel relaxed. Thanks again.
What a great group of people. Angel took care of all the insurance questions and did all the work to find out what insurance would cover. Angel also went above and beyond to help me with a personal request. Thanks Angel! Dr. Sawyer was great in letting me know what to expect after my root canal and made the experience incredible! Would recommend them to everyone.
As a health professional, I am acutely aware of what it takes to provide the highest level of patient care and excellent service, and I feel Dr. Sawyer and his staff provided exactly that. My highest recommendation goes to Dr. Sawyer and his team of caring and professional staff.
I was really pleased with this dentist and his staff. Dr. Sawyer was really informative during my root canal procedure and he did an excellent job. Angel, the receptionist, and Jessie, his assistant, were really caring and professional. They kept me informed with all the information I needed. Thanks to you all for a job well done!
When the Dentist told me I may lose a tooth I was horrified! I already had a root canal on the tooth and it was infected again! She sent me to Dr Sawyer and it was saved! He made the procedure painless and pleasant! I will use him again, because I refuse to lose a tooth!
I would like to thank Dr. Sawyer and his amazing staff. They combine professionalism with a genuine concern for the patient. I will and have recommended Dr. Sawyer to both my family and friends.
The thought of having endodontic work, specifically a root canal terrified me. The staff were very accommodating, helpful with insurance issues, and welcoming. Dr. Sawyer made the procedure painless and minimized any possible discomfort.
WOW. Where do I start? From my first phone call with Angel, she was super nice, answered all my questions, and got me an appointment that afternoon. Then meeting with Jessie was a pleasure. Dr. Sawyer explained every detail and was right on point. This was the greatest dental experience I have ever had. Professional and very friendly group.
Awesome treatment. Solved my problem. Thank you!
I literally hate to go to the dentist, but if they were all like Dr. Sawyer it would be a breeze. He took his time and explained everything to me. He was very gentle and patient. His staff was also very helpful and everyone knew exactly what they were doing. It went very smoothly.
Over my 25 year military career I have been to many many dentists just to keep up with my records, checkups, and so on. Dr. Sawyer is one of my Top 2 dentists I have ever seen. Personally, I hate going to the dentist, but his skill was above reproach and I felt nothing during my root canal. Thanks Doc!!!
One of the best dentist visits I have ever had. The staff and Dr. Sawyer are very nice and so professional.
Growing up hearing stories about how terrible it was to have a root canal, I went to Dr. Sawyer with a great deal of concern. However, I was assured that the current technology coupled with Dr. Sawyer's professional talents would prove that my concerns were unfounded. This, in fact, was the case. The procedure was carefully explained to me and was performed without discomfort or residual effects. Thanks to Dr. Sawyer and his caring and able staff!
Dr. Sawyer, as you know I am not a big fan of drilling in my mouth but if you have to have this work done, you are the best! The competence and professionalism, as well as that of your staff far exceeded my expectations and I would highly recommend your services to ANYONE!!!
Dr. Sawyer and his staff, Angel, Tiffany, Jessie, and the rest, are by far the most welcoming and professional team. I hope I will never have to get another root canal, but if I do, it will for sure be with them.
My experience with Dr Sawyer over the past month and a half has been excellent. His staff is thorough, timely, and professional. Dr Sawyer is gentle and great at keeping you informed about your procedure.
Dr. Sawyer and Staff. Thank you very much for making my root canal a pleasant and successful experience. I experienced very little discomfort afterward and no complications. I would certainly recommend you to anyone needing this procedure. Enjoyed meeting each staff member, too.
Dr. Sawyer and his staff are the best. The staff was very friendly, polite, and easy to work with. Dr. Sawyer took the time to completely explain each step of the procedure, did not rush the appointment, and actually dealt with me on a personal level rather than treating me as if I was a number. Would highly recommend Dr. Sawyer and his staff to anyone in need of an endodontist.
What a great experience. Excellent staff, and Dr. Sawyer was amazing. Thank you for taking me in so quickly and taking good care of me.
I Love everyone at Dr. Sawyer's office! They are ALL so sweet & make you feel so comfortable. Dr Sawyer was very gentle and this was the first time that I have had injections in my mouth that didn't cause canker sores! Angel was so sweet...I had to bring my 7 year old with me and I felt totally comfortable leaving him in the waiting area with Angel. Thanks for everything!!!
Dr Sawyer and Staff - Thanks so much for being so exceedingly friendly, so very professional, competent and an expert in your field of endodontics. My root canal was painless, and while here on military orders, I would both recommend and return to Covington if I ever need a future root canal, from anywhere in the US. Fantastic experience and highly recommended to my colleagues at the base. God Bless you all!
Thank you to Dr. Sawyer & staff. Dr. Sawyer, you are a miracle worker! My root canal was truly PAINLESS!! Even the injection site was painless. I also appreciate your compassion & gentle care during my procedure. You and your staff made me feel like family. God bless all of you. Thank you, again!
Dr. Sawyer was referred to me by my general dentist. I have never had a root canal before, and it was a painless and comfortable procedure. His staff was very friendly and took care of the insurance. They were great, but I hope I do not have to see them too often.
Doc Sawyer and his staff are the best!!! They go the extra mile, also make YOU feel like family. Thanks for everything y'all. God's Blessings, Frank AKA "BIGFINGERS"
Dr. Sawyer, I want to thank you and your staff for your high quality, professional care. Your staff was very helpful and courteous. I appreciated your explanation of the entire procedure, expected outcome, and patient answers to all my questions. The 3D imagery was extremely helpful to me since I am a very visual person. Throughout the entire procedure you made sure I was not experiencing discomfort or pain. I feel I received a high level of care in a friendly, comfortable environment. I have recommended your practice to several of my friends.
Dr. Allen is an excellent endodontist. He made me feel so comfortable, and I felt no pain. I feel he is an expert in his field.
Dr. Sawyer is the best endodontist in Louisiana!
My experience at Sawyer Endodontics was GREAT! Dr. Sawyer, Angel, Crystal & Kari made me feel at ease as soon as I walked in the door. The best part was a COMPLETELY PAINLESS root canal! I would recommend them highly!
I was very pleased with my experience with Sawyer Endo. Everyone seemed to have genuine concern for my issue and helped make me comfortable. I would recommend them to anyone looking.
Our family member and friend, Kevin Crosser, was involved in a horrible accident on the night of August 20, 2009. This blog is to chronicle Kevin's journey that has begun since.
Trepidation, anxiety and excitement. That's what fills our thoughts tonight as we go to bed. Tomorrow (Tuesday) morning we have an appointment to see Kevin's pulmonologist when she is supposed to remove his trach. Tonight he went to bed with it and tomorrow he will not have it barring any unforeseen circumstances.
I suppose these are the kind of thoughts that anyone has whenever something you have waited for has finally come. In fact, I remember the feeling distinctly as we found out each time that things were in order for us to leave for Italy. I remember the very first time it took one year and ten months for us to raise enough monthly support and get our legal permission in order to leave for Italy. I remember having mixed emotions of being excited for the next leg of the journey, but wondering if we were really ready for it. God was always there even if we felt unprepared. We were heading into the unknown with nothing more than faith that God would do what he had said in his word he would do.
Kevin has had his trach for exactly one year and nine months and three days. We have had to learn how to take care of his trach, knowing when to use the suction catheters and when not to, how to clean the inner canula (twice daily) and care for his skin around it all. We have become experts in something we never cared to know. And now it's changing.
We are excited for sure though. Cautiously excited. Having the trach out means we hear more of his natural sounds. The past several days that Kevin has been capped we have heard his natural clearing of his throat more, snoring at night and other sounds that we don't normally hear coming from him. How will this change future speech therapy? There will be more focus on swallowing. He may be able to produce sounds easier or new ones altogether. At some point, we may start feeding him by mouth. He may have less chance of feeling gagged, fewer secretions, less potential for sickness.
All that doesn't take away the hurdles he faces. But we’ve been here before. Standing in the face of the unknown. Tomorrow I hope to write that everything went great, that the procedure was painless and smooth. But we don't know what tomorrow looks like. It's unknown. What we do know is that God will be there for us tomorrow, like he has been for every step of our lives. He is greater than any momentary troubles we face on this earth and there is a much larger message that needs to get out to the world.
Thanks for indulging me as I work through my thoughts before heading to bed. He's sleeping now, but I'll give Kevin a kiss on the forehead for you.
Update #138 on Kevin - Trach Coming Out!
...two days after returning home we had a circuit breaker fire that displaced us for three weeks. Greg gladly opened up his home to the five of us (Matt, Angie, Jacob, Kohl and Kevin!) plus nurses round the clock. The house had to be restored due to smoke and odor damage.
...I had bronchitis and shingles for a few weeks; meanwhile Kevin never seemed to get over whatever he had that took him to the hospital.
...came back home for two weeks and Kevin got bad again. It wasn't as bad as when we took him to the hospital in February, but bad enough to go again. So we took him once more to St. John's ER. They admitted him for a week. They were never fully sure what was going on, maybe pneumonia, but weren't sure. Brought him home on April Fools' Day.
...April was a great month for Kevin at therapy. He was more alert and doing things he hadn't done since December. But the previous few months hadn't been good due to sickness, so his charts looked bad. You can really see from the charts that the reason he wasn't doing well was the sickness. So the outpatient therapy we were taking him to had to discharge him for now.
...May 2, Kevin seemed to be showing similar signs to when we took him to the ER, so we headed that way. They ended up treating him in the ER that day and sending us home with meds for bronchitis. That seemed to do the trick.
...In May, Kevin had an eye infection, but with some antibiotics it's already cleared up.
...Since around the middle of February, we have had approval for twenty-four hour, around-the-clock nursing. However, we haven't had more than maybe one week where we had that fully staffed. There have been many different reasons why; some were nurses getting sick or in car wrecks. But suffice it to say, we still need competent and dependably committed nurses to fill shifts. It means constant training and reworking of our schedules. Most of the empty shifts fall to Angie, which is incredibly hard at times.
...of course during all this, we have continued therapy at home. Now that the weather is nice we can take Kevin outside on the deck which was graciously built by United Way volunteers and some of Kevin's friends. It and the other developments they made have been a great benefit going into the outdoor season.
...Kevin's trach has been capped throughout the day. On our last visit with the pulmonologist (lung doctor) she wanted us to be more aggressive with capping him throughout the day. We had slowly been working him back up to where he was before he started getting sick around the beginning of the year. She said let's aggressively add time each day that he does well. Because she believed he could do it, and so did we.
...This week we have been adding more and more time every day working toward a full twenty-four hour period of capping. His pulmonologist said if he got to twenty-four hours then we could take the trach out. He just did that! In fact, he has had it on for more than twenty-four hours now and is still doing great. This morning Angie heard him naturally snoring, because his airway is totally through his nose and mouth when capped.
...We called the doctor and left a message. She called back soon and said let's take the trach out! We can either do it with a nurse or bring him in on Monday. We are discussing it and determining which way is best. Either way, his trach is coming out!
...We'll note something, for sure on twitter, when it comes out. You can read the tweets at http://twitter.com/pray4kevin.
...Remember you can fill out a letter online and send it to Kevin at http://bit.ly/note2kevin. It will be printed off and read to Kevin.
Thanks for your thoughts and prayers. This is a big step for Kevin's recovery. We give God all the glory for getting Kevin to this point.
Update #137 on Kevin - Back at Home!
We were able to bring Kevin home last night. He still needs to rest and continue to recuperate but he's doing so much better. He also got some redness and a slight sore while at the hospital so we are taking great pains to heal that through application of medication and consistent turns every couple of hours. At this point he can't stay in his wheelchair for more than a couple hours so that he isn't stressing the already tender skin.
Our home health care agency has been trying to staff the house with 24 hour nursing. We are hoping that this can continue for the foreseeable future since it has been difficult to give the best of care with only 8 hours of nursing. We won't know about approval for how much nursing we will get for a week or so, but right now they are trying to staff us with as close to 24 hours as possible. For instance, tonight from 10PM - 8AM we don't have anyone and tomorrow from 4PM - 8PM we don't either, but the nursing agency is doing their very best to fill all those slots.
Part of the process, for whatever amount of nursing we get, is having new nurses. That means lots of training and teaching for how things work at this house and what Kevin needs. Also, right now everything Kevin is getting (medicine, food, care) is spread out over the entire 24 hours to ease him back into things. That means the few breaks we worked into his schedule are gone. Every two hours he is turned (which takes about fifteen to twenty minutes), food is every four hours, breathing treatments are every four hours, eye ointment is every two hours, etc. Little rest in between, and it makes a big difference that we have more nursing right now. We pray it will continue.
If you want to send a message to Kevin, I took an idea from the hospital and added it to his website (http://www.prayforkevin.com/). All you have to do is click this link to send a note to Kevin http://bit.ly/note2kevin. We'll print it off and read it to him. This way Kevin can hear from you even if you can't come by and see him. And if you would like to come by and see Kevin, please try. We know he would love it.
Thanks for all the love and messages at the hospital. Outside of all the emails, facebook messages, and comments we had over thirty St John's "Well Wishes" emails hanging on his wall before we left! It meant so much to us and him.
Thanks so much for your prayers during this difficult time during Kevin's already difficult recovery. Your prayers are heard.
So, here I sit in Kevin's room listening to Jazz and waiting for the doctor to come in with news that we can take Kevin home today. It's 11:15 AM and she has usually come in around 9:30 AM, therefore anticipation is building.
Kevin has remained great on room air for two days. He's been off antibiotics for a couple days. All his tests have come back with good results. He does have a small sore that is right on top of a scar from one of his old ones. I have the nurses doing "extreme" side turns so that he does not put any more pressure or moisture on the spot. They've pulled unnecessary catheters from Kevin and he's doing well and regulating things on his own. All that's left is a picc line which they will remove towards the last minute.
As far as the pressure spot goes, I will argue that for fourteen months he has had none at home. When he first came home, there were two pressure sores that were healed within about two weeks. At home we only have one "patient" to monitor. So, even if he goes home with it, the hope is that between good care, the circulating air mattress and meds he will bounce back from that too.
The pulmonologist came in earlier, commented on my Coltrane playing and stated that he didn't see any reason why Kevin's main doctor in this unit wouldn't let him go home today.
Angie stayed at the house last night and has cleaned like a banshee in order for it to be ready if we bring him home today. I stayed with him at the hospital and helped the nurses get him far on his side for pressure relief.
We have thirty well-wishing emails that you all have sent and, after being read to Kevin, they are hanging on the wall in his room. When I read him some yesterday, he opened his eyes bigger, turned towards me and listened. Thanks for encouraging Kevin and us. Many staff have commented that they didn't even know they had this on the St John's website and they thought it was so cool that we have them hanging in his room.
As soon as we get the discharge info, we can get it to our home health care agency, Maxim, and they can request approval for nursing. Because he needs round the clock care we are requesting twenty-four hours per day nursing. Insurance has only paid for eight hours per day so far, but then again, many of you are praying. Our God is bigger than any insurance company's policy!
It's 11:45 AM and still no doc. The Jazz relaxes and calms me. A couple other nurses came in and loved the Ella Fitzgerald and Nina Simone that was playing. Now, hopefully we can listen at home tonight. Now one of my favorites, the Girl from Ipanema (Stan Getz version).
If you still want to send an email message to Kevin, just go to http://www.sjmc.org/ and click "contact a patient" on the right side, the form will pop up. Do it soon, in case we get to go home this afternoon.
Update #135 on Kevin - Out of ICU!
So, more good stuff to report.
After waiting all day, Kevin finally got his MRI's late last night. The hospital is really full here, so he kept getting bumped for more urgent cases.
Then around midnight, Kevin got moved to a new room out of ICU. He is on the same floor in room 845. It's a private room with great nurses and aides so far. Today they pulled him off oxygen because he was doing so well and has been on humidified room air ever since.
I spent the night here with him (the nurses encourage it in this wing) and Angie went home to sleep. Tonight we are tagging out. I'll go home and Angie will stay in the room.
When Angie came up today she brought Kevin's wheelchair. This afternoon the nurses put Kevin in his chair. We wanted to start it out slow, so it was only for 2 1/2 hours but Kevin did great.
We are making arrangements with home health care because they are still looking at Kevin going home later this week, probably Friday. We hope to have more nursing in place so please pray for more than eight hours to be approved.
Feel free to keep sending messages to Kevin through the St John's website. We have read every one to him and hung them on the wall in his room. The link is http://www.sjmc.org/general.asp?id=322&siteuse=11.
Okay, much to update. Good stuff too.
Kevin has been off any breathing machines for over twenty-four hours. He's on flowby or wallflow air. It's basically just room air with adjustable oxygen (right now he's on 50% oxygen).
They stopped irrigating his bladder about 30 hours or so ago and he has had very little blood showing up, just occasional spots. That should continue to clear up over time.
He's only got one antibiotic left out of the three he was on for the pneumonia/infection. It is scheduled to end on Wednesday.
Gastro doctors scheduled a peg tube replacement this morning and it was done before we got here. We were pleasantly surprised when we got here and found out about it. Kevin's Critical Care doctor was surprised that the gastro docs could fit it in so quickly and do the procedure in Kevin's room.
Kevin has MRI's scheduled for tomorrow (which again were ordered by his neurologist before Kevin came to the ER). These MRI's are for the regular recovery process and are of his brain and brain stem specifically.
The CC Doctor told us that Kevin has transfer orders ready for him to be moved into a step down unit on the other side of the hospital. However, there has been a waiting list, so we aren't sure when or if he will get moved there before being discharged later in the week.
Now, we are actively looking at post discharge stuff like reopening his home care nursing, alerting his doctors, etc… We are going to be requesting more daily nursing hours from insurance. In fact, we were already looking to do that before this sudden experience and had to put it off. Kevin requires more care than the eight hours of nursing he has been getting. Please pray for this need.
Kevin will also be getting a short break from physical, occupational and speech therapy. We haven't yet decided when in March he will start back up, but we will also be discussing this with his doctors as we make the decision.
Prior to this hospitalization, Kevin had continued to do new things in therapy and his therapists have been so great and creative as they stimulate and improve Kevin's functions. His body control continues to get better. He has stood assisted by therapists, and increased neck control to where he once held his neck up for as long as 17 minutes. Back in June, his record for neck control was only a few minutes. He still needs to do this more consistently, but the increase is exciting. In August, we took Kevin for a swallow study which tracked different thicknesses of liquids as an X-ray showed which of his muscles were/weren't working. This study gave Kevin's speech therapist the information needed to know which muscles in the back of his neck needed to be stimulated to improve swallowing. She has wanted to schedule a new swallow study so that we have a benchmark for comparison.
This ICU stay may give Kevin a little bit of a setback, but as you can probably tell (or already know) Kevin is a fighter. Two steps forward, one step back is still progress.
Thanks for all the notes that were sent through the hospital website, we read them to him today. The ones from February 9 & 10 just got here. Feel free to keep sending them and we'll keep reading them to him. The link is http://www.sjmc.org/general.asp?id=322&siteuse=11 and you can fill out the form. There are fifteen of them taped to Kevin's wall.
Just finished meeting with Kevin's Critical Care Doctor. Both of Kevin's Critical Care Doctors have been great. We have appreciated their patience and teaching as we went through a difficult situation. We have been able to relax due to the entire staff in the CC unit.
His doctor said that Kevin is doing well and definitely making progress. He is not septic anymore, and now we are just basically finishing up the treatment for his pneumonia. Earlier in the week he had told us about Kevin's X-rays which showed a lot of fluid in his left lung. Today's X-rays were great. They are absolutely clear. It shows too, because Kevin has not been coughing nearly as much the last few days.
Kevin's progress off the CPAP machine is going well too. With the low settings that I told you about in the previous update (8 Pressure and 30% O2), he now just needs to tolerate this well. What that means in prayer terms is that he is breathing around 30-37 times per minute now, and he needs to slow down to the 20's. His doctor thinks by tomorrow (Sunday) or Monday that Kevin will be able to switch to flowby.
No real concern left on the light amount of blood in his urine, hopefully that continues to resolve itself. The doctor is thinking Kevin will stay in ICU until Monday. Then maybe between Monday and Tuesday, he will be switched to a step down unit that is outside of Critical Care. Please be praying for the doctors that Kevin will have in that department.
We had asked the doctor earlier in the week to take a look at Kevin's peg tube and see what he thought about replacing it. That should be something that can be done Monday or after next week. Also, I mentioned to him that Kevin is overdue on some MRI's that Kevin's neurologist is wanting. The CC Doctor thinks that can be done around Wednesday next week. We started trying to schedule an MRI back in early December so this will be a great help to get it done now too.
Today continues to bring good news for praise. After trying a little last night to wean the pressure support (I inadvertently said previously they were reducing the PEEP setting, but I meant the pressure support), Kevin has made great strides today. They reduced the pressure support settings on the CPAP machine from 14 to 8.
They've also lowered the oxygen amount to 30%. Once Kevin is tolerating these levels well, they can switch him to flowby getting him closer to regular breathing.
This morning also brought the news that Kevin does not have Cdiff. He is still having a little blood in his urine, but overall that has improved tremendously since they replaced his catheter and started irrigating his bladder.
We talked to his doctor about changing his feeding tube. If you'll remember, we went in to change it back in September and the doctor said it just needed to be cleaned out. It has deteriorated more since then and we hoped they might be able to change it. His critical care doctor didn't think this would be a problem, once he got stabilized.
Taylor, Kevin's daughter, arrived from Midwest City yesterday afternoon. We spent the evening together as a family and went back to visit Kevin once the visiting hours started again. Today, she went up to the hospital in order to give Angie and I a break. Thanks Tay!
Thanks so much for your prayers on behalf of Kevin and our family! It means so much.
Thanks for reading and praying!!!!
Kevin is continuing to improve. We spoke with his doctor this afternoon and feel really good about Kevin's progress. He has been on CPAP settings on the vent for about 33 hours now with no problems. An hour or so ago, the doctor started lowering the pressure from 14 to 12 (PEEP) on the vent. After a couple hours, if he seems to tolerate it, they'll lower it again. They'll keep doing that and when he is tolerating an 8, they can switch him over to flowby.
For the past couple days, Kevin has been getting food through his feeding tube. It is the same formula that he gets at home, that is jevity 1.2.
For the infection in his body and pneumonia, Kevin has been getting three different antibiotics: vancomycin, piperacillin and another more generic antibiotic. While these are helping with his infection, there is a side effect too. There is potential for C-Diff in his colon. He had it at Saint Francis and Meadowbrook a year and a half ago, which makes him more likely to get it again. They tested him for it, and we should find out tomorrow.
The doctors' focus had shifted onto the blood in Kevin's urine. Last night they took three steps towards diagnosing it. First they changed his catheter. Next they tried irrigating his bladder. Then they took him for a CT scan. When they changed the catheter, they found that it had been put in incorrectly when he arrived in the ER. The urologist said just that could have caused the bleeding that we have been seeing since arriving (especially since his blood was too thin). The irrigation has been flushing any blood out of his bladder, and right now the blood is very faint. The CT scan showed nothing of importance, and therefore they would not need to do a scope based on the CT scan.
We are starting to have more routine with Kevin again. Physical therapy has been coming by each day to work out his muscles and keep him from getting rigid. When they bathed him last night they also shaved him. He was starting to get scratchy barbs on his face so it was definitely time for a change.
For about 36 hours Kevin has been completely off of dopamine. That was the medicine used to raise his blood pressure to regular levels. His blood pressure has stabilized and is doing well. Right now, it is 100/65 with a map of 78. Looking good.
Of course, we didn't know what to expect from the weather and we ended up getting 5.5 inches more snow. It seems to be more driveable than the sleet driven snow last week. Glad we weren't 40 minutes to the Northeast, they got 25 inches!!! So, Kevin and I have been fairly isolated today as far as external visitors go. There have been several staff that have stopped by.
The chaplain, a little elderly nun, came by and introduced herself. She told me that she has seen us the past few days and told us what great condition Kevin is in for Home Health Care. She then proceeded to pray for him silently.
Then the Family Care Specialist came by and introduced herself. She said she only started a couple days ago and was there anything that the family needed. I told her that most everything was great. The biggest hangup we have had is the visiting hours. Visiting hours are closed until 6-9AM; then 6-8 PM and finally 10-midnight. We've had a couple come up and not be able to see him during those hours.
Rene, the nurse manager came in and introduced herself too. While I was out (due to visiting hours being off), she brought by an approved power strip for all my electronics and laptop. Earlier I didn't know who brought it in, the current nurse just told me it was there to use.
Dopamine/Blood Pressure - As far as getting him off dopamine, we slowed down. Part of the issue is that we got a new nurse today, not a bad one, just new to Kevin. During the night, the night nurse had got Kevin down to 2 mg of dopamine. The day nurse hadn't been on more than an hour or two when she saw his blood pressure levels hovering near the low side and she bumped up his dopamine from 2 to 5 mg again. It took me several hours to get her to try to reduce it again. Now his blood pressure is doing well and he is getting 4 mg. Hopefully, we can keep taking ten steps forward and only two steps back.
Not sure where it is at, but Kevin has put on some extra weight. He weighs 205 lbs. Before this trip to the hospital he was around 194. So that means he has picked up ten or eleven pounds, although there is the possibility there is a difference in the bed scale and the therapy scale that we normally use.
Ventilator - Twice now they have switched the ventilator so that Kevin is breathing on his own and the vent is only helping him. Twice they have stopped it very close to breathing treatments. This is normal at home, and it is normal for Kevin to have increased breathing rates and coughing after a breathing treatment. The doctor said they will keep working on it and not have knee-jerk precautionary reactions, and that he will do everything he can to at the very least get him on flowby (room air).
Today, Kevin has been awake for several hours. Probably 4 hours so far. I have talked with him and put different shows on TV on for him.
We also found out that there might be some sort of blockage in his urinary tract, but it's not bad; he is still urinating. The doctor is getting a urologist on board to come see him tomorrow. His echo came back fairly good for his heart. He just had some pulmonary hypertension on his right side. But it's not bad and nothing can really be done about it.
Just finished speaking with one of his critical care doctors. Both of them have been great so far, Dr. Taneja and Dr. Suku. We talked about thirty minutes. At the end he told me that he has done this for a long time and that he has seen many chronically trached patients like Kevin and that he has never seen a family more invested or educated about their family member's situation and what to do to help. Makes ya feel like you are doing things right. The doctor also said that Kevin is winning.
Tonight, in preparation for the upcoming winter storm/blizzard, we decided that I would stay the night with Kevin at the hospital. That way, in case everyone is snowed in, someone is with him in ICU. They are all very nice here and are allowing me to stay on a couch/bed that is in Kevin's room.
Kevin's fever is still gone, two days in a row. His heart rate has become much more normal; right now it is 79 and he's awake (yeah, it's 2 AM - he's still a night owl). The medical staff have been weaning Kevin off the dopamine, which is to combat low blood pressure. They have lowered the dopamine from 15 mg yesterday (Monday) to 3 mg by this evening (Tuesday). His blood pressure is remaining steady and keeping up with the changes in medicine.
Also, this evening they have started weaning him off the ventilator. All evening he has been initiating all his breathing and the machine only helps if he doesn't take a deep enough breath. Because of the pneumonia, his breathing has been more shallow. Yet, even with the changes in the ventilator, his lungs are keeping up, for 9 hours so far. After his most recent breathing treatment, he needed a break so they switched back his vent settings with a plan to start the weaning again in the morning.
His nurse tonight told me that every nurse that comes in to help her do something with Kevin cannot believe that he has been at home with home health care. They have all said the family is doing an excellent job caring for him. They can see it in his skin and the lack of any pressure sores whatsoever. And this morning when the physical therapist came and did an evaluation on him, she said that he has excellent range. They are now sending techs to come and do range of motion even while in ICU.
Apparently, at first Kevin's kidneys were not functioning well after we arrived here. Now Kevin's kidneys are functioning well, so they started giving him food through his peg tube. We spoke with the doctor about the possibility of changing his peg tube while we were here at the hospital, since his is approximately one year and five months old. Once things get stabilized they will look into that.
He still has blood in his urine that is unaccounted for, however it is lessening. They would still like to know where it is coming from. There is a plan to do an ultra sound to see if he has any kidney stones sometime soon.
Tomorrow they plan to restart provigil (at home we use nuvigil), which is an alert medication. However, throughout the day today it has been more alert than yesterday. And as normal, wide awake about an hour ago in the night.
If you would like to send cards you can get the address at St John's website. However, you can also do this digitally by clicking the following link http://www.sjmc.org/general.asp?id=322&siteuse=11 and filling out the form. Kevin is in room 809. We have a wall that we can tape any cards up, cheering Kevin up in the process.
We know it’s been a few months since we've done an update on Kevin. Life has moved forward. Angie and I returned to Italy to pack up our house and close out our aspect of the ministry that's still going on there. We had a great Thanksgiving, Christmas and New Year's at home with Kevin and the rest of the family.
However, I'm writing you from an ICU room at St John's Hospital in Tulsa, Oklahoma. It's been a whirlwind this last week, so I'll try to update you quick and get to the prayer needs for Kevin.
finally Kohl all got the flu. With the blizzard last week and record-setting snowfall in Tulsa, we didn't have nursing. I found myself caring for a houseful of sickos and Kevin. Fortunately, Brandon was able to come and help some days, since he was out of school due to the snow.
We thought we had made it out of the woods, but Saturday night things took a turn. Kevin has spells of coughing, but they are usually resolved with one of several methods of care. He started coughing at 9PM Saturday night and continued coughing, getting more incessant and worse over the next seven hours. We ran through the checklist of everything we were taught to do in order to relieve his coughing (cough syrup, saline breathing treatment, repositioning, Xopenex breathing treatments, etc.). After calling a couple of medical professionals, we followed their advice and called 911. On a normal day, we could have just taken Kevin in the van; however, there was still so much snow on the ground and driveway that it was not possible. Firemen and EMSA (the ambulance service) arrived and got Kevin ready for transport. Angie rode in the EMSA truck and I met them at the hospital, driving through the snow in the early hours between Saturday and Sunday.
After some time in the ER, we found a rapidly rising temperature (at home it was 99) of 102-103. His blood was way too thin. His potassium was very high. Kevin’s blood pressure was dangerously low (at the lowest I saw it was 69/39) and his oxygen levels were dropping. He had blood in his urine and had thrown up some as well. Kevin’s heart rate and breathing rate were very fast.
Yesterday afternoon, they moved Kevin into an adult ICU room on the 8th floor. A vitamin K shot was given to combat the thin blood, electrolytes for the high potassium, dopamine for the low blood pressure, O2 for the low O2 levels and Tylenol for the fever. They found pneumonia and some sort of infection that made him septic. Sepsis is a condition in which the blood is infected. This put the staff on a sepsis protocol of constant testing to monitor his body's various vital levels.
During the night (last night), Kevin's nurse called to tell us that, from a blood gas test, his O2 levels were too low in his lungs and his carbon dioxide (CO2) levels were too high. They needed to put Kevin on a ventilator in order to keep the infection from winning in his lungs. Since then his blood gas levels have been good. Kevin's fever is now gone. His heart rate is much more normal now. With the dopamine his blood pressure is getting back to normal, although it needs to stabilize before they can wean him off the dopamine. This morning, after we got here, Kevin cracked open his eyes and even turned toward me when we came over to his bedside. He immediately went back to sleep, but that was good since he hadn't opened his eyes since getting to the hospital. Several other times today, he has reopened his eyes, twice with visitors and a couple of other times just with Angie and me in the room. Even though he still has some septic symptoms, he is no longer under the sepsis protocol.
As things progress, we will update you via email, Facebook and the prayforkevin.com website. More up-to-date info is available by following me on Twitter (username mcrosser or pray4kevin) and my Facebook status. Thanks so much for your prayers and thoughts. No flowers or balloons can be sent to his ICU room, but please feel free to send cards to Kevin Crosser, room 809 at St John's Hospital in Tulsa, Oklahoma.
This is a combination of two updates – one from Angie and one from me (Matt)….
Since our last update much has happened. Kevin went in for surgery on September 3, 2010 to have his feeding tube replaced. They ended up saying that it only needed to be cleaned out, not replaced. So he was in and out in no time! The splints we requested for Kevin's wrists and ankles were officially denied. There is no further appeal to file. We are now looking at some other splint options, as his ankles and wrists still need splints. Kevin has increased tightness causing shortening of the muscles in his ankles and wrists, and splints would help to stretch those muscles and even lengthen them again.
Kevin has continued going to therapy twice weekly. We have seen increased muscle and ability in his core allowing him to hold himself up for a minute or two on the side of a mat unassisted! He has also started having some of the largest responses we have ever seen in his legs. When tickled or uncomfortably stretched on his feet or legs he will pull his legs away in large movements! This is great because the muscles and the brain are talking! Even though he is not yet purposefully moving his legs this is a good first step!
Kevin has been in really good health the last few months. He has not been on antibiotics for over two months! That's a record since his accident! He continues to make leaps and bounds with his ability to have his trach capped and breathe only through his upper airway. He is now capped twice daily for 4 hours each time! He seems to have no problems with this, so we expect him to continue to grow in this area.
The most recent news is that Matt and I (Angie) are leaving tomorrow for Italy for two weeks. We are going back to finally pack up our apartment and move back to the states. This trip has been a long time coming and it’s great to finally be able to go. We are looking forward to seeing friends of ours there in Italy as well as bringing some closure to that part of our lives as we move back.
Our being gone means we have many family and friends stepping in to fill the holes that we will leave while we are gone. The family is paying to have night nursing while we are gone so family will only have to cover 8 hours of Kevin’s care daily while we are gone. So many people have stepped up to provide meals, rides, driving the van for Kevin, taking care of our ministry finances and so much more. Our family continues to be blessed by the Lord who provides us with so many people who love us! Matt and I are able to leave knowing that Kevin and his kids will be completely taken care of…what a blessing!
Please continue to pray for Kevin’s complete recovery. Although it is a slow process we hold on to the knowledge that God can and will completely heal Kevin!
How do you think the Shoemaker felt when he came out and saw that the Elves had worked all night making shoes that helped him when he was struggling to make ends meet? Grateful? Humbled? Thankful? Filled with joyous relief, like a weight was lifted off his neck? That is much like we felt after a group of "elves" from American Airlines and the United Way came out on the Day of Caring. When my brother, Kevin Crosser, was in a car accident, we didn't know what the immediate future held. After a few months in medical facilities he was able to come home, though with round-the-clock care during a long-term recovery. My wife and I moved in to care for Kevin. We also have the joy of helping his kids, as they also help care for their dad. This large commitment means that many other projects have to be pushed off to some unknown date in the future. That's where this great group of volunteers came in.
Many of the projects they completed were things that were needed, but we had neither the time nor the experience to do them. Some of the projects help us care for Kevin's home as we care for him during recovery. All the projects make it easier to help Kevin, and without their help, we don't know when they would have ever been done. A covered deck that, rain or shine, can be used to expand Kevin's view and stimulate his senses. The sump pump that removes the flooding waters that came with every rainstorm, reducing the number of mosquitoes and making the backyard safer for Kevin and the family. The privacy fence allowing for more intimate family time with Kevin and the possibility of other ways to care for him. The storage building keeping secure the tools needed to maintain a clean, usable yard. And completing it all were the multiple finishing touches that the crew took extra effort in doing.
Much as the Shoemaker would have been, we are grateful, humbled, thankful and filled with a joyous relief because weight was lifted off our shoulders. We know it is a monumental task caring for someone who has a recovery ahead of them like Kevin, yet we are not naïve enough to think we can do it all alone. We need family. We need friends. And we need those who might sacrifice time and money to help those in a difficult situation. Thank you for what your sacrifice has meant to this family.
Kevin is going to St. Francis Hospital here in Tulsa for a planned outpatient surgery today at 12:40pm. He is having a new feeding tube put in. It shouldn't be a big deal and should be fairly quick. All surgeries have risks, and so your prayers are appreciated!! Please pray there are no complications before or after surgery. Thank the Lord for Dr. Nightengale, who has done an amazing job with Kevin in the past. We are so happy to have him do Kevin's surgery again. Please pray that God will bring a great respiratory therapist to work with Kevin before, during and after his surgery.
We received word two days ago that the splints for Kevin's hands and ankles have been denied yet again. It is very frustrating to deal with insurance sometimes. Kevin's therapists, general doctor and his neurologist all agree these are the splints for him, but the insurance won't pay. They say that in Kevin's case they are not medically necessary and that the Dynasplints we have requested are still considered experimental for someone in Kevin's condition. Please pray for wisdom as we decide what to do next. Please also pray for Kevin's hands and ankles, which continue to get tight and have muscle shortening with each month that goes by.
Usually Kevin goes to therapy on Mondays and Fridays. He won't be going to therapy tomorrow since he is having the surgery, and he won't have therapy Monday since it is Labor Day. We will be working on things more intensely here at the house to try and make up for those missed days of therapy.
It was one year ago this day that our lives changed forever. We all remember where we were when we heard the news that Kevin had been in an accident. Even typing that line evoked strong emotional feelings. That's why as we approached this date, we were filled with apprehension and uneasy feelings. How would we treat August 20th? Would we let it pass by while ignoring the feelings that might arise in every quiet moment or would we only mark it on the calendar as some sort of milestone?
Then a friend mentioned the idea of fasting. After we thought more about it, that seemed like the perfect way to spend this day that has brought much pain and change to our family. We decided to fast as a family from sunrise to sundown. Tonight we'll break the fast with a dinner together. The discipline of fasting is designed to put a pinpoint focus on a specific prayer topic and/or become closer to Christ. Anytime we feel hungry during the fast it will prompt us to pray for Kevin's continued healing. Through this act we place our trust in God and affirm our belief that God has power to move in Kevin's life.
So if you would like to join us, we welcome it. I do not like publicly sharing that we'll fast today. Ordinarily, I prefer keeping it between myself and God. However, we saw this as an opportunity to invite you to share in a great opportunity to focus on God today with or without the fast. You don't have to let us know if you are fasting, but we welcome your prayers this day.
As for other news, Kevin has continued progressing slowly. This was also noted at our most recent visit to our neurologist. Both he and our general doctor are fervently battling our insurance for much-needed splints for Kevin's hands and ankles. Kevin's hands and ankles are not completely stiff (i.e., contracted), but if left alone they could end up that way. That's why we must continue doing range of motion with his joints and also why he desperately needs the splints. They are called Dynasplints and are made to slowly increase the angle, or range, of Kevin's wrists and ankles. He would only have them on at night while he sleeps. Pray that the insurance approves them as we await the results of an urgent appeal next week.
During physical therapy, Kevin has moved forward in his abilities, if only slowly. They work with Kevin to grasp a softball and then release it. Also, while they held up his arm, they asked Kevin to move his thumb five times. After two successful attempts, he moved his fingers as well. They told Kevin that he had moved his fingers too, so that one didn't count. They wanted him to move his thumb three more times, which he proceeded to do. They continue to hoist Kevin in a harness in order to increase the amount of weight he can hold up. He has held up 100 pounds of his own weight for about 45 minutes. He's also reached a high point of 120 pounds. That means he can hold up over half of his weight with his legs and body frame. To be raised in the harness he also has to give some effort; if not, he would collapse like a noodle.
Also, one day they sat Kevin up on the edge of the bench they use during his therapy sessions. They sat him up against a stool to give his back some support. He was able to sit upright with his back against the stool for a couple minutes all by himself. He continues to improve.
This past week Kevin went to St. John's for a swallow test. This helps Kevin's speech therapist know what to work on with Kevin. We were able to watch while they sat him in front of an X-ray machine, so that we could see a live image of him swallowing. This test showed us whether he would swallow liquids and foods down his throat to his stomach or let some go to his lungs (aspirate). The thinner items went down well. We watched Kevin's muscles on the X-ray work to swallow the liquids. The thicker items (applesauce and pudding) caused some slight aspiration, but to his credit, he did not have his trach capped. We weren't told to bring the cap, so they could only cover the trach with a glove. They told us he would lose some pressure without a completely sealed trach. So, now we have a swallow study benchmark, and just think how well he would have done if he had been capped!
In the future, you might start seeing updates coming from Angie too. She will help me get out more regular updates on Kevin's progress. This means they should also be shorter too! Thanks still for your prayers, love, encouragement and concern for Kevin and our family over the past year. Here's to the next year.
Rock climbers and mountaineers exercise extreme caution when traversing rugged and dangerous terrain. They use safety lines to keep them safe from mishaps and when the unexpected happens. Some have asked how we keep going. They ask how we stay happy. Well, I should first dispel the myth that we are so happy that any of us laugh all day long like an idiot. We do laugh.....but we also cry. We get upset, then return once more to joy. We experience the full gamut of emotions, sometimes all in one hour. Yet, there is something underlying all those emotions. Something, like a climber's safety line, that keeps us safe and secure. It holds us as we move forward and up in the face of the unknown. It is beyond just our faith in God. It is based in the relationship we have built with Him for years. We have seen God act in times when there seemed to be no hope, in times when we were completely blind to the future, in times when nothing else mattered and in times of pain and sorrow. It is this relationship, built on faith and trust in Him, that keeps us moving forward past the moments of despair and the periods of discouragement. God will triumph.
It's been a couple months since Kevin started going to outpatient rehab at St. John's. He has progressed in many ways, although he still has a long way to go. We take him three times per week, during which he has physical, occupational and speech therapy. During the physical and occupational they have stood Kevin with their help. Recently, they started using a harness, like a climber might wear, to aid in the standing process. By using this method, they can measure exactly how many pounds he is holding up on his own. He has already moved from holding fifty pounds to seventy pounds in a week.
The therapists also ask him to follow simple commands and have recently started asking him to do two things at once, such as waving his hand and moving his head. They have challenged him to wave his hand and in response he has moved his arm. Last week, they placed a scooter under his foot, with the eventual goal of kicking the flat scooter across the room. He hasn't kicked it yet, but they could see the muscles moving in his leg as his brain processed the command. Some days, he seems completely out, asleep. Others he is very alert, able to track people with his eyes, turn his head toward sounds or the opposite direction when asked.
They have been getting him used to controlling his own body, having him hold his own head up when asked. He could only do this a few seconds in the beginning, but has done it for several minutes now. At first, he could only do this leaned back in his chair, however now he has done it in a more upright position. This is all progress in the right direction.
During the speech therapist's turn, she uses lemon swabs to facilitate reaction in his mouth and flash cards to ask him questions. He'll squeeze her hand to answer one way or the other. Sometimes a squeeze is a slight pull of one finger; other times he uses all his fingers as well as his whole hand. We have added many smells to his evaluations, such as banana extract, licorice, etc. The speech therapist has challenged Kevin to go from responding with a one-syllable "ahhh" to two- and three-syllable versions ("ahhh-ahhh" and "ahhh-ahhh-ahhh"). He has also said "hi" more frequently when asked, even though we can tell it is difficult for him to produce the sounds. Often he will use a cough immediately prior to producing a sound to get the volume up.
In the past couple of months, we have brought a neurologist aboard who has already ordered an MRI and EEG. Those tests, now done, have already brought us interesting and positive results. Kevin's EEG showed slow activity only in the areas that received trauma; the rest of his brain is functioning normally. After several attempts to get his MRI taken care of, we finally got it done at Southcrest, and the results were encouraging. There was no spinal damage. The neurologist wanted to verify this after his evaluation of Kevin. The technician also did not note anything about the brainstem. Now, if you'll recall, this was the main area of concern for Kevin back at Saint Francis hospital. The area where his brainstem showed trauma affects heart rate, digestion, respiratory function and consciousness. Heart rate, digestion and respiration don't seem to be an issue, and consciousness is only partially affected due to his minimal state. That could be due to one of the other areas that received trauma. On the MRI, the area of Kevin's brain that got the most attention was the right frontal lobe. We won't know how he might be affected, if at all, until he continues to progress. Some other areas of his brain showed improvement from the previous study, a comparison that was somewhat difficult since Kevin had not had an MRI since the accident until this one. The tech and doctor used CT scans and the MRI of Kevin's eye as references (the MRI of Kevin's eye showed some areas of the brain, although not in very much detail).
In therapy, they are working with Kevin to hold and eventually drop his drumsticks. Since he plays drums, it is a natural thing for him to work on. Also, they are using softballs in his therapy since he has played years of softball. He continues to add abilities to his life.
We have a long way to go. Kevin has a really long way to go. Yet, he has come so far. Kevin has progressed and done so many things that we weren't sure he would ever do again. We can't lose heart now! Does that mean every day is roses? No, but as we go forward, not knowing what the future holds, we can rest in the love and care of a God who loves us and Kevin very much.
Thanks to those who have sent money for media on Kevin's iPod. Thanks to those who have contributed towards family meals and various expenses. It all means so much. Recently, we asked for some help with putting up a privacy fence; we have gotten some responses on that, but had to put it on hold. It seems we need a french drain or something to help standing water leave the backyard. We had so much rain this spring and summer that the backyard stayed very, very swamped. If anyone knows how to put in a french drain or could help us in this way, please email me back. Then we'll move on to the privacy fence.
Also, Angie and I still need to return to Italy for about two weeks in order to pack our things and move them back to America. The hold up has been money. Between airfare, cost of meals, and shipping expenses, we are looking at a need of about $10,000. We tried to get a loan to pay for this and we would pay it out over the next five years, but the loan hit a roadblock. If you would like to contribute towards this major expense or if you have any way to help us, email us and let us know.
And if you haven't, please come by and see Kevin. Email me to find out how.
Thumbs up! When we last updated you, Kevin had begun therapy at St. John’s outpatient rehab. He also started producing “ahhh” sounds and being more consistent with squeezing his hands. Therapy has continued to go well and there are many things with which to update you.
In the past few weeks Kevin has seen several doctors. He saw his general doctor, Dr. Ting, who thought Kevin continues to look better every time she sees him. She took care of some med changes and details, then made our next appointment for two months out to give us a break, since Kevin is seeing so many docs right now.
Kevin saw his pulmonologist, Dr. Kennedy, who changed his trach (always a little blood and trauma, but getting easier each time). She also thought Kevin looks better each time she sees him. She has been seeing him since September 2009. She was happy to hear about the “ahhh” sounds and wanted us to continue doing what we were doing with him at home and in speech therapy.
Kevin saw his eye doctor, Dr. Peters, yesterday. This is the doctor who started seeing Kevin about his right eye issue. He was incredibly pleased with how Kevin was doing. He looked at Kevin's eyes and saw Kevin try to avoid his bright light in each eye. Angie had made a very astute observation about Kevin's right eye. We haven't been able to tell if Kevin could see out of his right eye due to the exposure issue and the trauma inflicted on the right side of his head. She noticed that if you bring your finger directly in front of Kevin's right eye, without his left eye seeing it, he starts to blink, except not so much in his right eye. He starts rapidly blinking his left eyelid. You can see movement in his right, but it is nothing compared to the defensive reaction you see on the left. That means Kevin is able to see something with his right eye, even if he isn't able to respond as fluidly with his reaction on the right side. In six months we are to return to the eye doctor, and he plans on doing some dilation studies and tests.
Since leaving Meadowbrook back in December, we have tried to find a neurologist to follow Kevin throughout his recovery. In fact, we were looking fervently for one during the months leading up to Kevin's discharge from Meadowbrook. Kevin's general doctor found one, Dr. Richter. In fact, since getting him to sign on, we have found many different connections to this doctor through friends and family. We went to the appointment with low expectations, since our experience so far with the neuro field had been abysmal. We were advised that the doctor was older and would take his time. We thought that was good; then we received the advice to bring a book. We realized that since he would take his time with Kevin, he would also take his time with every patient before Kevin. We saw the doctor about an hour and fifteen minutes after Kevin's appointment time. We couldn't have been more impressed. He dove straight in with an examination, choosing not to read other opinions before getting his own history. He is from the old school, using a little hammer to check Kevin's reflexes. He commented that Kevin must have made much progress to get to this point. Taking a pin and pricking up each side of his chest, he noted that the response wasn't consistent; it got stronger as he moved up Kevin's chest. He hypothesized that Kevin may have had some spinal cord damage around the area of the neck fractures. Obviously, it isn't a complete tear in the spinal cord, since Kevin feels and responds to all kinds of various stimuli throughout his body, but it might explain some things and give Kevin another hurdle to jump over. He scheduled an MRI of his head and neck, as well as an EEG to measure brain waves. He told us to continue playing his favorite music, talking to him and speaking positively of his recovery to him. Then he really blew us away. He pulled out a Bible from the shelf and found a scripture to encourage us.
“ 4Rejoice in the Lord always. I will say it again: Rejoice! 5Let your gentleness be evident to all. The Lord is near. 6Do not be anxious about anything, but in everything, by prayer and petition, with thanksgiving, present your requests to God. 7And the peace of God, which transcends all understanding, will guard your hearts and your minds in Christ Jesus.” Philippians 4:4-7.
He then left us with this thought. He said, “…and remember Abraham was thankful to God for what he didn’t yet have.” He didn’t want us to lose heart. When he spoke, he spoke to Kevin. He looked at him and us, as if there was hope and listened to what we have seen and experienced. We are so grateful for Kevin’s doctors and therapists. They are truly a team put together by God for such a time as this.
Kevin has continued producing sounds upon request while wearing the one-way valve. He has tolerated the valve on his trach well and is building on his responses and consistency. He has produced other sounds too, such as "H" sounds and various groans and moans, as he continues to adjust to using his vocal cords once more. Kevin's speech therapist showed us how to put a finger to Kevin's throat to see if he is trying to produce certain sounds, even if we can't hear anything. Try holding your finger to your neck and breathe, then try it while speaking….see the difference? Last week, Angie and our nurse that day heard Kevin try to say "hi" three times. He would cough in and around the words until the third time, when he let out a more normally pitched "hi," then proceeded to cough, as it isn't easy to make these sounds after so long. They came and woke me up, but he had already used all his energy. When asked to make his sounds louder, he does, which shows comprehension. He tries really hard and then has to take a break.
Last week, during one of my night shifts with Kevin, he was slowly turning both his hands by moving them back and forth. He also seemed really alert at 1:45 AM (not abnormal for Kevin before or after). Not sure what prompted me to ask just then, but I asked him to give me a thumbs up. You see, some people can discount squeezing a hand as twitches, even if he wasn't doing it before being asked and doesn't do it afterwards. But a thumbs up, that means there is comprehension and response. I watched for about five minutes as he pulled his four fingers in to form the foundation of a thumbs up. His thumb lay flat against his hand. As I continued to ask, I saw his thumb muscles twitch. He slightly and slowly moved his thumb. He would try to raise it, but it seemed glued to his hand. Then I saw an amazing quick lift, up about an inch, towards giving me a thumbs up. His thumb immediately went back down, but I was so excited. I kept asking and asking, kept pushing and encouraging, as if I were spotting him while he lifted a challenging weight of barbells. I kept seeing his thumb twitch, but only after I asked for it each time. Two more times he raised his thumb up about an inch. At that point I went to wake up Angie. It was the first time I have ever woken her up at night since bringing Kevin home almost five months ago. She came out and saw Kevin responding to my voice and the challenge to move his thumb. He didn't get his thumb any higher that night, but he had most definitely reached a new plateau in recovery.
Over the next few days, I repeated the challenge with other family members around, like his kids. He would slightly move his thumb, trying hard to get rid of the invisible rubber band holding his digit down. Then on Monday, when we returned to therapy, I shared what had been happening. Kevin seemed very sleepy during therapy, so they spent much energy waking him up, just in time for the speech therapist to come in. Today, we returned to therapy and I dropped off Kevin, Angie and our nurse. I am trying to fulfill my new role with Team Expansion and set aside hours to make that succeed. I have been working on average 40 hours per week since starting in this new role. So today, I went off to a Borders bookstore with free wifi and proceeded to work, returning about thirty minutes before Kevin's therapy was over. The speech therapist, who had seen Kevin first today, bumped into me as I was coming in and told me how she had come back up to Kevin during his other therapy and asked Kevin to give her a really loud sound, after which she would leave him alone today. She said about a second later he let out a big "AHHHHH". So she said okay and let him be.
I thought that quick, accurate response was going to be the big news of the day. I was wrong. I went in to find Angie and the therapists excitedly recounting what Kevin had done. He gave them six thumbs up!!! I was amazed. I asked if it was the half lifts, like I had seen and been practicing with him, and they told me no. They said they were full-on thumbs up! They said when they would ask for it, he would raise his thumb up and it would quiver as he held it for a moment. Then he would let it back down. It was like doing reps at the gym. Awesome progress!
We got Kevin an iPod touch for use in therapy. They use it to push him further. They turn on one of his favorite songs, then pause it, saying they'll turn it back on if he does whatever they ask him to do (squeeze a hand, make a sound, turn his hand, you get the idea). We have put a couple of audio books on it from Audible.com. Yesterday, when Kevin and I were here alone, I put on a sermon from North Coast Church for him to listen to and then put on ESPN radio for him to catch up with sports. We have found several apps (programs) to use with him for stimulation and training. We also bought an app which plays about fifty different ambient environments that we can use when he has trouble sleeping (rain, ocean waves, white noise, etc…) or for stimulation (cars, street life, airplanes, etc…). This week I will be putting some voice messages on it that he left us while we were in Italy. Vonage sends voicemails to email and I saved every one of them. This is a way that he can hear his own voice during therapy.
If you've read this far, I congratulate you and thank you for your love for Kevin, so much that you read my boring drivel to hear news of his recovery. Before closing, I wanted to mention a couple of ways for people to help. If you want to buy iTunes gift cards for Kevin, we can use them to buy more stimulating apps, his favorite music and audiobooks. Any denomination is appreciated. Also, we are wanting to put up a privacy fence in Kevin's back yard. It only needs one side; two neighbors already have them. It would enable a little more intimacy when taking Kevin outside. It would also allow us to create an outdoor shower, which has been done by others and was suggested by a medical professional. Right now, Kevin gets bed baths every day; this would give him a new sensation on warm, sunny days.
Thanks for caring. Thanks for reading. And thanks for praying.
I was planning on writing an update that told you it felt like we were moving into the next level of recovery for Kevin. Then something happened this afternoon that cemented this into our hearts and bolstered our faith.
Last Friday, we took Kevin to St. John’s outpatient rehab for some evaluations. We were requesting evaluations for physical therapy (PT), occupational therapy (OT) and speech therapy. We had appointments for the PT and OT evals and a doctor’s order for the speech evaluation. We didn’t know what to expect. We have been blessed to have some pretty good therapists for Kevin so far, and you never know if you might get a bad egg! Also, we have yet to have a speech therapist get excited about working with Kevin. Most therapists don’t have experience with someone in Kevin’s condition; their experience usually comes from stroke victims or something similar.
Friday we went in with the hope of getting into regular therapy again (remember our in-home therapists discharged Kevin a month or so ago), but the main thing we were there for was the evaluation. We wanted an outside look to see if he was ready for the next level, as well as to determine if he needs any preventative splints. The team we ended up with was nothing short of amazing. Great optimistic, hopeful spirits, and genuinely excited to work with Kevin. PT & OT were both great (one of them even worked in two brain rehabilitation centers), and the Speech therapist seemed not only knowledgeable, but full of ideas to work with Kevin. They want to see Kevin three times per week, three hours each visit, starting in May. Until then we have a few office visits in April (each Wednesday) and today marked the first.
Kevin remained alert the entire time we were at the rehab center. They saw Kevin trying to follow their commands and being successful some of those times. When we left, Angie and I drove Kevin downtown and took him on a field trip to see the new Drillers’ baseball stadium, ONEOK Field. We parked and took him over to the fence to see the new field and the Drillers warming up for that night’s game, only the second in the stadium so far. After that, we went over to the Coney Island, a downtown Tulsa institution and a favorite in our family. We got Coneys to go and let Kevin smell them, providing some brain stimulation.
After leaving the rehab center on Friday, we were excited, optimistic and recharged about Kevin’s recovery. However, we tempered it with the fact that it was our first impression and we didn’t know how the actual therapy sessions would go. Today was the first session. Kevin, while worn out towards the end, stayed alert the entire time we were there. They lifted Kevin out of his chair and placed him on a large bed/table where they could try different things. They noticed him holding up his head. They felt him squeezing their hand when they asked him. They saw him moving his eyes to look at them in response to them asking him to look at them. Then after his eyes made the journey, his head would follow. He did all these things much more consistently than he was doing them a month ago. His muscles got a workout when they had him sit on the edge of the bed/table, his feet off the side on the floor, with limited support. They did the same things we have been doing, providing some support, leaning him from side to side for weight shifting, but they added something to the mix. They were leaning him forward and back, like he was rowing a boat. At one point, the therapist supporting his back said, “I am just giving him a little support to his head. I’m not even supporting his back right now; he’s doing that!”
After PT and OT did their thing, it was the Speech therapist’s turn. They put a one-way valve on Kevin to start. If you’ll remember from our days at Meadowbrook, the respiratory therapists used to put this on to listen for sounds and determine if he could tolerate wearing it. It allows air to be breathed in, but he has to breathe out through his nose and mouth. This allows air to travel through his throat, creating sounds. The Speech therapist was great. She did several things, such as trying to get Kevin to open and close his mouth. She asked him a few times to squeeze her hand, which he did a couple times in response. Then she used a lemon flavored swab to see if he would have a reaction to the sour flavor. He did. He didn’t like it too much, moving his head away and making a face. Then came the really cool part. She was trying to get Kevin to make some sounds. She tried several times to get him to say something, to her, to Angie and to me. Nothing. Then she tried again and asked him to make an “ahhh” sound. She encouraged him to open his mouth by touching his bottom lip, and to our great surprise he let out a soft “ahhhhhhh”. We were blown away. She asked him again and Kevin responded a second time with “ahhhhhh” softly. The PT that was working with Kevin before was about twenty feet away, working with another patient. She heard Kevin and our response, and she told the Speech therapist that she wants to start videoing these sessions with Kevin. I had been taking pictures and decided to turn on the video recording mode of the camera. If he did it again, I would be ready. She proceeded to ask Kevin two more times to make a sound, this time saying she didn’t care if he groaned, said “ahhh” or whatever. Kevin was getting worn out, but he responded both those times too, although it was quieter and softer than the first two times. We left the center rejoicing and looking forward to the next time. On the way out, a mother introduced herself to us.
We had heard of her and her son. Four years ago, her son had been in a car accident, leaving him with brain injuries and in a coma state. The medical community didn’t leave their family with much hope. Their son didn’t make any vocal sounds for several months. And now, he is walking and talking and dressing himself. He is still in rehab, continuing to learn how to walk without a cane and maybe more, but he doesn’t have any memory loss and has come so far. She credits God with how far her son has come. Her message to us….don’t give up! Don’t give up. There will be days and times of discouragement, but don’t give up. There will be moments when no one else understands what you are going through, but don’t give up.
So, refreshed, recharged and optimistic we go to bed tonight. Thankful for how far Kevin has come so far and excited to see what God will do through him as we go on this journey. Thanks for reading and caring. It means so much to our family.
Last week, we took Kevin to his gastroenterologist. This is the same doctor that put his peg tube in his stomach on September 4th, 2009. This is the first time this doctor has seen Kevin since that day. He came in the room all smiles, greeting us with “How y’all doin?” I remembered that he was overly friendly when we met last September. He also gave us his card and said to call him anytime if we have questions. He repeated that plea this time as well. He told us that whoever is taking care of Kevin’s peg tube stoma (hole for feeding tube) is doing a superb job. He didn’t want to change anything, saying you can’t fix perfection. We left feeling really good with the reminder that we must be doing some things right!
We also realized last week that we have now been at home with Kevin longer than we were at Meadowbrook. What????? That sounds as crazy in my head as when we realized it. It seems like we were at Meadowbrook for a lot longer than we have been at home. Part of it is because we had to get renewed each week with insurance and the doctors at Meadowbrook. Another reason is that it was newer, or rather more raw, to deal with. Also, we were constantly running, either to pick people up, or from the hotel to the hospital, or from the house we stayed at to the hospital, etc…. During that time, we also visited thirteen different skilled nursing centers, which added to the stress. If you’ll remember, there had been the flood at Kevin’s house, which consumed a portion of our time too. Things have started to settle more and more. Now we are getting used to the new normal.
The end of March saw the end of the medical furlough that Angie and I were on from our ministry in Italy. After much prayer, consultation and thought, we came to the conclusion that it would be God honoring to stay in the states and care for Kevin and his kids. So, we transitioned out of our role on the team in Verona to a stateside role with our mission organization, Team Expansion. I am now part of the Creative Arts department of our mission. I will actually be doing many things that I have done for the past eleven years, but now my talents will be made available to any of our organization’s 320 missionaries on the field. I have done and will do graphics, video editing, web designing, publishing, curriculum creation as well as other creative projects. I will be doing half my hours at a borrowed office at Highland Park Christian Church, and the other half during my night shifts with Kevin. This is still a position where we have to raise all of our support. This is the practice for everyone within our organization. We have updates and prayer updates for our role in missions. If you would like to receive this, please let me know and I’ll add you on. Email me at mcrosser@teamexpansion.org.
Kevin finished one of his antibiotics and the other finishes tomorrow. He is coughing much less and producing fewer secretions. We continue to take him outside several days of the week, and on Easter we took him over to my brother Greg’s house. We took him outside so he could watch us hide and find Easter eggs with the kids. Then inside we let him smell the Easter lunch for brain stimulation. We got him back home after about four hours and we didn’t have to suction him even once.
Thanks as always for reading this, caring for our family and praying for Kevin’s recovery.
Kevin has been improving since going to the pulmonologist. She put him on a couple antibiotics due to two strains of pseudomonas (which he had at the hospital, so this is a flare-up) and moraxella catarrhalis (a common bacteria that causes sinus infections, bronchitis and pneumonia). He has been steadily improving in the past week and a half. He has been on the antibiotics for about six days and is supposed to be on them until the tenth day.
We have been taking Kevin outside in his wheelchair when it is not snowing or storming (Oklahoma weather is amazing – in a one-week period: rain, snow and sunny days in the 70s). We also were able to take another field trip. This time we packed up Kevin’s gear, loaded him in the van and went to my brother Greg’s house. It was on a late Sunday afternoon and Kevin did great. We only had to suction him a few times and we were there for about four hours. He was in his chair for about five hours. The family has thought about other field trips Kevin could go on: parks, movies (later) and maybe other places he is familiar with. That was one of the things that was good about going to Greg’s house; it was familiar. You are probably saying, but wait, he is in his own house! Aha….remember the flood in Kevin’s house?? It ruined enough furniture and things that many of the rooms, while the same shape, are very different.
In the past week, Kevin has kind of popped open his eyes, staring off at the room and seeming to be anxious, like he doesn’t know where he is. Family members have taken the opportunity to soothe him and tell him he is at home and his body language calms down.
Today (Wednesday, March 24) we took Kevin to his general doctor. Can I just say how blessed we feel to have the doctors caring for Kevin that we have? They really are amazing. His doctor today said we were doing a great job with Kevin at home, that everyone in the office commented that they thought Kevin was looking better and they were excited to see Kevin roll in the office sitting in his chair as opposed to an ambulance gurney. His doctor took her time with Kevin and us, examining him and answering questions that we had. She also ended up giving him an allergy shot, since Kevin used to get those on a semi-regular basis. Maybe that will also help with his secretions.
I am flying out of town overnight from Thursday to Friday this week. Please pray for the family skeleton crew while I am gone, stepping up and filling holes. Please pray for wisdom as we make decisions for Kevin. Please pray for joy as we go through an incredible situation. Please pray for steadfastness as we trust in God to carry us through these times of trial. And especially pray for Kevin as he heals, progresses and goes through this monumental ordeal.
So, in a couple days, Kevin will have been home for three months. After almost four months, an answered prayer arrived at Kevin’s house….his wheelchair. It is really nice, and a much better fit than the “one-size-fits-all” geri-chair. We are glad to get rid of the old one. We’re calling the medical supply rental company to pick it up this week.
After a week of helping us get used to using the new wheelchair, our two therapists discharged Kevin. He is progressing, but not fast enough for them to stay on right now. They cannot do anything more than they have taught us at this point. As Kevin continues to progress, they may be able to return. Kevin had much trauma to his brain, therefore we have to wait for his brain to catch up to his body in healing. He doesn’t have major spasms anymore, he stretches out when awakened, he is starting to stabilize his head more and more, and he is able to cough out most normal secretions.
Last week, Kevin had some sort of cold that was making his secretions worse. That meant we had to suction him more than normal. He is finally starting to get over it, so he can get back to his normal healing rate.
With the wheelchair comes new freedom for Kevin. The first couple days we started to take him around the house to see things like his new kitchen that was built for him. Every once in a while, as we pointed out different things in the kitchen, he would turn his head in that direction! We were happy he was alert for the tour.
Another freedom is taking Kevin outside. Once the weather warms up a little and stops raining, we can take him outside.
When Kevin has doctor visits, he no longer needs the assistance of EMSA (ambulatory services) to come and pick him up. Family friends have a lift equipped van and have graciously offered the use of it whenever we want. Friday was the maiden trip and we took him to his pulmonologist. Angie and I, along with our nurse, loaded Kevin up and went to the doctor’s office, then to another location to get an X-ray, finally grabbing some fast food for lunch before heading home. It worked out so well. We are looking forward to our next “field trip”.
The lung doctor remarked that Kevin continues to look better. This doctor has been seeing him since September. She said every time she sees Kevin he looks better to her. She changed out Kevin’s trach, causing some bleeding, but not as much as last time.
Today, Saturday March 13th, was the last day of our previously approved nursing. We found out on Friday, that Kevin was approved again for another four weeks of nursing, putting the next end date at April 11th. One of our main nurses had to take some emergency family time off, and a new nurse started last week. She has done a really good job and cares for Kevin very much. Kevin has been blessed to have the quality of nursing that he has had at home. We are so thankful.
Thanks to those who have brought over a couple meals, it was such a big help and tasty too!
If anyone has a standing “A” frame lying around their house, we might be able to put it to good use. I believe it might have to have some specifics, but other than that they are made to fit various sizes of people.
Thanks so much for your prayers. Thank you for praying even when we don’t report. Thanks for thinking of us and sending cards, calling or coming by. It helps it not be so lonely and gives Kevin some new stimulation. He’s probably had just about enough of me and the rest of the family!
I currently have a beard. I know, I know, you’re thinking…it’s been this long and he’s writing about facial hair? Hang with me, I’m going somewhere here. I don’t always have a full beard, but at certain times in my life, I have grown one. There have also been times where I grew it just because, but most often I have grown one when something big and difficult is going on in our lives. I don’t think I have ever told anyone this before. It’s like my own little Nazirite vow with God. I’m not pledging anything to God by growing it, except maybe allegiance to Him. It’s the idea that I can relax and let Him be God and take care of not only me, but any situation that I am confronted with.
Kevin’s been home at his house for two months. While these have been some very stressful months, they have also been very good months. How do you find goodness in a difficult situation? Well, I do think it is very difficult, if not impossible, to do this without God. Relaxing in Him, and letting God be the one responsible, is the ideal, which is the daily struggle. We don’t always relax; in fact, often we worry and fear and cry and get angry, yet the times we do trust and truly relax in Him are the best times. Those are the times that we yearn for.
Since the last update, we have had nursing approved twice for two week periods. Last Friday, we got approval for one month of eight hours per day nursing. This goes until March 13, 2010. We will request more as that date approaches. We have one main nurse five days per week and another nurse here on the other two days. They are both answers to prayer for Kevin, as is actually having the nursing at all.
Physical and Occupational therapists have both continued coming, helping us see where Kevin’s limits are with the tasks that he is able to endure. Sitting assisted on the edge of the bed has increased from a 20 minute activity to as long as 65 minutes. This task is actually limited by those of us taking turns to assist Kevin. Several professionals have indicated that Kevin might benefit from a communication board. This would be something that Kevin could look at and use to respond to our questions, since he is still not communicating verbally. He could look at a certain picture to answer yes or no, or something to that effect. We are hoping that we can find the right Speech Therapist to work with Kevin.
We were able, thanks to those of you who responded, to fix the house needs in order to reinstate Kevin’s new home insurance.
After three months of struggling with Kevin’s medical insurance, they finally approved Kevin’s wheelchair. It should be here by the weekend. With it we can do all sorts of new stimulation from taking him outside to positioning in the chair and field trips.
Today, we took Kevin to his pulmonologist (lung doctor). She was one of Kevin’s pulmonologists at Meadowbrook. She thought Kevin looked very well cared for, as well as looking more alert than the last time she saw him. She gave us a great deal of time and attention, as well as changing out Kevin’s trach tube right there in her office. She wanted to do it now, since it had been a little longer than you would normally want to leave the same one in. He has had this one the entire time he has been home, plus a week or so prior. It went really smoothly, except that taking it out caused bleeding. Apparently, the longer a trach tube is in (and not changed out), the more possible it is for the skin around it to attach. She remained calm and took care of Kevin’s bleeding. After stopping the bleeding, she said we might see more blood get coughed out in the next 24 hours. He coughed a little after we got back home, but all night so far (from 11pm to 4am) he has only coughed once.
She also changed out Kevin’s trach to one that would make it easier to cap and allow him the opportunity to make vocal sounds when possible. This means now we have to clean out the inner part of his trach tube rather than inserting disposable ones.
Thanks to those of you who have brought lunch or dinner by for our family. It has been extremely helpful. Also, thanks again for all your prayers, we appreciate them tremendously.
Here I sit in the dark on my night shift with Kevin. His room is the living room and I have crashed on the couch. Angie will relieve me at 5 AM and I’ll head to bed for a bit. Kevin has been coughing a little more the last few days, but not too much tonight. As of last Saturday, Kevin has been home a month. Five months ago today was Kevin’s accident. We knew the odds were up against him those first few days and weeks, so to think we are here with him, five months later, is pretty amazing.
Last week, we asked you to pray about the peer to peer doctor phone call. Basically, the result was that Kevin got approved for another two weeks of eight hours per day nursing. It may end then or we may be able to get more extensions, we don’t know, but God does.
During the past several days Kevin has done some really cool stuff. I’ve told you about using the Wii and PS3 for therapy and coma stimulation with Kevin. Well, last week Kevin was pushing the button with his thumb a few times and trying a few more; it seemed difficult for him to push it all the way. He has done this a couple times now. One of those times, Kevin’s nurse asked Kevin to squeeze his hand if he wanted to keep playing, and to not squeeze if he wanted to stop and rest. Then he reversed the question: squeeze the nurse’s hand if he wanted to stop and rest, and don’t squeeze if he wanted to keep playing. He consistently responded that he wanted to keep playing, even though his eyes were drooping and looked tired!
Then that night, we had Kevin in the geri-chair. Our mom was there, sitting behind him to his right. The TV was on the wall in front of him and turned on. Angie asked Kevin to squeeze her hand if he wanted to turn around to see his mom. He did. Then she asked him to squeeze her hand if he wanted to stay where he was and watch the TV. He didn’t squeeze. She asked him again and he consistently responded to the questions.
Every two hours we need to adjust Kevin’s body position in bed. This is called ‘turning’ him. I have to do this in about fifteen minutes, in fact. But last Saturday night, I went to turn Kevin to his left and told him so as I came to the bed. I hesitated because I was making sure that I had everything in place before putting Kevin in an uncomfortable transition position. As I hesitated, Kevin’s body tightened up, it seemed every muscle was working overtime, and he rolled to the left. Every other time, when Kevin has contracted his muscles he stays right where he is, making it harder for us to turn him. I was so surprised, and I told him thanks for saving my back from turning him!
Physical and Occupational therapy each continue to come twice weekly. They are a great encouragement and have helped us do new things with Kevin for building muscle back up in different areas of his body. While OT was here today, we were having Kevin do different motions with his hands in a pattern across his body. These motions not only help keep Kevin’s joints looser, but also stimulate his memory of doing these types of actions in the past. For example, we took an empty spoon in his hand and made the gesture of ‘getting’ something off a plate and lifting it to his mouth. We know Kevin loves Cool Whip and we had some, so we even tried a little Cool Whip on his lips. He didn’t really respond while OT was here, but after she left, we put a small amount inside his mouth to taste. He began moving his tongue around and chewing! We have also done BBQ sauce and frosting so far, and maybe others.
Due to the bathroom flood that caused water damage in Kevin’s house, his home insurance cancelled his policy. We searched for another insurance company and thought we found one, but when they did the inspection they told us about a couple outdoor repairs that needed to be done or the new policy would be cancelled too. If you would be able to repair the roof over Kevin’s front porch, would you let us know? Unfortunately, we are in a time crunch, but this would be a great way for someone to help Kevin during his recovery. Email me or call for more details (Matt – 918-850-9828 after 11AM, thanks).
Today, Kevin watched some Star Trek (the original series) while he was sitting up in his chair. We have been playing some Old West audio books by Louis L’Amour (his favorite) for him during the morning. In rest times, we dim the lights and play soft music or no music at all.
Keep praying for Kevin’s insurance to continue nursing. Keep praying for Kevin’s progress. Keep praying for our family’s unity and sanity. We thank God for everything that He has done so far and trust that He will continue to care for Kevin and our family.
Use this blog to keep updated about Kevin's condition, to write notes to Kevin and the family and to, most importantly, pray for Kevin.
"My name is Kevin and I shoot straight. I like to joke around though and enjoy meeting people. My children are the highlights of my life next to GOD. My Mom, Dad, brothers, sister-in-law, and their children are very close to me also. I have many special friends and extended family and they know who they are. I love my job at AMERICAN AIRLINES. I'm an EXECUTIVE BOARD OFFICER for the TRANSPORT WORKERS UNION LOCAL 514. I love softball, hunting, and being involved in the community. I like to help people. I think this is what God intended us to do. I have the honor of sitting on several boards. Vision 2025, Redcross Chapter, Redcross Blood Services and recently I've been asked on the INCOG federal reserve Board. I know every day matters so live it like you should."
Kevin's Interests: "Community Involvement, Public Relations, Hunting, Fishing, Softball, Lifting Weights Playing the drums, Big Sooner Fan"
"I like all types of music, Christian, Pop, Rap, Country, Jazz, Classic, Rock, Did I leave anything out?"
This season will be bittersweet for our symphony. As we've said goodbye to our Maestro Brian Groner, we now begin the search for our new music director. Brian left behind an amazing legacy, and we are faced with the challenge of finding another great leader to bring us forward.
After a long and thorough search, we've narrowed the field to four finalists, and I can't wait for you to meet them. They are all skilled conductors, dedicated educators, and passionate community advocates. Each will share both new music and classics at their concert. Each will work with a soloist and get to know our orchestra both on and off stage.
More importantly, we want YOU to get to know them. We will have times during their week in Appleton for you to see them in both formal and informal settings, answering questions, and discussing why they are excited to come to our area and join our orchestra.
Each concert week will be followed up by our team collecting your thoughts and comments. We will have comment cards and surveys at the hall for each concert, as well as emailed surveys, and website forms. Please feel free to give us your candid feedback, ask questions, and become a part of this process.
I can't tell you how important this is to all of us. Let us hear from you! We are hoping the person we hire is part of our community, both on stage and off, for a very long time. Please let us know your impressions and help us make a very informed and inclusive decision.
We would love for you to become a season ticket holder, and then you will receive updates from us throughout the season, letting you know about opportunities to get involved.
We're grateful for the 23 years of artistry and dedication Maestro Groner contributed to this community, and we look forward to our next chapter under the baton of our new director.
Please scroll down to learn more about our first conductor candidate, Howard Hsu, and join us this Saturday, October 6, at the Fox Cities Performing Arts Center for our 2018-19 Opening Night!
BUY YOUR TICKETS TO OPENING NIGHT ONLINE NOW!
Disney's Pixar in Concert! Enjoy scores from your favorite Disney Pixar films! A visually stunning, high-definition, multi-media family show!
We start the season on October 6 with Conductor Howard Hsu. We've had a fun week with him visiting Appleton East High School, Lawrence University, 91.1 The Avenue, and getting to know our board and donors. Last night was our first rehearsal with the full group, and we continue with a strings-only rehearsal tonight. We can't wait to get in the P.A.C. hall for the first time on Friday night with special guest violinist Kelly Hall-Tompkins, friends from our season underwriters at The Boldt Company, and students from Big Brothers Big Sisters and the Boys & Girls Club. It will be a busy weekend, and we can't wait to share this music with you!
Howard Hsu is the Music Director of the Valdosta (GA) Symphony Orchestra, and serves as Assistant Professor of Music and Director of Orchestra Studies at Valdosta State University. Under his leadership, the Valdosta Symphony was selected as the 2014 winner of the American Prize in Orchestral Performance (community division). He has performed with world-renowned artists such as Robert McDuffie, Simone Dinnerstein, Jennifer Frautschi, Wendy Warner, Rachel Barton Pine, Stanford Olsen, Alexander Ghindin, Alexander Schimpf, Katia Skanavi, Awadagin Pratt, Amy Schwartz Moretti, and the Empire Brass, and has introduced live classical music to thousands of children in the Southern Georgia region. He conducted the world premiere of James Oliverio's Trumpet Concerto No. 1: World House, the U.S. premiere of Ned McGowan's Concerto for iPad and Orchestra (Rotterdam Concerto 2), and has given the Georgia premieres of Fernande Decruck's Sonata for Saxophone and Orchestra, several of the Debussy/Matthews Preludes, and Jonathan Bailey Holland's Motor City Dance Mix. Hsu has appeared as a guest conductor with the Hartford (CT) Symphony Orchestra, Macon (GA) Symphony, New Britain (CT) Symphony, and Bronx (NY) Arts Ensemble. Hsu received his D.M.A. from the University of Connecticut, his M.M. from the San Francisco Conservatory of Music and his B.S. from the Wharton School of the University of Pennsylvania.
Visit www.howardhsuconductor.com for more information.
The 5 Milers + FVSO = Support Your Symphony!
Local folk group The 5 Milers started in 1962 with a group of friends in high school, and today they are raising money for local charities with their love of music.
Rob Billings, one of the founders, remembers how it all started. "I purchased a used six-dollar guitar and asked Tom and Terry, 'How do you play this thing?' We were only in our sophomore year at Neenah High School, but we were motivated."
Music from the Kingston Trio, Peter, Paul and Mary, and the Weavers inspired them. Once they got the hang of it, they were hooked.
Their love of music carried through the years, and even though not all of them are still living in Wisconsin, they always return home for a few concerts each year, and their fans follow them each time. They've drawn crowds in Neenah, at the Fox Cities Performing Arts Center, and other venues around the Fox Cities. "Our audiences love the folk music of the 1960s and many sing along," says Billings, "and others simply sit back and remember where they were when they first heard the music."
A few years ago, they decided to put their love of music, and their growing audience, to use in helping the community. "I had the honor of performing with Door County bluegrass musician Bill Jorgenson, and he had some great advice for us," says Billings. "He encouraged the band to do annual benefits in support of causes we really believe in. He was right, and it is such a win-win situation for us! We get to play the music we love, the audience has a great time, and it all goes toward supporting charities in our own community."
The 5 Milers select a new group to help each year. Past recipients include Homeless Connections, Old Glory Honor Flights, and Backpack for Kids. This year's recipient is another local musical group, the Fox Valley Symphony Orchestra. Billings approached the symphony first as a recipient, but it soon became clear the partnership could grow.
"We were so honored they picked us for the benefit this year," says Jamie LaFreniere, Executive Director of the Fox Valley Symphony Orchestra. "But as we started talking, Rob had the fantastic idea of having both our groups share the stage for this special night." The concert is sponsored by gifts from J.J. Keller & Associates and Dr. Monroe Trout and Sandra Lemke.
"We're looking forward to a fun night of 60s classics," says LaFreniere. "We love to partner with other groups in our community, and bring together different genres and fans of all types of music. We're just lucky to live in a community where there are so many choices!"
The concert is on September 13 at the Fox Cities Performing Arts Center, and proceeds will go to the symphony. PURCHASE YOUR TICKETS HERE!
"Growing up in the Fox Cities, our group had many memorable and enjoyable moments," says Billings. "It is our pleasure to try to give back to our community both in our performances and with the money raised for charity."
The Fox Valley Symphony Orchestra was awarded a grant from the National Endowment for the Arts to support outreach activities associated with our upcoming concert featuring Grammy-nominated composer and trombonist Chris Brubeck.
The $10,000 "Challenge America" grant will underwrite the costs for Brubeck and Fox Valley Symphony Orchestra musicians to share the music of modern American legends with veterans and audiences in rural areas, as well as support Brubeck's appearance at the symphony's February 3 concert at the Fox Cities Performing Arts Center's Thrivent Financial Hall.
Brubeck's outreach events will include an interactive workshop at the Gerold Opera House in Weyauwega focusing on performance and music composition with band students from the Weyauwega area. He'll follow that up with a lecture and performance at the Wisconsin Veterans Home at King. The FVSO's Brass Circle quintet will accompany Brubeck at both appearances.
"Working with the local youth is part of our mission. Helping to provide the opportunity to be inspired and informed by Chris Brubeck and members of the symphony is very exciting," said Kathy Fehl, Artistic Director of WEGA Arts. "The effort to work with us and other places in the area is wonderful; encouraging kids to consider a life in the arts is very important."
Brubeck said he hopes that he can contribute to the creative spirit in the young music students.
"We still live in a society where a creative thinker, player, visual artist, dancer, film maker, author or singer can still have a significant impact. If I can connect with, encourage and inspire one young person to pursue their dreams then I feel that the mission was accomplished," said Brubeck. "The Arts are a reminder of our wonderful human potential."
Brubeck's performance at the Veterans Home at King inspired memories of his father, the jazz musician and composer Dave Brubeck.
"Through the years, my Dad told me many stories about his going into hospitals and playing music for Veterans which seemed to connect with them in a special way," said Chris Brubeck. "If the Vets can't come to a concert, I am happy to go to see them and reach out through music."
On February 3, Brubeck will be featured as the guest artist for the symphony's "Modern American Legends" concert. He will also participate in a discussion with FVSO's Sandra Lemke & Monroe Trout Music Director Brian Groner before the concert in the Fox Cities Performing Arts Center's Kimberly-Clark Theater.
The NEA Challenge America grant program offers support for projects that extend the reach of the arts to those whose opportunities to experience the arts are limited by geography, ethnicity, economics, or disability.
Grammy-nominated composer Chris Brubeck continues to distinguish himself as a multi-faceted performer and creative force. An award-winning writer, he is clearly tuned into the pulse of contemporary music. John von Rhein, the respected music critic for The Chicago Tribune, calls Chris "a composer with a real flair for lyrical melody, a 21st Century Lenny Bernstein."
Chris has created an impressive body of symphonic work while maintaining a demanding touring and recording schedule with his two groups: the Brubeck Brothers Quartet (with brother Dan on drums), and Triple Play, an acoustic trio featuring Chris on piano, bass and trombone along with guitarist Joel Brown and harmonica player extraordinaire Peter Madcat Ruth. Additionally, Chris performs as a soloist playing his trombone concertos with orchestras and has served as Artist in Residence with orchestras and colleges in America, coaching, lecturing, and performing with students and faculty.
Chris is a much sought-after composer and has been commissioned to write many innovative works. Current projects include a concerto for the Canadian Brass Quintet, to be premiered with the Lexington Philharmonic in November 2017. As Composer in Residence with the New Haven Symphony, Chris premiered Time Changes for Jazz Combo and Orchestra. Two new commissions premiered in 2016: "Fanfare for a Remarkable Friend" and "Sphere of Influence". His "Affinity: Concerto for Guitar & Orchestra" was written for celebrated guitarist Sharon Isbin and premiered in April 2015. To commemorate the 70th anniversary of the Allied liberation of France in June 2014, Chris and French composer Guillaume Saint-James wrote Brothers in Arts: 70 Years of Liberty, which premiered to much acclaim in Rennes, France. Chris's long list of commissions is varied, ranging from a Russian-American cooperative project commissioned by the Hermitage Museum and the National Gallery ("The Hermitage Cats Save the Day"), to a Kennedy Center commission for the National Symphony Orchestra, to concertos written for violinist Nick Kendall and the exciting trio Time for Three, a song cycle for Frederica von Stade ("River of Song"), and many chamber and orchestral pieces commissioned by the Concord Chamber Music Society, the Muir String Quartet, The Boston Pops (three commissions), and multiple consortiums including The Boston Pops, the Baltimore Symphony, the Colorado Music Festival in Boulder, the Indianapolis Symphony, the Portland Symphony, the Oakland East Bay Symphony, and many others.
His highly acclaimed Concerto for Bass Trombone and Orchestra has been played by many of the top bass trombonists in the world and was recorded with Chris as soloist with the London Symphony Orchestra; it can be heard on the Koch International Classics recording "Bach to Brubeck". He also wrote a second trombone concerto, The Prague Concerto, which he premiered and recorded with the Czech National Symphony Orchestra on the Koch CD "Convergence". Reviewing that disc, Fanfare Magazine wrote, "Brubeck's skill both as composer and soloist is extraordinary." April 2009 saw the premiere of "Ansel Adams: America", an exciting orchestral piece written by Chris and Dave Brubeck. It was commissioned by a consortium of eight orchestras and is accompanied by 100 of Ansel Adams' majestic images projected above the orchestra. In 2013, "Ansel Adams: America" was nominated for a Grammy for Best Instrumental Composition.
February 3 at 6:40 pm: Pre-show lecture in the Kimberly-Clark Theatre at the Fox Cities Performing Arts Center.
February 3 at 7:30 pm: Concert with the Fox Valley Symphony in Thrivent Hall at the Fox Cities Performing Arts Center.
AMAZING NEWS! We finally have our new Youth Orchestra conductor! Mr. Andres Moran is the director of the University of Wisconsin-Stevens Point Symphony Orchestra and a horn teacher. He was a resident conductor of the El Paso Symphony and also music director of the El Paso Symphony Youth Orchestras. Mr. Moran has a Doctorate of Music from Indiana University and a Bachelor of Music from New Mexico State University. Our coaching team and hiring committee met with Mr. Moran several times before making our decision and we are all excited about having him join our team next season. He brings with him a great passion for music education, wonderful ideas about engaging our community, and impressive technical skills on the podium. "I'm very excited to be joining the Fox Valley Youth Symphony team!" says Mr. Moran. "Throughout the hiring process, I was impressed with the level of commitment and passion that the staff and board have for this program. I can't wait to start working with our young musicians in the fall, and I look forward to getting to know more members of the Fox Valley community through our performances." Please join me in welcoming Mr. Moran to the Youth Orchestra!
When the folks at New York's 92nd Street Y got together five years ago to find a way to celebrate and encourage generosity, they had no idea their project would one day be embraced by over 40,000 organizations worldwide. They couldn't have predicted that over $116 million would be raised through social media, and they must have been shocked that their #GivingTuesday would become an international movement, a holiday of sharing.
Those of us at the Fox Valley Symphony Orchestra take this opportunity on #GivingTuesday to thank our donors, audience members, volunteers and sponsors for their generosity every day of the year. Thank you for sharing your time, your resources, your attention, and your efforts with us. Thank you for understanding that our mission of nurturing symphonic music within our community is fulfilled because of your gifts.
Thank you on this #GivingTuesday.
Students across Appleton have been diving deep into the music of our upcoming concert. Big Arts in the Little Apple is a community collaboration coordinated in partnership with The Building for Kids Children's Museum and the Appleton Area School District to give students the opportunity to explore the intersection of music and the visual arts.
As part of the program, elementary students at 17 schools learned about and listened to Ein Heldenleben, the autobiographical tone poem by Richard Strauss, and then took that inspiration to their visual arts classrooms to create art in response. Over 600 of these students submitted their work for consideration, and the top 50 pieces were selected to be featured at the Saturday, November 19th concert, when the symphony performs this epic piece.
Don't miss this opportunity to see the creativity of our local students and experience Ein Heldenleben in person with your Fox Valley Symphony Orchestra on Saturday, November 19th. Buy your tickets online today.
"I should like to write a violin concerto for you next winter. One in E minor runs in my head, the beginning of which gives me no peace." Felix Mendelssohn's 'earworm', as described in a letter from July 1838 to his good friend, violinist Ferdinand David, would become one of the most beloved and instantly recognizable melodies in the violin concerto literature.
Travel now further back in time to Berlin, 1825: 15-year-old violin prodigy Ferdinand David, after two years of study with the renowned violinist and composer Louis Spohr, is on his first concert tour. There he encounters the equally precocious pianist and composer, 16-year-old Felix Mendelssohn, who had that very year completed his Octet for strings, a masterwork of an assurance and maturity that even Mozart himself had not achieved at that age.
Both boys hailed from Hamburg, where their families were acquainted with each other; Ferdinand was even born in the very house where Felix had been born the previous year. Their meeting in Berlin resulted in a fast friendship. A year later, when the Mendelssohns had settled in Berlin, Felix wrote to Ferdinand that "it is of the utmost importance for your future career that you should soon come to Berlin...Would to God that I might soon have the pleasure of seeing you settled here, for I am convinced that nothing could be better for you than life and work in Berlin". After first securing a job in a Berlin theater orchestra, David took the advice to heart. Ferdinand was thereafter often a guest in the Mendelssohn home, where the two would play string quartets together (Felix on viola) with David's orchestra colleagues.
When Mendelssohn was appointed director of the Gewandhaus Orchestra in Leipzig (still going strong to this day!), he invited David to be his concertmaster; they worked hand in hand to produce one of the finest ensembles of the day. He similarly appointed his friend as violin professor when he founded the Leipzig Conservatory in 1843 (David would become one of the most important teachers of the 19th century; his greatest student, Joseph Joachim, would go on to collaborate with Johannes Brahms on his violin concerto). Both men shared a seriousness of mind and reverence for music of the past (Mendelssohn gave the first 19th-century performance of Bach's St. Matthew Passion, and David produced the first performing edition of Bach's Sonatas and Partitas for solo violin and was the first to publicly perform Bach's Chaconne) that contrasted with the dazzling pyrotechnics of flamboyant virtuosos in the mold of Paganini, which Mendelssohn dismissed as "juggler's tricks". David's love of music of the Baroque is still with us today; many of the sonatas that he selected for his "High School of Violin Playing" comprise much of the later volumes of the Suzuki Violin School, in versions scarcely altered from David's originals and performed by violin students worldwide.
September 28, 2016 at the Fox Cities P.A.C.
Other commitments prevented Mendelssohn from finally working out his E minor earworm until 1844. Felix relied on his colleague not only for technical advice on the solo part (David was in large part responsible for the great cadenza at the heart of the first movement, which was among the first to be written out instead of improvised by the soloist) but even for details of the orchestration. In their correspondence, Mendelssohn is eager to please his friend and even self-deprecating; in a letter fired off before the manuscript went to the publishers, he requests some last-minute alterations and exclaims, "Thank God the fellow is through with his concerto! you will say. Excuse my bothering you, but what can I do?"
The long gestation and close collaboration paid off; the premiere in March 1845 was a tremendous success, though sadly Mendelssohn was ill and unable to conduct. When further ill health tragically ended Mendelssohn's life two years later at the age of 38, Ferdinand David was among the small circle of family and friends who attended his bedside. David continued to champion his friend's concerto and taught it to his pupils. Through his advocacy Mendelssohn's masterpiece quickly took its place of honor as one of the greatest works for the violin.
We in the present day still respond to the concerto's blend of passionate lyricism, intimacy, and puckish high spirits. The musicians of the Fox Valley Symphony look forward to accompanying the great Itzhak Perlman in this masterpiece born out of friendship!
We are so excited to have the opportunity to perform great music with incredible guest artists throughout our 50th Anniversary season, but above that excitement is our sincere hope that this music can inspire our audiences.
For this milestone season, we decided to see if our music could inspire our local visual artists as well. Several artists have been commissioned to use the music of one of our season concerts as inspiration for a piece of art. These original artworks will be reproduced in a limited, numbered, and signed poster series commemorating our 50th Anniversary Season.
We are lucky to have such talented artists working on capturing this exciting season. Our first featured visual artist is Cristian Andersson. Cristian has spent countless hours creating a beautiful oil painting inspired by the music of Itzhak Perlman to commemorate our Opening Night. Other artists include Emily Reetz, Stephanie Harvey, and Lee Mothes.
Don't miss this opportunity to own a piece of this historic season. Individual posters will be available for purchase at each concert. However, right now through our SOLD OUT opening night performance with Itzhak Perlman, you have the opportunity to pre-order your set of posters and ensure uniform numbering from all five concerts.
Click here to order your set today!
The Fox Valley Symphony Orchestra (FVSO) kicked off our Senior Outreach Series on Monday, August 22nd at 3:30pm with an outdoor Brass Quintet performance on the campus of Rennes Health and Rehab (325 E Florida Avenue, Appleton).
"Rennes is overjoyed to be a part of the Senior Series with the Fox Valley Symphony. Music is a large part of most of our residents' lives, but concert accessibility is a challenge that we face frequently. Being able to bring this type of performance to them fills our hearts with happiness and theirs with joy," said Danielle Mosher, Director of Admissions at Rennes Health and Rehab.
This series of small group concerts presented in partnership with local senior living communities seeks to expand the reach and accessibility of our Fox Valley Symphony Orchestra musicians. In addition to reaching the residents of the senior communities hosting the concerts, all of the concerts are free and open to the public.
Wind Quartet at Oak Park Place (2205 Midway Road, Menasha) on October 6th, 2016 at 6pm. RSVP to 920.702.0000.
Strings & Wind Holiday Tea at Carolina Assisted Living (3201 W. 1st Avenue Appleton) on December 1st, 2016 at 7pm. RSVP to 920.738.0118.
String Quartet in the Garden at Valley VNA (1535 Lyon Drive, Neenah) on June 13th, 2017 at 6pm with a reception to follow. RSVP to 920.727.5544.
"The power of music is undeniable, especially for aging adults. That is why we are so excited for this Senior Outreach Series where we can reach those who might no longer be able to attend our full concerts," said Jamie LaFreniere, Executive Director for the Fox Valley Symphony.
This series is made possible through the partnership of our host locations as well as the series sponsor, Home Instead Senior Care. "Home Instead is committed to helping seniors stay engaged and active. We are so excited to partner with the FVSO and hosting senior communities to bring the Senior Series to the Fox Valley," said Cheryl Smith, Appleton Branch Manager for Home Instead Senior Care.
Beethoven returns to the ballpark on Friday, July 1. The Fox Valley Symphony Orchestra will hold their second annual Brats, Beer, & Beethoven event at Neuroscience Group Field at Fox Cities Stadium on Friday, July 1 at 7:30pm. The event, presented by Fox Communities Credit Union, is FREE and open to the public.
"We can't believe we get to do this again and we can't thank the Timber Rattlers enough for our partnership! This concert is the perfect way for us to start our 50th season," said Jamie LaFreniere, Executive Director of the Fox Valley Symphony Orchestra. "We get to celebrate a beautiful night of music with our musicians and the community in this amazing outdoor space. With the support of our sponsors, Fox Communities Credit Union, Neuroscience Group, Community Foundation for the Fox Valley Region, and Tundraland, this is a free event and we hope it makes it possible for everyone to attend, enjoy the music, and even see fireworks at the end of the night. We're also proud to bring the MacDowell Male Chorus and Fox Valleyaires to the concert this year; the more music the better!"
Parking and admission to the event are free. The parking lot opens at 5:00pm, with the gates to the stadium opening at 6:00pm. The concert is scheduled to start at 7:30pm, with fireworks to follow at 9:00pm.
"It's a great way to kick off the holiday weekend and an opportunity to see an amazing group of performers for FREE!" said Aaron Hahn, the Timber Rattlers' vice president and assistant general manager.
There will also be a donation drive for musical instruments at two events at the ballpark. Donate a new or used instrument OR money to go towards the purchase of an instrument to give all children the opportunity to play a musical instrument!
Donations will be accepted at the Timber Rattlers game on Sunday, June 26 when the Rattlers host the Quad Cities River Bandits at 1:05pm. Be one of the first 1,000 fans to attend this game and you will receive a Cory Chisel Bobblehead.
Fans may also donate to the instrument drive at Brats, Beer & Beethoven on Friday, July 1. Donations may be tax deductible.
This collection is made possible by the collaborative efforts of Fox Communities Credit Union, Wisconsin Timber Rattlers, Cory Chisel, and The Refuge.
"At Fox Communities Credit Union we say 'Make Life Happen', and we are excited to be a part of this event to help more people enjoy the sounds of the Fox Valley Symphony, especially kids," said Lynn Marie Hopfensperger, Community Development Officer at Fox Communities Credit Union. "Fox is happy to be able to make life happen for all of the talented artists we have in the area. We are so rich in the arts, and we're proud to be a small part of this."
Seating for Brats, Beer, and Beethoven is first-come, first-served, and food and beverages will be available for purchase from the concession stands at the ballpark.
When you meet a young lady like Masha Lakisova, it is an amazing event.
About a year and half ago a good friend of mine, violinist Michael Shelton, heard Masha play. He sent me an email saying that he had heard what he described as "the real deal". Michael is not one to speak glowingly about someone unless he truly means it. He has very keen ears and high expectations.
After checking out a couple of YouTube videos of her playing, I made arrangements to hear Masha at her teacher's recital. She played the Schumann Sonata with her mother, Lyudmila (a brilliant pianist). Needless to say, it was stunning.
After the recital I stayed around a bit to chat and found Masha and her family to be wonderful people. They are so proud of what Masha is doing.
Since then we have worked together several times. Masha has won even more competitions and has been featured on NPR's "From The Top".
I am thankful that my friend Michael brought this amazing young woman to my attention and am honored to be able to share her gifts with our wonderful audience.
Join us this Saturday, January 23, 2016 for this special performance!
Masha will perform Tchaikovsky's Violin Concerto in D major with the Fox Valley Symphony Orchestra.
We all have a soundtrack of our lives - music that reminds us of happy or sad times in our childhood, or other important life events, friends, and family. What would a wedding be without music? How comforting is the music that accompanies a funeral?
With the recent celebration of Veteran's Day, I can't help but think of how many servicemen and servicewomen have had their spirits buoyed by a USO tour. Or, how many ceremonies on Wednesday contained music of a wartime period.
Think, too, of those living with dementia or Alzheimer's. Music becomes exponential in its power to comfort.
Our many cherished donors recognize that music is essential to our mental, emotional and physical health. In these days leading up to National Philanthropy Day on November 15, we take a moment to recognize the importance of music in our lives and those who help us keep music alive in our community.
Philanthropy is defined as "a love of humanity", and those who support the Fox Valley Symphony care deeply about our community.
Thank you for buying tickets to our concerts, and even inviting friends. You appreciate the value of symphonic music to our well-being.
Thank you for making a cash donation to make sure the FVSO is able to serve its mission far into the future.
Thank you for attending Youth Orchestra concerts. You tell the young musicians in our community that they are vital to the sustainability of symphonic music.
Thank you for your tribute to our FVSO musicians through the Chair Sponsor program. Not only does it provide important funds to the Symphony, but it is a very visible way to show our musicians how important they are to the community.
Thank you for including us in your planned giving arrangements. You are showing that you care about the artistic vitality of our community for future generations.
We are truly grateful to all of our friends. Thank you for 49 years of support and "love of humanity".
We are excited about our concert this Saturday, November 14, 2015 at the Fox Cities Performing Arts Center. Of course, we are always excited about our concerts, but this time, we are having a Concerto for Violin and Tabla. When is the last time you heard that? Exactly. The piece is Svara-Yantra by Shirish Korde with guest artists Marcia Henry Liebenow and Zach Harmon.
As an extra bit of luck, both our guest artists got to meet with the composer last week and work on the piece. Marcia was kind enough to share her experience with us!
This past weekend Zach Harmon and I met with composer Shirish Korde in Massachusetts to rehearse his Svara-Yantra Concerto for Violin, Tabla and Symphony Orchestra. We'll be performing this fantastic piece with the Fox Valley Symphony.
I'm very excited to perform Svara-Yantra. It's an intense and absolutely amazing work, and I'm really looking forward to collaborating with Brian Groner.
I'm also thrilled to work with tabla player Zach Harmon, who is a Wisconsin native. Zach studied in the Masters program at the Thelonious Monk Institute of Jazz, and studied tabla with Abhiman Kaushal. He performs, records, and teaches around the world.
Zach and I are both faculty artists at the Red Lodge Music Festival in Montana each summer, and I have known his father, composer and jazz pianist John Harmon, for many years. I have premiered a number of John's works at that festival.
Earlier this fall I made arrangements for Zach and me to rehearse the concerto with Shirish at his studio in Worcester, MA. Finding a few days when all of us were available was a challenge, but we were able to carve out a meeting time. Boston is my old stomping grounds; it's where I earned a graduate degree from the New England Conservatory.
On November 1 I flew to Boston and stayed with my brother and his family in nearby Westborough. Zach drove down from his home in Shelburne, VT. My brother and his family are avid musicians, although they pursue other fields for their livelihood. They loved hearing us work through the complex piece at their house!
Shirish is an incredible composer, a wonderful musician, and a genuinely nice man. He helped clarify musical questions we had and worked with us on our interpretation and preparation of his piece.
Zach and I can't wait to rehearse and perform this concerto with the FVSO!
Thanks, Marcia! We can't wait to share the stage with you this weekend!
Every now and then, we get a letter in the mail that makes us smile. I just had to share this one! Love it!
We could not carry a note if it possessed the proverbial handle on its back. We have never been exposed to symphonic music, until my suddenly out-of-town boss gave us his tickets to a FVS performance about 15 years ago. Quite frankly we were surprised we enjoyed it. I believe we felt the need to play The Grateful Dead extremely loud on the way home, just to be certain we were okay.
We have been season ticket holders for about a decade now and have learned not to be the first ones to applaud. We enjoy your humor and obvious connection with both the audience and the musicians. I have found tears rolling down my cheeks, and have seen my other half with tilted head and closed eyes trying to decipher each instrument's contribution.
The Celebrate Spring concert was truly one of our favorites. While Nazer Dzhuryn was amazing, Copland's Appalachian Spring Suite gave sound and substance to the unspoken sorrow of loved ones gone, yet later offered hope of their legacy within those remaining. Ravel's Bolero was quite fascinating to hear unfold, growing in strength and depth along the way.
While the music sheets you command will always be written in a foreign language to us, we appreciate you building a place which is warm and welcoming for all to experience this music.
Austin Larson Returns to the Fox Valley!
We don't always go over the top bragging about our fabulous guest artists, but this time, we really need to make an exception! This weekend, our guest artist is Austin Larson. He is a fine player and he's won many awards (see below), but, even better, he is one of our own! Austin is from right here in Neenah! And still better, Austin was a member of our own Youth Orchestra! We are all so delighted to have him come back home for our Opening Night concert this Saturday!
Don Krause: Don is our favorite horn teacher in the area. Not sure how we got lucky enough to have him teaching our students, but we are certainly glad we can count him as a friend. We currently have six horns in the Youth Orchestra, and Don is coaching all of them!
"Of all the students I ever had, Austin had the most focus and drive of any. A lot of students practice, but they either don't have focus or don't have the drive. Austin was always trying to improve his performances, even in his lesson assignments. He managed to memorize every solo that he played for solo ensemble year after year. Practice makes perfect was his constant motto! I have had him work with a lot of my students as he has become more successful and is always willing to take the time to help young students improve."
Bruce Atwell: Bruce is our Principal Horn for the Fox Valley Symphony and also teaches at the University of Wisconsin - Oshkosh. He works with our board, staff and youth orchestra students to help make improvements across the board.
"When Don first referred Austin to me as a freshman in high school, my first impression was that he was going to become a once in a generation horn player. His sense of musicianship was already well developed from years of playing the violin and his horn technique was solid and seemed effortless. This raw talent combined with an amazing work ethic pointed to a long and successful career as a musician. His attitude still amazes me. He is still so humble and grateful for all of the success he has achieved. He still calls or texts his former teachers to let us know how he is doing. I can't wait to see where he ends up."
Lynn Lichte: Lynn was our program director for Youth Orchestra while Austin was a student. She was an amazing asset to the symphony and our Youth and Education program. She has since retired, but we miss her every day!
"It was my great pleasure to know Austin Larson while I was the manager of the Fox Valley Symphony Youth Orchestra program. He was not only a gifted young musician, but a true leader in the orchestra. This fine young man received the coveted Youth Symphony "Leadership Award" during his senior year and went on to win numerous honors and accolades both nationally and internationally as an amateur and now professional musician. I believe that I can speak for the entire Fox Valley Symphony Youth Orchestra program in saying that they are proud to claim Austin as one of the brightest and best of their alumni and are thrilled to see him return as the guest artist to open the new concert season!"
We can't wait to have Austin on our stage again this Saturday! It is always a treat to work with talented guest artists, but when it is one of our own students who we've watched grow and succeed, it is a rare gift that we will all cherish!
You can also read the full program notes on our website.
Here is a copy of Austin's bio, so you can be as impressed as we are!
Neenah native Austin Larson has gone on to become one of the most successful young hornists of his generation. A graduate of Neenah High School, Austin was a member of the Fox Valley Youth Symphony for five years and studied with current and former FVSO hornists Bruce Atwell and Donald Krause. Austin has since developed one of the most impressive competitive track records of any hornist. He is one of only two people ever to win First Prize in both the University and Professional Divisions of the International Horn Competition of America, and has also won First Place in the International Horn Society Premier Soloist Competition, the Yamaha Young Performing Artists Competition, and the Wisconsin Public Radio Young Artists Competition. On the international stage, Austin was most recently a finalist in the Jeju International Brass Competition in South Korea. He has appeared as a soloist at many prestigious venues, including the Music For All Symposium, the International Horn Symposium, the Jeju International Wind Ensemble Festival, and Wisconsin Public Radio, and with orchestras in both the United States and South Korea.
Currently living in Denver, Austin holds the Assistant Principal Horn position with the Colorado Symphony and has previously held the Second Horn position with Symphony in C in addition to summer positions with the Verbier Festival Orchestra in Switzerland and Spoleto Festival Orchestra USA. Austin holds degrees from the University of Cincinnati College-Conservatory of Music (CCM) and the Curtis Institute of Music, and his primary collegiate teachers include Jennifer Montone, Jeffrey Lang, and Randy Gardner. A strong believer in music advocacy, Austin has also been involved with numerous charitable organizations, including Appleton-based Horns a Plenty Christmas, and has raised funds for music scholarships both at the University of Cincinnati and in the Northeast Wisconsin area. For more information, visit www.austin-larson.com.
Welcome to our 49th Season! We are thrilled you are joining us for an exciting year of beautiful symphonic music.
I joined the Board of Directors for the Fox Valley Symphony Orchestra [FVSO] several years ago. Music and the arts have always been an important part of my life. I took piano lessons for a decade as a child but the lessons gradually slipped from a priority in my life as I entered college and launched my career. Thanks to my engagement with the Symphony (including trying to play the violin in our first Sinfonia fundraiser), I've started playing the piano again. I'm not very good but it brings me great joy and serenity.
As President of the Board, I'm excited to be working with our talented musicians, staff, sponsors and donors to further strengthen this community gem. The FVSO is experiencing tremendous momentum as we head into our 50th Anniversary in 2016. I'm very passionate about helping make the Symphony as accessible to the community as possible. We took a big step toward that goal this year by launching our Beer, Brats and Beethoven event in collaboration with the Timber Rattlers at Fox Cities Stadium. Thousands of people from our community heard the Symphony for free thanks to the tremendous support of area businesses and donors including the Neuroscience Group and Kimberly-Clark Corporation.
This season marks the 10th Anniversary of our partnership with Thrivent Financial as our Symphony Series underwriter. Thrivent's commitment to the FVSO is a testament to their ongoing passion for the arts in our community. Not only has Thrivent committed significant funding to the Symphony, but they've also shared their time and talent with us as well. Please join me in thanking Thrivent Financial for their leadership. We couldn't do this without them!
We are fortunate to have growing support from area businesses. We deeply appreciate the long-standing and continued support of The Boldt Company as our Lead Season Sponsor along with Community First Credit Union as our Community Partner Sponsor. This year's sponsors include Jewelers Mutual Insurance Company, Neuroscience Group, Secura Insurance, Plexus, Menasha Corporation Foundation, Associated Bank, Godfrey & Kahn, S.C., East Wisconsin Savings Bank, Alta Resources and Schenck SC. Thank you from the bottom of our hearts.
We are close to having all of our musicians supported through our Chair Sponsorship program. Please help us ensure that ALL chairs are sponsored now and for future seasons! There are many other ways you can support us, including making the FVSO a part of your planned giving.
Last but definitely not least, a heartfelt thank you to our musicians. Without their awe-inspiring talent, we wouldn't be here today. As I've started to get to know our family of musicians, I quickly learned that many of them have been with us for over 20 years! The level of commitment and passion is palpable in every rehearsal and performance. It is because of you that future generations are inspired to carry on this great tradition.
Thank you for joining us for an experience that only an orchestra like ours can provide. It's truly a phenomenon everyone in our community should be able to experience. I look forward to working with you to help make the music live on for all to hear.
As the Fox Valley Symphony prepares to enter its 49th season as a community orchestra, we've elected new board leaders. Our new President of the 17-member Board of Directors is Jeff Amstutz, Creative Director/Principal of A2Z Design. Addie Teeters, Marketing Communication & Media Relations Manager for Expera Specialty Solutions, was named President Elect.
Other Board Officers include Jane Chaganos, treasurer; Priscilla Daniels, secretary; and Peter Gianopolous, Immediate Past President. Jamie LaFreniere serves as Executive Director and Brian Groner is the Conductor and Music Director.
"This is an exciting time for the Symphony." says Beth Flaherty, former board President for the Fox Valley Symphony and current President of the board for the Fox Cities Performing Arts Center. "The organization is focused on a sustainable future and our new leaders are energized and looking forward to preparing for a fantastic 50th anniversary season. They will continue the work the Symphony has done to ensure a strong future for live symphonic music in the Fox Valley with innovative programming and community involvement being a top priority."
The Symphony's mission is to enrich and nurture the human spirit through symphonic music and educational opportunities that enhance the cultural development of our community. Founded in 1966, we are a non-profit providing the community with quality music, as well as performance and educational opportunities for area musicians.
Our new season starts on October 3, 2015, at the Fox Cities Performing Arts Center.
There is still time to get your Season Ticket Package, so you can lock in your seat and not miss a single night with us.
Please visit our website for more information about our concerts, or call (920)730-3760 to order your tickets today!
Now that we've wrapped up our year-end giving campaign, we just wanted to say THANK YOU!
From the bottom of our basses to the top of our piccolos, we thank you! You attended concerts, sent donations, sponsored musician chairs, funded outreach activities, and supported youth orchestra programs - we are grateful for your investment in our mission through your generosity.
The Fox Valley Symphony will honor your support by staying true to our mission to nurture the human spirit through symphonic music and educational opportunities that enhance the cultural development of our community. We will continue to be an integral part of the beautiful tapestry of arts groups that make the Fox Cities a wonderful place to live.
This is my third year as conductor of the Philharmonia, and each year has offered its own unique combination of successes, challenges, and opportunities for the students to grow as an orchestra. When I first entered the position in late spring 2012, the students had already gone through their auditions and I hadn't met or heard them (beyond the ones who were there for my interview, many of whom were in the previous year's ensemble). I had to rely on Greg Austin's (Concert Orchestra conductor) experience listening to them try out, as well as his experience with the Philharmonia-level repertoire, to help me prepare for the early fall retreat and the first concert. Greg was, and continues to be, a tremendous resource of expertise and insight into the past performances of pieces in the FVSO library. By around the time the students were preparing for their spring "mini-tour," I was finally starting to feel like I knew what I was doing, more or less! I also knew from my years of teaching that I would soon have to start from scratch, listening to many new members auditioning in (or up, to Concert Orchestra). It was a bittersweet time, offering congratulations and well wishes for good auditions that, if successful, would mean that I would no longer be working with those students.
For the second year, I wanted to build on what I saw as a successful first year while offering some different experiences, especially for students who had been in Philharmonia the year before. I tried to offer more solo opportunities, and watched students step up to leadership roles as they challenged themselves to learn these. I also programmed a piece by a living American composer (Magen Miller Frasier), and made the bold statement that the orchestra could do a "distance rehearsal" using software like Skype, even before I had tried to contact the composer! Thankfully, she was very generous with her time and praise of the students, and even requested permission to put their performance of her piece on her website. It was a great moment for the students to have a direct connection with the music-making process that I hope they always remember.
As this year began with the auditions, I was struck by two things: how the orchestra overall seemed a bit younger, and how incredibly violin-heavy it was! This presented a challenge in selecting repertoire that I thought would complement the sounds and strengths of the other sections, while also being appropriately difficult and different from the previous years. For the first time, I chose pieces that feature guest percussionists, a role that has been graciously filled by members of the Youth Orchestra percussion section. I've also seen the smaller viola, cello, and bass sections rise to the occasion and play with a strong, confident sound that allows for better balance.
On days when the orchestra has sectionals (three times for each concert cycle), I move from room to room to hear how everyone works together, and I have been continually impressed with the maturity and work ethic the students have shown. The coaches have said as much as well, and have appreciated how much can be accomplished. I feel like all the hard work and progress is helping make this first concert of the 2014-2015 season even more polished and excellent-sounding than the past two years!
One of the best things that I get to do as a professional cellist and teacher is to play with the Fox Valley Symphony's Artistic Adventures education program for elementary age children. Collaborating this year with the Trout Museum and the Fox Cities PAC was fantastic. To consider that a string quartet this fall played in 22 up-close performances for over 700 children total is astounding and incredibly meaningful.
Experiencing live music can lead to deeper understanding, joy, and a rich emotional range that is beyond words. I am so privileged to work with other enthusiastic members of the Fox Valley Symphony in this educational outreach and in all of our symphonic concerts.
Every year I cherish these rich times, which bring priceless experiences of community and deep connection to all of us alike: performers, students, and our symphonic audience at the PAC.
I have been the principal French horn of the Fox Valley Symphony since 1998. Over the course of those 16 years I have witnessed amazing artistic growth of the orchestra. The Fox Valley now has one of the premier orchestras in the state, something to be very proud of as a community.
The players come from all walks of life; many are full-time professional musicians and many have day jobs, but the commitment to music making and to preserving this beautiful art form is universal. This is more than a collection of musicians; it is a family that comes together to present the incredible repertoire of the symphony orchestra to the community. I have seen the response from the audience to our concerts: you can feel the pride and love that is transferred from musicians to audience and back. There really is nothing else like it.
As the musician representative on the board of directors, I am particularly struck by the dedication of the board members who support and run this fine orchestra. I have been an orchestral musician for over 30 years and I have never seen a more committed, caring, and passionate board of directors and staff.
The Fox Valley must protect and preserve this incredible asset. It should be a point of pride for everyone who lives here. When a community cares about art it creates a wonderful place to live and work.
Fox Valley Symphony is extremely fortunate to own one of the best sets of Timpani in the world, manufactured by Adams in Holland, and distributed here in the United States by Pearl Drum Co. They are known as the 'Cloyd Duff' model, named after the world-famous Timpanist of the Cleveland Orchestra, Cloyd Duff. I was fortunate to have studied with him in master classes. He is one of the greatest players ever.
Our set of five currently has a value of $40,000. They are some of the finest Timpani I have ever performed on, period. Years ago, I was fortunate to have worked with our Executive Director during Fox Valley Symphony's transition from performing at Lawrence University to our current home, the Fox Cities Performing Arts Center.
At the time, I was asked to put together a "wish list" of all percussion instruments, being mindful of quality, tonal excellence, and budget. This was for all equipment, as back then, when at Lawrence, the FVS did not own any of its own percussion equipment. So it was a pretty big deal to get it right. This initial list did not have the Adams Timpani included, as I never thought it could possibly materialize due to the cost.
Paul Heid, owner of Heid Music, called me the very next day. The symphony was working with Heid Music to order the equipment, getting the mission-critical equipment ordered first so we could start our season at the PAC. He told me he saw the list and then asked, "As Timpanist, what would be your dream set of Timpani?"
I remember it like it was yesterday. I told him, "The Adams Cloyd Duff Timpani, of course." He replied, "Done."
I said, "What do you mean, done??"
He said he would figure out a way for this to happen...and he did. He worked his magic, as he was also President of NAMM at the time. He went above and beyond, ordered up these same Timpani, showcased them at NAMM, then brought them back to Appleton.
He gave me a call and said, "Hey Paul, your drums are in. Come on down to the store and check them out!"
I walked into the store, to the back storage room where he had them placed, removed the cover of one, saw they were the real deal and started crying. I just could not believe how someone, out of the goodness of their heart, could go above and beyond in such a way. It was one of the most beautiful moments of my life, and hence why I care for these drums the way I do.
I will always remember what he did for us, and will be indebted with gratitude to him forever. It was magic.
The Fox Valley Symphony Orchestra kicked off its 48th concert season with a fascinating program of challenging music. This concert also marked the beginning of Maestro Brian Groner's 20th year as conductor.
Opening the program was a spirited performance of Johann Strauss, Jr.'s delightful "Overture to Die Fledermaus." The overture is filled with an assortment of tunes that audiences have come to associate with the composer.
Attention was quickly turned to the feature work of the first half, "Piano Concerto No. 3 in C Major" by Sergei Prokofiev, featuring guest artist Claire Huangci. The youthful Huangci wowed the audience with her seemingly effortless mastery of Prokofiev's massive and demanding opus.
The first movement opens with a simply stated yet tuneful solo by the clarinet, played eloquently by principal clarinetist David Bell. This tune quickly gives way to the strings, but the melodic serenity is suddenly ended with the arrival of the allegro section in the strings and the first entry of the solo piano. It was at this point that Ms. Huangci clearly let her presence be known.
Be it brilliant scalar passages or bursts of rhythmic energy, Huangci's clarity of line was always at the forefront. In addition, she has the ability to skillfully execute the intricate weavings of the piano line within Prokofiev's constantly shifting density of orchestral structure.
Two things stood out: her precise touch at the keyboard and expert blending of dynamics, a wonderful fusion of technique and artistry.
The second movement is a set of variations, which opens with the orchestra playing the main theme, a curiously witty melody first heard in the winds. The variations feature the solo piano. It is here where Prokofiev deviates from the gavotte feeling of the theme.
Huangci undoubtedly had a clear understanding of the personality of each variation and showed it in her playing, be it the gossamer trill and glissando that opens the first variation, the rapid scalar runs up and down the keyboard in the second, the wildly syncopated and angular gestures of the third, the beautiful free dialogue between piano and orchestra in the fourth or the frenetic pacing of the final. All these personalities were distinctly executed at the keyboard, making the movement all the more exciting.
The quiet ending of the second movement merges attacca into the finale, Allegro, ma non troppo. Groner's opening tempo was quite deliberate, adhering closely to the "but not too much" advice of the tempo marking.
Unquestionably, this is the true virtuoso movement of the concerto, with multiple climaxes and a brilliant ending. It was also here where Ms. Huangci demonstrated her technical skills to the fullest.
The coda is a musical confrontation between the orchestra and soloist, with both vying for compositional importance. Huangci's energy and concentration allowed her to handle the complex ornamentation, arpeggios, glissandos and other flourishes while cutting through the massive orchestra. Four lively chords scored for piano and orchestra together bring the concerto to a dramatic close.
Beethoven's "Symphony No. 3 in E-flat Major (Eroica)" comprised the second half of the evening's program. As we've become accustomed to appreciate over the years, Groner's vision and execution of this masterwork was complete, thought-provoking, and most of all, musical.
The opening of this symphony never ceases to put a smile on my face, two marked E-flat major chords, and a gloriously simple arpeggiation of the tonic triad ... so simple, so lyrical, so Beethoven.
Groner's tempo choice unquestionably played into the heartfelt interpretation of the opening movement. Within the orchestra, the balance of the strings was particularly notable.
The haunting, well-known funeral march theme of the second movement, Adagio assai, is first heard played by the cellos and then given to the solo oboe, played beautifully by principal oboist Jennifer Hodges-Bryan. Also present in this movement was the use of fugue-like passages in the middle section. Groner's ideal choices of tempos and dynamics contributed greatly to the success of this movement's performance.
The third movement is an animated scherzo, filled with rhythmic energy, and a glorious passage of hunting calls heard in the horn section. The orchestra, and especially the horns, played expressively, paying careful attention to each of Groner's gestures from the podium.
The finale, Allegro molto, offered another set of variations for the evening. The movement itself is quite grandiose, and shows the direction Beethoven was moving regarding the importance of the symphonic finale.
Again, Groner was at his best with his conducting, just the right tempo, energy, and clear identity to each of the thematic variations. All of these elements led to the orchestra's rendering a meaningfully expressive performance of Beethoven's masterwork.
Welcome to Our 2014-15 Season!
Thank you so much for being part of the Fox Valley Symphony's thrilling 2014-2015 season. From the first notes on opening night (the sparkling and energetic Overture to Die Fledermaus by Johann Strauss) to the last notes of the finale (brought to you by Liszt's epic tone poem Les Preludes), there will be music that inspires you.
Our orchestra is an amazing group of interesting, creative and talented people. I hope that you find a chance to speak with some of the musicians of the FVS over the course of the season. Each player brings something special to the sound; each player brings you their very best on every concert. They serve both the art of music and our audience admirably.
As I begin my tenure as president of the Board of Directors of the Fox Valley Symphony, I am very excited about this year's concert season and am grateful for the opportunity to help bring this wonderful gift, our symphony, to you. It is my belief that art and music are some of the sweetest fruits in life. They touch our soul, inspire us, and bring richness to life.
In the Fox Valley, we enjoy and celebrate a rich tradition of art and music, and our symphony is one of the biggest reasons why. From our schools and universities to the performing arts, the symphony is woven into the fabric of our way of life. Our symphony and its talented musicians work with many other organizations, businesses, and people. By supporting and cultivating local musicians and artists in our community, we are not only enhancing our own lives but the lives of our family and friends for generations to come.
The Fox Valley Symphony is dedicated to bringing education, art, and music to this community, to the next generation, and to you, our symphony family. We are planning many new social, educational, and fun events this year and hope to see you there. Our symphony family includes you, and we are very thankful for your patronage and financial support. For without it, we would not be able to touch the lives of so many.
Our technical crew plans each detail before opening night. Volunteers and staff work together to ensure everything is in place before the first note hits.
We've been given this incredible opportunity, and it is always met with sincere gratitude.
We are thankful for our sponsors and donors who make our season possible. We are thankful for our board members who help plan and implement our mission. We are thankful to the teachers working with music students in our community to engage future generations of artists and patrons. And we are thankful for you, who attend each concert and show your support with applause year after year as we work toward our 50th Anniversary.
Carl Orff's Carmina Burana has become a classic for musicians and audiences because of its percussive music, hypnotic melodies, lilting passages and all-out, robust orchestration. On Saturday, May 3, more than 200 regional musicians will collaborate to present this classical masterwork in live performance at the Fox Cities PAC.
The rowdy subject matter is set to some of the most beautiful melodies in classical music literature. The Carmina were songs of medieval traveling students and ex-monks who left universities and monasteries to pursue a roaring life of gambling, drinking and making love. The texts of the songs were discovered in a Bavarian monastery near Munich in the early 20th century and are a mixture of 13th-century Latin and "low" German. The songs in the Carmina cover a range of topics, as familiar then as they are today: the fickleness of fortune and wealth, the ephemeral nature of life, the joy of the return of Spring, and the pleasures and perils of drinking, gluttony, gambling and lust.
The performance culminates the Fox Valley Symphony's 47th season and is a favorite of Music Director Brian Groner. "There is something wonderfully primal about the text and the music of Carmina Burana," Groner said. "When it speaks of power it is bold and over the top aggressive; when it talks of love it is either bawdy or exquisitely tender."
According to newVoices Artistic Director, Phillip Swan, the masterwork is a welcome collaboration with the symphony. "Choral/orchestral collaborations provide a cross-pollination of musical interests," Swan said. "Consequently, it's good for the community to have arts organizations working together to put on quality productions."
For singers and instrumentalists alike, Carmina Burana is a musical challenge because of the range of emotions needed to interpret the composer's music. One movement requires repetitive, full-voiced singing and playing while the next movement requires a gentle, lyrical approach.
"It takes an unusual amount of concentration to maintain the rhythmic intensity Orff demands in the score, and because it is repetitive it can be physically challenging," Groner said. "It's a big sing," Swan said. "The melodies are present an extreme of emotional singing requiring consistent vocal technique as well as artistic interpretation."
Singers in the Lawrence Academy of Music Capriccio Girl Choir in grades 5-7 are excited for the opportunity to sing with a full orchestra, professional soloists (one of whom is a girl choir alumna), and an adult choir. "The girls are learning to listen to how their part fits into the other vocal and symphonic parts," said Director of the Lawrence Academy of Music, Karen Bruno. "Singing with an orchestra allows them the opportunity to hear different timbres with their 'accompaniment.' The girls are used to hearing only the piano, with occasionally one other string or wind instrument, while they sing."
For the Hodges family, the performance will be a reunion. Father Mike Hodges is a founding member of newVoices where he sings with his son, Jeremy. Daughter Jennifer Hodges Bryan is an oboist with the symphony and brother Jonathan is a cellist. The family shares a long history of music and fostering musical development.
"We gave our kids outlets for enjoying music," Mike Hodges said. "They all started in violin and in time gravitated toward their own choice of instrument," he said. His wife, Donna, drove the kids to lessons at the Lawrence Academy of Music and checked their practice progress.
Jeremy Hodges says the opportunity to perform together is a normal part of a musical family.
"But in the end it does have a special personal meaning: the people I care most about are with me and sharing the fun," he said.
His father agrees. "I get such enjoyment from performing and to be able to have them on stage with me doubles the enjoyment. There is a sense of pride in watching their accomplishments," Mike Hodges said.
Jonathan Hodges says the different roles family members play allows for unique perspectives. "I am more toward the front of the stage, Jennifer is in the middle, my father and Jeremy are toward the back and my mother is out in the audience. Every spot does sound quite different and can expose different aspects of the performance," he said.
Family members are continuing the tradition as Jennifer Hodges Bryan has her three daughters enrolled in music lessons. "Having them learn an instrument and involved in music is something that I really wanted for them because I think there are several benefits to a child's development when they are involved in music," she said.
Both conductors urge area residents to experience the work live, rather than listening to recordings. "You can't reproduce the sound of 200 musicians live by putting it in a little speaker and expect it to sound the same. Hearing this music live is worth unplugging," Swan said.
"Some of the greatest pieces of western civilization's art music combine the forces of chorus and orchestra," Groner said. "There is a power in them that is greater than each standing alone."
Carl Orff's Carmina Burana is an enduring audience favorite, and one of the most recognizable pieces of music ever written for orchestra, chorus and soloists.
As part of our upcoming Cory Chisel concert, we are proud to be working with the Fox Cities P.A.C. and local high school students to bring another Compassion Project event to our community. "The Art of Compassion" is a silent auction of student art, inspired by the works of local non-profit organizations, with all proceeds to be given to those organizations. Students chose to work with NAMI, ARC of the Fox Valley, Harbor House and the Fox Cities Emergency Shelter.
Art Student Sarah Ellisen at work on her project.
The students have been working hard on their projects, and there are over 120 pieces to bid on in the auction. We are so fortunate to have such a large group of dedicated students and teachers working on behalf of these organizations.
Chip Noffke, Visual Arts teacher at Appleton East, was kind enough to share his experience with us.
"As an AASD Fine Arts Teacher, I was excited and honored be part of this great opportunity. Visually listening to our youth is something I do on a daily basis, yet I am still amazed when I see the range of results and compassion that so easily pours from our students. It is my hope that as you enjoy the answers to this rich question, your hearts and eyes will also be opened to see the possibilities and fullness of our all futures through our young artists' eyes and these four noteworthy organizations.
"In continuation with our last community wide event, Fox Valley youth artists share how "The Fine Arts" continue to be one of the strongest and most diverse communication tools. Students have once again easily opened our emotional doors and bridged the connections between community, education and humanity through their art which focus on local organizations and the compassion they provide for the Fox Valley.
"NAMI, ARC of the Fox Valley, Harbor House and the Fox Cities Emergency Shelter are four groups that have various roles in our K-12 systems, though often over looked how. Our students had the opportunity to explore the ways in which each organization played a role in helping all ages, genders, and families succeed in coping and overcoming life's left turns. One common point that had a significant connection with students is that we all knew of somebody that has worked with one of these organizations on some level. This offered great inspiration for the artists.
"The artists involved were asked to share their interpretation of what compassion looks like for one of the organizations or how their art could offer compassion for somebody working with one of the four organizations. Artists then used their gifts and talents to visually express their feelings, thoughts, and ideas about each group to bring awareness and support to these service organizations right here in the Fox Valley with amazing results. Each original art work reflects their unique answers."
Please join us for this special event at the Fox Cities Performing Arts Center on Saturday, March 15 at 7:30pm. You can purchase tickets to the concert on our website.
Doors open at 6:30, so come early to see and bid on the art!
We are proud to partner with Appleton's Compassion Project for their second event here in the Fox Valley, the Art of Compassion.
At our March 15 Cory Chisel concert, we will be opening the K.C. Theater at the Fox Cities Performing Arts Center as our art gallery. Before the concert and at intermission, our audience can view works of art from our local high schools and bid on them in a silent auction. Each piece is inspired by one of our local charities, and the money raised from the auction of each piece will be donated to that specific charity. It is an amazing way for our students to dedicate their time and art to a charity that is meaningful to them.
St. Francis Xavier High School student Bridget Flaherty is our coordinator for this project, and we are also lucky to have her as part of our Fox Valley Symphony Youth Orchestra. For the Art of Compassion project, Bridget will be working with the artists and helping to set up the silent auction at the Fox Cities Performing Arts Center.
"Having the opportunity to work on a project like the Compassion project not only inspires me but also proves to me that there is hope for my generation," says Flaherty.
"When deciding what I wanted to work on for my required Junior Service Project at Xavier High School, I knew I wanted to choose something regarding the arts. Music and the arts have been an enormous part of my life since I was young through violin and piano lessons, participation in the Fox Valley Youth Symphonies, and participating in choir and art classes at school. When my mother, Beth Flaherty, suggested the Compassion Project I knew it was the perfect fit. Now that I have a deeper understanding of the purpose of the project and the involvement I have an even greater appreciation for the wonderful thing the project does.
"I believe the most important aspect of the project is the unification of the Appleton schools through the value of compassion. All the students participating have different perspectives on what compassion means to them, and after reading all 120 of the artist statements, my definition of compassion has broadened. Every piece of artwork is worth more than any amount of money could buy it for because of the thought and hard work put into it by the student artists.
"This exhibit will not only inspire you, but it will encourage you to step back and ask yourself what compassion means to you, and do your best to live your life with those values."
Of course we are excited about our upcoming concert with Cory Chisel and the Wandering Sons on March 15. And one of the things making this concert even more special for us is that one of our own musicians, cellist Heather Anderson, is arranging the music for the symphony and Cory!
Heather Anderson working with our Philharmonia students.
"Composing is a funny process for me. Or maybe what I experience is pretty typical. I don't know. For all the analyzing I do - keys, time signatures, form, etc. - none of it matters much in the end. No amount of analyzing and planning can create the synergy of notes working together to create something that elicits an emotional response from the musicians and audience. That takes a bit of luck, some artistry and a group that can embrace and interpret a song with zeal. The more I think about a song and analyze it the harder it is for me to actually "put pen to paper" (or in this case mouse to Finale software) and find the motivation to actually begin writing a song. It can be very scary to stare at a screen with blank staffs and not be sure which part of that giant elephant to begin eating first. It can cause anxiety and frustration.
"Blank canvases, journals or music staffs are scary to look at. Insecurities don't help. A lot of us are afraid to fail, but just as big of an inhibitor is being afraid to succeed. If I dwell on either too much, the muse flees and I can't write anything. So, where to start? As a cellist I almost always start with the bass line. I'll listen over and over. I'll hum it. Then I'll transcribe it out for our bass section. Then I listen to the melody and start to transcribe it, putting it anywhere to begin with, usually into the violins just to have it be somewhere at first. But those are still just planning and analyzing. Those don't reflect energy, style, or the soul of a piece. Often I get stuck at this point because I am still only using my left brain, still analyzing.
"Maestro Groner said something to the symphony in a rehearsal once, perhaps 5 or 6 years ago, that has really stayed with me. We were playing a modern 20th century piece that very few of the orchestra members cared for. He could sense this and he stopped us. In a calm, quiet voice he said something along these lines. "Look, if we don't believe in this piece, how will the audience ever believe in it or enjoy it? Here's the rub: You don't know what you like; you like what you know. People gravitate towards the familiar." So, we all were charged with listening to that particular piece often at home as a part of the concert preparation process. This has changed how I approach a lot of music, familiar and new, those that I like and songs I dislike. So, when arranging one of Cory's songs I listen to it A LOT. Enough that I dream about it. Enough that I know the chord changes and melodic variations from one verse to the next by heart. I'll get fixated on a piece for a week and sing it in the car, at work, in the shower. I may be a Cory Chisel expert by the end of this composing project! This week my idee fixe is "Born Again." Next it'll be "Mockingbird" since I'm starting that one tomorrow.
"At some point during my listening the magic happens. Ideas just start to pop into my head, unbidden. I didn't plan to put that melody in the trumpets, but that's what's in my head and, wow, it sounds pretty darn good there! Harmonies unfold, interesting little timbres pop out in my imagination where, for example, chimes in the percussion section would really accentuate a spot and create a little bubble of excitement. Often I'm surprised at what my imagination present to me. Sometimes I'll hear whole sections played, finished in my mind and have to write it down very quickly to remember what I "heard." But it all starts with a lot of listening to Cory's CD's and really coming to know the song. And it takes relaxing my mind and being open to the muse, if you will. And when a song is completed I'll routinely listen and ask myself "how did I do that?" The answer is: Relax, listen, and create.
"I am thrilled Cory will get to hear his music interpreted with an entire symphony orchestra - something usually reserved for huge names like Sting or Metallica. I am both excited for my peers to play my notes, my work, my interpretations of Cory's tunes and I am equally terrified. Cory, Maestro Groner and my peers have high expectations because they are all professional musicians and expect a professional level product from me. And most have never played anything of mine before. While I have premiered a piece with a few Illinois orchestras in the last few years, most of my peers never even knew I wrote music until they saw my name in the January concert program! I know that, even if I have some typos for less familiar instruments to me, the other musicians will celebrate the occasion with me and give me excellent constructive feedback so I can improve. Already I have had numerous offers from my peers to look at parts and help me understand their instruments better; they want me to succeed. This is greatly comforting and buoys my energy. I'm so excited to share Cory's and my music with them and the audience and have the chance to both compose and play something with my own symphony orchestra, my home team. This is truly a rare opportunity and I feel blessed to have been trusted with this task by Brian Groner."
January Review: 2014 Off to a Great Start!
We had a great first concert of 2014!
"Cold weather didn't keep devotees of the Fox Valley Symphony Orchestra away from their subscription concert, "Celebrating Women Composers," on Saturday night. The music selected formed a rather eclectic program, spanning a wide range of musical history and varying styles.
The concert opened with a rendering of a 2008 composition by the American conductor/composer Diane Wittry, titled "Mists." Scored for full orchestra, the piece featured numerous contrasting colors and emotions, from its dark opening, to its brass-laden climax. While there were occasional moments of musical interest, in all, I found the piece to be rather lackluster, and deficient in continuity.
The orchestra's principal flutist, Linda Nielsen Korducki, was featured soloist for the Concertino for Flute and Orchestra in D major, by Cecile Chaminade.
From its familiar opening melody, and through the technically advanced passages, Korducki demonstrated her complete understanding of the music. She possesses a lovely tone, with great strength in the low register, and balance throughout the flute's entire range. Her articulation was precise as were the rapid scales featured in the concertino's middle section.
A rich fullness was present in the orchestral accompaniment; a nice balance, supporting, but never overriding the prominent role of the flute. It was an absolute joy to hear this time-honored work so beautifully played by an accomplished professional.
The crowning glory of the evening, however, had to be the performance of the "Gaelic Symphony" by Amy Beach. This 40-plus minute composition in four movements can truly be recognized as one of the great symphonies in American musical history.
The orchestra played at its best while closely adhering to conductor Brian Groner's expert direction. The color, harmony, thematic elements and sheer genius of orchestration technique put this work in a class by itself.
The opening movement, Allegro con fuoco, was filled with grand and heroic musical gestures. From the beginning, Beach was able to show her familiarity with orchestration and color, while reducing the full orchestra to many clearly defined solo passages. In the case of the first movement, these were primarily found in the principal horn and clarinet parts, expertly played by Bruce Atwell, principal horn, and Christopher Zello, principal clarinet.
This idea of "featured" solos continues into the second movement, Alla Siciliana; Allegro vivace, in three part form, alternating from the lilt of the siciliano which emphasized the winds, to a sprightly middle section calling attention to the strings.
The third movement, Lento con molto espressione, placed its emphasis on the expressive. The highlight of this movement was an extended violin solo played beautifully by concertmaster Yuliya Smead. This solo concludes in a duet with the principal cello, again well played by Laura Kenney Henckel. I can't help but feel that the word "gorgeous" best describes this movement.
The finale, Allegro di molto, was filled with motion and rhythmic energy. It is in this movement where Groner's direction came to the fore. His tempos were exhilarating, and his attention to detail brought out the very best that the score had to offer.
It was evident that the orchestra was feeling the excitement of playing this glorious symphony."
This Saturday, January 25, we start our performance year by celebrating women composers. You will hear pieces from Diane Wittry, Cecile Chaminade and Amy Beach.
Music history, in much the same way as history in general, has tended to neglect the contributions of women. Think for a moment about Mozart's elder sister "Nannerl", who was often thought of as having an even greater gift than her brother. When she reached what was thought of as a "marriageable age" she was no longer allowed to perform.
Another example would be that of Fanny Mendelssohn, the sister of Felix Mendelssohn. Their music teacher Carl Zelter found Fanny to be the more gifted of the two but today when we say the name Mendelssohn in musical circles we make the assumption that we are referring to the younger Felix.
And so, we are presenting a concert of music written by women to raise awareness of the fact that talent is not based on gender.
The Chaminade is a staple of the flute literature. It is that wonderful combination of demanding for the performer and wonderfully attractive for the listener. Our own principal flute, Linda Nielsen Korducki, will be our soloist for the piece!
The Gaelic Symphony of the American composer Amy Beach (Mrs. H.H.A. Beach) is beautifully written, quite late German Romantic in style and is a testament to her intellect and persistence. Her story is an interesting one. She was a true child prodigy, singing and composing before the age at which most children can speak. She had a career as a concert pianist, but was not "allowed" to continue performing when she married but was "allowed" one concert of her own compositions per year. She is known as the first American female composer of large scale compositions.
The concert is at 7:30pm at the Fox Cities Performing Arts Center in Appleton, Wisconsin.
Join us for a pre-concert talk at 6:40pm and a post-concert party in the lobby!
The ABD welcomes the opportunity to respond to the Government's consultation paper on speed.
The ABD was founded in 1992 to campaign for improvement in road safety, driver training and education and balance in transport policy. With a growing membership, the ABD is now the leading independent drivers' group in the UK.
The ABD is a wholly voluntary organisation, drawing its Directors, Committee Members and Members from a wide variety of industries and occupations. The ABD receives no corporate sponsorship and is funded entirely from the subscriptions of its members.
Over the last ten years transport, and road transport in particular, has moved ever further up the political agenda, and one of the most central transport issues has become speed. Speed and speed management are now key areas both in terms of their implications for the safety of all road users — not just car drivers — and for the development of a truly integrated approach to transport in the UK.
In the last few years drivers have seen many new, lower speed limits introduced, including blanket 30mph limits in some counties and proposals for them in others, a variety of engineering measures designed to reduce speeds and a far greater emphasis on speed enforcement by the police.
The debate on speed has been moved into a new phase by Helen Brinton MP and her private member's bill on "Country Roads and Villages", in which she proposes the creation of "quiet lanes" with speed limits as low as 20mph and country road limits of 40mph, enforced with a series of traffic calming measures.
Speed has emerged on to the environmental stage with some local authorities talking of reducing and enforcing lower speeds so as to encourage modal shift away from the car.
At the same time, the need for rapid, flexible individual mobility has never been greater as workers travel further afield in search of jobs.
That speed needs to be managed is not the issue at stake — either in this document or on the roads outside. What is at issue is the importance of speed within the overall framework of road safety policy and the way in which it needs to be managed — and by whom — so as to have the greatest effect on road safety.
There is a clear need to examine the most effective long-term strategies for improving safety on our roads, and so implementing a series of "quick fixes" should be unthinkable. All aspects of safe driving need to be addressed, not speed in isolation. There needs to be a clear focus on addressing the causes of road accidents and of poor driving, not simply on remedial attempts to tackle the symptoms of a deeper underlying problem.
This response examines:
- The external influences that are used to modify driver behaviour, and their effects.
- The place of speed and speed management in road safety.
- The role and effect of driver education and training on safety.
- A number of ways in which more effective speed management can be brought about.
"Speeding", in the context of this document, takes its most obvious definition: exceeding a posted speed limit. Of course, speed limits can change and are changing, and so it is perfectly possible that a driver travelling legally — and safely — at 60mph on one day can be breaking a new 30mph limit the next. Of course, although it is illegal to exceed a posted limit many thousands of drivers do so by greater or lesser degrees every day on the UK's roads. Until the recent Police initiative Operation Pride there has always been a tacit acceptance that drivers will not always adhere exactly to the speed limits — the issue at stake has been by how much they've exceeded them and the context in which they have been exceeded.
"Excessive speed" or "inappropriate speed" as far as the scope of this document is concerned — mean travelling at a speed which is unsafe for the conditions. This speed can be, and often is, lower than the posted limit — even when that limit is already set relatively low. It will be immediately clear that this definition is different from and infinitely more elastic than that of "speeding" above — however, its effects are anything but inexact — travelling at excessive speed is potentially lethal, whether that speed is 9mph too fast or 90mph too fast.
What is a "safe speed"?
A "safe speed" — one at which a driver has time to accurately observe, anticipate and react to hazards (hazards being anything that will necessitate him or her having to take some sort of action) — varies not just from road to road, but from car to car, metre to metre, time of day, traffic density and most certainly from driver to driver. There can be little doubt that whilst one driver may be perfectly competent at 70 or 80mph+ — and in the case of trained Police drivers at speeds considerably higher than this — another can be a liability at 30mph — a "safe speed" has no common or fixed value, and so to advocate exact adherence to speed limits or lower limits is unhelpful at best.
Only experience, education and training show drivers which speeds are safe for the conditions. The motto of the Police driving school at Hendon has always been "Experience Teaches". It is considerably more than a sadness, then, that the length of and funding for Police driving courses have been cut over the last few years, with serious effects on accident rates and minimal investment in civilian driver training.
Let's return to speed limits and briefly look at them conceptually. First of all, posted speed limits are certainly legal absolutes but not absolutes in either a scientific or philosophical sense — the sense, for example, that water always boils at 100 degrees or that 1 + 1 always equals 2. Drivers can and do exceed a limit regularly with absolutely no ill effects, either to themselves or to road safety. In fact, when questioned in a recent survey, only 9% of magistrates said they would observe a 30mph speed limit in light traffic in daytime.
To focus on speed limits to the exclusion of individual drivers' "safe speeds" may be legally correct, but is likely to have little impact on accident or fatality figures as in some cases the safe speed may be far lower than the posted limit, in others, somewhat higher. It also means that enforcement targeted at exact adherence to a limit is unlikely to result in compliance beyond the period of enforcement.
Drivers should not drive at a safe speed because they are scared of being prosecuted or of breaking the law, but because they recognise that it is safer to do so. To achieve this, the limits must be, in the drivers' perception, reasonable, consistent and realistic, and have clear safety benefits. To introduce speed limits for the purpose of encouraging modal shift could be argued to be extremely dangerous, as consistency has the potential to be lost in the move away from safety issues, bringing safety-based limits into disrepute.
Driving a car is not an exact science — we cannot say that driving at or below 30mph will always be safe any more than we can state that exceeding that limit will always be unsafe. If this was the case, there would not be a driver left alive on the roads.
Take a classic example, a rural road running past a school, perhaps with a limit of 60mph. When children are leaving the school it could be an act of almost criminal stupidity to drive at 60mph past the gates. By the same token, a driver could potentially drive safely — if illegally — past the same school gates on Sunday morning at 70mph or higher.
Driving safely does not consist of adhering to a set of hard and fast rules — it is too complex a process for that. The processes that underlie driving are so complex, and change so quickly in the course of a drive, that it is practically impossible to model them and crystallise them as "rules". The driver must constantly evaluate and re-evaluate everything around him or her in order to be safe, and setting the correct speed for the conditions is just one element in this wider process.
There are a complex series of internal processes that go to make up the action of driving (Figs 1 and 1a). These include basic psychomotor skills, perceptual motor skills and environmental perceptual skills. They also include the feedback a driver gets from the car — cognitive skills and a whole range of information-processing skills that enable the driver to interpret and react to the world outside the cockpit.
As drivers go on through their driving careers they develop and hone these skills. When they are learning, getting into second gear at 20mph may seem terrifyingly fast and be potentially dangerous, as the driver perceives that a great deal is happening at the same time. But drivers learn over time the perceptual, observational and motor skills they need to pilot a car safely and well. No driver deliberately sets out to drive unsafely and have accidents.
The process of driving can be split into two broad phases — the INFORMATION phase and the CONTROL phase. In the first phase, information is gathered, given to other road users in the form of signals and interpreted. The control phase takes into account the information given, constantly modified, to control the movement of the car. Each of these phases cycles back and forwards, with new external elements constantly coming into play and being evaluated and re-evaluated.
No one factor from this model can be held up as the critical factor — each interacts with the others to ensure safe, progressive and effective driving. Speed is just one factor in the overall framework of safe driving. To concentrate solely on speed and ignore or downplay the significance of the other elements of the model is to seriously skew the structure that makes for safe driving.
For many years road safety has focused on the "Three Es" of education, enforcement and engineering. These are all external influences which attempt to modify and control driver behaviour. Since road accident statistics began to be kept in 1965, the use of these three influences has led, along with better brakes, better roads, seatbelts, ABS braking systems and airbags, to a sharp decline in fatalities and accidents on the UK's roads. Since 1992, however, this curve has begun to flatten and, at a local level, is in some cases beginning to reverse. Road traffic volumes have increased by 36% from 1988 to 1998, so it seems unlikely that this on its own has led to the flattening of the curve (Fig 2).
The three Es remain at the heart of road safety, but their focus has narrowed to the point where speed reduction is almost their sole aim.
Taking enforcement first, it can be seen that from 1986 to 1996 the number of prosecutions for dangerous, reckless and drunken driving offences fell by 20%, whereas the number for speeding increased by almost 100%. The number of speeding drivers has certainly not grown by 100%; what is seen here is a clear shift in the emphasis of enforcement. The Gatso camera has also helped to increase the number of speed-related prosecutions: the proportion of speeding prosecutions arising from camera detections rose nearly sixfold, from 6% in 1993 to 34% in 1996 (Fig 3).
From the last resort, used to alter the behaviour of dangerous drivers, enforcement has now become the front line weapon in road safety. This has led to a climate where drivers increasingly regard speed enforcement as unreasonable.
Rather than educating and training drivers to drive better, speed enforcement simply concentrates on making them drive slower. The ABD believes that rather than ameliorating the effects of an accident by reducing road speeds, driver training and education should be used to ensure the accident is prevented from happening.
If hardline speed enforcement is to be advocated, then is a corresponding "zero tolerance" of other traffic offences to be introduced? If there is a "zero tolerance" approach to speeding, then consistency demands that the same policy is applied, unswervingly, to ALL criminal offences. This could be taken to its logical conclusion with the Police waiting outside tyre depots to prosecute drivers driving in to have their illegal tyres changed. To demand a "zero tolerance" policy just for speed is inconsistent at best, and likely to be seen as such by the public.
Engineering measures focus increasingly on speed and speed reduction with the use of traffic calming and lower limits becoming widespread. However, the effectiveness of broader engineering measures is seriously compromised through lack of funding. The Casualty Report and Road Safety Plan for Oxfordshire states "Due to government imposed cuts in budgets we are now finding it increasingly difficult to maintain roads and pavements to an acceptable standard with serious implications for safety". With 25% of the motorway and trunk road network requiring major repair in the next 4 years this lack of funding has serious economic implications for the UK's business competitiveness as well as for road safety.
In some cases engineering measures are psychological in effect: red asphalt surfaces, for example, or the village gates we have just heard about. Others are physically designed to reduce speed: cushions, humps, carriageway restrictions. Many counties are now planning to introduce 30mph speed limits anywhere there are 20 or more buildings which could give rise to traffic movements, even where there is no evidence of accidents or fatalities. In contrast, fewer safe new roads are being built after the current government shelved plans for bypasses and relief road schemes across the country. Engineering, too, has become focused on speed reduction.
In the same way the third E — Education — has increasingly narrowed its focus to concentrate on speed. The national £36 million "Speed Kills" campaign has seen television and radio advertisements, roadside posters and leaflets distributed to schools and workplaces talking of the dangers of speed.
Selecting the most appropriate safe speed for road conditions is certainly a key factor in driving well and safely, but it is far from the only factor. The Association of British Drivers believes that the focus on speed and many speed reduction engineering, enforcement and education measures underestimates the need for a broader emphasis on the other factors that make for safe driving.
Is tackling speed the best way to improve road safety?
We believe that the increasingly hardline emphasis on lower speeds is not the most appropriate or effective way to reduce accident rates in the longer term and over the whole road network. The evidence for concentrating almost solely on speed as the key factor in improving road safety is, we believe, somewhat less concrete than is sometimes supposed.
As recently as 1996, the Parliamentary Advisory Council on Transport Safety (PACTS), of which the ABD is a member, stated that "a detailed study including the part played by speed in accidents has not been carried out for a number of years". It goes on to say that "There is not the directly provable and unarguable link between speed and physical impairment which exists between drinking and driving".
It does not seem to be the case that there are reliable, hard and fast statistics to show that speed causes accidents. Whilst speed may make the consequences of an accident more serious, it will, in itself, not cause the accident to happen.
It has already been shown that the "fatality curve" of accidents from 1965 to 1997 is flattening out from around 1992 onwards despite the increasing emphasis and spend in road safety on speed from this point.
Taking the example of the 450 new 30mph limits introduced across Suffolk in 1996, no clear improvement across the county can be detected after the introduction of the new limits. In fact, fatalities increased by 23 after the new limits were introduced.
Local residents, some of whom originally campaigned for lower speeds have not been enamoured of the new limits either. Shortly after the limits were imposed, villagers in the village of Sicklesmere sent a petition to Suffolk County Council's speed management panel. This petition showed 128 signatures against the 30 limit in Sicklesmere and only 8 in its favour. It is not always the case that the silence of the majority of the community before the imposition of a new limit will be followed by silence after it.
The County coroner in Suffolk, Mr Bill Walrond, is on record as having blamed the new limits for at least two deaths on the A134. At the very least, this example shows that the new limits have had little positive effect on accident rates.
This view was reinforced by the DETR in a recent letter to the ABD in which a member of the Road Safety Team stated "We are aware of Suffolk's policy of introducing what is in effect a blanket 30mph speed limit in their villages. Whilst their commitment to improving road safety is laudable, in the absence of any evidence of its effectiveness we would not suggest this approach to other highway authorities. We believe that compliance is much more likely if each case is considered on its merits with limits set accordingly." Despite this, many counties now plan to introduce arbitrary 30mph limits.
If speed did, indeed, kill, it would seem reasonable that these new limits would have led to a statistically significant decrease in fatalities in Suffolk as well as on the road network as a whole. In fact, the opposite is often the case.
Much of the impetus for reducing speed limits and managing speed externally comes from the TRL report "Speed, speed limits and accidents", which states that "speed is a contributory factor in between 23 and 26% of accidents". In the USA, two researchers from the University of California, Dr Charles Lave and Patrick Elias, have studied the effects of speed across the road systems of a number of US states. They found that when higher freeway speeds were introduced, considerably lower accident and fatality rates were recorded.
Here in the UK, Cambridgeshire County Council and the AA Foundation for Road Safety Research studied over 7,600 accidents, and concluded that excessive speed (including speed that was excessive for the conditions but still within the speed limit) was ONE of the causes in only 5.4% of accidents.
There is a considerable body of research suggesting that the causal link between speed and accidents is unreliable at best. This research includes papers by Corbett and Simon (1992), Furnham and Snaipe (1993), Matthews et al (1991), Buckinghamshire County Council (1992), the DETR (1992), Utzelman (1976) and the Insurance Institute for Highway Safety (1991).
Much of the difficulty in ascertaining accurate accident statistics for correlation with speed is that there has been no common method of data collection since 1959. Even so, half the police forces in England and Wales were still collecting causation data at the scene of accidents, although their systems for doing so and their evaluation criteria had diverged significantly. It was thus practically impossible to compare like areas with like. An example of this is two forces, one of which attributed 19% of accidents to excess speed while the other attributed only 5% (TRL 323, 1999). If speed were a universal cause of accidents, this seems an unlikely set of figures at which to arrive.
The recent TRL report "A new system for recording contributory factors in road accidents" (TRL 323) sets out the causes of road accidents using 15 "precipitating" factors (the "what") and 54 "causation" factors (the "why"). It goes further and attributes a confidence measure (Definite, Probable or Possible) to each of the causation factors, in order to reflect the necessarily subjective nature of accident causation data. The report cites excessive speed (note: excessive speed) as one of the top 5 factors, but attributes only 7.3% of overall factors and 6% of definite factors to it. This is a long way indeed from the "Speed, Speed Limits and Accidents" figures, yet it is borne out by the Cambridge report and by Lave and Elias' research from the USA.
The evidence from Suffolk, Cambridge, the USA and from the 8 Police authorities surveyed in TRL 323 all points away from the prominence of speed as a major cause of, or contributory factor towards, accidents. However, let us assume for a moment that the 1994 TRL report "Speed, Speed Limits and Accidents" is correct and that speed is a contributory factor in between 23 and 26% of all accidents. It may even be the largest overall isolatable cause, but focusing overall road safety strategy on speed fails to tackle the causes of the remaining 74-77% of accidents.
These examples, we would argue, clearly show that there is a high degree of ambiguity in the research corpus and that it is unreasonable to state that speed kills and that lower, blanket speed limits are the answer to improving road safety. The wrong speed at the wrong time is potentially lethal, but the disbenefits of hardline enforcement of lower limits, we believe, exceed the benefits.
Focusing the 3 Es on new, lower limits sends the message to drivers that all they have to do to be safe is adhere to the speed limit.
A particularly worrying factor is the broader application of lower limits in many counties across the UK — not just in villages but also on trunk roads, dual carriageways and A roads. It could be argued that where there is no or little history of accidents and the 85th percentile speed is considerably above that set as a limit there will be widespread disregard for the limits. This in turn may have an extremely serious effect — widespread disregard for speed limits as a whole, including those that are set within government guidelines. Artificially low limits — especially when coupled with traffic calming — will also have the effect of increasing drivers' levels of frustration, perhaps giving rise to situations where they will take risks they would have otherwise avoided. High levels of aggression are also likely.
Setting lower limits may mitigate the effects of accidents, but may not prevent them and may even increase their number. As drivers are forced to drive more and more slowly they begin to lose concentration and their attention wanders.
Worse than this, over the longer term, drivers' car control skills, the psychomotor skills outlined earlier, begin to deteriorate and they are driving closer and closer to the limits of their ability to handle a car safely. New drivers have less scope to develop the skills they will need to drive safely as limits fall.
As road safety enforcement becomes focused on speed and automated with the advent of new and more sophisticated Gatsos, drivers have less opportunity to be stopped by the Police and learn WHY their behaviour was dangerous. The only education they receive is in the form of a brown envelope, a £40 fine and 3 points on their licence. Explaining at the roadside to a driver who has driven at a dangerous speed why his speed was too high for the conditions is hardly an ideal classroom situation, but it is infinitely better than the anonymous and merely penal Gatso fine. Acquiring points merely becomes an occupational hazard — there is no change of behaviour for the future — drivers will continue to drive at inappropriate speeds.
We would also advocate far greater emphasis on the first E of the three, Education. Some moves have already been made towards this, including Oxfordshire's "Think Ahead" hazard awareness training package — this moves driver education on from the simplistic "speed kills". However, this needs to be taken much further — and, of course, funded. Nor is it the case that the effects of driver training take far longer to appear than those of physical restraint measures: fleet trainers such as Drive and Survive have shown that improvements are profound and rapid — and last far beyond the confines of the traffic-calmed street or the Gatso camera.
RoSPA trained the fleet of NEWS Transport in 1990. The fleet manager, David Footit, set a target of reducing accidents by 50%. In fact, the fleet saw accidents reduce by 70%, a 10% reduction in fuel use and a return on investment of 3:1 within a year of commencing training.
At the other end of the spectrum, from the very first, Learner drivers need to be taught more than the basic motor skills necessary for manoeuvring a car. Whilst experience is vital in developing skills, training in observation, anticipation, interpretation of road conditions and the actions of other road users and how best to react, all have an important part to play. Setting the correct speed for the conditions is a result of developing each of these skill areas — not an end in itself.
Defensive driving tuition teaches that it is not only culpable accidents that can be avoided, but most of those apparently the fault of the other party as well. This is true of most road user groups — drivers and pedestrians in particular, cyclists, horseriders and passengers progressively less so. There is thus a situation where most potential accidents that can befall a road user can be avoided by that individual.
People must be told that accidents can happen to them, and that should they be involved, it is overwhelmingly likely that they could have avoided it even if the other party was at fault in some way. Rather than blaming the other party, all road users should be encouraged to take responsibility for their own road safety and that of dependent children, as well as for allowing for the mistakes of others.
Only then is a positive, thinking attitude to road safety encouraged. This will put road users in a frame of mind where they can accept sound road safety advice and, by thinking about their behaviour in the right way, be able to learn from both their own mistakes and those of others.
Driving simulators can be extremely useful in developing basic skills before students drive on the road, and could even form a "compulsory basic training" in the same way that motorcyclists now undertake similar basic training. Such training should also include training in observation and interpretation as well as hazard anticipation and management. The positive effects of motorcycle training can clearly be seen in Fig 4, below.
The Government should examine the introduction of incentives for drivers to train — reduced insurance premia, reduced car tax, perhaps Government or vehicle manufacturers paying a proportion of the cost of training.
But does training and education work? The ABD believes it is the most effective form of promoting safety on our roads, because it internalises the concept of safe speeds rather than merely enforcing them from outside. Taking the example of motorcycle training mentioned earlier: looking at the fatality rates from the introduction of intensive bike training in the 1980s to today, the reduction is clear, and the curve declines much more sharply than that for other road users who have received no training.
To take another example, in 1969, the Metropolitan Police's accident record was 1 accident every 80 thousand miles. By 1995, as training had been progressively cut, this rate had quadrupled to 4 accidents in every 80 thousand miles. Last year, Police drivers killed 22 people — the highest total for 5 years. Yet Police driver training — particularly the Class 1 courses — is being shortened and little investment is being made.
Finally, company car drivers drive some of the highest mileages of any user group, often under considerable pressure. However, company car fleet managers are leading the field in driver training. There are many studies of the effectiveness of training on fleets, but this paper will briefly consider just one.
In one particular fleet, quoted by the training firm "Drive and Survive", untrained drivers were responsible for 73% of the firm's accidents, and their accidents cost more than twice as much as those of the trained drivers. Perhaps most significantly of all, the accident rate of the trained drivers was only 15%, whereas that of the untrained drivers was 105%.
IAM and RoSPA fleet training records also hold many other statistics clearly demonstrating benefits of training far greater than even the most effective reduction rates achieved by cameras, calming or other external speed restrictions.
Road improvements are key to improving safety as they remove the potential for certain accidents to happen. What is needed is a restoration of the bypass program to remove through traffic from towns and villages and more effort to upgrade single carriageway trunk roads to dual and to provide graded junctions in place of central reservation gaps.
In town, engineering measures should focus on improving flows and removing conflicts between classes of road users.
Kill Your Speed has undermined road improvements by facilitating the introduction of inappropriate speed reduction based "traffic calming" measures. These have taken resource away from more valid road improvements and have often actually created hazardous situations, especially for cyclists and motorcyclists, to the extent that some of them have been removed soon after installation.
Enforcement should be targeted at those individuals whose behaviour is such that other, responsible road users cannot avoid accidents caused by their recklessness. Most accidents can be avoided by both parties, but those who lose control and mount the pavement, overtake on blind bends, fail to make basic observations of the road conditions or ignore give way signs or red lights (to give a few examples) are a menace on the roads and should be targeted by trained police officers, as should reckless cyclists and drunken pedestrians.
Sometimes, especially with less serious cases, it is appropriate that a speeding summons or fixed penalty should be issued in such circumstances, as an absolute offence is easier to prove and such behaviour is often accompanied by breaking the speed limit. However, this fact should not be used to justify the reverse logic — that breaking limits goes with reckless behaviour; this is not the case.
Enforcement should not be used on a blanket basis against drivers who are travelling safely according to the conditions but in excess of a limit which may be set for conditions pertinent at another time of day — for example outside a school in the middle of the night, or for conditions existing further down the road. A speed limit that it is never safe for a highly trained driver to exceed would be set far too high for most drivers most of the time. In the same way a limit set at the lowest common denominator would cause frustration, congestion and vastly increased journey times as well as being detrimental to road safety.
Kill Your Speed has brought about large scale enforcement of speed limits which are more often than not set inappropriately low. The ability to carry out such enforcement has enabled the setting of limits which would not be tolerated by the public otherwise. This has resulted in the wholesale persecution and terrorisation of completely safe drivers and is totally unacceptable.
Legally binding national guidelines should be introduced to ensure that limits are set in accordance with existing and sensible DETR recommendations, amended only to account for the better handling and stopping distances of modern vehicles.
Kill Your Speed has created a climate whereby limits can be set according to political rather than safety criteria. Reducing a limit is often seen as good for safety and opposing such reductions is bad. This has created a situation where senseless limits are introduced after requests from a very small number of people, without any proper national structure. In addition, many Local Councils are looking for ways of making car use unpleasant to encourage "modal shift". They see low speed limits as an ideal way to achieve this.
This use of limits for environmental reasons has serious implications for the perception of speed limits and for road safety. Drivers must see speed limits set reasonably for reasons of safety, not for environmental or other reasons.
What does the ABD believe are the alternatives to what it has suggested is the over focus on lower speeds in each of the three Es? There is, of course, no easy answer — the whole issue of speed and limits is exceptionally complex.
The ABD does not for a moment disagree that speeds need to be managed — what we are suggesting is that the driver should be responsible for imposing the most appropriate speed for the conditions — only then can he or she apply the general rules for setting safe speeds to all conditions and driving situations — not merely where there are humps or cameras to affect her behaviour.
This internalising of safe speeds is absolutely vital to safe driving — the external imposition of slower speeds using either vehicular limiters or blanket limits will do little in the long term to improve safety on the road. Every driver is different, and the 60mph limit that may be far too high for one individual may be well within the skills limits of another — who might be able to travel the same road safely at 80mph.
The question then may be: "how can we fit speed limiters in drivers' heads rather than to their cars or to the roads?"
We would advocate a far greater emphasis on the WHOLE RANGE of skills and factors that go to make up safe driving — each of the factors mentioned in the earlier model of driver behaviour.
The ABD believes that the present emphasis on speed and external controls on the driver is leading, and will continue to lead, to a "dumbing down" of drivers, leading in turn to higher accident rates as driving skills decay — no matter how low the speed limit. Slowing drivers down will NOT make them safer, and is likely to have negative consequences for road safety.
Secondly, we believe that using speed reduction as an attempt to secure modal shift is extremely dangerous. This has the potential to bring safety-based limits into disrepute, leading to them eventually being ignored and flouted wherever possible. Such a change in the driver's perception of speed limits would be disastrous.
Finally, we believe that there needs to be a clear shift away from speed and towards stressing the need for drivers to think, to observe, to anticipate and interpret — and, to achieve this, a move towards training and education. This will internalise safe driving practices and make them a permanent feature of road behaviour, not merely impose them when there is a camera, a speed bump or a lower speed limit.
http://abd.org.uk/spdconr.htm
Nor do historians undertake the study of the past in one single moment in time. Postmodernist critics of historical studies sometimes write as though historical sources are culled once only from an archive and then adopted uncritically. The implied research process is one of plucking choice flowers and then pressing them into a scrap-book to some pre-set design.
When at work, historians should never take their myriad of source materials literally and uncritically. Evidence is constantly sought, interrogated, checked, cross-checked, compared and contrasted, as required for each particular research theme. The net is thrown widely or narrowly, again depending upon the subject. Everything is a potential source, from archival documents to art, architecture, artefacts and through the gamut to witness statements and zoological exhibits. Visual materials can be incorporated either as primary sources in their own right, or as supporting documentation. Information may be mapped and/or tabulated and/or statistically interrogated. Digitised records allow the easy selection of specific cases and/or the not-so-easy processing of mass data.
As a result, researching and writing history is a slow through-Time process – sometimes tediously so. It takes at least four years, from a standing start, to produce a big specialist, ground-breaking study of 100,000 words on a previously un-studied (or under-studied) historical topic. The exercise demands a high-level synthesis of many diverse sources, running to hundreds or even thousands. Hence the methodology is characteristically much more than a ‘reading’ of one or two key texts – although, depending upon the theme, at times a close reading of a few core documents (as in the history of political ideas) is essential too.
The whole process is arduous and exciting, in almost equal measure. It’s constantly subject to debate and criticism from peer groups at seminars and conferences. And, crucially too, historians are invited to specify not only their own methodologies but also their own biases/assumptions/framework thoughts. This latter exercise is known as ‘self-reflexivity’. It’s often completed at the end of a project, although it’s then inserted near the start of the resultant book or essay. And that’s because writing serves to crystallise and refine (or sometimes to reject) the broad preliminary ideas, which are continually tested by the evidence.
One classic example of seriously through-Time writing comes from the historian Edward Gibbon. The first volume of his Decline & Fall of the Roman Empire appeared in February 1776. The sixth and final one followed in 1788. According to his autobiographical account, the gestation of his study dated from 1764. He was then sitting in the Forum at Rome, listening to Catholic monks singing vespers on Capitol Hill. The conjunction of ancient ruins and later religious commitments prompted his core theme, which controversially deplored the role of Christianity in the ending of Rome’s great empire. Hence the ‘present’ moments in which Gibbon researched, cogitated and wrote stretched over more than 20 years. When he penned the last words of the last volume, he recorded a sensation of joy. But then he was melancholic that his massive project was done.6 (Its fame and the consequent controversies live on today; and form part of the history of history).
1 For this basic point, see PJC, ‘People Sometimes Say “We Don’t Learn from the Past” – and Why that Statement is Completely Absurd’, BLOG/91 (July 2018), to which this BLOG/92 is a companion-piece.
2 See e.g. K. Jenkins, Re–Thinking History (1991); idem (ed.), The Postmodern History Reader (1997); C.G. Brown, Postmodernism for Historians (Harlow, 2005); A. Munslow, The Future of History (Basingstoke, 2010).
3 J. Appleby, L. Hunt and M. Jacob, Telling the Truth about History (New York, 1994); R. Evans, In Defence of History (1997); J. Tosh (ed.), Historians on History (Harlow, 2000); A. Brundage, Going to the Sources: A Guide to Historical Research and Writing (Hoboken, NJ., 2017).
4 H. Shudo, The Nanking Massacre: Fact versus Fiction – A Historian’s Quest for the Truth, transl. S. Shuppan (Tokyo, 2005); Vera Schwarcz, Bridge across Broken Time: Chinese and Jewish Cultural Memory (New Haven, 1998).
5 PJC, ‘Writing Through a Big Research Project, not Writing Up’, BLOG/60 (Dec.2015); PJC, ‘How I Write as a Historian’, BLOG/88 (April 2018).
6 R. Porter, Gibbon: Making History (1989); D.P. Womersley, Gibbon and the ‘Watchmen of the Holy City’: The Historian and his Reputation, 1776-1815 (Oxford, 2002).
Another new word, invented by my partner Tony Belton on 26 October 2013, is ‘wrongaplomb’. It refers to someone who is habitually in error but always with total aplomb. It’s a great word, which immediately summons to my mind the person for whom the term was invented. But again, I expect that Tony has also forgotten. (He has). New words arrive and are shed with great ease. This is one which came and went, except for the fact that I noted it down.
No wonder that dictionary compilers find it a struggle to keep abreast. The English language, as a Germanic tongue hybridised by its conjunction with Norman French, already has a huge vocabulary, to which additions are constantly made. One optimistic proposal in the Gentleman’s Magazine in 1788 hoped to keep a check upon the process in Britain, by establishing a person or committee to devise new words for every possible contingency.1 But real-life inventions and borrowings in all living languages were (and remain) far too frequent, spontaneous and diffuse for such a system to work. The Académie française (founded 1635), which is France’s official authority on the French language, knows very well the perennial tensions between established norms and innovations.2 The ‘Immortels’, as the 40 academicians are termed, have a tricky task as they try to decide for eternity. Consequently, a prudent convention ensures that the Académie’s rulings are advisory but not binding.
For my part, I love encountering new words and guessing whether they will survive or fail. In that spirit, I have invented three of my own. The first is ‘plurilogue’. I coined this term at an academic seminar in January 2016 and then put it into a BLOG.3 It refers to multi-lateral communications across space (not so difficult in these days of easy international messaging) and through time. In particular, it evokes the way that later generations of historians constantly debate with their precursors. ‘Dialogue’ doesn’t work to explain such communications. Dead historians can’t answer back. But ‘plurilogue’ covers the multiplicity of exchanges, between living historians, and with the legacy of ideas from earlier generations.
Will the term last? I think so. Having invented it, I then decided to google (a recently-arrived verb). To my surprise, I discovered that there already is an on-line international journal of that name. It has been running since 2011. It features reviews in philosophy and political science. My initial response was to find the prior use annoying. On the other hand, that’s a selfish view. No one owns a language. Better to think that ‘plurilogue’ is a word whose time has come. Its multiple coinages are a sign of its relevance. Humans do communicate across time and space; and not just in dialogue. So ‘plurilogue’ has a tolerable chance of lasting, especially as it’s institutionalised in a journal title.
A second term that I coined and published in 2007 is ‘diachromesh’.4 It defines the way that humans (and everything in the cosmos for good measure) are integrally situated in an unfolding through-Time, also known as the very long term or ‘diachronic’. That latter word is itself relatively unusual. But it has some currency among historians and archaeologists.
Fig.3 Guildhall Clock on Guildford High Street, marking each synchronic moment since 1683 in an urban high street, diachromeshed within its own space and time.
Lastly, I also offered the word ‘trialectics’ in 2007. Instead of cosmic history as composed of binary forces, I envisage a dynamic threefold process of continuity (persistence), gradual change (momentum) and macro-change (turbulence).8 For me, these interlocking dimensions are as integral to Time as are the standard three dimensions of Space.
Be that as it may, I was then staggered to find that the term had a pre-history, of which I was hitherto oblivious. Try web searches for trialectics in logic; ecology; and spatial theories, such as Edward Soja’s planning concept of Thirdspace.9 Again, however, it would seem that this is a word whose time has come. The fact that ‘trialectics’ is subject to a range of nuanced meanings is not a particular problem, since that happens to so many words. The core of the idea is to discard the binary of dialectics. Enough of either/or. Of point/counter-point; or thesis/antithesis. Instead, there are triple dimensions in play.
Coining new words is part of the trialectical processes that keep languages going through time. They rely upon deep continuities, whilst experiencing gradual changes – and, at the same time, facing/absorbing/rejecting the shock of the new. Luckily there is already a name for the grand outcome of this temporal mix of continuity/micro-change/macro-change. It’s called History.
1 S.I. Tucker, Protean Shape: A Study in Eighteenth-Century Vocabulary and Usage (1967), p. 104.
3 P.J. Corfield, ‘Does the Study of History “Progress” – and How does Plurilogue Help? BLOG/61 (Jan. 2016), www.penelopejcorfield.com/monthly-blogs/.
4 P.J. Corfield, Time and the Shape of History (2007), p. xv.
6 This assumption differs from that of a small minority of physicists and philosophers who view Time as broken, each moment sundered from the next. See e.g. J. Barbour, The End of Time: The Next Revolution in our Understanding of the Universe (1999). I might call this interpretation a case of ‘wrongaplomb’.
7 S. Griffiths, ‘The High Street as a Morphological Event’, in L. Vaughan (ed.), Suburban Urbanities: Suburbs and the Life of the High Street (2015), p. 45.
8 Corfield, Time and Shape of History, pp. 122-3, 211-16, 231, 248, 249. See also idem, ‘Time and the Historians in the Age of Relativity’, in A.C.T. Geppert and T. Kössler (eds), Obsession der Gegenwart: Zeit im 20. Jahrhundert/ Concepts of Time in the Twentieth Century (Geschichte und Gesellschaft: Sonderheft, 25, Göttingen, 2015), pp. 71-91; also available on www.penelopejcorfield.co.uk.
Well, why not? Why can’t we think about Space without Time? It’s been tried before. A persistent, though small, minority of philosophers and physicists deny the ‘reality’ of Time.1 True, they have not yet made much headway in winning the arguments. But it’s an intriguing challenge.
Space is so manifestly here and now. Look around at people, buildings, trees, clouds, the sun, the sky, the stars … And, after all what is Time? There is no agreed definition from physicists. No simple (or even complex) formula to announce that T = whatever? Why can’t we just banish it? Think of the advantages. No Time … so no hurry to finish an essay to a temporal deadline which does not ‘really’ exist. No Time … so no need to worry about getting older as the years unfold in a temporal sequence which isn’t ‘really’ happening. In the 1980s and 1990s – a time of intellectual doubt in some Western left-leaning philosophical circles – a determined onslaught upon the concept of Time was attempted by Jacques Derrida (1930-2004). He became the high-priest of temporal rejectionism. His cause could be registered somewhere under the postmodernist banner, since postmodernist thought was very hostile to the idea of history as a subject of study. It viewed it as endlessly malleable and subjective. That attitude was close to Derrida’s attitude to temporality, although not all postmodernist thinkers endorsed Derrida’s theories.2 His brand of ultra-subjective linguistic analysis, termed ‘Deconstruction’, sounded, as dramatist Yasmina Reza jokes in Art, as though it was a tough technique straight out of an engineering manual.3 In fact, it allowed for an endless play of subjective meanings.
For Derrida, Time was/is a purely ‘metaphysical’ concept – and he clearly did not intend that description as a compliment. Instead, he evoked an atemporal spatiality, named khōra (borrowing a term from Plato). This timeless state, which pervades the cosmos, is supposed to act both as a receptor and as a germinator of meanings. It is an eternal Present, into which all apparent temporality is absorbed.4 Any interim thoughts or feelings about Time on the part of humans would relate purely to a subjective illusion. Its meanings would, of course, have validity for them, but not necessarily for others.
Canadian Centre for Architecture, Montreal.
In 1987, the cerebral American architect Peter Eisenman (1932- ), whose stark works are often described as ‘deconstructive’, launched into dialogue with Derrida. They discussed giving architectural specificity to Derrida’s khōra in a public garden in Paris.8 One cannot but admire Eisenman’s daring, given the nebulousness of the key concept. Anyway, the plan (see Fig. 2) was not realised. Perhaps there was, after all, something too metaphysical in Derrida’s own vision. Moreover, the installation, if erected, would have soon shown signs of ageing: losing its gilt, weathering, acquiring moss as well as perhaps graffiti – in other words, exhibiting the handiwork of the allegedly banished Time.
So the saga took seriously the idea of banishing Time but couldn’t do it. The very words, which Derrida enjoyed deconstructing into fragmentary components, can surely convey multiple potential messages. Yet they do so in consecutive sequences, whether spoken or written, which unfold their meanings concurrently through Time.
In fact, ever since Einstein’s conceptual break-through with his theories of Relativity, we should be thinking about Time and Space as integrally linked in one continuum. Hermann Minkowski, Einstein’s intellectual ally and former tutor, made that clear: ‘Henceforth Space by itself, and Time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality’.9 In practice, it’s taken the world one hundred years post-Einstein to internalise the view that propositions about Time refer to Space and vice versa. Thus had Derrida managed to abolish temporality, he would have abolished spatiality along with it. It also means that scientists should not be seeking a formula for Time alone but rather for Space-Time: S-T = whatever?
Lastly, if we do want a physical monument to either Space or Time, there’s no need for a special trip to Paris. We need only look around us. The unfolding Space-Time, in which we all live, looks exactly like the entire cosmos, or, in a detailed segment of the whole, like our local home: Planet Earth.
1 For anti-Time, see J. Barbour, The End of Time: The Next Revolution in Our Understanding of the Universe (1999), esp. pp. 324-5. And the reverse in R. Healey, ‘Can Physics Coherently Deny the Reality of Time?’ in C. Callender (ed.), Time, Reality and Experience (Cambridge, 2002), pp. 293-316.
2 B. Stocker, Derrida on Deconstruction (2006); A. Weiner and S.M. Wortham (eds), Encountering Derrida: Legacies and Futures of Deconstruction (2007).
3 Line of dialogue from play by Y. Reza, Art (1994).
4 D. Wood, The Deconstruction of Time (Evanstown, Ill., 2001), pp. 260-1, 269, 270-3; J. Hodge, Derrida on Time (2007); pp. ix-x, 196-203, 205-6, 213-14.
6 Letter from Derrida to Peter Eisenman, 30 May 1986, as cited in N. Leach (ed.), Rethinking Architecture: A Reader in Cultural Theory (1997), pp. 342-3. See also for formal diagram based on Derrida’s sketch, G. Bennington and J. Derrida, Jacques Derrida (1993), p. 406.
7 A.E. Taylor, A Commentary on Plato’s Timaeus (Oxford, 1928).
8 J. Derrida and P. Eisenman, Chora L Works, ed. J. Kipnis and T. Leeser (New York, 1997).
9 Cited in P.J. Corfield, Time and the Shape of History (2007), p. 9.
Penelope J Corfield, MONTHLY BLOG 74, WHY CAN’T WE THINK ABOUT SPACE WITHOUT TIME? (1 February 2017).
It’s a significant political ambition, albeit complicated somewhat by the fact that a majority of Labour voters in 2015 (63%) actually voted for Remain. May was clearly trying to shift the post-Referendum Conservative Party closer to the centre ground. And it’s a long time since any front-line British political leader spoke so plainly about social class, let alone about the workers.
But Theresa May’s pledge strangely omits to mention the rebellious Tory Leavers. After all, the majority of the national vote against the EU in 2016 came from the 58% of voters who had voted Conservative in the General Election of 2015. They voted for Leave in opposition to their then party leader and his official party policy. In the aftermath of the Referendum, many known Labour supporters, such as myself, were roundly scolded by pro-EU friends for the Labour Party’s alleged ‘failure’ to deliver the vote for Remain. But surely such wrath should have been directed even more urgently to Conservative supporters?
Either way, the Referendum vote made clear once again a basic truth that all door-step canvassers quickly discover. Electors are not so easily led. They don’t do just what their leaders or party activists tell them. Politics would be much easier (from the point of view of Westminster politicians) if they did. That brute reality was discovered all over again by David Cameron in June 2016. In simple party-political terms, the greatest ‘failure’ to deliver was indubitably that of the Conservatives. Cameron could possibly have stayed as PM had his own side remained united, even if defeated. But he quit politics, because he lost to the votes of very many Conservative rank-and-file, in alliance with UKIP and a section of Labour voters. It was ultimately the scale of grass-roots Tory hostility which killed both his career and his reputation as a lucky ‘winner’ on whom fortune smiles.
Divisions within political parties are far from new. Schematically considered, Labour in the twentieth century drew ideas, activists and votes from reform-minded voters from the professional middle class and skilled working class.3 That alliance is now seriously frayed, as is well known.
So what about the Conservatives? Their inner tensions are also hard to escape. They are already the stuff of debates in A-level Politics courses. Tory divisions are typically seen as a gulf between neo-liberal ‘modernisers’ (Cameron and Co) and ‘traditionalist’ Tory paternalists (anti-EU backbenchers). For a while, especially in the 1980s, there were also a number of self-made men (and a few women) from working-class backgrounds, who agreed politically with the ‘modernisers’, even if socially they were not fully accepted by them. It remains unclear, however, why such divisions emerged in the first place and then proved too ingrained for party discipline to eradicate.
Viewed broadly and schematically, the Conservatives in the twentieth century can be seen as a party drawing ideas, leadership and activists from an alliance of aristocrats/plutocrats with middle-class supporters, especially among the commercial middle class – all being buttressed by the long-time endorsement of a considerable, though variable, working-class vote.4 Common enemies, to weld these strands together, appear in the form of ‘socialism’, high taxes, and excessive state regulation.
Today, the upper-class component of Toryism typically features a number of socially grand individuals from landed and titled backgrounds. David Cameron, who is a 5th cousin of the Queen, seems a classic example.5 However, he also has a cosmopolitan banking and commercial ancestry, making him a plutocrat as much as an aristocrat.6 In that, he is characteristic of the big international financial and business interests, which are generally well served by Conservative governments. However, appeals and warnings from the political and economic establishment cut no ice with many ‘ordinary’ Tory members.
Why so? There’s a widening gap between the very wealthy and the rest. The Conservative Leave vote was predominantly based in rural and provincial England and Wales. (Scotland and Northern Ireland have different agendas, reflecting their different histories). The farming communities were vocally hostile to regulation from Brussels. And, above all, the middle-aged and older middle class voters in England’s many small and medium-sized towns were adamantly opposed to the EU and, implicitly, to recent trends in the nation’s own economic affairs.
Tory Leavers tend to be elderly conservatives with a small as well as large C. They have a strong sense of English patriotism, fostered by war-time memories and postwar 1950s culture. They may not be in dire financial straits. But they did not prosper notably in the pre-crisis banking boom. And now the commercial middle classes, typified by shopkeepers and small businessmen, do not like hollowed-out town centres, where shops are closed or closing. They don’t like small businesses collapsing through competition from discount supermarkets or on-line sales. They regret the winnowing of local post-offices, pubs, and (in the case of village residents) rural bus services. They don’t like the loss of small-town status in the shadow of expanding metropolitan centres. They don’t like bankers and they hate large corporate pay bonuses, which continue in times of poor performance as well as in booms. With everyone, they deplore the super-rich tax-avoiders, whether institutional or individual.
A proportion of Tory Leavers may be outright ethnicist (racist). Some may hate or reject those who look and sound different. But many Leavers are personally tolerant – and indeed a proportion of Tory Leavers are themselves descendants of immigrant families. They depict the problem as one of numbers and of social disruption rather than of ethnic origin per se.
Theresa May represents these Tory-Leavers far more easily than David Cameron ever did. She is the meritocratic daughter of a middle-ranking Anglican clergyman, who came from an upwardly mobile family of carpenters and builders. Some of her female ancestors worked as servants (not very surprisingly, since domestic service was a major source of employment for unmarried young women in the prewar economy).8 As a result, her family background means that she can say that she ‘feels the pain’ of her party activists with tolerable plausibility.
Nevertheless, May won’t find it easy to respond simultaneously to all these Leave grievances. To help the working-class in the North-East and South Wales, she will need lots more state expenditure, especially when EU subsidies are ended. Yet middle-class voters are not going to like that. They are stalwart citizens who do pay their taxes, if without great enthusiasm. They rightly resent the super-rich individuals and international businesses whose tax avoidance schemes (whether legal, borderline legal, or illegal) result in an increased tax burden for the rest. But it will take considerable time and massive concerted action from governments around the world to get to serious grips with that problem. In the meantime, there remain too many contradictory grievances in need of relief at home.
Overall, the Tory-Leavers’ general disillusionment with the British economic and political establishment indicates how far the global march of inequality is not only widening the chronic gulf between super-rich and poor but is also producing a sense of alienation between the super-rich and the middle strata of society. That’s historically new – and challenging both for the Conservative Party in particular and for British society in general. Among those feeling excluded, the mood is one of resentment, matched with defiant pride. ‘Brussels’, with its inflated costs, trans-national rhetoric, and persistent ‘interference’ in British affairs, is the first enemy target for such passions. Little wonder that, across provincial England in June 2016, the battle-cry of ‘Let’s Take Back Control’ proved so appealing.
3 What’s in a name? In US politics, the skilled and unskilled workers who broadly constitute this very large section of society are known as ‘middle class’, via a process of language inflation.
4 See A. Windscheffel, Popular Conservatism in Imperial London, 1868-1906 (Woodbridge, 2007); and M. Pugh, ‘Popular Conservatism in Britain: Continuity and Change, 1880-1987’, Journal of British Studies, 27 (1988), pp. 254-82.
5 Queen Elizabeth II is descended from the Duke of Kent, the younger brother of monarchs George IV and William IV. William IV had no legitimate offspring but his sixth illegitimate child (with the celebrated actress Dorothea Jordan) was an ancestor of Enid Agnes Maud Levita, David Cameron’s paternal grandmother.
7 This sort of issue encouraged a proportion of Conservative activists to join the United Kingdom Independence Party (UKIP), which drew support from both Left and Right.
MONTHLY BLOG 62, IS THE PAST DEAD OR ALIVE? AND THE SNARES OF SUCH BINARY QUESTIONS.
Is the past dead or alive? Posing such a binary question insists upon choice; but the options constitute a false dichotomy. Nonetheless, the death of the past is often proclaimed. This BLOG examines the arguments for and against; and highlights the snares of binary thinking.
Firstly, the past, dead or alive? The ‘death of the past’ is a common, possibly reassuring notion. If you have forgotten the History dates learned at school, then don’t worry, you are in good company. Most people have. In the USA there is a sad debate entitled: ‘Is History history?’ There is at least one book entitled The Death of the Past.1 In fact, that particular study laments that people forget far too much. Nonetheless, emphatic phrases circulate in popular culture. ‘Never look back. The past is dead and buried’. ‘The bad (or good) Old Days have gone’. Something or other is irrevocably past – rendering it ‘as dead as the proverbial dodo’, which was last reliably sighted in Mauritius in 1662.
Illustration: the Dodo, from L.W. Rothschild’s Extinct Birds (1907).
Opposition to old thinking was accordingly expressed by many later Communist leaders. The ‘new’ was good and revolutionary. Antiquity was the dangerous foe. Chairman Mao’s campaign against the ‘Four Olds’ – Old Customs, Old Culture, Old Habits, Old Ideas – was a striking example, at the time of his intended Cultural Revolution in 1966.4 Yet the fact that various traditional aspects of Chinese life still persist today indicates the difficulty of uprooting very deeply embedded social attitudes, even when using the resources of a totalitarian state.
For historians, meanwhile, it’s best to reject over-simplified choices. Many things in the past (both material and intangible) have died or come to an end. Yet far from everything has shared the same fate. Ideas, languages, cultures, religions persist through Time, incorporating changes alongside continuities; biological traits evolve over immensely long periods; the structure of the cosmos unfolds over many billennia (an emergent neologism) within a measurable framework.
Hence there’s nothing like a rigid divide between past and present. They are separated by no more than a nano-second between NOW and the immediate nano-second before NOW, so that legacies/contributions from the past infuse every moment as it is lived.
Secondly, thinking in terms of binary alternatives: Having to choose between bad/old/dead versus good/new/alive is a classic example of binary thought. It is an approach commonly cultivated by activists, for example in revolutionary or apocalyptic religious movements. Are you with the great cause or against it? Such attitudes can be psychologically powerful in binding groups together.
Either way, there is no doubt that binary thought, like binary notation, has its uses. But studying History requires the capacity to grapple with complexity alongside simplicity. Is the past dead or alive? The answer is both and neither. It falls within the embrace of ever-stable ever-fluid Time, which lives and dies simultaneously.
1 J.H. Plumb, The Death of the Past (1969; reissued Harmondsworth, 1973; Basingstoke, 2003).
2 W. Faulkner, Requiem for a Nun (1951), Act 1, sc. 3.
3 K. Marx, The Eighteenth Brumaire of Louis Napoleon (1851/2), in D. McClellan (ed.), Karl Marx: Selected Writings (Oxford, 1977), p. 300.
4 P. Clark, The Chinese Cultural Revolution: A History (Cambridge, 2008); M. Gao, The Battle for China’s Past: Mao and the Cultural Revolution (2008).
5 Y.M. Lotman and B.A. Uspensky, ‘Binary Models in the Dynamics of Russian Culture’, in A.D. and A.S. Nakhimovsky (eds), The Semiotics of Russian Cultural History (Ithaca, NY., 1985), pp. 30-66.
My previous BLOG/ 57 wrote about political leaders who might hope to ride and direct the tides of History.1 But it’s not only leaders. Historical outcomes are the sum of all the actions and inactions of everybody, combined together. We don’t all have the same power to direct. Yet everybody plays some part, even if by way of abstention. Hence we can all try, if we want, to change the roles which might seem allocated to us. It’s not a very simple thing to do, certainly. As is well known, it’s much easier to make good resolutions than to achieve them. Furthermore, good intentions can also, proverbially, achieve the reverse of the effect intended. Yet things can, upon occasion, be very different.
This BLOG is about the motivations of people who make dramatic changes, quitting their daily lives and going to join crusades in distant lands. Obviously the precise combination of reasons varies from individual to individual. There are usually strong ‘push’ factors, impelled by dissatisfaction with daily life at home. At the same time, however, there’s generally one or more strong ‘pull’ factor as well, attracting via the appeal of a distant cause that’s on the side of History.
Political commitment can have that effect. The thousands of left-wingers from across the world (and especially across Europe), who went in the 1930s to fight for the International Brigades in the Spanish Civil War, were a case in point.2 They were attracted by the lure of action, as well as by their support for Spain’s democratic government. Many were communists. Even if not drilled in the niceties of Marxist theory, they were accustomed to thinking of their own great cause as marching inevitably through History towards a triumphant outcome.
Fig. 1. Memorial to Hammersmith & Fulham volunteers, who fought for the International Brigades in Spain, 1936-9 – installed in Fulham Palace Gardens SW6 in 1997.
The fact that the great cause of anti-Fascism needed an urgent helping hand was not an obstacle. For the many communists in the International Brigades their commitment was encouraged by the Marxist analysis of History, which saw the processes of change as a constant struggle. Of course, there is always opposition and conflict. But it is precisely through complex conflicts that fundamental change will emerge.3 Thus the role of struggle, if need be in the form of real fighting, was not an impediment for those of high spirits and with an active temperament.

Religious motivations are even more common in calling people to action. What can be more powerful and exhilarating than fighting, either literally or symbolically, in God’s cause? It is not even necessary to be highly spiritual to heed that message. It is the call to action which is the lure, with the double promise of fighting in the winning cause of righteousness and, while so doing, of gaining divine goodwill. The rewards, whether spiritual or this-worldly (or both), will follow. It’s a high promise which provides sustenance through the possible times of loneliness, boredom, and confusion which often afflict people who are uprooted from their homes. Indeed, the promise of divine reward encourages those seriously dissatisfied with their current life to take drastic action to put things right.
Particularly electrifying in religious motivation is the call to action that comes when ‘the End is Nigh’. Generally, people muddle on from day to day without worrying about long-term trends. But religious teachings, particularly those which view History as a linear journey from the creation of the world to the last judgment, provide such a mental framework. There was a beginning. There will be an ending, often after a phase of apocalyptic upheaval and turmoil, when God’s final judgment will be revealed.
Believers should accordingly be prepared. They might also look, anxiously or eagerly as the case might be, for signs of the imminent unfolding of these great events. All attempts at second-guessing divine intentions have been consistently discouraged by orthodox religious leaders in all the great linear faiths. Nonetheless, predictions of the imminent End of the World recur in every generation, and especially in times of turmoil.4 The message, for believers, is intensely exciting and empowering.5 It overshadows all routine matters. And heeding the call provides a chance for changing lifestyles. It encourages some to leave home to follow a special teacher or leader. It sometimes leads to violence, when believers fight against unbelievers. And it can even result in mass suicides/murders, if embattled cultists decide to take their own lives and those of their young.
Fig. 2 Lady Hester Stanhope, in her own version of oriental garb, who lived in Lebanon, for almost thirty years, awaiting the return of the Messiah and the Last Judgment.
Stanhope’s experience, extraordinary as it was for a woman of her social class, was nonetheless a classic example of what can happen when an apocalyptic vision is not immediately realised. Generally, dashed hopes turn into disillusionment. Ordinary life resumes. Yet not always. Sometimes, people turn to dogged waiting. In Stanhope’s case, she left no cult behind her. Indeed, she may have realised, by the end of her life, that her hopes had been in vain. Nonetheless, she probably found consolation in the sheer pertinacity of her waiting. And that’s what can happen, not just for individuals but across generations. A group of followers can take up the cause, even after the leader has died, not only waiting but also recruiting successors to hand down the message through time.
Three comments to conclude. Firstly, the confidence that one is fighting, whether literally or symbolically, on the side of History and/or God is individually empowering, especially when worldly as well as other-worldly hopes/grievances are intertwined. Such beliefs can get people to do surprising things. For those without any previous certitudes, moreover, doing something drastic can seem the best way of gaining new faith through action.
Secondly, the force of such beliefs may be creative and affirmative, but may also unleash powers of destruction, especially when encountering opposition. Because the stakes are so high, so are the passions.
Lastly, curbing a militant commitment to fighting on the side of God or History is not easy. The main antidote is disillusionment, when the euphoria fades. Yet that can take a long time. Hence the secular authorities, generally cautious in matters of private belief, may intervene in cases of violence or potential violence. Currently, various governments are running ‘deradicalisation’ programmes, seeking to get militant Islamists to renounce armed struggle and/or to prevent others from joining them. That can’t be done simply by flatly opposing the great cause. The empowering and exhilarating nature of commitment needs full acknowledgement. Only then can it potentially be diverted into an alternative re-empowerment, in the cause of everyday, not apocalyptic, action. Different outlets for strong energies – calmer ways of navigating the tides of history.
1 See P.J. Corfield, ‘Riding the Tides of History: Why is Jeremy Corbyn like Napoleon Bonaparte?’ BLOG/57 (Sept. 2015).
2 Details in K. Bradley, International Brigades in Spain, 1936-9 (1994); A. Castells, Las brigadas internacionales de la Guerra di España (Barcelona, 1974); and M.W. Jackson, Fallen Sparrows: The International Brigades in the Spanish Civil War (Philadelphia, 1995).
3 See G.A. Cohen, Karl Marx’s Theory of History: A Defence (Oxford, 1978).
4 See listings in www.en.wikipedia.org/wiki/List_of_dates_predicted_for_apocalyptic_events; and discussions in E. Weber, Apocalypses (Cambridge, Mass., 1999); P.J. Corfield, ‘The End is Nigh’, History Today, 57 (March 2007), pp. 37-9.
5 P.J. Corfield, End of the World Cults (Historical Association Podcast, 2015) – available via www.history.org.uk/podcasts/#/p/504.
6 See K. Ellis, Star of the Morning: The Extraordinary Life of Lady Hester Stanhope (2008).
7 C. Hill, B. Reay and W. Lamont, The World of the Muggletonians (1983).
8 R.W. Schwarz and F. Greenleaf, Light Bearers: A History of the Seventh-Day Adventist Church (Nampa, Idaho, 2000); M. Bull and K. Lockhart, Seeking a Sanctuary: Seventh-Day Adventism and the American Dream (San Francisco, 1989).
We have already defined sound as any pressure variation that can be detected by the human ear. The number of pressure variations per second is called the frequency of sound, and is measured in hertz (Hz). The normal hearing range for a healthy young person extends from approximately 20 Hz to 20,000 Hz (20 kHz).
In terms of sound pressure levels, audible sound ranges from the threshold of hearing at 0 dB to the threshold of pain at 130 dB and over. Although an increase of 6 dB represents a doubling of the sound pressure, an increase of about 8 -- 10 dB is required before the sound subjectively appears to be significantly louder. Similarly, the smallest perceptible change is about 1 dB.
Copyright © 2000 Brüel & Kjær Sound & Vibration Measurement A/S.
This publication is protected by copyright law and international treaties.
The contents may be copied and distributed in whole or in part provided that the source is stated and acknowledged to be Brüel & Kjær Sound & Vibration Measurement A/S.
Brüel & Kjær Sound & Vibration Measurement A/S will not be held responsible for any direct or indirect loss or damage which may occur as a result of the use of this publication.
This booklet deals with environmental noise -- for example, noise from industrial sites, road and rail traffic, airports and fairgrounds. It does not cover related issues such as building acoustics, building vibration or domestic noise. Nor does it cover human response to vibration or industrial uses of sound and vibration measurements. Please contact your Brüel & Kjær representative to receive further information regarding these issues.
While we have made every reasonable effort to present an up-to-date overview of standards, practices and methods, we cannot guarantee that we have covered all relevant aspects. Please consult your local authority to obtain further detailed information pertinent to your country, state, region or area.
News stories related to environmental noise problems abound. Some stories are dramatic, most less so, but huge effort and great sums of money are often invested in conflicts involving environmental noise.
Environmental noise is a worldwide problem. However, the way the problem is dealt with differs immensely from country to country and is very much dependent on culture, economy and politics. But the problem persists even in areas where extensive resources have been used for regulating, assessing and damping noise sources or for creation of noise barriers. For example, huge efforts have been made to reduce traffic noise at source. In fact, today’s cars are much quieter than those manufactured ten years ago, but the traffic volume has increased so much that the effect of this effort has been wiped out and the annoyance level has increased. Manufacturing quieter cars might have eased the problem for a period but it certainly hasn’t removed it.
There are no worldwide estimates of the impact and cost of environmental noise. However, one prominent example covering most of Europe does exist -- the European Union’s Green Paper on Future Noise Policy (1996).
Noise protection programmes differ from country to country. Legal requirements are not identical, techniques and methods differ, and political focus varies. However, there are common aspects to the work of all environmental noise officers.
Planning new developments of residential areas, industrial sites, highways, airports, etc.
These tasks are demanding and, considering the extent and significance of noise pollution, a proper level of understanding of the issues is required, not only from professionals working in the field but also from decision makers and citizens. This booklet is designed for all.
The booklet presents the problems that arise when working with environmental noise and current solutions. Unfortunately, space prevents us from dealing with each subject in depth. We cannot, for example, cover national and regional legislation in detail. However, we have done our utmost to provide a comprehensive overview of the most important issues. Please feel free to contact your local Brüel & Kjær representative to learn more.
Sound may be defined as any pressure variation that the human ear can detect. Just like dominoes, a wave motion is set off when an element sets the nearest particle of air into motion. This motion gradually spreads to adjacent air particles further away from the source. Depending on the medium, sound propagates at different speeds. In air, sound propagates at a speed of approximately 340 m/s. In liquids and solids, the propagation velocity is greater -- 1500 m/s in water and 5000 m/s in steel.
Compared to the static air pressure (about 10⁵ Pa), the audible sound pressure variations are very small, ranging from about 20 µPa (20 × 10⁻⁶ Pa) to 100 Pa.
A sound pressure of 20 µPa corresponds to the average person's lowest audible sound and is therefore called the threshold of hearing. A sound pressure of approximately 100 Pa is so loud that it causes pain and is therefore called the threshold of pain. The ratio between these two extremes is more than a million to one.
A direct application of linear scales (in Pa) to the measurement of sound pressure leads to large and unwieldy numbers. And, as the ear responds logarithmically rather than linearly to stimuli, it is more practical to express acoustic parameters as a logarithmic ratio of the measured value to a reference value. This logarithmic ratio is called a decibel or dB. The advantage of using dB can be clearly seen in the illustration on the next page. Here, the linear scale with its large numbers is converted into a manageable scale from 0 dB at the threshold of hearing (20 µPa) to 130 dB at the threshold of pain (~100 Pa).
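The logarithmic ratio described above can be sketched in a few lines of code. This is a minimal illustration of the standard dB SPL formula (20 log₁₀ of the pressure over the 20 µPa reference), not code from the booklet itself:

```python
import math

# Reference pressure: 20 µPa, the nominal threshold of hearing.
P_REF = 20e-6  # Pa

def spl_db(pressure_pa: float) -> float:
    """Convert an RMS sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6), 1))  # 0.0 -> threshold of hearing
print(round(spl_db(1.0), 1))    # 94.0 -> a common calibrator level
print(round(spl_db(100.0), 1))  # 134.0 -> around the threshold of pain
```

Note how the unwieldy six-orders-of-magnitude pressure range collapses into the manageable 0–134 dB scale described above.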
Our hearing is less sensitive at very low and very high frequencies. In order to account for this, weighting filters can be applied when measuring sound. The most common frequency weighting in current use is "A-weighting" providing results often denoted as dB(A), which conforms approximately to the response of the human ear.
A "C-weighting" curve is also used, particularly when evaluating very loud or very low-frequency sounds.
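The A- and C-weighting curves mentioned above have standard analogue definitions (published in IEC 61672). The sketch below uses those published formulas, which are not given in this booklet; the offsets (+2.00 dB and +0.06 dB) normalise both curves to 0 dB at 1 kHz:

```python
import math

def a_weight(f: float) -> float:
    """A-weighting correction in dB at frequency f (Hz), per the
    standard analogue approximation (IEC 61672)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00

def c_weight(f: float) -> float:
    """C-weighting correction in dB at frequency f (Hz)."""
    f2 = f * f
    rc = (12194.0**2 * f2) / ((f2 + 20.6**2) * (f2 + 12194.0**2))
    return 20 * math.log10(rc) + 0.06

print(round(a_weight(1000), 1))  # 0.0 by definition
print(round(a_weight(100), 1))   # about -19 dB: low frequencies strongly attenuated
print(round(c_weight(100), 1))   # about -0.3 dB: C-weighting is nearly flat here
```

The large gap between the A- and C-weighted values at low frequencies is exactly why, as noted later in this booklet, comparing the two measured levels can reveal a low-frequency noise problem.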
If the sound levels from two or more sound sources have been measured separately, and you want to know the combined sound pressure level of the sources, the sound levels must be added together. However, because decibels are logarithmic values, they cannot simply be added arithmetically.
Measure the Sound Pressure Level (SPL) of each noise source separately (Lp1 , Lp2).
Find the difference (change in L) between these levels (Lp2 - Lp1).
Find this difference on the horizontal axis of the chart. Move up until you intersect the curve, and then look at the value on the vertical axis to the left.
Add the value indicated (L+) on the vertical axis to the level of the noisier noise source (Lp2). This gives the sum of the SPLs of the two noise sources.
If three or more noise sources are present, steps 1 to 4 should be repeated using the sum obtained for the first two sources and the SPL for each additional source.
Note that a change in L = 0 corresponds to the situation shown in the previous illustration where 3 dB was added to the level caused by one source alone. If the difference between the two sound pressure levels is more than 10 dB the contribution from the quietest source can be discarded.
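The chart-based procedure above is a graphical shortcut for energy summation. A direct calculation, which reproduces the same results for any number of sources, looks like this (illustrative sketch, not from the booklet):

```python
import math

def combine_spl(levels_db):
    """Combine several sound pressure levels (dB) into one total level
    by summing on an energy (10^(L/10)) basis."""
    return 10 * math.log10(sum(10 ** (l / 10.0) for l in levels_db))

print(round(combine_spl([60, 60]), 1))  # 63.0 -> two equal sources add 3 dB
print(round(combine_spl([60, 70]), 1))  # 70.4 -> the quieter source adds little
```

The two printed cases match the rules stated above: equal sources add 3 dB, and a source more than 10 dB below the loudest contributes almost nothing.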
If change in L is less than 3 dB, the background noise is too high for an accurate measurement and the correct noise level cannot be found until the background noise has been reduced. If, on the other hand, the difference is more than 10 dB, the background noise can be ignored.
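Background noise can likewise be subtracted on an energy basis. The sketch below (an assumption-free rearrangement of the summation formula, with the booklet's 3 dB validity limit enforced) shows the correction:

```python
import math

def correct_for_background(total_db: float, background_db: float) -> float:
    """Subtract background noise (energy basis) from a total measurement.
    Valid only when the difference is at least about 3 dB."""
    if total_db - background_db < 3:
        raise ValueError("background too close to total; reduce it first")
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (background_db / 10))

print(round(correct_for_background(65, 60), 1))  # 63.3 -> correction matters
print(round(correct_for_background(65, 54), 1))  # 64.6 -> >10 dB gap: under 0.5 dB
```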
At home and at work, we often hear noise from ventilation or heating systems that is hardly noticeable because it has no prominent features. The noise never stops and has no tone, but if the fan suddenly stops or starts to whine, the change may disturb or even annoy us. Our hearing recognises information in the sounds that we hear. Information we don’t need or want is noise. Noise features that make us listen and take notice are tones or changes in sound level. The more prominent the tone, and the more abrupt the change in sound level, the more noticeable the noise.
When measuring noise, we need to know the type of noise so that we can choose the parameters to measure, the equipment to use, and the duration of the measurement. Often we need to use our ears to pinpoint the annoying features of the noise, before making measurements, analysing and documenting them.
Continuous noise is produced by machinery that operates without interruption in the same mode, for example, blowers, pumps and processing equipment. Measuring for just a few minutes with hand-held equipment is sufficient to determine the noise level. If tones or low frequencies are heard, the frequency spectrum can be measured for documentation and further analysis.
When machinery operates in cycles, or when single vehicles or aeroplanes pass by, the noise level increases and decreases rapidly. For each cycle of a machinery noise source, the noise level can be measured just as for continuous noise. However, the cycle duration must be noted. A single passing vehicle or aircraft is called an event. To measure the noise of an event, the Sound Exposure Level is measured, combining level and duration into a single descriptor. The maximum sound pressure level may also be used. A number of similar events can be measured to establish a reliable average.
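The Sound Exposure Level mentioned above normalises an event's acoustic energy to a one-second reference, so events of different durations can be compared. A common way to obtain it, sketched below on the assumption that the average (equivalent) level over the event is known, is:

```python
import math

def sel_from_leq(leq_db: float, duration_s: float) -> float:
    """Sound Exposure Level: the event's total energy expressed
    as if delivered in one second (reference duration 1 s)."""
    return leq_db + 10 * math.log10(duration_s / 1.0)

# A vehicle pass-by averaging 70 dB over 30 seconds:
print(round(sel_from_leq(70, 30), 1))  # 84.8 -> same energy packed into 1 s
```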
The noise from impacts or explosions, e.g., from a pile driver, punch press or gunshot, is called impulsive noise. It is brief and abrupt, and its startling effect causes greater annoyance than would be expected from a simple measurement of sound pressure level. To quantify the impulsiveness of noise, the difference between a quickly responding and a slowly responding parameter can be used (as seen at the base of the graph). The repetition rate (number of impulses per second, minute, hour or day) should also be documented.
Annoying tones are created in two ways: Machinery with rotating parts such as motors, gearboxes, fans and pumps often create tones. Unbalance or repeated impacts cause vibration that, transmitted through surfaces into the air, can be heard as tones. Pulsating flows of liquids or gases can also create tones, caused by combustion processes or flow restrictions. Tones can be identified subjectively by listening, or objectively using frequency analysis. The audibility is then calculated by comparing the tone level to the level of the surrounding spectral components. The duration of the tone should also be documented.
Low frequency noise has significant acoustic energy in the frequency range 8 to 100 Hz. Noise of this kind is typical for large diesel engines in trains, ships, and power plants and, since the noise is hard to muffle and spreads easily in all directions, it can be heard for miles. Low frequency noise is more annoying than would be expected from the A-weighted sound pressure level. The difference between the A-weighted and C-weighted level can indicate whether there is a low frequency problem. To calculate the audibility of low frequency components in the noise, the spectrum is measured and compared to the threshold of hearing. Infrasound has a spectrum with significant components below 20 Hz. We perceive it not as sound but rather as pressure. The assessment of infrasound is still experimental, and is presently not covered by international standards.
How loud is a 10-ton truck? That depends very much on how far away you are, and whether you are in front of a barrier or behind it. Many other factors affect the noise level, and measurement results can vary by tens of decibels for the very same noise source. To explain how this variation comes about, we need to consider how the noise is emitted from the source, how it travels through the air, and how it arrives at the receiver.
To arrive at a representative result for measurement or calculation, these factors must be taken into account. Regulations will often specify conditions for each factor.
If the dimensions of a noise source are small compared with the distance to the listener, it is called a point source, for example, fans and chimney stacks. The sound energy spreads out spherically, so that the sound pressure level is the same for all points at the same distance from the source, and decreases by 6 dB per doubling of distance. This holds true until ground and air attenuation noticeably affect the level.
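The 6 dB-per-doubling rule for a point source can be sketched numerically. The function name and reference values below are illustrative; the sketch assumes pure free-field spherical spreading, with ground and air attenuation ignored.

```python
import math

def spl_at_distance(lp_ref, r_ref, r):
    # Free-field spherical spreading from a point source:
    # the level falls by 20*log10(r/r_ref), i.e. 6 dB per
    # doubling of distance. Ground and air attenuation ignored.
    return lp_ref - 20 * math.log10(r / r_ref)
```

For example, a level of 80 dB measured at 10 m drops to about 74 dB at 20 m and about 68 dB at 40 m.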
If a noise source is narrow in one direction and long in the other compared to the distance to the listener, it is called a line source. It can be a single source such as a long pipe carrying a turbulent fluid, or it can be composed of many point sources operating simultaneously, such as a stream of vehicles on a busy road.
The path difference of the sound wave as it travels over the barrier compared with direct transmission to the receiver (a + b - c, in the diagram).
The frequency content of the noise.
The combined effect of these two is shown in the diagram. It shows that low frequencies are difficult to reduce using barriers.
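The dependence of barrier performance on path difference and frequency can be illustrated with Maekawa's widely used empirical approximation, which works from the Fresnel number N = 2δ/λ, where δ is the path difference (a + b - c). The function below is a rough sketch under that approximation, not a substitute for standardised calculation methods.

```python
import math

def barrier_attenuation(path_difference_m, frequency_hz, c=343.0):
    # Maekawa's empirical estimate of barrier insertion loss in dB.
    # path_difference_m is (a + b - c): the extra distance travelled
    # over the barrier compared with the direct path.
    wavelength = c / frequency_hz
    n = 2 * path_difference_m / wavelength  # Fresnel number
    return 10 * math.log10(3 + 20 * n)
```

With a path difference of 0.5 m, the estimate gives roughly 10 dB at 125 Hz but over 20 dB at 2 kHz, consistent with low frequencies being hard to screen.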
The first two factors mentioned above are the most influential and are shown in the diagram below. To summarise, low frequencies are not well attenuated by atmospheric absorption.
Wind speed increases with altitude, which will bend the path of sound to "focus" it on the downwind side and make a "shadow" on the upwind side of the source.
At short distances, up to 50 m, the wind has minor influence on the measured sound level. For longer distances, the wind effect becomes appreciably greater.
Downwind, the level may increase by a few dB, depending on wind speed. But measuring upwind or sidewind, the level can drop by over 20 dB, depending on wind speed and distance. This is why downwind measurement is preferred -- the deviation is smaller and the result is also conservative.
Temperature gradients create effects similar to those of wind gradients, except that they are uniform in all directions from the source. On a sunny day with no wind, temperature decreases with altitude, giving a "shadow" effect for sound. On a clear night, temperature may increase with altitude, "focusing" sound on the ground surface.
Sound reflected by the ground interferes with the directly propagated sound.
The effect of the ground is different for acoustically hard (e.g., concrete or water), soft (e.g., grass, trees or vegetation) and mixed surfaces. Ground attenuation is often calculated in frequency bands to take into account the frequency content of the noise source and the type of ground between the source and the receiver. Precipitation can affect ground attenuation. Snow, for example, can give considerable attenuation, and can also cause high, positive temperature gradients. Regulations often advise against measuring under such conditions.
When sound waves impact upon a surface, part of their acoustic energy is reflected from it, part is transmitted through it and part is absorbed by it. If absorption and transmission are low, as is generally the case with buildings, most of the sound energy is reflected and the surface is said to be acoustically hard. The sound pressure level near the surface is therefore due to direct radiation from the source and sound arriving from one or more reflections.
Typically, the level 0.5 m from a plain wall is 3 dB(A) higher than if there was no wall. Regulations often require the exclusion of the effect of reflection from reported results (free-field conditions).
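The roughly 3 dB increase near a reflecting wall follows from energy (incoherent) addition of the direct and reflected sound. A minimal sketch, with an illustrative function name:

```python
import math

def combine_levels(*levels_db):
    # Incoherent (energy) sum of sound pressure levels in dB.
    # Two equal levels combine to 3 dB above either one alone.
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))
```

A 60 dB direct path plus a 60 dB reflection gives about 63 dB, which is the origin of the typical 3 dB(A) increase quoted above.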
When at home, some people like to keep their windows closed -- because of climate or tradition. Disturbing noise in the environment is then attenuated by the building, typically offering 20 - 30 dB of protection (façade sound insulation). Windows are often acoustically weak spots, but can be improved with special design.
In other countries and climates, people are used to having their windows open and experiencing the full effect of environmental noise. Regulations for environmental noise, therefore, must take into account both the way dwellings are constructed and the way they are used.
Noise assessment is generally about evaluating the impact of one specific noise source, for example, the noise from a specific production plant. This is not always an easy task. In practically every environment, a large number of different sources contribute to the ambient noise at a particular point.
Ambient noise is the noise from all sources combined -- factory noise, traffic noise, birdsong, running water, etc.
Specific noise is the noise from the source under investigation. The specific noise is a component of the ambient noise and can be identified and associated with the specific source.
Residual noise is ambient noise without specific noise. The residual noise is the noise remaining at a point under certain conditions when the noise from the specific source is suppressed.
This terminology derives from ISO 1996 and is commonly used. The term background noise (not used in ISO 1996) is also common, but should not be confused with residual noise. It is sometimes used to mean the level measured when the specific source is not audible, and sometimes to mean the value of a noise parameter, such as LA90 (the level exceeded for 90% of the measurement time).
In the context of building planning, the term initial noise is used to denote the noise at a certain point before changes, for example, the extension of a production facility or the building of barriers, are implemented.
A variety of methods are used to assess specific noise, many of them described in this booklet. These methods can range from the drastic, such as the shutting down of a production plant to isolate the residual noise, to sophisticated systems that include simultaneous and correlated measurements at several points close to and away from the source. The measured noise is often recorded on a Digital Audio Tape (DAT) recorder or directly onto a PC in order to identify and document the noise source.
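When both the ambient level and the residual level can be measured, the specific noise can be estimated by energy subtraction. A minimal sketch (the function name is illustrative, and the result is only meaningful when the ambient level is clearly above the residual):

```python
import math

def specific_level(ambient_db, residual_db):
    # Estimate the specific noise by subtracting the residual
    # noise energy from the ambient noise energy.
    if ambient_db <= residual_db:
        raise ValueError("ambient level must exceed residual level")
    return 10 * math.log10(10 ** (ambient_db / 10) - 10 ** (residual_db / 10))
```

For example, an ambient level of 65 dB with a residual of 60 dB implies a specific level of about 63.3 dB.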
Objective measurements of sound levels are an indispensable part of any environmental noise protection program. Environmental noise levels vary greatly -- noise is often impulsive or contains pure tones. In addition, disturbances from extraneous sources of noise -- be it barking dogs, flyovers, or children playing -- must be handled one way or another.
Standards and regulations specify which parameters must be measured, and in most cases they also prescribe how to set up measurement equipment and handle various factors such as meteorological conditions. On top of this, certain "good practices" exist. The result of a noise assessment is never simply a figure such as 77 dB. It is the value of specific parameters or indicators obtained under known and documented conditions.
Assessing a fluctuating noise level means getting a value for a level that is, in simple terms, the average level. Eyeball-averaging using a moving-coil instrument is a method of the past. The LA50, i.e., the level exceeded for 50% of the measurement time, is now only rarely used as an average value.
The "equivalent continuous sound level", the Leq, is known across the globe as the essential averaged parameter. The Leq is the level that, had it been steady during the measurement period, would represent the same amount of energy as the measured, fluctuating sound pressure level. The Leq is measured directly with an integrating sound level meter. Leq is a measure of the averaged energy in a varying sound level, not a direct measure of annoyance. Extensive research, however, has shown the Leq to correlate well with annoyance. It is obvious, though, that a noise level acceptable on a Wednesday afternoon may be distressing early on a Sunday morning. Corrections for time of day may, therefore, be applied.
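The energy-averaging behind the Leq can be sketched for a series of equal-duration level samples. The function name is an assumption for illustration:

```python
import math

def leq(levels_db):
    # Equivalent continuous sound level of equal-duration level
    # samples: an energy average, not an arithmetic average.
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)
```

Note how the energy average is dominated by the loudest samples: half an hour at 60 dB and half an hour at 70 dB give an Leq of about 67.4 dB, not 65 dB.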
An analysis of the statistical distributions of sound levels is a useful tool when assessing noise. The analysis not only provides useful information about the variability of noise levels, but is also prominent in many standards as the basis for assessing background noise. For example, L90, the level exceeded for 90% of the measurement time, is used as an indicator of background noise levels while L10 or L5 are sometimes used to indicate the level of noise events.
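Percentile levels such as L90 and L10 can be estimated from a series of equal-duration level samples by sorting. A simplified sketch (real instruments use finer statistical classing):

```python
def percentile_level(levels_db, n):
    # L_N: the sound level exceeded for n% of the measurement time,
    # estimated from equal-duration level samples.
    s = sorted(levels_db, reverse=True)  # loudest first
    idx = min(int(round(n / 100 * len(s))), len(s) - 1)
    return s[idx]
```

With 100 samples taking each value from 1 to 100 dB once, L90 comes out as 10 dB (exceeded by the 90 louder samples) and L10 as 90 dB.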
Seven-Day or Two-Hour Measurements?
Measuring noise for the complete reference time interval is ideal. This could range from two-hour to week-long measurements. Longer periods, such as month- or year-long measurements, are sometimes used for good reason. In such cases, a log of values obtained every second, minute or quarter of an hour is used to obtain a time history of noise levels. However, long-term measurements can be expensive and difficult to manage. Assessments are therefore often based on measurements of representative samples, with the results pieced together into a complete overview. Working out a full evaluation from representative samples is a daunting task, but state-of-the-art software can automate the process, providing accurate and reliable results efficiently and cost-effectively. However, if regulations impose absolute limits on maximum levels, continuous monitoring of sound levels is necessary.
The Leq or, better, the LAeq (the A-weighted equivalent continuous sound level) is the most important parameter. Broadband measurements, i.e., measurements covering the whole of the audible frequency range, are made using the "A" frequency weighting when assessing environmental noise. It is good practice to always state the applied frequency weighting. Noise with distinct tones, for example, noise from fans, compressors, or saws, is generally far more annoying than other types of noise. This annoyance factor is not taken into account in a broadband measurement. A spectral analysis may be needed to assess annoyance. Pure tones can be assessed subjectively, as the human ear is good at detecting tones. Regulations often require an objective measurement of tonal content as well. In practice, this is either done by 1/3-octave analysis or narrow-band analysis (FFT -- Fast Fourier Transform).
Legislation often specifies where measurements should be made, for example at property boundaries or at a complainant’s property. Other factors also need to be taken into account when measuring because sound levels vary at different heights above ground level. They will also vary depending on the distance between the measurement point and facades and obstacles. These requirements must be noted and applied.
However, measurements can be made at the façade or at other specified heights (the European Union is considering making 4 m the standard).
It is common practice to calibrate sound level meters using an acoustical calibrator before and after each series of measurements.
For professionals, the sound level meter and the calibrator go together. But to ensure continuing accuracy, and for validity in court cases, more detailed calibrations and checks are required.
The annoyance due to a given noise source is perceived very differently from person to person, and also depends on many non-acoustic factors, such as the prominence of the source, its economic importance to the listener and his or her personal opinion of the source. For many years, acousticians have attempted to quantify this, to enable objective assessment of noise nuisance and the setting of acceptable noise limits. When large numbers of people are involved, reactions tend to be distributed around a mean, and the Rating Level (Lr) parameter has been developed in an attempt to put a numerical value on a noise, quantifying its annoyance in relation to the general population.
The Rating Level is defined in the ISO 1996-2 standard (see section on International Standards). It is basically a measure of the noise exposure corrected for factors known to increase annoyance. It is used to compare measured levels with noise limits that usually vary depending on the use of the property under investigation (see section on Assessment). The basic parameter is the A-weighted equivalent continuous sound pressure level or LAeq.
ISO 1996-2 states that the Rating Level has to be determined over reference time intervals related to the characteristics of the source(s) and receiver(s). These reference time intervals are often defined in national/local legislation and standards. The way to measure and evaluate the penalties is different from country to country, but the basic principles are the same and are described in the next section.
Current research into the relationship between noise sources and people's reactions to them focuses on many issues, one of which is the concept of soundscape design, where the subjective pleasantness of urban soundscapes is compared to physical parameters in much the same way as in product noise design.
Soundscape design combines the talents of scientists, social scientists, architects and town planners. It attempts to define principles and to develop techniques by which the quality of the acoustic environment or soundscape can be improved. This can include the elimination of certain sounds (noise abatement), the preservation of certain sounds (soundmarks) and the combination and balancing of sounds to create attractive and stimulating acoustic environments.
Rating Level Lr -- How Much is Too Much?
International standards describe how to determine the Rating Level Lr, but do not set legal limits. These are regulated individually by country or local authority. Differences in lifestyle, climate (outdoor activities, open or closed windows) and building design make international harmonisation of noise limits impossible.
Zones similar to those in the figure above are used universally, and specify different limits depending on the type and use of the area under investigation.
Absolute limits are used in most countries. They compare the Rating Level Lr to a fixed limit such as 50 dB(A).
Relative limits are used in, for example, the UK. They compare the Rating Level Lr to the background noise, measured as LAF90.
Almost all countries use the Rating Level Lr according to ISO 1996 when assessing industrial noise. However, in Japan, the L50 is used, while Belgium uses L95. The limit is normally in the 50 - 55 dB(A) range.
The Rating Level Lr is calculated from LAeq, the equivalent continuous A-weighted sound pressure level, with adjustments (penalties) KT for tonal components and KI for impulsive noise.
The reference time periods vary from country to country. Some use just day and night, some combine day and night, and others have resting periods as well. Different assessment procedures are used for each reference time period.
A loudest time period is used in some countries to penalise intermittent noise. The duration of this period ranges from 5 minutes to one hour, depending on the country.
The penalty for tones varies between 0 dB (no penalty) and 6 dB. Some countries use a single penalty value of 5 dB, while other countries use two or more steps. In most cases, the presence of tones is determined subjectively, but objective methods are increasingly used. These methods are based on 1/3-octave or FFT (Fast Fourier Transform) analysis.
The maximum penalty for impulsiveness varies between countries by up to 7 dB, and both subjective and objective methods are used. The objective methods are based either on the difference between a fast-reacting and a slower-reacting measurement parameter (e.g., between Impulse and Fast A-weighted levels) or on the type of source, using a list enumerating noise sources (such as hammering, explosives, etc.).
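The structure of the Rating Level calculation can be sketched as LAeq plus penalties. The 5 dB default penalties below are illustrative only; the actual values of KT and KI are set by national regulations:

```python
def rating_level(laeq_db, tonal=False, impulsive=False, k_t=5.0, k_i=5.0):
    # Rating Level Lr = LAeq plus penalties for tonal (KT) and
    # impulsive (KI) character. The 5 dB defaults are illustrative;
    # real penalty values come from national regulations.
    return laeq_db + (k_t if tonal else 0.0) + (k_i if impulsive else 0.0)
```

For example, an LAeq of 52 dB(A) with an audible tone would be rated 57 dB(A) under a 5 dB tonal penalty, which could push an otherwise compliant source over a 55 dB(A) limit.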
Road traffic is the most widespread source of noise in all countries and the most prevalent cause of annoyance and interference. Therefore, traffic noise reduction measures have the highest priority.
LAeq is the preferred noise index, but Rating Level Lr and percentile levels L10 and L50 are also used.
For dense traffic, it can be assumed that L10 is about 3 dB above LAeq, and L50 about 1 - 2 dB lower. Assessment is carried out using various reference time intervals depending on the country. These intervals range from one 24-hour period to three separate intervals for day, rest and night. Generally the night limits are the most difficult to fulfil. The table shows the planning limits for new roads in various countries. The limits are often above the level of 50 - 55 dB(A) recommended by WHO (World Health Organisation), so the expansion of "grey" areas is inevitable almost everywhere.
As with road traffic noise, LAeq is the preferred index for rail traffic noise. In some countries, Rating Levels are calculated from LAeq by subtracting (normally) 5 dB, the so-called railway bonus.
In Japan, LASmax is used for the Shinkansen high-speed line. Generally, using maximum levels as the only limit has the disadvantage of disregarding the number of trains.
Assessment is carried out using various reference time intervals depending on country. These intervals range from one 24-hour period to three separate intervals for day, rest and night.
The noise limits for new lines in residential areas vary between 60 and 70 dB. In some countries, the railway bonus is included in the limit values.
The railway bonus is based on social surveys from several countries, comparing the annoyance from road and rail traffic. The effect is more pronounced at higher levels.
The above graph shows dose-effect relationships for air, rail and road traffic. The percentage of highly annoyed persons is plotted against LDN levels (LAeq with a 10 dB penalty for night-time exposure between 22:00 and 07:00). It illustrates the lower annoyance caused by railway noise and the higher annoyance caused by air traffic noise, compared to road traffic noise at the same value of LDN. Due to the large spread of the underlying data, the graph is for illustration only.
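The LDN used in the graph can be computed from separate day (07:00 - 22:00) and night (22:00 - 07:00) LAeq values, adding 10 dB to the night level before energy-averaging over 24 hours. A minimal sketch:

```python
import math

def ldn(day_laeq, night_laeq):
    # Day-night level: 24-hour energy average with a 10 dB penalty
    # added to the night-time level. Day period is 15 h, night 9 h.
    day_energy = 15 * 10 ** (day_laeq / 10)
    night_energy = 9 * 10 ** ((night_laeq + 10) / 10)
    return 10 * math.log10((day_energy + night_energy) / 24)
```

Note that a night level 10 dB below the day level yields an LDN equal to the day level, since the penalty exactly cancels the difference.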
The most important tool for noise control at airports is noise zoning for land-use, planning and noise insulation programs. Noise from commercial aircraft is only a problem around airports as this is where aircraft converge at low altitude and high engine power. Increasing air traffic and city expansion will exacerbate the noise problems, while aircraft noise reduction, traffic and flight path restrictions can alleviate them. As a last resort, existing dwellings can be protected against noise by improving windows and roofs.
Noise contours are used to show the extent and location of noise problem areas. The number shown with each contour indicates the noise level exceeded within that contour. Superimposed on a map, and compared to noise limits, they pinpoint areas in need of noise reduction measures.
Noise footprints show the noise contours for a single aircraft or class of aircraft. Noise footprints can be calculated from noise data for each aircraft and take into consideration flight path, aircraft operation and landscape features. They serve to assess the present and projected noise impact and help plan noise reduction measures.

Noise contours around an airport, calculated using INM (Integrated Noise Modelling) based on previous noise measurements.
One of the most undervalued aspects of evaluating noise is the reporting of results. Quite often, only marginal data, such as a few discrete dB values, is reported. Consequently, important information is missing, making report interpretation difficult. The level of detail in a report must be suited to its purpose and its readers. To make a full and coherent report, you need to pay careful attention to the actual situation under which the measurement is made.
Standards and recommended practices are a great help when making a measurement report. The following standards lay out the framework for what information you must record and what information you are recommended to record.
It is also important to write the report in an easy-to-understand, readable style. Depending on your target audience, the use of graphics, sketches, and illustrations can sometimes help to explain the data. In other cases, text and figures will suffice.
If you produce many measurement reports, it is vital that you carefully archive your data. Structured bookkeeping may prove essential when old data must be retrieved for comparison with new data. A number of professional PC-software packages fulfil contemporary bookkeeping demands. Import of data from measurement equipment, preparation of structured reports, easy archiving and retrieval of data, and direct printing and exporting facilities are made easy with these software programs, saving the professional acoustician valuable time.
Noise levels at a receiver point can be calculated instead of being measured. In addition, noise propagation from one measurement point to another can also be calculated.
Calculation is normally performed in accordance with a recognised standard algorithm. This is usually determined nationally, or by industry sector, and often depends on the type of source.
The algorithms are often source-related, limiting them to use with just that particular source. An exception to this rule is the internationally accepted ISO 9613 standard that determines levels at receiver points based on the sound power levels of identified sources. Being defined in sound power levels makes the standard independent of source type (although there are limitations regarding highly impulsive sources and those with high speed).
The algorithms have normally been verified against numerous measurements over a range of test-case scenarios, and allow accuracies (uncertainties) of about 3 dB, similar to those achievable with measurements.
Although more advanced methods are available, most standardised algorithms in widespread use are empirical and based on simple rules of physics. In fact, many of them can be implemented with pen and paper. However, with the large number of calculation points and sources normally encountered, computers are used, enabling faster calculation, analysis, presentation and reporting.
Calculations are made using a computer model of the environment with defined noise sources, topography and features that affect the propagation of the noise to (receiver) points of interest. One or more calculation points are put into the model and the computer is then asked to evaluate the noise levels in the model. Normally, long-term LAeq levels are calculated, although octave-band levels may also be available.
All this can be done as a broadband (dB(A)) calculation or in octaves and subsequently summed to give the broadband level. In general, octave-band calculations are more accurate and more useful in subsequent analysis and in any noise reduction required.
Like measurements, the calculation should also be calibrated. This usually involves some form of valid measurements at selected positions where the measured levels can be compared to the calculated ones.
However, unlike measurements, calibration of a calculation is performed after the first calculation and used to refine results to the optimal accuracy.
Care should be taken that the source activity during the measurement is the same as that calculated. The calculation normally includes a long-term weather correction to obtain a long-term, average LAeq level. Comparing measurements and calculations, however, should be done under stable weather conditions with the wind blowing from source to receiver (downwind). Using results from a single day's measurement may give systematic errors caused by non-representative wind conditions and the state of groundcover. This error can be up to 10 dB. In addition, measured data is not source-specific and includes contributions from sources other than those under investigation. Longer-term monitoring and post-processing of results to "remove" unwanted contributions are recommended.
In some cases, for example when investigating possible future scenarios, validation with measurements is not possible. Here, careful analysis of the results, or comparison with similar situations, is required to ensure optimal accuracy.
The accuracy of a particular calculation is dependent on several factors. The most important of these are scenario, levels, range, inputs and user skill.
Algorithms are optimised for use within a range of scenarios. In particular, road and rail traffic noise calculation standards are based on national databases of traffic noise emissions, and their use can be limited in other countries where the age and mix of the vehicles and the driving/operating conditions are different. Accuracy may also vary with the calculated noise level, with optimal accuracy occurring over a narrower or wider range of levels. However, most algorithms include provisions for ensuring accuracy over a wide range of noise levels.
A bigger problem is to ensure the quality of input data as the accuracy of the result is highly dependent on this. Topographical data, machinery sound power levels and traffic flow data are areas in which care should be taken.
By using up to date GIS or AutoCAD files to generate topographical data, measuring sound power levels on site, and performing traffic flow counts at selected check points, you can reduce the risk of erroneous data. Finally, user skill and experience, both with environmental noise assessment and with the calculation algorithm itself, play an important part in optimising the result.
Used correctly within the range of scenarios for which they have been designed, the algorithms ensure global accuracies to within 3 dB.
Like measurement, calculation can be used in environmental noise assessment. Additional uses include identifying prominent sources for noise reduction, noise management through investigations of the effect of future changes in noise environment, and noise mapping (see next section on planning).
In many countries, an Environmental Impact Assessment must be made before, for example, planning permission for a new factory or motorway extension is approved. There is often a requirement to evaluate the noise impact either by preventing a fixed limit from being exceeded, or by weighing the impact of noise and other environmental factors against the socio-economic benefits of the proposal. This may lead to the development of alternative proposals to improve the environmental impact before approval.
A Weighted Noise Index quantifies the noise annoyance to which local residents are subjected by the noise source under investigation. It can be designed such that a Weighted Noise Index of 0 indicates acceptable conditions, with all levels under the recommended limits. An example of a Weighted Noise Index can be found in the Danish regulations for the assessment of new roads.
To calculate a typical Weighted Noise Index, group the properties in the area under investigation by usage (e.g., residential, commercial or industrial). Classify the number of properties with noise levels in 5 dB categories. Multiply the number of properties in each category by an annoyance factor determined by the noise level. The higher the noise level, the higher the annoyance factor.
Adding the above indices for the different property classes results in an overall Weighted Noise Index that can be used to assess the environmental noise impact of the development and to compare the alternatives. The lower the Weighted Noise Index, the less noise impact the proposal has.
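The index calculation described above can be sketched as follows. The band labels and annoyance factors are illustrative assumptions, not the values from the Danish regulations:

```python
def weighted_noise_index(counts_by_band, annoyance_factors):
    # counts_by_band: number of properties per 5 dB noise band,
    # e.g. {"55-60": 120, "60-65": 40}. annoyance_factors maps each
    # band to a factor that grows with noise level. Both the band
    # boundaries and the factor values here are illustrative only.
    return sum(count * annoyance_factors.get(band, 0.0)
               for band, count in counts_by_band.items())
```

Comparing the index for each alternative proposal then comes down to comparing single numbers: the lower the index, the smaller the noise impact.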
Some indices use the number of inhabitants instead of number of residences, thus giving a Population Noise Exposure Index. An example of this is the Noise Impact Index proposed by the US National Academy of Sciences.
In Switzerland, when assessing environmental noise reduction activities, the efficiency of the solution in reducing noise to the required levels is compared to its cost-efficiency. If the solution reduces the noise to below the legal limits at all selected sites and has a high cost-efficiency, it will be implemented. If it does not reduce the noise to below the legal limits at any site and/or is not cost-efficient, it will not be implemented. There is a grey area where the decision will be influenced by other factors (see the figure below).
Global, or strategic, noise planning tries to prevent noise issues arising and to optimise the use of limited resources by mapping and managing the noise environment of a large area such as a city.
Noise mapping is already in widespread use for the purpose of managing airport noise. Here, the 65 dB and 55 dB "footprints" of the airport are used to determine planning approval for new runways and compensation for nearby residents.
At the time of going to press, the European Union is in the process of developing a European Union Noise Policy based on the Noise Policy Green Paper from 1996. This will give directions on the use of noise maps, which noise maps to produce and how to produce them. Noise maps are proposed to show LDEN and Lnight (the night-time LAeq) for each type of source (road, rail, industry, etc.) at a height of 4 m above the ground. Aggregation of levels from different sources can be performed with a stated method. The European Union is working towards all cities with more than 250,000 inhabitants producing maps of transportation and industrial noise using current models. Later, these cities are to make noise maps using harmonised techniques.
The most common source of environmental noise is road traffic. Road traffic noise accounts for more than 90% of unacceptable noise levels (daytime LAeq > 65 dB(A)) in Europe. Other forms of transportation noise, such as train and aircraft noise, are more local problems but can still annoy many people.
Outdoor noise levels usually decrease with increasing distance from the source because of geometrical spreading of the noise energy over a bigger surface and absorption of the noise by the atmosphere and by the ground. Barriers can achieve additional reduction of noise levels.
The sound insulation of buildings is the final barrier to the potentially intruding effects of environmental noise.
Most countries encourage manufacturers to produce quieter cars and lorries by imposing noise limits on individual vehicles. These "pass-by" noise-rating limits have been reduced over the past 20 -- 30 years by approximately 8 dB(A) for cars and 15 dB(A) for lorries.
Some national governments (e.g., Norway and Italy) have implemented legislation to include tests of noise emission from vehicles in normal service. These tests are usually carried out by garages as part of general tests on the condition of the vehicle; in other countries, spot checks are performed. Even so, the ever-increasing number of vehicles means that overall noise levels have not been reduced.
Road surfaces can be improved to give lower noise output. Porous asphalt and the newer "thin noise-reduced surfaces" have shown reductions of 2 - 6 dB(A). Railway noise can be reduced by the use of welded rail track laid on a concrete bed with elastic/resilient pads or mats.
The obvious method of reducing noise is to move people as far away as possible from the sources of environmental noise. However, this is often impractical, so additional attenuation in the form of noise barriers can be applied. The barrier height and the position of the source and/or receiver relative to it are crucial to the amount of noise reduction that can be achieved. Effective barriers with heights ranging from 1.5 m (Japanese railway noise) to 10 m (US ground-based airport operations) have been reported. Barrier heights for road traffic noise reduction are typically between 3 and 7 m. In addition, the frequency spectrum of the noise source will affect the achievable reduction. Low frequencies, compared to high frequencies, are poorly attenuated by barriers. In some cases, the performance of barriers can be improved by applying sound absorbing material, avoiding parallel, reflective surfaces and shaping or angling barriers to avoid multiple reflections.
The final stage of ensuring that people are not disturbed by environmental noise in their homes is to provide sufficient sound insulation from the external noise levels. This is called Façade Sound Insulation, and is measured in terms of a Standardised Level Difference (DnT,tr) or the Sound Reduction Index (R'tr).
Why be on the Spot?
Today’s automatic equipment can be left in the field to record environmental noise data, and send reports back to the operator in the comfort of his office. This is often the most convenient and economical way to evaluate noise conditions, and is necessary if long-term or simultaneous measurements are required.
Often, a combination of attended and unattended measurements is the most efficient solution, using attended measurement for pilot studies and spot checks, and unattended measurements for long-term or permanent noise monitoring.
Permanent noise monitoring, 24 hours a day, 365 days a year, checks adherence to noise limits and offers a wide range of additional benefits. It is used by an ever-growing variety of organisations.
For many major airports, permanent noise level monitoring is a key issue in the daily running of the airport, as noise is often the number one complaint from neighbouring residents. Airport authorities have established regulations with the aim of reducing the impact of their operations as much as possible. They hope that monitoring will not only give them the ability to ensure that aeroplanes and pilots adhere to these regulations, but also prevent complaints.
It is often necessary to have both noise data and information about the trajectories followed by approaching or departing aeroplanes. Normally, the airport’s own radar provides the required information and, once correlated with noise data, it can easily be used to determine excess noise levels for specific aeroplanes.
Permanent monitoring is usually used when strict noise limits are imposed by the authorities, or to protect against legal action, complaints and compensation claims. It can also indicate noise trends and help produce noise maps.
These systems ensure automatic, round-the-clock data acquisition, collecting noise information and other relevant environmental parameters.
All measurement results are collected and stored in a monitoring terminal and transferred periodically to a central computer where all data is processed and stored. The number of permanent, monitoring system terminals necessary will depend on the area covered as well as on specific monitoring needs. Many systems have between 10 and 30 terminals, although 100-terminal systems do exist.
A noise-monitoring terminal basically consists of a weatherproof microphone, a data analysis and storage device and an information transmission system such as a land phone line.
Commonly used analyzers measure a range of noise parameters, including running LAeq and LN levels, as well as noise event detection. Some provide 1/3-octave band frequency analysis in real time, allowing immediate calculation of indices such as the perceived noise level (LPN) of each aircraft flyover.
Permanent monitoring terminals are often hard-wired to a control centre for viewing and analysing data from several positions. Short-term and/or long-term average noise levels may be shown on a public display system to create public awareness and positive public relations for the authority concerned.
Alternatively vans can be used as mobile terminals. These units, possibly with automatic positional identification, often have data transfer facilities via phone lines to a computer. In all cases, Type 1 instrumentation is essential for these data gathering operations (see section on International Standards, IEC 60651).
Because they are used over long periods of time, monitoring stations are susceptible to the effects of humidity, temperature, wind, corrosive atmospheres and animals. The microphone is particularly vulnerable, as it is the most exposed part of the system. To prevent damage, a special weatherproof microphone unit made of corrosion-resistant materials and with built-in protection against humidity is recommended. It is also advantageous if the noise monitoring system can automatically perform acoustical verification as well as system checks, for example, a charge injection calibration (CIC) to check that it is working properly.
Permanent monitoring systems normally have extensive databases for analysis, impact research and status evaluation including periodical results. Noise events and complaints can be correlated and combined with GIS (Geographic Information System) digital cartography to show population exposure and allow high-quality presentation.
International standards are important in the assessment of environmental noise either because they are used directly or because they provide inspiration or reference for national standards. This section highlights some of the more important standards.
There are two main international bodies concerned with standardisation. The International Organization for Standardization (ISO) deals primarily with methodology to ensure that procedures are defined to enable comparison of results. The International Electrotechnical Commission (IEC) deals with instrumentation to ensure that instruments are compatible and can be interchanged without major loss of accuracy or data.
ISO 1996 -- "Acoustics -- Description and Measurement of Environmental Noise": Defines the basic terminology, including the central Rating Level parameter, and describes best practices for assessing environmental noise.
ISO 1996 is currently under revision with focus on updating measurement techniques to modern instrumentation, improving procedures, such as for identifying tones, and providing information on research in the effect of noise levels from different sources.
ISO 3891: "1978 Acoustics -- Procedure for Describing Aircraft Noise Heard on the Ground" deals with how to monitor aircraft noise (noise measurement and recording, data processing and reporting). It is currently under revision and is expected to cover the description and measurement of aircraft noise heard on the ground, unaccompanied long-term and short-term monitoring of aircraft noise, and airport noise management and land use.
This standard defines an octave-based calculation method based on point sources with a defined sound power level. Line sources can be built up from point sources.
These three standards are grouped together as they all deal with sound level meters. International standards for sound level meters are accepted by all countries worldwide. They are important because all measurement standards refer to sound level meter standards to define the instrumentation required.
In most countries, Type 1 equipment is required for environmental noise measurements.
IEC 60651 -- Sound level meters (1979, 1993): Defines sound level meters in four degrees of precision (Types 0, 1, 2 and 3). Specifies characteristics including directionality, frequency and time weighting, and sensitivity to various environments. Establishes tests to verify compliance with the characteristics specified.
IEC 60804 -- Integrating-averaging sound level meters (1985, 1989, 1993): An additional standard to IEC 60651 that describes this type of instrument (i.e., those that measure Leq).
IEC 61672 -- Sound level meters: A new, draft IEC sound level meter standard that will replace IEC 60651 and IEC 60804. Major changes: tougher specifications, and Type 3 disappears. It should mean improved testing and quality control of instrumentation and improved accuracy.
A wide range of parameters are used to assess community reaction to environmental noise. The highly variable response of individuals to environmental noise and the many characteristics (level, frequency content, impulsiveness, intermittency, etc.) of different types of noise sources have led to many attempts to provide single-number ratings of the effect of that noise. The following list summarises most of the parameters in common usage.
"A" frequency weighting: A method of frequency-weighting the electrical signal within a noise-measuring instrument to simulate the way the human ear responds to a range of acoustic frequencies. It is based on the 40 dB equal-loudness curve. The symbols for noise parameters often include the letter "A" (e.g., LAeq) to indicate that A-weighting has been included in the measurement.
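The A-weighting curve has an analytic definition in the IEC sound level meter standards; the sketch below uses that standard formula (the function name is an illustrative assumption):

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting correction in dB at frequency f (Hz), per the analytic
    definition used in the IEC sound level meter standards."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    # The +2.00 dB offset normalises the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.00

# ~0 dB at 1 kHz by construction; roughly -19 dB at 100 Hz, showing how
# strongly the ear (and the weighting) de-emphasises low frequencies.
at_1k = a_weighting_db(1000.0)
at_100 = a_weighting_db(100.0)
```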
LAeq,T: A widely used noise parameter that calculates a constant level of noise with the same energy content as the varying acoustic noise signal being measured. The letter "A" denotes that A-weighting has been included and "eq" indicates that an equivalent level has been calculated. Hence, LAeq is the A-weighted equivalent continuous noise level.
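In other words, LAeq is an energy average: convert each sampled level to relative energy, average, and convert back to decibels. A minimal sketch, assuming equally spaced samples (the function name is illustrative):

```python
import math

def laeq(levels_db):
    """Equivalent continuous level: the constant level with the same
    energy content as the measured, time-varying levels."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# The loudest samples dominate, so the result sits near the top of the
# range rather than at the arithmetic mean of 70 dB.
result = laeq([60.0, 70.0, 80.0])  # about 75.7 dB
```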
LAE: Sound Exposure Level (SEL): A parameter closely related to LAeq, used for assessment of events (aircraft, trains, etc.) that have similar characteristics but different durations. The LAE value contains the same amount of acoustic energy over a "normalised" one-second period as the actual noise event under consideration.
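That normalisation means LAE can be derived from an event's LAeq and its duration; a sketch under that assumption (the 1-second reference appears as t0):

```python
import math

def sound_exposure_level(laeq_event_db, duration_s):
    """LAE: the event's energy compressed into a 1-second reference period."""
    t0 = 1.0  # reference duration in seconds
    return laeq_event_db + 10 * math.log10(duration_s / t0)

# An 80 dB event lasting 10 s carries the same energy as 90 dB for 1 s,
# so its LAE is 90 dB.
lae = sound_exposure_level(80.0, 10.0)
```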
LAFMax, LASMax or LAIMax : Maximum A-weighted noise level measured with Fast (F), Slow (S) or Impulse (I) time weighting. They are the highest level of environmental noise occurring during the measurement time. They are often used in conjunction with another noise parameter (e.g., LAeq) to ensure a single noise event does not exceed a limit. It is essential to specify the time weighting (F, S or I).
LAFMin , LASMin or LAIMin : Minimum A-weighted noise level measured with Fast (F), Slow (S) or Impulse (I) time weighting. They are the lowest level of environmental noise occurring during the measurement time.
LAFN,T Percentile levels: The level of A-weighted noise exceeded for N% of the measurement time. In some countries the LAF90,T (level of noise exceeded for 90% of the measurement time) or LAF95,T level is used as a measure of the background noise level. Note that the time weighting (usually Fast) should be stated.
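LAF90 is therefore simply the 10th percentile of the sampled level distribution. A sketch with linear interpolation between sorted samples (function name and interpolation choice are illustrative assumptions):

```python
def percentile_level(levels_db, n_percent):
    """L_N: the level exceeded for n_percent of the measurement time,
    i.e. the (100 - n_percent)th percentile of the sampled levels."""
    s = sorted(levels_db)
    rank = (100.0 - n_percent) / 100.0 * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

# Background level estimate: the level exceeded 90% of the time sits
# near the bottom of the sampled range.
samples = [float(x) for x in range(40, 60)]  # 40..59 dB
l90 = percentile_level(samples, 90)
```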
In some countries, a subjective assessment of the characteristics of the noise in question is made. In other countries, an objective test is made to see whether the noise is tonal or impulsive.
For example: (1) a 1/3-octave frequency band of noise which exceeds the levels in adjacent bands by 5 dB or more indicates tonal noise, and (2) a measurement of the difference between an Impulse-weighted and an A-weighted "Leq" parameter (LAIm,T and LAeq,T) would reveal the presence of impulses.
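The first of these objective tests is straightforward to express in code; a sketch, assuming a list of 1/3-octave band levels in ascending frequency order (the function name is illustrative):

```python
def tonal_bands(third_octave_levels_db, margin_db=5.0):
    """Indices of 1/3-octave bands that exceed both adjacent bands
    by margin_db or more, flagging likely tonal components."""
    flagged = []
    for i in range(1, len(third_octave_levels_db) - 1):
        band = third_octave_levels_db[i]
        if (band - third_octave_levels_db[i - 1] >= margin_db
                and band - third_octave_levels_db[i + 1] >= margin_db):
            flagged.append(i)
    return flagged

# A 60 dB band between 50 dB neighbours is flagged; a smooth 2 dB
# ripple is not.
tonal = tonal_bands([50.0, 50.0, 60.0, 50.0, 50.0])
```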
Aircraft Noise Parameters: If aircraft noise is assessed as just a normal noise source (as is usually the case), then the usual environmental noise parameters required are LASMax and LAE (equivalent to LAX in some older standards) for single events and LAeq,T for a succession of noise events.
In some cases (e.g., aircraft certification), more detailed analysis of the 1/3-octave spectral content of the aircraft noise is made at 0.5 second intervals. Perceived noise level (LpN ) is then calculated by converting the sound pressure levels to perceived noisiness values according to the ICAO Annex 16 standards.
If the aircraft noise spectrum has pronounced tonal content, then an additional correction of up to 6.7 dB is added to the perceived noise level (LpN ) to give a tone-corrected perceived noise level LTPN. The total subjective effect of an aircraft’s flyover must take into account the time history of the flight. This is accounted for by integrating the tone-corrected, perceived noise level to produce the effective perceived noise level, LEPN. Full details can be found in the ISO 3891 standard.
LDN: Day-night average sound level. An LAeq with a 10 dB(A) penalty for environmental noise occurring from 22:00 to 07:00, to take account of the increased annoyance at night.
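A sketch of the LDN calculation from average day and night levels, taking the day period as 07:00-22:00 (15 hours) per the definition above; the function name is illustrative:

```python
import math

def ldn(day_laeq_db, night_laeq_db):
    """Day-night average level: a 24-hour energy average with a 10 dB
    penalty applied to the night period (22:00-07:00)."""
    day_hours, night_hours = 15.0, 9.0
    energy = (day_hours * 10 ** (day_laeq_db / 10)
              + night_hours * 10 ** ((night_laeq_db + 10.0) / 10))
    return 10 * math.log10(energy / 24.0)

# Equal 60 dB day and night levels give an LDN well above 60 dB,
# because the penalised night hours dominate the energy average.
value = ldn(60.0, 60.0)  # about 66.4 dB
```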
Frequency spectrum: In environmental noise investigations, it is often found that the single-number indices, such as LAeq, do not fully represent the characteristics of the noise. If the source generates noise with distinct frequency components (tonal noise), then it is useful to measure the frequency content in octave, 1/3-octave or narrower (Fast Fourier Transform) frequency bands.
For calculating noise levels (prediction), octave spectra are often used to account for the frequency characteristics of sources and propagation.
Sound power is the acoustic power (W) radiated from a sound source. This power is essentially independent of the surroundings, while the sound pressure depends on the surroundings (reflecting surfaces) and distance to the receiver.
If the sound power is known, the sound pressure at a point can usually be calculated, while the reverse is true only in special cases (e.g., in an anechoic or reverberation room). So, the sound power is very useful to characterize noise sources and to calculate sound pressure.
Like sound pressure, sound power is measured in logarithmic units, the 0 dB sound power level corresponding to 1 pW (picowatt = 10^-12 W).
The symbol used for sound power level is LW, and it is often specified in dB(A), 1/1 octaves or 1/3 octaves.
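The conversion from watts to sound power level follows directly from the 1 pW reference; a minimal sketch (the function name is illustrative):

```python
import math

def sound_power_level_db(power_watts):
    """LW in dB re 1 pW."""
    reference_w = 1e-12  # 0 dB corresponds to 1 picowatt
    return 10 * math.log10(power_watts / reference_w)

# A source radiating 1 W has a sound power level of 120 dB; the 1 pW
# reference itself sits at 0 dB.
lw = sound_power_level_db(1.0)
```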
Brüel & Kjær was founded by two Danish engineers, Per V. Brüel and Viggo Kjær, in 1942. For more than 50 years, sound and vibration measurements have formed the core of our activities. Brüel & Kjær is a leading supplier of microphones, accelerometers, analyzer systems, sound level meters and calibration systems worldwide. Hand-held sound level meters were made commercially available in 1961 and, ever since, Brüel & Kjær has been the market leader in solutions for professionals in the field of environmental noise and noise in the workplace.
Brüel & Kjær offers courses and training, covering environmental noise measurements, in most major countries around the world. Classes are taught by local specialists as well as by application specialists from the company's headquarters.
Brüel & Kjær Service Centres can be found in all regions. These offer calibration services and repair, including contracts allowing extension of warranty by up to 6 years.
Brüel & Kjær is represented in more than 90 countries all over the world. For more information, please contact your local representative.
Michelle James bridges creativity and business. She helps leaders rediscover their creativity and apply it in the workplace. In this fast-paced conversation between Michelle and me, we discuss the path to generating new ways of thinking about your challenges by tapping into your creativity as a leader, as a team member, and for your organization as a whole. Michelle shares the importance of balancing discomfort with safety, the infusion of improv theater principles into leadership, and how to cultivate more creative employees. She provides a great action tip for reevaluating some of your assumptions and even offers two great free resources to the listeners that can help them bring more of their creativity to work.
What’s the metaphor Michelle uses that Halelly goes gaga for that describes perfectly the importance of protecting your creative ideas from initial criticism that could otherwise destroy them?
Why is it okay to let go of comfort, but never of safety, when working on infusing more creativity in your work and your team?
What’s a great improv principle that can support greater creativity, innovation, and risk-taking in your team?
Why is it important to simultaneously think of the immediate challenge you’re trying to creatively solve and the long-term relationship and culture you’re trying to foster when coaching employees to creativity?
What’s one tip Michelle suggests everyone can put to work right away that will help you shift your thinking about creativity?
Email her and get your great resources!!
Intro/outro music for The TalentGrow Show: "Why-Y" by Esta - a great band of exquisitely talented musicians, and good friends of mine.
Michelle James, CEO/Chief Emergence Officer, is a pioneering creativity catalyst who has been using universal creative principles and the process of emergence as the basis for her work with thousands of people - individuals, corporations and communities - since starting her first business in the mid-nineties. Michelle’s passion is infusing creativity and imagination into current knowledge and information systems for individual, organizational and social transformation. Her commitment is to cultivate and focus creativity, aliveness and meaning in the workplace as part of the emerging, holistic new work paradigm - one where creativity, service, purpose and commerce are linked.
Michelle has consulted on, designed and delivered creativity and innovation initiatives and programs for organizations of all sizes. She provides generative processes, a grounded framework, whole-brain integrated learnings, and novel experiences to allow organizations to naturally co-create what is essential for them to remain consistently vital, resilient, and innovative in rapidly changing times and environments.
People naturally expand their capacities to engage their environment. Michelle expands the "playing field" by integrating multiple dimensions of creative experience into all of her work. She developed the Creative Emergence Process, Principles and Practices - which is both a framework and integrative approach for creatively unfolding what's next within an individual or system.
Known for her original and richly textured dynamic learning environments, Michelle presents at learning and creativity events internationally, and creates such events in the Washington, DC area. Her original programs and techniques have been featured in newspapers, audios, books and on television. She is one of the pioneers in the fields of Applied Creativity, Applied Improvisation and Somatic (body-centered) Creativity.
Michelle founded The Center for Creative Emergence based on her experiences with the natural creative emergence process that unfolded over time in her multi-faceted personal journey, as well as years of work with, and study of, various facets of creative process and complex systems. For Michelle, the work of the Center is a calling many years in the making that began with a profound life-changing vision that emerged at age 22. This vision became her mission, passion, and purpose - and initiated the long, and sometimes rocky, road of cultivating and integrating that vision. The Creative Emergence Process, Principles and Practices are based on years of work with clients in which certain themes, patterns and self-organizing principles showed up in every emergent creative process. She later discovered a coherent fit with this process and what she learned studying process work, depth psychology, systems thinking, complexity sciences and improv theory.
Before founding the Center, Michelle owned and operated a creative services/marketing firm followed by an organizational development and training company. Prior to that, she spent a few years spear-heading innovative projects while working in communications, sales, marketing, and the media, including co-establishing a newspaper (where she got her taste of entrepreneurship and never went back). In these worlds, she learned first-hand how creative environments function and thrive. Also informing Michelle’s work is her experience and education in creative processes and techniques, brain research, organizational change, group process work, depth psychology, integral theory, emerging group/system dynamics approaches (such as holacracy, appreciative inquiry, world cafe, open space, futuresearch, polarities, psychodrama, dialogue, participative design), accelerated learning, the arts, movement, bodywork, mythology, improv theater, systems thinking, consciousness studies, storytelling and the complexity sciences...and first-hand transformational experiences in her life. Her degrees are in English Literature and Communications Studies with years of diverse post graduate study.
Michelle performed full-length improvised plays with an improv troupe for 10 years through 2010, is an abstract painting artist, and is a CoreSomatics Movement and Bodywork Master Practitioner. CoreSomatics is a psycho-physical creative healing modality. She also founded and ran the Capitol Creativity Network - an experiential creativity hub meeting monthly in DC since 2003 - and in 2014 started the Cville Creativity Network in Charlottesville, VA. In 2008, Michelle was recognized for Visionary Leadership in Fast Company's blog, Leading Change. She produces and curates the bi-annual Creativity in Business Conference in Washington, DC. In 2012, she developed and hosted the first online Creativity in Business Telesummit with creativity thought leader-practitioners from all over the world, and curated an accompanying ebook, Navigating the new Work Paradigm. Michelle is currently writing a book on creativity facilitation called Pattern Breaks: A Facilitator's Guide for Cultivating Creativity.
Announcer: Welcome to the TalentGrow Show, where you can get actionable results-oriented insight and advice on how to take your leadership, communication and people skills to the next level and become the kind of leader people want to follow. And now, your host and leadership development strategist, Halelly Azulay.
Halelly: Welcome back to the TalentGrow Show. This is episode 23 and I am your host, Halelly Azulay, your leadership development strategist. My guest this episode is my friend Michelle James. She is a creativity and emergence catalyst, consultant and coach. She is CEO of the Center for Creative Emergence, where she brings applied creativity to the workplace for entrepreneurs, leaders, teams and organizations. This is a great conversation in which Michelle and I talk about how to build bridges between business and creativity and why the two have a false separation that we are working to break down. She talks about the role of discomfort and the role of safety in generating creativity in the workplace, ways that we can borrow some great principles from the world of improv theater, and what it means for Michelle, at least, to be a creative leader. She also talks about what the sources of a resistance we often see to bringing more creativity into the workplace and ways in which leaders can help overcome it and maybe even prevent it in the first place. And of course as always, Michelle shares an actionable tip at the end of the episode that you can and should apply right away to make yourself an even better and more creative leader. So I hope that you enjoy this episode and I hope that you stick around until the end because Michelle offers two free resources that I think you’re going to want to snag. So here it is, episode 23, and thanks for tuning in.
I am here with Michelle James, and Michelle is a creative emergent and also the originator of the Capital Creativity Network, which is where I met her. Michelle James is all about creativity, but what I really like about Michelle is that she brings creativity to business. And she is like a bridge builder between the creative world and the business world. There are so many ways in which business and creativity go hand-in-hand, and so many of the things we use today come and have emerged from that intersection. But some people think of them as separate and I think that Michelle is fighting the good fight to unite them. Michelle, thank you for being on the TalentGrow Show and welcome.
Michelle: Thank you so much for having me. I’m excited to be here.
Halelly: Me too. And I always ask my guests to tell about their professional journey before we get started into where they are now, because it’s always kind of different and interesting, although a challenge for you because you have an interesting long journey. If you can isolate it into just a couple of minutes, but here’s your challenge – let’s do it.
Michelle: Right, that is true. There’s been such a series of events that have led to where I am, and I’ll just kind of focus on a thread. So like most people, I grew up with the belief that you do your creative work on the side or after the real work is done. You do your day job to make money and you do your creative job on the side. And I put all of my creativity kind of on the side. And then when I was in college I took a course, it was a communication class, and we had to do some sort of expression of what we were learning, and we got to do it in any way we wanted. I was in a group and we did it with theater and skits and I came so alive, knowing that there was some kind of seed of something for me in that. That you can bring things to life using your creativity. But I kind of put it on the backburner, went to a field of marketing and communications and sales, working at radio and newspaper, and then when I was with a group where a group of us started a community newspaper, and we went under. We made no money. But during that year, I got a taste of being the arts and calendar editor, a writer. I got to do interviews. We all got to create our own roles, and that led me to realize that I will only feel alive in my work if I’m getting to use my creativity in it and create my own roles and that I didn’t fit neatly in a job description. So I started taking, reading every kind of book there was on creativity and innovation, life purpose, like path. I was going to every kind of conference and I made it my mission to do work that was creatively alive for me and that I would get paid for, and so in my own life, I had to become that bridge.
Simultaneously around that time, I broke my jaw in a car accident and I found that conventional treatments weren’t helping, but I got into some body work and through that started learning how much more alive I felt when I started using my body in different ways and how my creativity was skyrocketing. I got into improv theater to overcome my fear of public speaking, and to have fun, and it totally transformed my work and life and when I started applying what I was learning there into my work, I started seeing huge differences. So, my first company was Creations Unlimited, which was a marketing company. But then I realized I was doing the creativity for the people. And then I met up with a man in the late 90s – I started my business in ’94 – and by ’97 who was running a natural learning center and he was an OD director and we formed a business, Proteus Center for Change, where he was doing organizational development and he trained me and then I was doing organizational creativity and then by 2001, we went our separate ways. He retired and I continued on with my Center for Creative Emergence, which was fully dedicated to applied creativity in the workplace. Applied creativity to leadership, to team building, to personal and professional development, that if you use your creativity and if you use creative prompts – theater, visual thinking, movement, embodiment, all kinds of things – you get more business bottom-line results, more quickly and more easily. And, more joyfully. And so that became the focus of my work for many years and a couple years later I started the Capital Creativity Network because that helped people and that helped me be able to explain it better and it helped people find a place to go to experience it. Because the key with applied creativity is in the experience.
So, now many years later, I am doing that and I’ve put creativity and business conferences on, and it’s become a much more popular and accepted thing out in the world of work. But it was tough going at the beginning. That’s the short version.
Halelly: Yeah, that’s true. I guess I didn’t really think about the evolution of the business world along with you, in a way, to kind of be more accepting and welcoming of what you bring. That makes sense. So there’s probably a lot less resistance than there used to be in the days when people had to wear, ladies had to wear skirts that covered their calves and a lady tie, and pantyhose, in dark blue, please! Yeah, that’s true. So it’s all moving in the right direction and I’m so impressed with your career. In fact, I love that you told that story because I think that a lot of the people that I meet are in a place where perhaps they’re feeling constrained by whatever they’re doing at the moment, or they feel like they’re supposed to have these two separate lives, where they can’t really fully, completely maximize their strengths and their gifts within one way to make money and you are a good example of someone who created what she needed to be fully present in her work.
Michelle: Oh, thank you. Yeah, I would say sometimes what I’ll tell my coaching clients – because I do a lot of one-on-one coaching with entrepreneurs – is that the split that I heal, that I work with with people is healing the split between doing what’s most alive for you, doing what your highest creative expression is and income generating work. Because you can’t look to the current job descriptions out there, necessarily. A lot of times you have to be a new structure creator. You have to create your own job description in order to do that. And it’s a journey. But, you can do it, and that’s what creativity allows you to do. It allows you to think in new ways, to go beyond the status quo and to design new structures and design new ways of working.
Halelly: Yes, that’s true. And I would say to add to the creativity, there’s probably a lot of ingredients that go in there, but something that’s very present for me because I’m starting to go down a path to explore it more, is that you probably also need a lot of courage to move in that direction when there isn’t anything there. You’re going into the unknown to create something new, and so people sometimes have creative ideas that they don’t act on, and you acted.
Michelle: Right. Well, thank you, and I do think you certainly need some persistence or courage. I think what helps that become easier is if you start to treat and honor your own ideas, and the way to do that is that you keep, like whether it’s a journal or some sort of visual way of writing them, you keep them sort of sacred. You keep them sort of under the vest for a little while as you’re beginning to emerge. And the reason is because if you put your ideas out too soon, and you’re still insecure with them and you’re still not sure about them, other people, as well meaning as they might be, if they can’t make the leap or if they’re not in a job where they’re feeling fully creatively fulfilled or they’re not sure how to do it, they might inadvertently not be able to support your ideas. So I always say to people, when you have those seed ideas about your new business or new direction or something you might want to bring in, in the initial stages – whether that’s a few days or a few weeks or sometimes a couple months, it depends what it is – treat it like a new seedling tree. You know, how people put that little tiny white fence around it to protect it? And then as soon as it gets some roots, as soon as you begin to feel a little bit more secure in your capacity for ideas, that hey, I’ve now explored this idea, I’ve honored my own idea, I’ve taken it seriously, I see it having some roots, as soon as you get there, that’s a good time to share it because that’s when it can start to grow in community.
Halelly: That is such a great metaphor. I love it. I can totally see it. When it’s just a seed, if you don’t bury it for a while and allow it to germinate, it’s going to fly. It’s going to be sort of, it’ll respond to every small gust of breeze and it won’t have a chance. I love that, thank you so much! So, something that I will share with listeners, you’re a friend of mine – I’ve known you for several years and I admire you so much and I think you’re so multifaceted – that having a 30-minute conversation with you is my biggest challenge of the day! Not because I don’t have enough to talk about, but because I don’t know how to contain it within 30 minutes. So that’s my challenge today. But my goal is to allow people to have a very juicy and fruitful window into your world, and I know that they will follow up afterwards and learn more from you. So something that I think is a great meeting place between what you do and what many of my listeners are thinking about, as I think that you know, most of the listeners are leaders already, but many of them have already a leadership role within an organization, and many of them are aspiring to grow into a leadership role. So I want us to focus on the creative leader for a moment, and I read on your blog this really interesting piece about your perspective about what makes a creative leader. So I’m going to read it and then I’d love for you to expound on it a little bit more.
So you said, and I’m quoting, “A creative leader is a leader who chooses to use more of his or her own creative potential on an on-going basis, choosing to always learn and evolve personally as well as professionally. One who is dedicated more to exploring possibilities than being right, and more to discovery than maintaining the status quo. Creative leaders facilitate meaning, creativity and contribution of those he or she serves – employee, colleague, team member, customer, participant, etc.” So tell us more. What is the creative leader? And help us break down how everyone here who is listening can embody this quality.
Michelle: Sure. And most of my work now is with leaders and helping them become more of “the creative leader” so I feel very excited about this question. I think what happens a lot of times is people will say, “I want you to go be creative,” and what happens when you tell people to be creative, you’re going to bump up against, within them, all of the stories and all of the reasons that they weren’t creative. That’s true if you’re a leader and that’s true for your staff and employees. Something called natural resistance can emerge, where whatever was safe and protected, your creativity is there, and so then as soon as something new, you want something new to emerge, people might be distracted or they might start resisting because they might not feel safe. So, if you’re a creative leader, for me that means you’re going to facilitate an environment. You’re going to cultivate a creative environment for employees. And the way to do that is by pushing your own creative edges, by breaking your own patterns, by consciously and intentionally saying, “How can I expand as a creative individual?” And it doesn’t mean you have to be full on, expressing your creativity all the time. You just don’t want to get to where you’re limiting or inhibiting your employee’s creativity.
So for example, a lot of times we’ll see – because I do creative leadership programs all the time – and a lot of times we’ll see people that want their staff to be more creative, but their staff’s creativity, because the nature of creativity is so unique and expansive and different, might look differently and it’s messy and it doesn’t come out all nice and neat and it looks differently than what the leader first anticipated. And so in that moment, as a creative leader, you have a choice – is what I’m going to say going to foster and enhance the creativity coming into this meeting or from this person or into our team, or is it going to inhibit it? So a lot of times someone will throw out an idea, and you might right off know that idea won’t work. So rather than cutting it down, draw more out of it. Allow the person to say, “How would that look and what do you mean?” And draw more questions out. To use an improv term, “Yes, and” it. Add new elements to it because a lot of times the seed idea isn’t the best idea. The first idea that comes out usually isn’t the best idea that becomes workable. Sometimes it’s five iterations out. Often times people feel inhibited to present that to their boss or their leader, because they’re afraid, because it isn’t polished. So as a leader you’re thinking, “How can I support it becoming polished,” versus, “Wow, it doesn’t look familiar to me. I know this won’t work. I know it’s a bad idea,” and immediately cutting down. Because then you’re cutting off all the potential.
The other thing is, you’re going to be much more comfortable with the unknown and navigating uncertainty if you try that in low-stakes environments than when it comes to real high-stakes environments where you have a lot on the line. You have become more adaptive and responsive. And I’ll just finish with that piece for now. In my mind a creative leader is an adaptive and responsive leader, one that can meet the needs of the situation as they emerge. And that’s why I think improv theater is such a great practice. So you go do improv and you’re goofy and having fun, no stakes. All of it. But you become more adaptable and easy in your adaptability, so then when you go into the workplace and you have situations that are unfamiliar and uncomfortable, you don’t just go to autopilot or habit. You actually have more options in front of you. You don’t freak out. You handle the uncertainty. So creative leadership, a creative leader, embodies the ability to respond and adapt to what’s really happening, not relying on habit only and not relying only on what worked in the past, but maybe having to create something new to meet the situation. And, allowing the creative resources of the team to emerge and creating an environment where it’s safe for the employees to play a little, to explore, to tinker with ideas before having to present the final one, because it’s through that exploration that the next level solutions can emerge.
Halelly: There’s so much in there that I want to follow-up on. Thank you Michelle. Well, the first thing that came very present for me is I hear in your explanation, things that I often say, which is your response teaches people your approach. And you’re modeling what you’ll do if somebody comes up with a different idea. Your response, everybody is watching it. And it’s not just about that idea in that moment, but it actually creates a precedent that people will use the next time you ask for something to make a decision about what they’re willing to offer. So being able, so you kind of have to have two minds. You have to think about the present situation and what’s needed, but you also have to have a long view and recognize that you’re establishing a long-term pattern with this action. And sometimes, probably for many leaders, maybe it’s undoing an existing pattern or maybe they’ve been doing it in a way that’s less effective and maybe they realize that now. Now they have to kind of start to break down the pattern they’ve established and to create trust in you, in people that they are now willing to be open to new ideas, even if they’ve shown in the past they weren’t. Because we’re all able to change.
Michelle: Absolutely. And I think being open and transparent and authentic about that – hey, in the past, I wasn’t so open. But I’m now trying to adapt and explore, and I think if you’re open with your employees that, “I’m trying a new way of being here, and I’m learning this as we’re going,” then you side with them. It becomes non-confrontational. They want to actually support you in succeeding, because you’re partnering together. Like we’re a team now. I’m learning this new way of being to support your creativity, so you help me and so you get to use their creative resourcefulness to help teach you how to be a more creative leader. And you said something else that I thought was really key, and that’s a good distinction that I always like to make myself, too. You know, how you act is going to tell your team or your staff or your employees or customers, “Is it safe?” And so in other words, because creativity for people to really thrive and be creative and try new things, they need to feel safety. Most people get confused between, many people get confused between comfort and safety. Discomfort is natural in the creative process. It’s natural for all of us. It’s the discomfort of learning something new. You’re not going to be masterful, just like the baby walking across the floor. They fall and they might get a little bruised. They’re learning something new, but they keep going and keep doing it, because it’s a natural part of the creative process. Discomfort is okay.
So a lot of times, people will try to avoid discomfort, but then they don’t make it safe. But safety is essential. So you can be uncomfortable and be safe, but you can’t be unsafe. So safety is often created by establishing rules of engagement, by establishing rules of engagement that people feel safe in. For the next 30 minutes, we’re going to go into divergent thinking, no judgment. You can say anything. You can explore anything. You can do anything. That’s one way of making it safe. Another way of making it safe, for example, improv principle is make everyone else look good. So let’s tell my group, I’m committed to making you all look good. I can’t promise it’ll be comfortable because you’re going to learn something new, but I can promise you it’ll be safe. All the sudden that tells them that I’m going to be on their side and so I think as a creative leader, if they think you’re for them, and they think you’re on their side, that will help bring out more of their creativity.
Halelly: I like that principle a lot. Make everyone look good. It’s a very benevolent kind of intention. And just that alone, my goodness, that’s going to change the quality of many people’s work experience if more leaders use that. So I want to follow-up on the improv principles you just mentioned. And I know much of your creative awakening, you said, was through learning improv and one of the things that you teach – you do a lot of different kinds of workshops and people should go to your website, which I’m going to link to in the show notes to go see how many really cool workshops you do and all the kinds of things you do – but one of them is improvisation for leaders, using the practices of improvisational theater for leadership effectiveness. Some people might have a deer in the headlights look about them and not know what the heck that means, and then other people might be like, “Ooh, fun!” But, I also know personally, I have seen, a lot of those people out there to whom this does not feel safe. And for whom that feels really out in left field and outside their comfort zone, especially as you bring in using your body, and being silly in the workplace situation. So I bet you experience a lot of that resistance and overcome it successfully. I know you do, because that’s why you’re brought in again and again. But I would love if you could just maybe share one exemplary story of a transformation you’ve seen where someone came in with that sort of “arms crossed, I ain’t doing this improv BS,” and came out on the other end changed?
Michelle: Sure. And so right now, for example, over the last couple of years, I do regular improv for leaders programs with the Federal Executive Institute for their senior leaders that come, they come for month-long and one of the days they spend with me doing improv for leaders, for some of the group. These are people that have never done improv theater. Some of them will walk in, arms crossed, they’re not comfortable, it doesn’t seem fun, it is out of their comfort zone, and they are thinking the first thing is like, “This is what we’re spending our federal tax dollars on?” So I’m used to resistance and I’m used to that, all the time, for the kind of work I do. What’s interesting is at the end, it’s almost always the people that had the most resistance that are the ones that want to keep going the most because once they tap into their creative wellspring they realize it.
And so the first thing you do is you make it safe, and then you set up scenarios. You make it safe via the improv principles, “yes, and,” make everyone look good, serve the good of the whole – there are several improv principles, and I bring other ones. I also give them experiences of generative thinking. I give them experiences where I know it might be uncomfortable, but I know they’re going to succeed and they’ll do well. And slowly you unfold it and then the activities, you do a lot of warm-ups to get into a different state change. And so as a leader, if you just do, like I know for me, I’ll do a warm-up just to go, I do warm-ups before I facilitate. If leaders, before meetings or before you have to do a talk, if you do some kind of warm-up to break your pattern and get in a state change, you’re going to show up differently, more adaptive and flexible.
So one of the ways, one of the things that I’ve noticed – and I would say it’s more than one particular story, but it’s a pattern with certain types of people – they come in really cynical, and really like, “I don’t understand how playing a bunch of improv games,” and it is true, that you don’t understand until you ground the experience. So I give them experiences, then we ground it into the applications of leadership. So there’s a lot of science and philosophy and connecting it to what they’re already knowing and doing around leadership, so it’s not just random. And when I go into organizations, we actually do it around their actual projects, so they can see in real time how they generate more ideas. I’d say the biggest transformation occurs when you see somebody who – because often the resistance is because somewhere in their history, or whether it was through they’ve been learning or their teacher or parent, they’ve been socialized, traumatized or educated into believing they’re not creative, or into believing their creativity doesn’t matter. So they’ve shut it down. And that’s what the resistance often is. The resistance is just a discomfort in disguise. And when you get people to start to safely, in a safe environment, open up to that creativity, they have it. It’s not a skill to be taught. You’ve got nature on your side. It all comes back.
And what happens is, I see people going from, “This is silly, doesn’t have any relevance to the workplace,” to remembering how creative they are, accessing their creative wellspring, which leads them to feeling empowerment. “Oh, I have more options and choices in everything in my everyday work than I ever thought. I’m not trapped by only what I know. I don’t have to cling onto what worked before. I’m more adaptive to change. I’m more responsive. I’m more comfortable navigating the unknown because I have more confidence in my ability to generate new ideas, to facilitate new ideas, to respond in new and novel and different ways.” And so I’d say the biggest change I see is someone come in kind of stiff and resistant and leaving they’re so thankful to have connected to their creativity, which means options, choices and new possibilities.
Halelly: Fabulous. And you get to do this for a living, right? That’s so cool.
Michelle: I know, I’m so grateful.
Halelly: That is awesome. I hate that the time is running away from us because I want to keep going, but tell us, before we wrap up, what’s new and exciting for you? What’s on your horizon that you’re super charged up about?
Michelle: Well, sure. So I am currently working on a book. And it is called Pattern Breaks: A Facilitator’s Guide to Cultivating Creativity, that I expect to be done this year. And it’s focused both on leaders and facilitators who want to facilitate other people’s creativity, but also on becoming a creative leader and creative facilitator. And being more creative, adaptive, resilient, both in the design and the facilitation of whatever you’re leading or cultivating. And how to bring more creativity to the group. So that’s exciting for me. It’s called Pattern Breaks, because obviously that’s the core essence – it’s all about breaking your patterns and cultivating new ones. So that’s very exciting, and then I will be doing a creative facilitator workshop in D.C. If you get on my mailing list, then you’ll get all the information about where and when. But that’ll be in the spring.
Halelly: Awesome. And we’ll be sure to share with folks how to get on your mailing list, because you really only send out good stuff, actually. And I mean, really, I love how you communicate and I love the things that you create. So I highly recommend that people get in touch with you on a regular basis. And good luck with the book, that is really, really exciting. I can’t wait to see it when it comes out and to read it. So okay, let’s go into that actionable mode – what is one thing that is very actionable, that’s not too hard to do this week, even today, that you think leaders of all kinds can do to increase their creative leadership?
Halelly: And that’s a very philosophical kind of exercise, and also one I’d say that puts you squarely in the discomfort zone right away, if you even do it. If you can overcome the resistance to doing it.
Michelle: And I have a simple, non-philosophical one too.
Halelly: We’ll save that one for the next time. So this one, I’m thinking it’ll probably be helpful for people to maybe even journal about it, right? Like write down your thinking or something to help process it?
Michelle: Right. Yes. Write it down, because of all the … creativity begins with curiosity. And that’s true, I think, and most people would agree on that. And of all the things that when you reach, when you locate your limiting thoughts, and you get creative power, so yeah, writing it down, drawing it, you know, drawing it – however you want to do it – is helpful.
Halelly: Yes, and/or get a coach, like Michelle, to help you think through it, right? So if people want to get you as their coach or bring you in to do a workshop or bring you in to speak or read more of what you write, how can they keep in touch with you Michelle?
Michelle: Well, first my company is The Center for Creative Emergence, and you can go to www.creativeemergence.com and you can email me and you can just email info@creativeemergence.com and we can set up a time to talk and I can tell you about what I do and how I work and hear what you’re about. No obligation, free call, just to see what’s up and see if there’s anything I can do to offer you or your organization. Also, I want to offer your listeners, who are listening right now, a free e-book that we did. It’s a 97-page e-book where I interviewed 33 thought leaders, business leaders and applied creativity practitioners and we asked them the same six questions. One of them is “what is creative leadership?” And we got some amazing answers. And there’s also, each one was asked to provide a creative practice that you can apply. And this is all about creativity and innovation in the business world. It’s called Navigating the New Work Paradigm, and it’s all about applied creativity and innovation. So, for anyone listening, put free e-book in the subject line of the email and it’s info@creativeemergence.com and I will send you an e-book and hope you enjoy it.
Halelly: Wow, thank you so much. That is very cool. And that is a really useful resource. I also love the recorded version of your telesummit on creativity in business, which I listened to. I bought that and I’ve listened to it so much. It is so useful. So Michelle, thank you. Thank you for spending time with us on the TalentGrow Show. Thank you for sharing your knowledge. Thank you for the work that you do and for being a person who helps make the world truly, truly a better place. I really appreciate you and I hope that everyone, go out there and make today a great day.
Michelle: Thank you so much Halelly for having me. It’s great fun to talk to you.
Halelly: Thank you. Likewise. Make it great.
What did I tell you? Wasn’t that great? I love Michelle and I hope that you will take that action she suggests, even though it’s a little bit philosophical. Start journaling or writing or doodling or drawing about it, or just think while you’re driving somewhere, and start questioning why. Why are you making some of the assumptions that you’re making? Because a lot of times the ways that we react and the way that we behave are basically embedded in habits that we’ve built over time, and sometimes they’re not really very consciously selected. They’re just sort of ways we do things by force of nature or unintentionally or subconsciously. So, I think she has a really great suggestion that I hope you’ll take her up on.
If you haven’t yet subscribed to this podcast, please take a moment and go to iTunes or to Stitcher or to whatever app you use for playing podcasts and make sure that you’re subscribed to the podcast, because that makes sure that you never miss an episode. And, I really, really would appreciate if you took a couple of moments and shared what you liked about this podcast with someone – at least one other person – that you think could also benefit from this. Because you know, people are often not aware that this is a resource that’s out there. They’re listening to other stuff and maybe they’re just looking for just the right way to develop their leadership skills, and you would be helping them so much by telling them about this podcast. So if you’ve been enjoying it, please tell at least one other person. You can tell them when you get together with them for dinner, when you meet them in the coffee shop or you can email them or send them something on social media. If every person who enjoys this podcast does that, it’s going to help me reach so many more people, which is why I do this.
So, right before I close, I want to share with you one of the latest great iTunes recommendation or reviews that I received, the podcast received, and this one comes from Michael. He said, the title is “Gems of practical wisdom,” and he gave five stars. And he said, “Listening to Halelly Azulay’s podcast is time well spent. She does a great job identifying diverse experts and asking thoughtful questions that draw out gems of practical wisdom.” Michael, thank you so much for writing that review, taking the time, and for those kind words. I appreciate you. And I hope that others, you are listening and you’re enjoying this, if you don’t mind, you would be doing me a huge favor and maybe kind of paying back for this time by just leaving me one or two sentences review. You don’t have to, but I would sure appreciate it if you did. And as always, if there’s suggestions you have or requests that you have, you just need to let me know. Because I am here for you. So, let me know what you need, tell me how I can serve you and until then, make it a great day. Thanks for listening. Bye.
Announcer: Thanks for listening to the TalentGrow Show, where we help you develop your talent to become the kind of leader that people want to follow. For more information, visit TalentGrow.com.
Don't forget to LEAVE A RATING/REVIEW ON iTUNES! It’s easy to do. Thank you!!
Get my free guide, "10 Mistakes Leaders Make and How to Avoid Them" and receive my weekly newsletter full of actionable tips, links and ideas for taking your leadership and communication skills to the next level!
Section 7(a)(2) of the Endangered Species Act of 1973 divides responsibilities regarding the protection of endangered species between petitioner Secretary of the Interior and the Secretary of Commerce, and requires each federal agency to consult with the relevant Secretary to ensure that any action funded by the agency is not likely to jeopardize the continued existence or habitat of any endangered or threatened species. Both Secretaries initially promulgated a joint regulation extending § 7(a)(2)'s coverage to actions taken in foreign nations, but a subsequent joint rule limited the section's geographic scope to the United States and the high seas. Respondents, wildlife conservation and other environmental organizations, filed an action in the District Court, seeking a declaratory judgment that the new regulation erred as to § 7(a)(2)'s geographic scope, and an injunction requiring the Secretary of the Interior to promulgate a new rule restoring his initial interpretation. The Court of Appeals reversed the District Court's dismissal of the suit for lack of standing. Upon remand, on cross-motions for summary judgment, the District Court denied the Secretary's motion, which renewed his objection to standing, and granted respondents' motion, ordering the Secretary to publish a new rule. The Court of Appeals affirmed.
(b) Respondents did not demonstrate that they suffered an injury in fact. Assuming that they established that funded activities abroad threaten certain species, they failed to show that one or more of their members would thereby be directly affected apart from the members' special interest in the subject. See Sierra Club v. Morton, 405 U.S. 727, 735. Affidavits of members claiming an intent to revisit project sites at some indefinite future time, at which time they will presumably be denied the opportunity to observe endangered animals, do not suffice, for they do not demonstrate an "imminent" injury. Respondents also mistakenly rely on a number of other novel standing theories. Their theory that any person using any part of a contiguous ecosystem adversely affected by a funded activity has standing even if the activity is located far away from the area of their use is inconsistent with this Court's opinion in Lujan v. National Wildlife Federation, 497 U.S. 871. And they state purely speculative, nonconcrete injuries when they argue that suit can be brought by anyone with an interest in studying or seeing endangered animals anywhere on the globe and anyone with a professional interest in such animals. Pp. 562-567.
(c) The Court of Appeals erred in holding that respondents had standing on the ground that the statute's citizen-suit provision confers on all persons the right to file suit to challenge the Secretary's failure to follow the proper consultative procedure, notwithstanding their inability to allege any separate concrete injury flowing from that failure. This Court has consistently held that a plaintiff claiming only a generally available grievance about government, unconnected with a threatened concrete interest of his own, does not state an Article III case or controversy. See, e.g., Fairchild v. Hughes, 258 U.S. 126, 129-130. Vindicating the public interest is the function of the Congress and the Chief Executive. To allow that interest to be converted into an individual right by a statute denominating it as such and permitting all citizens to sue, regardless of whether they suffered any concrete injury, would authorize Congress to transfer from the President to the courts the Chief Executive's most important constitutional duty, to "take Care that the Laws be faithfully executed," Art. II, § 3. Pp. 571-578.
SCALIA, J., announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III-A, and IV, in which REHNQUIST, C.J., and WHITE, KENNEDY, SOUTER, and THOMAS, JJ., joined, and an opinion with respect to Part III-B, in which REHNQUIST, C.J., and WHITE and THOMAS, JJ., joined. KENNEDY, J., filed an opinion concurring in part and concurring in the judgment, in which SOUTER, J., joined, post, p. 579. STEVENS, J., filed an opinion concurring in the judgment, post, [p557] p. 581. BLACKMUN, J., filed a dissenting opinion, in which O'CONNOR, J., joined, post, p. 589.
JUSTICE SCALIA delivered the opinion of the Court with respect to Parts I, II, III-A, and IV, and an opinion with respect to Part III-B in which the CHIEF JUSTICE, JUSTICE WHITE, and JUSTICE THOMAS join.
This case involves a challenge to a rule promulgated by the Secretary of the Interior interpreting § 7 of the Endangered [p558] Species Act of 1973 (ESA), 87 Stat. 884, 892, as amended, 16 U.S.C. § 1536 in such fashion as to render it applicable only to actions within the United States or on the high seas. The preliminary issue, and the only one we reach, is whether the respondents here, plaintiffs below, have standing to seek judicial review of the rule.
Each Federal agency shall, in consultation with and with the assistance of the Secretary [of the Interior], insure that any action authorized, funded, or carried out by such agency . . . is not likely to jeopardize the continued existence of any endangered species or threatened species or result in the destruction or adverse modification of habitat of such species which is determined by the Secretary, after consultation as appropriate with affected States, to be critical.
In 1978, the Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS), on behalf of the Secretary of the Interior and the Secretary of Commerce, respectively, promulgated a joint regulation stating that the obligations imposed by § 7(a)(2) extend to actions taken in foreign nations. 43 Fed.Reg. 874 (1978). The next year, however, the Interior Department began to reexamine its position. Letter from Leo Kuliz, Solicitor, Department of the Interior, to Assistant Secretary, Fish and Wildlife and Parks, Aug. 8, 1979. A revised joint regulation, reinterpreting [p559] § 7(a)(2) to require consultation only for actions taken in the United States or on the high seas, was proposed in 1983, 48 Fed.Reg. 29990 (1983), and promulgated in 1986, 51 Fed.Reg. 19926 (1986); 50 C.F.R. 402.01 (1991).
Shortly thereafter, respondents, organizations dedicated to wildlife conservation and other environmental causes, filed this action against the Secretary of the Interior, seeking a declaratory judgment that the new regulation is in error as to the geographic scope of § 7(a)(2), and an injunction requiring the Secretary to promulgate a new regulation restoring the initial interpretation. The District Court granted the Secretary's motion to dismiss for lack of standing. Defenders of Wildlife v. Hodel, 658 F.Supp. 43, 47-48 (Minn.1987). The Court of Appeals for the Eighth Circuit reversed by a divided vote. Defenders of Wildlife v. Hodel, 851 F.2d 1035 (1988). On remand, the Secretary moved for summary judgment on the standing issue, and respondents moved for summary judgment on the merits. The District Court denied the Secretary's motion, on the ground that the Eighth Circuit had already determined the standing question in this case; it granted respondents' merits motion, and ordered the Secretary to publish a revised regulation. Defenders of Wildlife v. Hodel, 707 F.Supp. 1082 (Minn.1989). The Eighth Circuit affirmed. 911 F.2d 117 (1990). We granted certiorari, 500 U.S. 915 (1991).
whereas "the executive power [is] restrained within a narrower compass and . . . more simple in its nature," and "the judiciary [is] described by landmarks still less uncertain." The Federalist No. 48, p. 256 (Carey and McClellan eds.1990). One of those landmarks, setting apart the "Cases" and "Controversies" that are of the justiciable sort referred to in Article III -- "serv[ing] to identify those disputes which are appropriately resolved through the judicial process," Whitmore v. Arkansas, 495 U.S. 149, 155 (1990) -- is the doctrine of standing. Though some of its elements express merely prudential considerations that are part of judicial self-government, the core component of standing is an essential and unchanging part of the case-or-controversy requirement of Article III. See, e.g., Allen v. Wright, 468 U.S. 737, 751 (1984).
fairly . . . trace[able] to the challenged action of the defendant, and not . . . th[e] result [of] the independent action of some third party not before the court.
Simon v. Eastern Kentucky Welfare [p561] Rights Org., 426 U.S. 26, 41-42 (1976). Third, it must be "likely," as opposed to merely "speculative," that the injury will be "redressed by a favorable decision." Id. at 38, 43.
The party invoking federal jurisdiction bears the burden of establishing these elements. See FW/PBS, Inc. v. Dallas, 493 U.S. 215, 231 (1990); Warth, supra, 422 U.S. at 508. Since they are not mere pleading requirements, but rather an indispensable part of the plaintiff's case, each element must be supported in the same way as any other matter on which the plaintiff bears the burden of proof, i.e., with the manner and degree of evidence required at the successive stages of the litigation. See Lujan v. National Wildlife Federation, 497 U.S. 871, 883-889 (1990); Gladstone, Realtors v. Village of Bellwood, 441 U.S. 91, 114-115, and n. 31 (1979); Simon, supra, 426 U.S. at 45, n. 25; Warth, supra, 422 U.S. at 527, and n. 6 (Brennan, J., dissenting). At the pleading stage, general factual allegations of injury resulting from the defendant's conduct may suffice, for on a motion to dismiss, we "presum[e] that general allegations embrace those specific facts that are necessary to support the claim," National Wildlife Federation, supra, 497 U.S. at 889. In response to a summary judgment motion, however, the plaintiff can no longer rest on such "mere allegations," but must "set forth" by affidavit or other evidence "specific facts," Fed.Rule Civ.Proc. 56(e), which for purposes of the summary judgment motion will be taken to be true. And at the final stage, those facts (if controverted) must be "supported adequately by the evidence adduced at trial," Gladstone, supra, 441 U.S. at 115, n. 31.
ASARCO Inc. v. Kadish, 490 U.S. 605, 615 (1989) (opinion of KENNEDY, J.); see also Simon, supra, 426 U.S. at 41-42; and it becomes the burden of the plaintiff to adduce facts showing that those choices have been or will be made in such manner as to produce causation and permit redressability of injury. E.g., Warth, supra, 422 U.S. at 505. Thus, when the plaintiff is not himself the object of the government action or inaction he challenges, standing is not precluded, but it is ordinarily "substantially more difficult" to establish. Allen, supra, 468 U.S. at 758; Simon, supra, 426 U.S. at 44-45; Warth, supra, 422 U.S. at 505.
Respondents' claim to injury is that the lack of consultation with respect to certain funded activities abroad "increas[es] the rate of extinction of endangered and threatened species." Complaint ¶ 5, App. 13. Of course, the desire to use or observe an animal species, even for purely aesthetic purposes, is undeniably a cognizable interest for purpose of [p563] standing. See, e.g., Sierra Club v. Morton, 405 U.S. at 734.
But the "injury in fact" test requires more than an injury to a cognizable interest. It requires that the party seeking review be himself among the injured.
Id. at 734-735. To survive the Secretary's summary judgment motion, respondents had to submit affidavits or other evidence showing, through specific facts, not only that listed species were in fact being threatened by funded activities abroad, but also that one or more of respondents' members would thereby be "directly" affected apart from their "‘special interest' in th[e] subject." Id. at 735, 739. See generally Hunt v. Washington State Apple Advertising Comm'n, 432 U.S. 333, 343 (1977).
will suffer harm in fact as a result of [the] American . . . role . . . in overseeing the rehabilitation of the Aswan High Dam on the Nile . . . and [in] develop[ing] . . . Egypt's . . . Master Water Plan.
that threat, she concluded, harmed her because she "intend[s] to return to Sri Lanka in the future and hope[s] to be more fortunate in spotting at least the endangered elephant and leopard." Id. at 145-146. When Ms. Skilbred was asked [p564] at a subsequent deposition if and when she had any plans to return to Sri Lanka, she reiterated that "I intend to go back to Sri Lanka," but confessed that she had no current plans: "I don't know [when]. There is a civil war going on right now. I don't know. Not next year, I will say. In the future." Id. at 318.
"[p]ast exposure to illegal conduct does not, in itself, show a present case or controversy regarding injunctive relief . . . if unaccompanied by any continuing, present adverse effects."
Besides relying upon the Kelly and Skilbred affidavits, respondents propose a series of novel standing theories. The first, inelegantly styled "ecosystem nexus," proposes that any person who uses any part of a "contiguous ecosystem" adversely affected by a funded activity has standing even if the activity is located a great distance away. This approach, as the Court of Appeals correctly observed, is inconsistent with our opinion in National Wildlife Federation, which held that a plaintiff claiming injury from environmental damage [p566] must use the area affected by the challenged activity, and not an area roughly "in the vicinity" of it. 497 U.S. at 887-889; see also Sierra Club, 405 U.S. at 735. It makes no difference that the general-purpose section of the ESA states that the Act was intended, in part, "to provide a means whereby the ecosystems upon which endangered species and threatened species depend may be conserved," 16 U.S.C. § 1531(b). To say that the Act protects ecosystems is not to say that the Act creates (if it were possible) rights of action in persons who have not been injured in fact, that is, persons who use portions of an ecosystem not perceptibly affected by the unlawful action in question.
suits challenging, not specifically identifiable Government violations of law, but the particular programs agencies establish to carry out their legal obligations . . . [are], even when premised on allegations of several instances of violations of law, . . . rarely if ever appropriate for federal court adjudication.
Allen, 468 U.S. at 759-760.
[w]e are satisfied that an injunction requiring the Secretary to publish [respondents' desired] regulatio[n] . . . would result in consultation.
Defenders of Wildlife, 851 F.2d at 1042, 1043-1044. We do not know what would justify that confidence, particularly when the Justice Department (presumably after consultation with the agencies) has taken the position that the regulation is not binding. [n5] The [p571] short of the matter is that redress of the only injury-in-fact respondents complain of requires action (termination of funding until consultation) by the individual funding agencies; and any relief the District Court could have provided in this suit against the Secretary was not likely to produce that action.
A further impediment to redressability is the fact that the agencies generally supply only a fraction of the funding for a foreign project. AID, for example, has provided less than 10% of the funding for the Mahaweli Project. Respondents have produced nothing to indicate that the projects they have named will either be suspended, or do less harm to listed species, if that fraction is eliminated. As in Simon, 426 U.S. at 43-44, it is entirely conjectural whether the nonagency activity that affects respondents will be altered or affected by the agency activity they seek to achieve. [n6] There is no standing.
any person may commence [p572] a civil suit on his own behalf (A) to enjoin any person, including the United States and any other governmental instrumentality or agency . . . who is alleged to be in violation of any provision of this chapter.
[This is] not a case within the meaning of . . . Article III. . . . Plaintiff has [asserted] only the right, possessed by every citizen, to require that the Government be administered according to law and that the public moneys be not wasted. Obviously this general right does not entitle a private citizen to institute in the federal courts a suit. . . .
The party who invokes the power [of judicial review] must be able to show not only that the statute is invalid but that he has sustained or is immediately in danger of sustaining some direct injury as the result of its enforcement, and not merely that he suffers in some indefinite way in common with people generally. . . . Here, the parties plaintiff have no such case. . . . [T]heir complaint . . . is merely that officials of the executive department of the government are executing and will execute an act of Congress asserted to be unconstitutional; and this we are asked to prevent. To do so would be not to decide a judicial controversy, but to assume a position of authority over the governmental acts of another and coequal department, an authority which plainly we do not possess.
that to entitle a private individual to invoke the judicial power to determine the validity of executive or legislative action, he must show that he has sustained or is immediately in danger of sustaining a direct injury as the result of that action, and it is not sufficient that he has merely a general interest common to all members of the public.
Id. at 634. See also Doremus v. Board of Ed. of Hawthorne, 342 U.S. 429, 433-434 (1952) (dismissing taxpayer action on the basis of Frothingham).
standing alone, would adversely affect only the generalized interest of all citizens in constitutional governance. . . . We reaffirm Levitt in holding that standing to sue may not be predicated upon an interest of th[is] kind. . . .
assertion of a right to a particular kind of Government conduct, which the Government has violated by acting differently, cannot alone satisfy the requirements of Art. III without draining those requirements of meaning.
[t]his allegation raise[d] only the generalized interest of all citizens in constitutional governance . . . , and [was] an inadequate basis on which to grant . . . standing.
Whitmore, 495 U.S. at 160.
When Congress passes an Act empowering administrative agencies to carry on governmental activities, the power of those agencies is circumscribed by the authority granted. This permits the courts to participate in law enforcement entrusted to administrative bodies only to the extent necessary to protect justiciable individual rights against administrative action fairly beyond the granted powers. . . . This is very far from assuming that the courts are charged more than administrators or legislators with the protection of the rights of the people. Congress and the Executive supervise the acts of administrative agents. . . . But under Article III, Congress established courts to adjudicate cases and controversies as to claims of infringement of individual rights, whether by unlawful action of private persons or by the exertion of unauthorized administrative power.
[Statutory] broadening [of] the categories of injury that may be alleged in support of standing is a different matter from abandoning the requirement that the party seeking review must himself have suffered an injury.
405 U.S. at 738. Whether or not the principle set forth in Warth can be extended beyond that distinction, it is clear that in suits against the government, at least, the concrete injury requirement must remain.
We hold that respondents lack standing to bring this action, and that the Court of Appeals erred in denying the summary judgment motion filed by the United States. The opinion of the Court of Appeals is hereby reversed, and the cause remanded for proceedings consistent with this opinion.
By particularized, we mean that the injury must affect the plaintiff in a personal and individual way.
The dissent acknowledges the settled requirement that the injury complained of be, if not actual, then at least imminent -- but it contends that respondents could get past summary judgment because "a reasonable finder of fact could conclude . . . that . . . Kelly or Skilbred will soon return to the project sites." Post at 591. This analysis suffers either from a factual or from a legal defect, depending on what the "soon" is supposed to mean. If "soon" refers to the standard mandated by our precedents -- that the injury be "imminent," Whitmore v. Arkansas, 495 U.S. 149, 155 (1990) -- we are at a loss to see how, as a factual matter, the standard can be met by respondents' mere profession of an intent, some day, to return. But if, as we suspect, "soon" means nothing more than "in this lifetime," then the dissent has undertaken quite a departure from our precedents. Although "imminence" is concededly a somewhat elastic concept, it cannot be stretched beyond its purpose, which is to insure that the alleged injury is not too speculative for Article III purposes -- that the injury is "certainly impending," id. at 158 (emphasis added). It has been stretched beyond the breaking point when, as here, the plaintiff alleges only an injury at some indefinite future time, and the acts necessary to make the injury happen are at least partly within the plaintiff's own control. In such circumstances, we have insisted that the injury proceed with a high degree of immediacy, so as to reduce the possibility of deciding a case in which no injury would have occurred at all. See, e.g., id. at 156-160; Los Angeles v. Lyons, 461 U.S. 95, 102-106 (1983).
There is no substance to the dissent's suggestion that imminence is demanded only when the alleged harm depends upon "the affirmative actions of third parties beyond a plaintiff's control," post at 592. Our cases mention third-party-caused contingency, naturally enough; but they also mention the plaintiff's failure to show that he will soon expose himself to the injury, see, e.g., Lyons, supra, at 105-106; O'Shea v. Littleton, 414 U.S. 488, 497 (1974); Ashcroft v. Mattis, 431 U.S. 171, 172-173, n. 2 (1977) (per curiam). And there is certainly no reason in principle to demand evidence that third persons will take the action exposing the plaintiff to harm, while presuming that the plaintiff himself will do so.
Our insistence upon these established requirements of standing does not mean that we would, as the dissent contends, "demand . . . detailed descriptions" of damages, such as a "nightly schedule of attempted activities" from plaintiffs alleging loss of consortium. Post at 593. That case and the others posited by the dissent all involve actual harm; the existence of standing is clear, though the precise extent of harm remains to be determined at trial. Where there is no actual harm, however, its imminence (though not its precise extent) must be established.
against a party who fails to make a showing sufficient to establish the existence of an element essential to that party's case, and on which that party will bear the burden of proof at trial.
Celotex Corp. v. Catrett, 477 U.S. 317, 322 (1986). Respondent had to adduce facts, therefore, on the basis of which it could reasonably be found that concrete injury to its members was, as our cases require, "certainly impending." The dissent may be correct that the geographic remoteness of those members (here in the United States) from Sri Lanka and Aswan does not "necessarily" prevent such a finding -- but it assuredly does so when no further facts have been brought forward (and respondent has produced none) showing that the impact upon animals in those distant places will in some fashion be reflected here. The dissent's position to the contrary reduces to the notion that distance never prevents harm, a proposition we categorically reject. It cannot be that a person with an interest in an animal automatically has standing to enjoin federal threats to that species of animal, anywhere in the world. Were that the case, the plaintiff in Sierra Club, for example, could have avoided the necessity of establishing anyone's use of Mineral King by merely identifying one of its members interested in an endangered species of flora or fauna at that location. JUSTICE BLACKMUN's accusation that a special rule is being crafted for "environmental claims," post at 595, is correct, but he is the craftsman.
JUSTICE STEVENS, by contrast, would allow standing on an apparent "animal nexus" theory to all plaintiffs whose interest in the animals is "genuine." Such plaintiffs, we are told, do not have to visit the animals, because the animals are analogous to family members. Post at 583-584, and n. 2. We decline to join JUSTICE STEVENS in this Linnaean leap. It is unclear to us what constitutes a "genuine" interest; how it differs from a "non-genuine" interest (which nonetheless prompted a plaintiff to file suit); and why such an interest in animals should be different from such an interest in anything else that is the subject of a lawsuit.
We need not linger over the dissent's facially impracticable suggestion, post at 595-596, that one agency of the government can acquire the power to direct other agencies by simply claiming that power in its own regulations and in litigation to which the other agencies are not parties. As for the contention that the other agencies will be "collaterally estopped" to challenge our judgment that they are bound by the Secretary of Interior's views, because of their participation in this suit, post at 596-597: whether or not that is true now, it was assuredly not true when this suit was filed, naming the Secretary alone. "The existence of federal jurisdiction ordinarily depends on the facts as they exist when the complaint is filed." Newman-Green, Inc. v. Alfonzo-Larrain, 490 U.S. 826, 830 (1989) (emphasis added). It cannot be that, by later participating in the suit, the State Department and AID retroactively created a redressability (and hence a jurisdiction) that did not exist at the outset.
The dissent's rejoinder that redressability was clear at the outset because the Secretary thought the regulation binding on the agencies, post at 598-599, n. 4, continues to miss the point: the agencies did not agree with the Secretary, nor would they be bound by a district court holding (as to this issue) in the Secretary's favor. There is no support for the dissent's novel contention, ibid., that Rule 19 of the Federal Rules of Civil Procedure, governing joinder of indispensable parties, somehow alters our longstanding rule that jurisdiction is to be assessed under the facts existing when the complaint is filed. The redressability element of the Article III standing requirement and the "complete relief" referred to by Rule 19 are not identical. Finally, we reach the dissent's contention, post at 599, n. 4, that, by refusing to waive our settled rule for purposes of this case, we have made "federal subject matter jurisdiction . . . a one-way street running the Executive Branch's way." That is so, we are told, because the Executive can dispel jurisdiction where it previously existed (by either conceding the merits or by pointing out that nonparty agencies would not be bound by a ruling), whereas a plaintiff cannot retroactively create jurisdiction based on postcomplaint litigation conduct. But any defendant, not just the government, can dispel jurisdiction by conceding the merits (and presumably thereby suffering a judgment) or by demonstrating standing defects. And permitting a defendant to point out a preexisting standing defect late in the day is not remotely comparable to permitting a plaintiff to establish standing on the basis of the defendant's litigation conduct occurring after standing is erroneously determined.
Seizing on the fortuity that the case has made its way to this Court, JUSTICE STEVENS protests that no agency would ignore "an authoritative construction of the [ESA] by this Court." Post at 585. In that, he is probably correct; in concluding from it that plaintiffs have demonstrated redressability, he is not. Since, as we have pointed out above, standing is to be determined as of the commencement of suit, since at that point it could certainly not be known that the suit would reach this Court and since it is not likely that an agency would feel compelled to accede to the legal view of a district court expressed in a case to which it was not a party, redressability clearly did not exist.
The dissent criticizes us for "overlook[ing]" memoranda indicating that the Sri Lankan government solicited and required AID's assistance to mitigate the effects of the Mahaweli Project on endangered species, and that the Bureau of Reclamation was advising the Aswan Project. Post at 600-601. The memoranda, however, contain no indication whatever that the projects will cease or be less harmful to listed species in the absence of AID funding. In fact, the Sri Lanka memorandum suggests just the opposite: it states that AID's role will be to mitigate the "‘negative impacts to the wildlife,'" post at 600, which means that the termination of AID funding would exacerbate respondent's claimed injury.
There is this much truth to the assertion that "procedural rights" are special: the person who has been accorded a procedural right to protect his concrete interests can assert that right without meeting all the normal standards for redressability and immediacy. Thus, under our case law, one living adjacent to the site for proposed construction of a federally licensed dam has standing to challenge the licensing agency's failure to prepare an Environmental Impact Statement, even though he cannot establish with any certainty that the Statement will cause the license to be withheld or altered, and even though the dam will not be completed for many years. (That is why we do not rely, in the present case, upon the Government's argument that, even if the other agencies were obliged to consult with the Secretary, they might not have followed his advice.) What respondents' "procedural rights" argument seeks, however, is quite different from this: standing for persons who have no concrete interests affected -- persons who live (and propose to live) at the other end of the country from the dam.
post at 605. If we understand this correctly, it means that the government's violation of a certain (undescribed) class of procedural duty satisfies the concrete injury requirement by itself, without any showing that the procedural violation endangers a concrete interest of the plaintiff (apart from his interest in having the procedure observed). We cannot agree. The dissent is unable to cite a single case in which we actually found standing solely on the basis of a "procedural right" unconnected to the plaintiff's own concrete harm. Its suggestion that we did so in Japan Whaling Association, supra, and Robertson v. Methow Valley Citizens Council, 490 U.S. 332 (1989), post at 602-603, 605, is not supported by the facts. In the former case, we found that the environmental organizations had standing because the "whalewatching and studying of their members w[ould] be adversely affected by continued whale harvesting," see 478 U.S. at 230-231, n. 4; and in the latter, we did not so much as mention standing, for the very good reason that the plaintiff was a citizens' council for the area in which the challenged construction was to occur, so that its members would obviously be concretely affected, see Methow Valley Citizens Council v. Regional Forester, 833 F.2d 810, 812-813 (CA9 1987).
[p]laintiffs . . . demonstrate a "personal stake in the outcome." . . . Abstract injury is not enough. The plaintiff must show that he "has sustained or is immediately in danger of sustaining some direct injury" as the result of the challenged official conduct, and the injury or threat of injury must be both "real and immediate," not "conjectural" or "hypothetical."
Los Angeles v. Lyons, 461 U.S. 95, 101-102 (1983) (citations omitted).
While it may seem trivial to require that Mss. Kelly and Skilbred acquire airline tickets to the project sites or announce a date certain upon which they will return, see ante at 564, this is not a case where it is reasonable to assume that the affiants will be using the sites on a regular basis, see Sierra Club v. Morton, supra, 405 U.S. at 735, n. 8, nor do the affiants claim to have visited the sites since the projects commenced. With respect to the Court's discussion of respondents' "ecosystem nexus," "animal nexus," and "vocational nexus" theories, ante at 565-567, I agree that, on this record, respondents' showing is insufficient to establish standing on any of these bases. I am not willing to foreclose the possibility, however, that, in different circumstances, a nexus theory similar to those proffered here might support a claim to standing. See Japan Whaling Assn. v. American Cetacean Soc., 478 U.S. 221, 231, n. 4 (1986) ("respondents . . . undoubtedly have alleged a sufficient ‘injury in fact' in that [p580] the whalewatching and studying of their members will be adversely affected by continued whale harvesting").
it does not, of its own force, establish that there is an injury in "any person" by virtue of any "violation." 16 U.S.C. § 1540(g)(1)(A).
the legal questions presented . . . will be resolved, not in the rarefied atmosphere of a debating society, but in a concrete factual context conducive to a realistic appreciation of the consequences of judicial action.
Valley Forge Christian College v. Americans United for Separation of Church and State, Inc., 454 U.S. 464, 472 (1982). In addition, the requirement of concrete injury confines the Judicial Branch to its proper, limited role in the constitutional framework of government.
Because I am not persuaded that Congress intended the consultation requirement in § 7(a)(2) of the Endangered Species Act of 1973 (ESA), 16 U.S.C. § 1536(a)(2), to apply to activities in foreign countries, I concur in the judgment of reversal. I do not, however, agree with the Court's conclusion [p582] that respondents lack standing because the threatened injury to their interest in protecting the environment and studying endangered species is not "imminent." Nor do I agree with the plurality's additional conclusion that respondents' injury is not "redressable" in this litigation.
In my opinion, a person who has visited the critical habitat of an endangered species, has a professional interest in preserving the species and its habitat, and intends to revisit them in the future has standing to challenge agency action that threatens their destruction. Congress has found that a wide variety of endangered species of fish, wildlife, and plants are of "aesthetic, ecological, educational, historical, recreational, and scientific value to the Nation and its people." 16 U.S.C. § 1531(a)(3). Given that finding, we have no license to demean the importance of the interest that particular individuals may have in observing any species or its habitat, whether those individuals are motivated by aesthetic enjoyment, an interest in professional research, or an economic interest in preservation of the species. Indeed, this Court has often held that injuries to such interests are sufficient to confer standing, [n1] and the Court reiterates that holding today. See ante at 562-563.
The Court nevertheless concludes that respondents have not suffered "injury in fact," because they have not shown that the harm to the endangered species will produce "imminent" injury to them. See ante at 564. I disagree. An injury to an individual's interest in studying or enjoying a species and its natural habitat occurs when someone (whether it be the government or a private party) takes action that harms that species and habitat. In my judgment, [p583] therefore, the "imminence" of such an injury should be measured by the timing and likelihood of the threatened environmental harm, rather than -- as the Court seems to suggest, ante at 564, and n. 2 -- by the time that might elapse between the present and the time when the individuals would visit the area if no such injury should occur.
assure that concrete adverseness which sharpens the presentation of issues upon which the court so largely depends for illumination of difficult . . . questions.
[a]bstract injury is not enough. It must be alleged that the plaintiff "has sustained or is immediately in danger of sustaining some direct injury" as the result of the challenged statute or official conduct. . . . The injury or threat of injury must be both "real and immediate," not "conjectural" or "hypothetical."
Consequently, we have denied standing to plaintiffs whose likelihood of suffering any concrete adverse effect from the challenged action was speculative. See, e.g., Whitmore v. Arkansas, 495 U.S. 149, 158-159 (1990); Los Angeles v. Lyons, 461 U.S. 95, 105 (1983); O'Shea, 414 U.S. at 497. In this case, however, the likelihood that respondents will be injured by the destruction of the endangered species is not speculative. If respondents are genuinely interested in the preservation of the endangered species and intend to study or observe these animals in the future, their injury will occur as soon as the animals are destroyed. Thus, the only potential [p584] source of "speculation" in this case is whether respondents' intent to study or observe the animals is genuine. [n2] In my view, Joyce Kelly and Amy Skilbred have introduced sufficient evidence to negate petitioner's contention that their claims of injury are "speculative" or "conjectural." As JUSTICE BLACKMUN explains, post at 591-592, a reasonable finder of fact could conclude, from their past visits, their professional backgrounds, and their affidavits and deposition testimony, that Ms. Kelly and Ms. Skilbred will return to the project sites and, consequently, will be injured by the destruction of the endangered species and critical habitat.
The plurality also concludes that respondents' injuries are not redressable in this litigation for two reasons. First, respondents have sought only a declaratory judgment that the Secretary of the Interior's regulation interpreting § 7(a)(2) to require consultation only for agency actions in the United States or on the high seas is invalid, and an injunction requiring him to promulgate a new regulation requiring consultation for agency actions abroad as well. But, the plurality opines, even if respondents succeed and a new regulation is [p585] promulgated, there is no guarantee that federal agencies that are not parties to this case will actually consult with the Secretary. See ante at 568-571. Furthermore, the plurality continues, respondents have not demonstrated that federal agencies can influence the behavior of the foreign governments where the affected projects are located. Thus, even if the agencies consult with the Secretary and terminate funding for foreign projects, the foreign governments might nonetheless pursue the projects and jeopardize the endangered species. See ante at 571. Neither of these reasons is persuasive.
We must presume that, if this Court holds that § 7(a)(2) requires consultation, all affected agencies would abide by that interpretation and engage in the requisite consultations. Certainly the Executive Branch cannot be heard to argue that an authoritative construction of the governing statute by this Court may simply be ignored by any agency head. Moreover, if Congress has required consultation between agencies, we must presume that such consultation will have a serious purpose that is likely to produce tangible results. As JUSTICE BLACKMUN explains, post at 599-601, it is not mere speculation to think that foreign governments, when faced with the threatened withdrawal of United States assistance, will modify their projects to mitigate the harm to endangered species.
Although I believe that respondents have standing, I nevertheless concur in the judgment of reversal because I am persuaded that the Government is correct in its submission that § 7(a)(2) does not apply to activities in foreign countries. As with all questions of statutory construction, the question whether a statute applies extraterritorially is one of congressional intent. Foley Bros., Inc. v. Filardo, 336 U.S. 281, 284-285 (1949). We normally assume that "Congress is primarily concerned with domestic conditions," id. at 285, and therefore presume that "‘legislation of Congress, unless a [p586] contrary intent appears, is meant to apply only within the territorial jurisdiction of the United States.'" EEOC v. Arabian American Oil Co., 499 U.S. 244, 248 (1991) (quoting Foley Bros., 336 U.S. at 285).
Each Federal agency shall, in consultation with and with the assistance of the Secretary [of the Interior or Commerce, as appropriate [n3] ], insure that any action authorized, funded, or carried out by such agency (hereinafter in this section referred to as an "agency action") is not likely to jeopardize the continued existence of any endangered species or threatened species or result in the destruction or adverse modification of habitat of such species which is determined by the Secretary, after consultation as appropriate with affected States, to be critical, unless such agency has been granted an exemption for such action by the Committee pursuant to subsection (h) of this section. . . .
16 U.S.C. § 1536(a)(2). Nothing in this text indicates that the section applies in foreign countries. [n4] Indeed, the only geographic reference in [p587] the section is in the "critical habitat" clause, [n5] which mentions "affected States." The Secretary of the Interior and the Secretary of Commerce have consistently taken the position that they need not designate critical habitat in foreign countries. See 42 Fed.Reg. 4869 (1977) (initial regulations of the Fish and Wildlife Service and the National Marine Fisheries Service on behalf of the Secretary of Interior and the Secretary of Commerce). Consequently, neither Secretary interprets § 7(a)(2) to require federal agencies to engage in consultations to insure that their actions in foreign countries will not adversely affect the critical habitat of endangered or threatened species.
That interpretation is sound, and, in fact, the Court of Appeals did not question it. [n6] There is, moreover, no indication that Congress intended to give a different geographic scope to the two clauses in § 7(a)(2). To the contrary, Congress recognized that one of the "major causes" of extinction of [p588] endangered species is the "destruction of natural habitat." S.Rep. No. 93-307, p. 2 (1973); see also H.Rep. No. 93-412, p. 2 (1973), U.S.Code Cong. & Admin.News 1973, pp. 2989, 2990; TVA v. Hill, 437 U.S. 153, 179 (1978). It would thus be illogical to conclude that Congress required federal agencies to avoid jeopardy to endangered species abroad, but not destruction of critical habitat abroad.
any foreign country (with its consent) . . . in the development and management of programs in that country which [are] . . . necessary or useful for the conservation of any endangered species or threatened species listed by the Secretary pursuant to section 1533 of this title.
16 U.S.C. § 1537(a). It also directs the Secretary of Interior, "through the Secretary of State," to "encourage" foreign countries to conserve fish and wildlife and to enter into bilateral or multilateral agreements. § 1537(b). Section 9 makes it unlawful to import endangered species into (or export them from) the United States or to otherwise traffic in endangered species "in interstate or foreign commerce." §§ 1538(a)(1)(A), (E), (F). Congress thus obviously thought about endangered species abroad and devised specific sections of the ESA to protect them. In this context, the absence of any explicit statement that the consultation requirement is applicable to agency actions in foreign countries suggests that Congress did not intend that § 7(a)(2) apply extraterritorially.
In short, a reading of the entire statute persuades me that Congress did not intend the consultation requirement in § 7(a)(2) to apply to activities in foreign countries. Accordingly, notwithstanding my disagreement with the Court's disposition of the standing question, I concur in its judgment.
See, e.g., Sierra Club v. Morton, 405 U.S. 727, 734 (1972); United States v. Students Challenging Regulatory Agency Procedures (SCRAP), 412 U.S. 669, 686-687 (1973); Japan Whaling Assn. v. American Cetacean Society, 478 U.S. 221, 230-231, n. 4 (1986).
not fall indiscriminately upon every citizen. The alleged injury will be felt directly only by those who use [the area,] and for whom the aesthetic and recreational values of the area will be lessened. . . .
Thus, respondents would not be injured by the challenged projects if they had not visited the sites or studied the threatened species and habitat. But, as discussed above, respondents did visit the sites; moreover, they have expressed an intent to do so again. This intent to revisit the area is significant evidence tending to confirm the genuine character of respondents' interest, but I am not at all sure that an intent to revisit would be indispensable in every case. The interest that confers standing in a case of this kind is comparable, though by no means equivalent, to the interest in a relationship among family members that can be immediately harmed by the death of an absent member, regardless of when, if ever, a family reunion is planned to occur. Thus, if the facts of this case had shown repeated and regular visits by the respondents, cf. ante at 579 (opinion of KENNEDY, J.), proof of an intent to revisit might well be superfluous.
the Secretary of the Interior or the Secretary of Commerce as program responsibilities are vested pursuant to the provisions of Reorganization Plan Numbered 4 of 1970.
marine species are under the jurisdiction of the Secretary of Commerce, and all other species are under the jurisdiction of the Secretary of the Interior.
51 Fed.Reg. 19926 (1986) (preamble to final regulations governing interagency consultation promulgated by the Fish and Wildlife Service and the National Marine Fisheries Service on behalf of the Secretary of the Interior and the Secretary of Commerce).
Respondents point out that the duties in § 7(a)(2) are phrased in broad, inclusive language: "Each Federal agency" shall consult with the Secretary and insure that "any action" does not jeopardize "any endangered or threatened species" or destroy or adversely modify the "habitat of such species." See Brief for Respondents 36; 16 U.S.C. § 1536(a)(2). The Court of Appeals correctly recognized, however, that such inclusive language, by itself, is not sufficient to overcome the presumption against the extraterritorial application of statutes. 911 F.2d 117, 122 (CA8 1990); see also Foley Bros., Inc. v. Filardo, 336 U.S. 281, 282, 287-288 (1949) (statute requiring an eight-hour day provision in "‘[e]very contract made to which the United States . . . is a party'" is inapplicable to contracts for work performed in foreign countries).
Section 7(a)(2) has two clauses which require federal agencies to consult with the Secretary to insure that their actions (1) do not jeopardize threatened or endangered species (the "endangered species clause"), and (2) are not likely to destroy or adversely affect the habitat of such species (the "critical habitat clause").
Instead, the Court of Appeals concluded that the endangered species clause and the critical habitat clause are "severable," at least with respect to their "geographical scope," so that the former clause applies extraterritorially even if the latter does not. 911 F.2d at 125. Under this interpretation, federal agencies must consult with the Secretary to insure that their actions in foreign countries are not likely to threaten any endangered species, but they need not consult to insure that their actions are not likely to destroy the critical habitats of these species. I cannot subscribe to the Court of Appeals' strained interpretation, for there is no indication that Congress intended to give such vastly different scope to the two clauses in § 7(a)(2).
encouraging the States . . . to develop and maintain conservation programs which meet national and international standards is a key to meeting the Nation's international commitments. . . .
16 U.S.C. §§ 1531(4), (5). The Court of Appeals read these findings as indicative of a congressional intent to make § 7(a)(2)'s consultation requirement applicable to agency action abroad. See 911 F.2d at 122-123. I am not persuaded, however, that such a broad congressional intent can be gleaned from these findings. Instead, I think the findings indicate a more narrow congressional intent that the United States abide by its international commitments.
I part company with the Court in this case in two respects. First, I believe that respondents have raised genuine issues of fact -- sufficient to survive summary judgment -- both as to injury and as to redressability. Second, I question the Court's breadth of language in rejecting standing for "procedural" injuries. I fear the Court seeks to impose fresh limitations on the constitutional authority of Congress to allow [p590] citizen-suits in the federal courts for injuries deemed "procedural" in nature. I dissent.
function is not [it]self to weigh the evidence and determine the truth of the matter, but to determine whether there is a genuine issue for trial.
Were the Court to apply the proper standard for summary judgment, I believe it would conclude that the sworn affidavits and deposition testimony of Joyce Kelly and Amy Skilbred advance sufficient facts to create a genuine issue for trial concerning whether one or both would be imminently harmed by the Aswan and Mahaweli projects. In the first instance, as the Court itself concedes, the affidavits contained facts making it at least "questionable" (and therefore within the province of the factfinder) that certain agency-funded projects threaten listed species. [n1] Ante at 564. The only remaining issue, then, is whether Kelly and Skilbred have shown that they personally would suffer imminent harm.
I think a reasonable finder of fact could conclude from the information in the affidavits and deposition testimony that either Kelly or Skilbred will soon return to the project sites, thereby satisfying the "actual or imminent" injury standard. The Court dismisses Kelly's and Skilbred's general statements [p592] that they intended to revisit the project sites as "simply not enough." Ibid. But those statements did not stand alone. A reasonable finder of fact could conclude, based not only upon their statements of intent to return, but upon their past visits to the project sites, as well as their professional backgrounds, that it was likely that Kelly and Skilbred would make a return trip to the project areas. Contrary to the Court's contention that Kelly's and Skilbred's past visits "proves nothing," ibid., the fact of their past visits could demonstrate to a reasonable factfinder that Kelly and Skilbred have the requisite resources and personal interest in the preservation of the species endangered by the Aswan and Mahaweli projects to make good on their intention to return again. Cf. Los Angeles v. Lyons, 461 U.S. 95, 102 (1983) ("Past wrongs were evidence bearing on whether there is a real and immediate threat of repeated injury") (internal quotations omitted). Similarly, Kelly's and Skilbred's professional backgrounds in wildlife preservation, see App. 100, 144, 309-310, also make it likely -- at least far more likely than for the average citizen -- that they would choose to visit these areas of the world where species are vanishing.
By requiring a "description of concrete plans" or "specification of when the some day [for a return visit] will be," ante at 564, the Court, in my view, demands what is likely an empty formality. No substantial barriers prevent Kelly or Skilbred from simply purchasing plane tickets to return to the Aswan and Mahaweli projects. This case differs from other cases in which the imminence of harm turned largely on the affirmative actions of third parties beyond a plaintiff's control. See Whitmore v. Arkansas, 495 U.S. 149, 155-156 (1990) (harm to plaintiff death-row inmate from fellow inmate's execution depended on the court's one day reversing plaintiff's conviction or sentence and considering comparable sentences at resentencing); Los Angeles v. Lyons, 461 U.S. at 105 (harm dependent on police's arresting plaintiff again [p593] and subjecting him to chokehold); Rizzo v. Goode, 423 U.S. 362, 372 (1976) (harm rested upon "what one of a small unnamed minority of policemen might do to them in the future because of that unknown policeman's perception of departmental disciplinary procedures"); O'Shea v. Littleton, 414 U.S. 488, 495-498 (1974) (harm from discriminatory conduct of county magistrate and judge dependent on plaintiffs' being arrested, tried, convicted, and sentenced); Golden v. Zwickler, 394 U.S. 103, 109 (1969) (harm to plaintiff dependent on a former Congressman's (then serving a 14-year term as a judge) running again for Congress). To be sure, a plaintiff's unilateral control over his or her exposure to harm does not necessarily render the harm nonspeculative. Nevertheless, it suggests that a finder of fact would be far more likely to conclude the harm is actual or imminent, especially if given an opportunity to hear testimony and determine credibility.
The Court also concludes that injury is lacking, because respondents' allegations of "ecosystem nexus" failed to demonstrate sufficient proximity to the site of the environmental harm. Ante at 565-566. To support that conclusion, the Court mischaracterizes our decision in Lujan v. National Wildlife Federation, 497 U.S. 871 (1990), as establishing a general rule that "a plaintiff claiming injury from environmental damage must use the area affected by the challenged activity." Ante at 565-566. In National Wildlife Federation, the Court required specific geographical proximity because of the particular type of harm alleged in that case: harm to the plaintiff's visual enjoyment of nature from mining activities. Id., 497 U.S. at 888. One cannot suffer from the sight of a ruined landscape without being close enough to see the sites actually being mined. Many environmental injuries, however, cause harm distant from the area immediately affected by the challenged action. Environmental destruction may affect animals traveling over vast geographical ranges, see, e.g., Japan Whaling Assn. v. American Cetacean Soc., 478 U.S. 221 (1986) (harm to American whale watchers from Japanese whaling activities), or rivers running long geographical courses, see, e.g., Arkansas v. Oklahoma, 503 U.S. 91 (1992) (harm to Oklahoma residents from wastewater treatment plant 39 miles from border). It cannot seriously be contended that a litigant's failure to use the precise or exact site where animals are slaughtered or where toxic waste is dumped into a river means he or she cannot show injury.
The Court also rejects respondents' claim of vocational or professional injury. The Court says that it is "beyond all reason" that a zoo "keeper" of Asian elephants would have standing to contest his government's participation in the eradication of all the Asian elephants in another part of the world. Ante at 566. I am unable to see how the distant location of the destruction necessarily (for purposes of ruling [p595] at summary judgment) mitigates the harm to the elephant keeper. If there is no more access to a future supply of the animal that sustains a keeper's livelihood, surely there is harm.
to foreclose the possibility . . . that, in different circumstances, a nexus theory similar to those proffered here might support a claim to standing.
Ante at 579 (KENNEDY, J., concurring in part and concurring in the judgment).
A plurality of the Court suggests that respondents have not demonstrated redressability: a likelihood that a court ruling in their favor would remedy their injury. Duke Power Co. v. Carolina Environmental Study Group, Inc., 438 U.S. 59, 74-75, and n. 20 (1978) (plaintiff must show "substantial likelihood" that relief requested will redress the injury). The plurality identifies two obstacles. The first is that the "action agencies" (e.g., the Agency for International Development) cannot be required to undertake consultation with petitioner Secretary, because they are not directly bound as parties to the suit, and are otherwise not indirectly bound by being subject to petitioner Secretary's regulation. Petitioner, however, officially and publicly has taken the position that his regulations regarding consultation under § 7 of the Act are binding on action agencies. 50 CFR § 402.14(a) (1991). [n2] And he has previously [p596] taken the same position in this very litigation, having stated in his answer to the complaint that petitioner "admits the Fish and Wildlife Service (FWS) was designated the lead agency for the formulation of regulations concerning section 7 of the ESA." App. 246. I cannot agree with the plurality that the Secretary (or the Solicitor General) is now free, for the convenience of this appeal, to disavow his prior public and litigation positions. More generally, I cannot agree that the Government is free to play "Three-Card Monte" with its description of agencies' authority to defeat standing against the agency given the lead in administering a statutory scheme.
Emphasizing that none of the action agencies are parties to this suit (and having rejected the possibility of their being indirectly bound by petitioner's regulation), the plurality concludes that "there is no reason they should be obliged to honor an incidental legal determination the suit produced." Ante at 569. I am not as willing as the plurality is to assume that agencies at least will not try to follow the law. Moreover, I wonder if the plurality has not overlooked the extensive involvement from the inception of this litigation by the Department of State and the Agency for International Development. [n3] Under [p597] principles of collateral estoppel, these agencies are precluded from subsequently relitigating the issues decided in this suit.
[O]ne who prosecutes or defends a suit in the name of another to establish and protect his own right, or who assists in the prosecution or defense of an action in aid of some interest of his own, and who does this openly to the knowledge of the opposing party, is as much bound by the judgment and as fully entitled to avail himself of it as an estoppel against an adverse party, as he would be if he had been a party to the record.
Souffront v. Compagnie des Sucreries, 217 U.S. 475, 487 (1910). This principle applies even to the Federal Government. In Montana v. United States, 440 U.S. 147 (1979), this Court held that the Government was estopped from relitigating in federal court the constitutionality of Montana's gross receipts tax, because that issue previously had been litigated in state court by an individual contractor whose litigation had been financed and controlled by the Federal Government.
Thus, although not a party, the United States plainly had a sufficient "laboring oar" in the conduct of the state court litigation to actuate principles of estoppel.
The second redressability obstacle relied on by the plurality is that "the [action] agencies generally supply only a fraction of the funding for a foreign project." Ante at 571. What this Court might "generally" take to be true does not eliminate the existence of a genuine issue of fact to withstand summary judgment. Even if the action agencies supply only a fraction of the funding for a particular foreign project, it remains at least a question for the finder of fact whether threatened withdrawal of that fraction would affect foreign government conduct sufficiently to avoid harm to listed species.
The plurality states that "AID, for example, has provided less than 10% of the funding for the Mahaweli project." Ibid. The plurality neglects to mention that this "fraction" amounts to $170 million, see App. 159, not so paltry a sum for a country of only 16 million people with a gross national product of less than $6 billion in 1986, when respondents filed [p600] the complaint in this action. Federal Research Division, Library of Congress, Sri Lanka: A Country Study (Area Handbook Series) xvi-xvii (1990).
Respondents have produced nothing to indicate that the projects they have named will . . . do less harm to listed species, if that fraction is eliminated.
Ante at 571. As an initial matter, the relevant inquiry is not, as the plurality suggests, what will happen if AID or other agencies stop funding projects, but what will happen if AID or other agencies comply with the consultation requirement for projects abroad. Respondents filed suit to require consultation, not a termination of funding. Respondents have raised at least a genuine issue of fact that the projects harm endangered species and that the actions of AID and other U.S. agencies can mitigate that harm.
The Sri Lanka government lacks the necessary finances to undertake any long-term management programs to avoid the negative impacts to the wildlife. The donor nations and agencies that are financing the [Mahaweli project] will be the key as to how successfully the wildlife is preserved. If wildlife problems receive the same level of attention as the engineering project, then the negative impacts to the environment can be alleviated. This means that there has to be long-term funding in sufficient amounts to stem the negative impacts of this project.
Id. at 216. [p601] I do not share the plurality's astonishing confidence that, on the record here, a factfinder could only conclude that AID was powerless to ensure the protection of listed species at the Mahaweli project.
As for the Aswan project, the record again rebuts the plurality's assumption that donor agencies are without any authority to protect listed species. Kelly asserted in her affidavit -- and it has not been disputed -- that the Bureau of Reclamation was "overseeing" the rehabilitation of the Aswan project. App. 101. See also id. at 65 (Bureau of Reclamation publication stating: "In 1982, the Egyptian government . . . requested that Reclamation serve as its engineering advisor for the nine-year [Aswan] rehabilitation project").
injury-in-fact requirement . . . [is] satisfied by congressional conferral upon all persons of an abstract, self-contained, noninstrumental "right" to have the Executive observe the procedures required by law.
Ante at 573. Whatever the Court might mean with that very broad language, it cannot be saying that "procedural injuries" as a class are necessarily insufficient for purposes of Article III standing.
Most governmental conduct can be classified as "procedural." Many injuries caused by governmental conduct, therefore, are categorizable at some level of generality as [p602] "procedural" injuries. Yet, these injuries are not categorically beyond the pale of redress by the federal courts. When the Government, for example, "procedurally" issues a pollution permit, those affected by the permittee's pollutants are not without standing to sue. Only later cases will tell just what the Court means by its intimation that "procedural" injuries are not constitutionally cognizable injuries. In the meantime, I have the greatest of sympathy for the courts across the country that will struggle to understand the Court's standardless exposition of this concept today.
transfer from the President to the courts the Chief Executive's most important constitutional duty, to "take Care that the Laws be faithfully executed," Art. II, sec. 3.
Ante at 576, 577. In fact, the principal effect of foreclosing judicial enforcement of such procedures is to transfer power into the hands of the Executive at the expense -- not of the courts -- but of Congress, from which that power originates and emanates.
The Court recently has considered two such procedurally oriented statutes. In Japan Whaling Assn. v. American Cetacean Society, 478 U.S. 221 (1986), the Court examined a [p603] statute requiring the Secretary of Commerce to certify to the President that foreign nations were not conducting fishing operations or trading which "diminis[h] the effectiveness" of an international whaling convention. Id. at 226. The Court expressly found standing to sue. Id. at 230-231, n. 4. In Robertson v. Methow Valley Citizens Council, 490 U.S. 332, 348 (1989), this Court considered injury from violation of the "action-forcing" procedures of the National Environmental Policy Act (NEPA), in particular the requirements for issuance of environmental impact statements.
a written statement setting forth the Secretary's opinion, and a summary of the information on which the opinion is based, detailing how the agency action affects the species or its critical habitat.
[t]his is not a case where plaintiffs [p604] are seeking to enforce a procedural requirement the disregard of which could impair a separate concrete interest of theirs.
To prevent Congress from conferring standing for "procedural injuries" is another way of saying that Congress may not delegate to the courts authority deemed "executive" in nature. Ante at 577 (Congress may not "transfer from the President to the courts the Chief Executive's most important constitutional duty, to ‘take Care that the Laws be faithfully executed,' Art. II, sec. 3"). Here Congress seeks not to delegate "executive" power, but only to strengthen the procedures it has legislatively mandated.
We have long recognized that the nondelegation doctrine does not prevent Congress from seeking assistance, within proper limits, from its coordinate Branches.
Touby v. United States, 500 U.S. 160, 165 (1991).
Congress does not violate the Constitution merely because it legislates in broad terms, leaving a certain degree of discretion to executive or judicial actors.
Ironically, this Court has previously justified a relaxed review of congressional delegation to the Executive on grounds that Congress, in turn, has subjected the exercise of that power to judicial review. INS v. Chadha, 462 U.S. 919, 953-954, n. 16 (1983); American Power & Light Co. v. SEC, 329 U.S. at 105-106. The Court's intimation today that procedural injuries are not constitutionally cognizable threatens this understanding upon which Congress has undoubtedly relied. In no sense is the Court's suggestion compelled by our "common understanding of what activities are appropriate to legislatures, to executives, and to courts." Ante at 560. In my view, it reflects an unseemly solicitude for an expansion of power of the Executive Branch.
In short, determining "injury" for Article III standing purposes is a fact-specific inquiry.
Typically, . . . the standing inquiry requires careful judicial examination of a complaint's allegations to ascertain whether the particular plaintiff is entitled to an adjudication of the particular claims asserted.
Allen v. Wright, 468 U.S. at 752. There may be factual circumstances in which a congressionally imposed procedural requirement is so insubstantially connected to the prevention of a substantive harm that it cannot be said to work any conceivable injury to an individual litigant. But, as a general matter, the courts owe substantial deference to Congress' substantive purpose in imposing a certain procedural requirement. In all events, "[o]ur separation of powers analysis does not turn on the labeling of an activity as ‘substantive' as opposed to ‘procedural.'" Mistretta v. United States, 488 U.S. 361, 393 (1989). There is no room for a per se rule or presumption excluding injuries labeled "procedural" in nature.
Marbury v. Madison, 1 Cranch 137, 163 (1803).
The magnitude of the Accelerated Mahaweli Development Program could have massive environmental impacts on such an insular ecosystem as the Mahaweli River system.
The Sri Lankan government lacks the necessary finances to undertake any long-term management programs to avoid the negative impacts to the wildlife.
Id. at 216. Finally, in an affidavit submitted by petitioner for purposes of this litigation, an AID official states that an AID environmental assessment "showed that the [Mahaweli project] could affect several endangered species." Id. at 159.
(a) Requirement for formal consultation. Each Federal agency shall review its actions at the earliest possible time to determine whether any action may affect listed species or critical habitat. If such a determination is made, formal consultation is required. . . .
For example, petitioner's motion before the District Court to dismiss the complaint identified four attorneys from the Department of State and AID (an agency of the Department of State) as "counsel" to the attorneys from the Justice Department in this action. One AID lawyer actually entered a formal appearance before the District Court on behalf of AID. On at least one occasion, petitioner requested an extension of time to file a brief, representing that "[a]n extension is necessary for the Department of Justice to consult with . . . the Department of State [on] the brief." See Brief for Respondents 31, n. 8. In addition, AID officials have offered testimony in this action.
The plurality now suggests that collateral estoppel principles can have no application here, because the participation of other agencies in this litigation arose after its inception. Borrowing a principle from this Court's statutory diversity jurisdiction cases and transferring it to the constitutional standing context, the Court observes: "The existence of federal jurisdiction ordinarily depends on the facts as they exist when the complaint is filed" (emphasis in original). Ante at 569, n. 4 (quoting Newman-Green, Inc. v. Alfonzo-Larrain, 490 U.S. 826, 830 (1989)). See also Mollan v. Torrance, 9 Wheat. 537, 539 (1824) (Marshall, C.J.). The plurality proclaims that "it cannot be" that later participation of other agencies in this suit retroactively created a jurisdictional issue that did not exist at the outset. Ante at 570, n. 4.
The plurality, however, overlooks at least three difficulties with this explanation. In the first place, assuming that the plurality were correct that events as of the initiation of the lawsuit are the only proper jurisdictional reference point, were the Court to follow this rule in this case, there would be no question as to the compliance of other agencies, because, as stated at an earlier point in the opinion: "When the Secretary promulgated the regulation here, he thought it was binding on the agencies." Ante at 569. This suit was commenced in October, 1986, just three months after the regulation took effect. App. 21; 51 Fed.Reg. 19926 (1986). As the plurality further admits, questions about compliance of other agencies with the Secretary's regulation arose only by later participation of the Solicitor General and other agencies in the suit. Ante at 569. Thus, it was, to borrow the plurality's own words, "assuredly not true when this suit was filed, naming the Secretary alone," ante at 569, n. 4, that there was any question before the District Court about other agencies being bound.
Third, the rule articulated in Newman-Green is that the existence of federal jurisdiction "ordinarily" depends on the facts at the initiation of the lawsuit. This is no ironclad per se rule without exceptions. Had the Solicitor General, for example, taken a position during this appeal that the § 7 consultation requirement does in fact apply extraterritorially, the controversy would be moot, and this Court would be without jurisdiction.
In the plurality's view, federal subject matter jurisdiction appears to be a one-way street running the Executive Branch's way. When the Executive Branch wants to dispel jurisdiction over an action against an agency, it is free to raise at any point in the litigation that other nonparty agencies might not be bound by any determinations of the one agency defendant. When a plaintiff, however, seeks to preserve jurisdiction in the face of a claim of nonredressability, the plaintiff is not free to point to the involvement of nonparty agencies in subsequent parts of the litigation. The plurality does not explain why the street runs only one way -- why some actions of the Executive Branch subsequent to initiation of a lawsuit are cognizable for jurisdictional purposes, but others simply are not.
More troubling still is the distance this one-way street carries the plurality from the underlying purpose of the standing doctrine. The purpose of the standing doctrine is to ensure that courts do not render advisory opinions rather than resolve genuine controversies between adverse parties. Under the plurality's analysis, the federal courts are to ignore their present ability to resolve a concrete controversy if, at some distant point in the past, it could be said that redress could not have been provided. The plurality perverts the standing inquiry.
Window Dressing/Get Lit! installations feature the transformation of empty storefronts into a symbiotic environment for the celebration of literary inspiration provided by our festival authors for visual art created by local talents. This year, artist Mallory Ware will be creating an exhibit inspired by Alexandra Teague’s The Principles Behind Flotation and artist Zach Grassi will be using Jason Rekulak’s The Impossible Fortress as his inspiration.
Both installations will be at the Ridpath Hotel during the entire month of April, and for some time beyond. The authors will be encouraged to visit the exhibit after their reading at Auntie’s Bookstore on Saturday at 3 pm, so please join us then for a reading/exhibit walk or check out the installations on your own during the week!
Come enjoy a special happy hour event at The Wandering Table that is sure to inspire. As part of our Get Lit! Inspired Partnerships, talented chef and restaurant owner Adam Hegsted, a semifinalist in 2016 for the prestigious James Beard Award, has used our festival authors as inspiration to create special appetizers and drinks that will be offered throughout the week, and during this unique happy hour event. Come enjoy these inspired dishes while you mingle with the literary movers and shakers of Spokane. Chef Hegsted is generously donating one dollar to Get Lit! for every inspired special sold, so please come down to support Get Lit! and experience a culinary/literary pairing not to be missed!
The top eight poets from the 2016-2017 Spokane Poetry Slam season compete against each other in an epic three-round slam. The top four poets from the night will make up the team that will represent Spokane at the National Poetry Slam in Denver, Colorado in August. The top scoring poet from the night earns the title of Spokane Poetry Slam Grand Slam Champion and will represent Spokane at the Individual World Poetry Slam, which will be held in Spokane in October.
Randy Henderson is the author of the darkly humorous The Family Arcana trilogy. The series begins with Finn Fancy Necromancy, in which we meet necromancer Finn Graymaraye. After serving twenty-five torturous years for a crime he did not commit, Finn is soon to be released from his disembodied imprisonment. But it would seem the same someone who framed him before is back--and up to their old tricks. What’s a necromancer to do? Finn’s adventure continues with Bigfootloose and Finn Fancy Free and Smells Like Finn Spirit. Henderson was a grand-prize winner of the Writers of the Future contest in 2014, graduated from the Clarion West workshop in 2009, and is a self-proclaimed milkshake connoisseur. His short fiction has appeared in publications such as Escape Pod and Realms of Fantasy.
How to make your external plot work with the inner transformation of your main characters to enhance the impact of both, and how to get unstuck when they don't play well together.
We are accepting a suggested donation of $5 for the writing workshop, though all are welcome. The workshop will begin at 7 p.m., following the reading. Interested writers may sign up for the workshop at the event, by e-mailing getlit@ewu.edu, or by calling 509-828-1498.
This poetry project, led by Spokane Poet Laureate Laura Read and poet Kathryn Smith, explores our experiences with the "unloved" characters of the animal kingdom via art by Jessica Wade. Through a two-session writing workshop, we explore our experiences with creatures often left on the margins of affection and then workshop our pieces. Poems from the workshop will appear in a limited edition Spark Central chapbook released during Get Lit! on Tuesday, April 18th. Workshop participants will receive a copy of the chapbook, which will be sold to the public for $10. All proceeds from these sales will go to Spark Central.
Poets perform their work in the form of a friendly competition, scored by an audience of judges. It’s an event that makes space for creative thinkers to explore the power of their words both written and spoken.
The Get Lit! poetry slams turn the spotlight on teens (ages 15-18) and college students of any age.* Participants may compete alone or in teams. Each participant should bring two poems, and each performance will be scored, following the same rules as Spokane Poetry Slam. Poets are judged on content, originality, and performance, and are limited to three minutes per round.
The poetry slams are hosted by EWU’s Writers in the Community, a program which allows graduate students to volunteer at area schools, correctional facilities, hospitals, shelters and other community organizations as creative writing teachers.
*Participants on the cusp of one age group may choose where they feel more comfortable competing. We don’t place language or content restrictions on the slams, so we leave this decision to the poets and their parents.
This special event begins with three talented undergraduate writers who will get the chance to sit down with widely published authors for one-on-one feedback, Writers’ Center style. Following this private event at 11 a.m., the authors will discuss the revision process with EWU Writers’ Center responder and poet Kristina Pfleeger. The feedback sessions will be private, but the panel at 12:00 is free and open to the public.
What do rock stars, Nobel laureates, bestselling novelists, astronauts, and attorneys have in common? A teacher changed their lives. Bruce and Holly Holbert have combined their creative talents to bring us the stories of teachers who have changed lives all around the country in an anthology of true, personal experiences called Thank You, Teacher. Bruce is a teacher of 30 years who holds an MFA in Creative Writing. Holly holds a degree in Education from Eastern Washington University. They will be joined during this event by Michael Copperman, author of the memoir Teacher, which considers the distance between the idealism of Teach for America’s creed that “One day, all children in this nation will have the opportunity to attain an excellent education and reach their full potential” and what it actually means to teach in America’s poorest and most troubled public schools.
in Seattle, Washington. She is the founder and current creative director of Bent, a writing institute for LGBTIQ people based in Seattle. In 2002, she was elected by the people and named by the city council as Seattle's Poet Populist, or poet of the people, and she won the Seattle Grand Slam Champion title in the same year. She holds an MFA from Vermont College in fiction writing, and an MSW from the University of Michigan in community organizing.
Laila Lalami is the winner of the American Book Award and the Arab American Book Award, and, among many other accolades, is a Pulitzer Prize finalist. Her highly acclaimed second novel, The Moor’s Account, is a fictional memoir tracing the journey of a Moroccan slave as he and a crew of Spanish explorers face starvation, disease, and other obstacles after landing in Florida. Born and raised in Morocco, Lalami is frequently concerned in her work with identity and culture, as well as contemporary issues facing Muslims and the Arab world. Her first book, Hope and Other Dangerous Pursuits, is a collection of short stories about immigrants attempting to escape Morocco for a better life. Her debut novel, Secret Son, explores culture, class, and individuality in a chaotic, sectarian world from a Moroccan teenager’s point of view. Also an essayist and columnist, she is a regular contributor for Newsweek, The Nation, and the Los Angeles Times. Lalami holds a Ph.D. in linguistics from the University of Southern California. A recipient of a British Council Fellowship, a Fulbright Fellowship, and a Guggenheim Fellowship, she currently teaches at the University of California at Riverside.
Young writers Lauren Gilmore (Outdancing the Universe), Ben Read, and Brittan Hart (editor of RiverSide Storybook) share how to go from good to exceptional. Listen as they discuss what useful workshopping looks like, how to find meaningful feedback and opportunities to grow your skills, how to keep writing when the "feeling" isn't there, and strategies to make time to write while balancing homework and extracurricular activities.
**NOTE: The printed festival guide shows this panel on Wednesday the 19th. This is incorrect. The panel is Thursday the 20th. Please contact us with any questions.
Join the Love & Outrage collective and the local writers featured in our Spring 2017 issue, "Resistance," for a night of poetry, prose, song, love, and outrage. The new quarterly zine will be available for purchase for $7 and back issues of Love & Outrage will be available for $5 while supplies last. More information and call for submissions available at www.loveandoutrage.org.
The Love & Outrage collective was founded in fall 2015 by poets who noticed a disconnect between art and activism in the Spokane community. Since then, L&O has published a quarterly zine of poetry, prose, and art on topics including patriarchy, nature, police, and government. After the 2016 elections, Love & Outrage formed a choir, which performs at rallies and in mischievous settings around the city. We exist to promote a counter culture of courage in an age of uncertainty and fear. For more information, visit www.loveandoutrage.org.
This event is all ages; however, there may be adult content and language.
$10; free to high school and college students with student ID.
In celebration of Christopher Howell’s new collection of poetry, Love’s Last Number, he will be joined by his friends and fellow poets Nance Van Winckel and Albert Goldbarth to read from their latest works. The poems in Love’s Last Number are deeply concerned with the nature of time and our relationship with it. As the publisher, Milkweed Editions, says, these poems get at the core of “what we do about memory, love, grief, war, and the contradictions implicit in the human search for meaning.” Alberto Rios describes reading Love’s Last Number as an experience of being “quietly drawn into an entire book of war poems, demonstrating the abidingly cruel relationship between human beings and the inexorable. It is a circumstance so quietly and powerfully vivified time and again.” Howell’s tenth collection of poetry marks the continuation of a talented career distinguished by appearances in anthologies and journals such as Antioch Review, Colorado Review, Crazyhorse, Denver Quarterly, Field, Gettysburg Review, Harper’s, Hudson Review, Iowa Review, Northwest Review, Poetry Northwest, Southern Review, and Volt. He is the recipient of three Pushcart Prizes and two National Endowment for the Arts fellowships.
Nance Van Winckel’s most recent poetry collection, Our Foreigner, won the Pacific Coast Poetry Series Prize. She often challenges the expectations of form in her work, as she does in Ever Yrs, a novel in the form of a scrapbook, and Book of No Ledge, an encyclopaedic collection of visual poems. Van Winckel is the recipient of two NEA poetry fellowships, three Pushcart Prizes, and many other awards.
Albert Goldbarth is the author of over twenty collections of poetry and three collections of essays. He has received prestigious fellowships and awards such as the Guggenheim Foundation Fellowship, the National Book Critics Circle Award, and the Mark Twain Poetry Award. His newest collection, The Loves and Wars of Relative Scale, engages perspective, proximity, and the intersection of large-scale universe and individual-scale joys and tragedies. Goldbarth delivers yet another dynamic collection to the discussion.
Founded in 1998, Lost Horse Press has continually published and promoted fine contemporary literature. Lost Horse Press will sell all of the books mentioned above at the event and will host a signing after the reading.
Queer-identifying authors will read from their work and discuss writers who have had the most influence on them as writers and queer women.
Aileen Keown Vaux earned her MFA in Creative Writing from the Inland Northwest Center for Writers at Eastern Washington University. Currently, she lives in Spokane, WA, where she serves as a Career Advisor for the College of Arts, Letters, and Education at EWU. She writes non-fiction and poetry, and is at work on a collection of poems inspired by her passion for county fairs and her experiences growing up in Central Washington.
Liz Rognes is a writer, musician, and teacher in Spokane, Washington. She is a singer/songwriter, composer, and multi-instrumentalist whose classical and pop musical influences range from folk to baroque to jazz. Her essays and poems have been featured in various publications, including Trestle Creek Review; Brain, Child: The Magazine for Thinking Mothers; and Railtown Almanac. She teaches at Eastern Washington University and lives in Spokane, Washington with her rock ‘n’ roll librarian and their son.
Ellie Kozlowski grew up in a working class family in the suburbs of Boston. Her work has appeared in several places including Stirring, Lesbilicious, and Knockout. She is currently working on a memoir about identity and, as always, amassing poems. She lives in Seattle with her partner and their rat.
Elissa Ball is a writer, Tarot reader, humorist, and performance poet originally from Yakima, WA. She writes a weekly astrology column called Space Witch for The Seattle Weekly. Her first book of poetry, The Punks Are Writing Love Songs, was published by Blue Begonia Press in 2012.
The panel will be moderated by Molly Priddy, a writer and editor who lives in Northwest Montana. Her bylines can be found in The Guardian, The Toast, and Autostraddle.
A panel about the importance of writing communities. Full description forthcoming.
We would like to thank SCC and the Hagan Foundation Center for the Humanities for their generous support.
Sideways: Memoir of a Misfit is the story of a Japanese American girl who was born in an Idaho concentration camp during World War II. The first chapter of Sideways was nominated for the Pushcart Prize and published in the spring 2014 issue of The New Orphic Review. Diana Morita Cole is the recipient of a grant from the Columbia Basin Trust and the Columbia Kootenay Cultural Alliance. Her reading will be followed by a screening of the documentary, Hidden Internment, the story of Art Shibayama, who was kidnapped from Peru, smuggled into the United States, and imprisoned in Crystal City, Texas in 1944. An update on the March public hearing of Shibayama's case before the Inter-American Commission on Human Rights will also be offered.
The program concludes with dramatic readings from Diana Morita Cole's new collection of stories regarding her family's dispersal from Hood River, Oregon and imprisonment in the Tule Lake and Minidoka concentration camps.
EWU Career Services is sponsoring a panel of six writers who have used their literary background and skills to catapult them into various careers. Moderated by Career Advisor and MFA alum, Aileen Keown Vaux, the panel will focus on practical advice for those hoping to put their literary arts degree to good use, and also how to snag a career in the literary arts with an educational background in another field.
William O’Daly – A 1981 graduate of EWU’s MFA in Creative Writing program, O’Daly taught literature and creative writing for a number of years before he became involved in technical and research writing.
Leesa Dean – Dean taught English and ESL for several years in different Canadian provinces before accepting a position as a core faculty member in the creative writing program at Selkirk College's School of University Arts.
Almeda Glenn Miller – Before graduating with an MFA in Creative Writing from EWU in 1998, Miller owned a bookstore. She is a core faculty member of Selkirk College's School of University Arts creative writing program and also edits for Big Bad Wolf Press.
Sheri Boggs – Currently the Youth Collection Development Librarian for the Spokane County Library District, Boggs previously spent nearly six years as arts and culture editor at the Inlander.
Scott Eubanks -- Eubanks has done technical writing for Spokane-area schools and businesses, and currently works as a content strategist for EWU’s digital communications team.
Brooke Matson -- Executive director and chief engineer at Spark Central, Matson taught for seven years at Mead Alternative High School, and has presented workshops to K-12 teachers in Washington and Idaho on Adverse Childhood Experiences (ACEs) and integrating human rights concepts into curriculum.
With all the buzz surrounding Bob Dylan’s receipt of the Nobel Prize in Literature, we brought together a collection of fantastic local singer-songwriters to dig into the conversation around the literary merit of thoughtful, carefully crafted music and lyrics. The panel will be moderated by musician, singer-songwriter, and teacher Liz Rognes.
Join us for readings from acclaimed writers associated with EWU’s undergraduate literary magazine, Northwest Boulevard. Our featured readers include Derek Annis, Jonathan Johnson, and Rachel Toor.
Poet John Rybicki will do a reading and discussion in Cheney, WA on the EWU campus. The reading and discussion will be in Showalter Hall, Room 109. This event is free and open to the public.
"We Are a Poem" is a staged reading of an original play adapted, by Jeff Sanders and Sara Goff, from the collected works of John Rybicki and Julie Moulds. It is a celebration of an indomitable love in the face of horrific suffering and loss. John Rybicki and Julie Moulds were not only partners in life but fellow poets in arms with their soulful and enraptured rendering of Julie’s long battle with cancer. Their writings will “break and mend your heart, then break it again.” This is a world premiere theatrical event that tells a true love story about two warrior poets who will sing themselves into your soul.
Come soak up the creativity at this dynamic, informal poetry salon. Originating in eighteenth century Paris, a salon gathers people together around discussions of literature, art, and philosophy. Each of the featured poets will read selections from their work, answer questions, and talk about the writing life. The discussion will be moderated by Spokane Poet Laureate Laura Read.
Most writers have a clear picture of who their hero(s) and heroine(s) are, but portraying their true emotional depth may take us a few drafts. In this workshop, we’ll explore how to maximize the emotional punch of early drafts and thereby write our way to the final product quicker. We'll learn tips for understanding the characters’ arcs earlier in the writing process, but still leave openings for surprises along the way. Handouts and practical exercises will teach you how to increase your productivity by spending a short bit of extra time to get to know your characters before you set them on their adventure. Whether you are a pantser, plotter, or a plantser, this workshop will give you the tools to get to your final draft quicker.
Attendees are encouraged to bring paper and pen, or a laptop for writing.
Regular writing practice can have the same health benefits as other forms of meditation and, as such, can be looked at as a means to improve the overall health of individuals and communities, whether that be a geographic community, workplace, or organization. Learn more about the personal benefits of writing practice and recommendations for setting up a writing practice group at your local library, community center, school, or place of business. Includes a prepared guide with writing prompts and instructions. Leading the workshop will be Paula Coomer, author of poetry collections Devil at the Crossroads and Nurses Who Love English, short story collection Summer of Government Cheese, and novels Dove Creek and Jagged Edge of the Sky.
Our Community Reading features Chelsea Martin, Joseph Edwin Haeger, and Spokane Poet Laureate Laura Read. Martin is a writer and comic artist from Santa Rosa, California. She is the author of four books: Everything Was Fine Until Whatever, The Really Funny Thing About Apathy, Kramer Sutra, and her small press bestseller, Even Though I Don’t Miss You. Martin has published her work with numerous magazines and journals, including Buzzfeed Books, Hobart, Fanzine, and Electric Literature. She is the founder and Creative Director of Universal Error. Joseph Edwin Haeger is the author of Learn to Swim (University of Hell Press, 2015). His writing has appeared in The Pacific NW Inlander, The Big Smoke, Hippocampus Magazine, and others. Laura Read is Spokane’s second poet laureate, appointed to the position in October of 2015. She is the author of one chapbook, The Chewbacca on Hollywood Boulevard Reminds Me of You, and one poetry collection, Instructions for My Mother’s Funeral. The reading will be followed by an open mic, allowing writers from the community of Spokane to come up and share their work. The reading will begin at 11:30 and go until 12:45.
What’s it like to publish locally, you ask? We’ve put together a panel of local writers to discuss the local publishing scene. The panel will feature Ellen Welcker, Tim Greenup, Simeon Mills, Kathryn Smith, and Ben Cartwright. This discussion will be moderated by Thom Caraway.
Founded by fiction writer Sharma Shields, Scablands Books is a fledgling boutique press based in Spokane. Scablands Books aims to publish strange, smart, innovative writing, with an emphasis on writers from the Inland Northwest. The published titles reflect the uncanny and unique landscape of the Channeled Scablands region. In 2016, Scablands Books published Ram Hands by Ellen Welcker, Without Warning by Tim Greenup, and Butcher Paper by Simeon Mills.
Sage Hill Press was launched in 2004 by Thom Caraway, then a graduating MFA candidate at Eastern Washington University. Collections have come out from Mike Dockins, Alan Botsford, Marci Rae Johnson, Jeffrey Tucker, and Ben Cartwright. Sage Hill has also published Railtown Almanac, a Spokane poetry anthology, a follow-up prose anthology, and All We Can Hold, an anthology of poems on motherhood.
The word “witness” means to testify: to tell the truth. To testify is an act of responsibility as well as an expression of faith. Rock & Sling is a literary journal of witness, published twice a year at Whitworth University in Spokane. Founded in 2004 by Susan Cowger, it came to Whitworth in 2010 under the direction of Thom Caraway as Editor-in-chief. Rock & Sling is a member of the CLMP and distributed nationwide by Ubiquity Distribution.
Writing and editing/working in publishing go hand in hand in the lives of these authors. This panel will focus on how they manage to balance their editing and writing, whether editing has improved their writing, and much more. If you’re a writer who is considering entering the world of editing and publishing, these authors can help. Panelists will include Almeda Glenn Miller, author of Tiger Dreams and Begin with the Corners, co-founder of Big Bad Wolf Press; Meghan McClure, poet and editor at Floating Bridge Press; Michael Schmeltzer, author of Blood Song and editor at Floating Bridge Press; Sharma Shields, author of The Sasquatch Hunter’s Almanac and founder/editor of Scablands Books; Alexandra Teague, author of The Wise and Foolish Builders and editor with Broadsided Press; and Jason Rekulak, publisher of Quirk Books and author of The Impossible Fortress. The panel will be moderated by Leesa Dean, author of Waiting for the Cyclone.
Undoubtedly, one of the greatest joys of reading is in its ability to take us somewhere as yet unknown to us. Join award-winning authors John Rybicki and Polly Buckingham for an afternoon of prose and poetry that isn’t afraid to ask the big questions.
The stories in Polly Buckingham’s collection "The Expense of a View" provide the kind of painful enlightenment that I would prefer to know more from fiction and less from personal experience. Her writing resonates with a dark simplicity reinforced by a sense that what happiness and love can be found must be treasured and hoarded as a precious commodity. These melancholy stories provide me with reassurance that I am not alone in wanting to examine wounds whether perpetrated by a senseless deity, wreaked by a blind source of justice or self-inflicted in an effort to resolve inner conflict.
A mother tries to coax her son into another CAT scan during a brief remission from his cancer. After passing around a serving of guilt, a couple abandons Thanksgiving dinner and is drenched by the Seattle rain. A father in Florida waits up late for fish to swarm while trying to accept the fact that his prodigal son will not return. The Oregon Coast is the setting for a story bemoaning lost love and another in which a lonely woman is drawn to a drug addict between binges. A young boy, neglected by his father, goes on a walkabout with his dog through the scablands of Eastern Washington.
These are stories with a sense of place where people live somewhere long enough so that a portion of that landscape is written into their lives, branding them with a sorrow that cannot be erased. Anton Chekhov wrote “At the door of every happy person there should be a man with a hammer whose knock would serve as a constant reminder of the existence of unfortunate people.” Polly Buckingham is such a person on the other side of the wall and "The Expense of a View" is her hammer.
Rybicki’s newest collection of poetry, When All the World Is Old, is a heart-wrenching confrontation with grief.
Free; books will be on sale.
Thanks to a generous grant from Spokane Arts, Spokane Poetry Slam is releasing a collection of poems featuring over 20 poets who have made Spokane into a poetry city. All sales of Uncensored: Spokane Poetry Slam will go directly towards supporting the Individual World Poetry Slam, coming to Spokane this October.
Join four romance authors for a discussion of what it’s like to write romance, why it is their favorite genre, and how the diversity of romance's sub-genres, characters, stories, and readers contributes to its strength and popularity. Whether you are an avid fan or just want to know more about this popular fiction genre that outsells Mystery, Fantasy, and Graphic Novels combined, this will be a fun and informative event. We promise shenanigans and door prizes.
Hear two talented authors read from their newest books, both published by Willow Springs Books in 2017. Winner of the 2015 Spokane Prize for Short Fiction, Glori Simmons, will read from her debut fiction collection. Simmons’s stories and characters spring from the dark corners of our psyches, revealing the fears and contradictions that give shape to unconditional love. Poet Michael McGriff will read from his chapbook Black Postcards, the latest addition to Willow Springs’ Surrealist Chapbook Series. McGriff's poems achieve a ghostly, dreamlike sense of loss, while reveling in the full beauty and awe of the human experience.
Hear excerpts from two hot new releases in fiction: The Principles Behind Flotation by Alexandra Teague and The Impossible Fortress by Jason Rekulak. While each work is unique, they share a special similarity. Jesica DeHart (former NW Bookseller, currently traveling the world and working in author/book publicity) describes that both books are set in the ’80s with characters who struggle with being taken seriously. These books have YA crossover appeal and will appeal to anyone who experienced the ’80s. Did you love Stranger Things? Have you been missing your parachute pants lately? Or maybe you just need another fix of the glory of adolescence in the ’80s? We see you, and this reading is our gift to you.
Please note the venue for this event is Auntie's Bookstore.
Meghan McClure and Michael Schmeltzer will read excerpts from their forthcoming collaborative nonfiction piece, A Single Throat Opens. McClure and Schmeltzer create a lyric exploration of addiction. They consider the hollow spaces of desire and the lack thereof once it turns sour, touching on a range of elements--from fermentation to growing trees, from rock bottom to family ties entangled by addiction. Although the book won’t debut until June 2017, it will be available for pre-order at the reading.
Get Lit! would like to thank the Mukogawa Fort Wright Institute and the Spokane chapter of the Japanese American Citizens League for their generous sponsorship.
Poet Ellen Welcker will workshop specific components of a literary arts submission packet, including a discussion of the purpose of a query letter with examples of successful ones, and guidance on how to craft successful artist statements and bios (for grant applications and general use). There will be time to create and workshop those materials as well. Please bring a notebook, a writing utensil, and any materials you already have drafted.
What does it mean to describe a piece of fiction as autobiographical or semi-autobiographical? What are the ethical implications of incorporating lived experience into a fictionalized story? Where does the desire to transform our lives into art arise from? This discussion will touch on both craft and compulsion, how and why we tell our stories.
A Low Frequency Oscillator (LFO) produces a sonic waveform that typically doesn’t reach into the range of human hearing. Though we can’t hear it, we can hear its effect on sounds we can hear. In a synthesizer keyboard, this is used to modulate some aspect of a sound and create subtle or not so subtle dynamics. In this class, we will explore this and other sonic tricks as metaphors for how to create more variation, and thus more emotional and intellectual engagement, in a poem. There will be a discussion on how subtle language is used to manipulate society and what the writer's role might be in encouraging free thought. Through writing prompts, conversation, and demonstrations we will probe the unseen worlds and find the pulse between image, metaphor, philosophy, culture, and music. Please bring a pen, pencil, crayon, computer, or other writing implement.
Good songwriting includes a keen awareness of both music and text. Songs have the capacity to impact a listener in multiple, simultaneous ways through the uses of melody, rhythm, phrasing, story, imagery, and so forth. This workshop will focus specifically on the role of lyrics in songwriting. We will consider examples of song lyrics from various genres that demonstrate different ways of incorporating rhythm, rhyme, other literary devices, and musical prosody, and we will practice exercises to help you write song lyrics. You do not have to be a musician to attend this workshop, but we will have a guitar available.
Graduate students from four regional MFA programs will join together to share fiction, nonfiction, and poetry. Readers from Boise State University include Ashley Barr, Kathryn Jensen, and Colin Uriah Johnson. Participants from the University of Montana include Sarah Aronson, Nate Duke, and Stephanie Pushaw. Students from the University of Idaho include Canese Jarboe, Ross Hargreaves, and Lauren Westerfield. And students from Eastern Washington University include Mary Leauna Christensen, Abigail Hancher, and Nahla Hoballah.
Thanks to Nyne Bar and Bistro for their generous sponsorship.
Songwriters share the stage, alternating with a slam poet, while visual artists create work live, joined by a mix of backing musicians, and the audience sits quietly a few feet away from a low stage, all coming together to create a unique evening of high-quality, collaborative arts. The Round was founded in Seattle, WA in 2005 and came to Spokane in 2014.
Line-up and tickets available via www.thebartlettspokane.com.
In what promises to be a cunning and hilarious evening full of double entendre, five writers will read original fiction based on Classic Cartoons. Each writer will choose their own source text and imagine what might happen if, say, Rainbow Brite and Darkwing Duck happened to catch each other's eye in a crowded bar. Readers will be Shawn Vestal, Sheri Boggs, Kris Dinnison, Travis Naught, and Rachel Mindell. The evening will be emceed by Aileen Keown Vaux.
We’re very excited to bring you a line-up of some of Spokane’s finest singer-songwriters for a benefit show. Come enjoy acoustic performances from singer-songwriters Scott Ryan Ingersoll, Sam Foley, Keleren Millham, and Liz Rognes. While we suggest a $10 donation that will help Get Lit! to fund new and exciting events in the future, we will happily accept a donation of any amount as an entry fee. We would like to thank The Bartlett for their generous sponsorship.
Justin Torres is the celebrated author of We the Animals (Mariner Books, 2012), a small novel about a family of five living in New York. But for all its concision, the novel does not lack in power. O Magazine describes the book by saying that “in stark prose, Torres shows us how one family grapples with a dangerous and chaotic love for each other, as well as what it means to become a man.” The novel earned Torres a spot on The National Book Foundation’s list of 2012’s 5 Under 35. Since the book’s debut, he has continued to publish shorter work in The New Yorker, Harper's, Granta, Tin House, The Washington Post, Glimmer Train, Flaunt, and other publications. Torres recently served as the Picador Guest Professor for Literature at the University of Leipzig in 2016. A graduate of the Iowa Writers' Workshop, he is a recipient of the Rolón United States Artist Fellowship in Literature, and is now a Wallace Stegner Fellow at Stanford. He has worked as a farmhand, a dog-walker, a creative writing teacher, and a bookseller.
Come hear professors Gregory Spatz, Sam Ligon, Christopher Howell, Nance Van Winckel, Jonathan Johnson, and Rachel Toor discuss topics of interest to writers in many stages of their careers. They will discuss many different topics including the road to publication, the value of writing programs, how to land a teaching position, and generally how to navigate the writing life. They will also discuss how the MFA at EWU is contributing to the vibrant literary community in Spokane, and how members of the community can get involved in MFA events, internships, etc. This discussion is sure to be both informative and engaging; the writers will be encouraged to read short pieces of their own work, or the work of writers who have been influential to their work. MFA graduate Ellie Kozlowski will moderate the panel.
Alumni from the EWU MFA Program will be reading their recent works at Barrister Winery for the We All Lived Here: An EWU MFA Alumni Reading and Reception on April 23rd from 4 p.m. to 6:30 p.m. The reading will be followed by a reception where mingling and buying books is encouraged. The readers will include Kimberly Lambright, author of the full-length collection of poetry Ultra-Cabin, winner of the 42 Miles Press Poetry Award; Almeda Glenn Miller, who has published her novel Tiger Dreams and her poetry collection Begin with the Corners; William O’Daly, translator of Pablo Neruda and poet with two chapbooks entitled The Whale in the Web and The Road to Isla Negra; Kristine Lloyd, who has been published in Slate, the New York Times column “Modern Love”, and elsewhere; Leyna Krow, the author of I’m Fine but You Appear to be Sinking; and Maya Jewell Zeller, author of Rust Fish and Yesterday, the Bees. The reading will be hosted by Yvonne Higgins Leach, author of Another Autumn, and Wendy Fox, author of The Pull of It and the Press 53 contest-winning collection The Seven Stages of Anger and Other Stories. Both the reading and the reception are free and open to the public.
On April 2-4, Foreign Minister of the Republic of Kyrgyzstan Erlan Abdyldaev will be in Moscow on a working visit, at the invitation of Foreign Minister Sergey Lavrov.
The ministers will exchange opinions on key issues of political, economic, military and humanitarian cooperation, including within the framework of the Eurasian Economic Union, and will discuss cooperation on the international stage, including at the CSTO, the CIS, the SCO, the UN and the OSCE. They will focus on regional security issues.
The Russian and Kyrgyz ministers will review their countries’ efforts to implement the agreements that were reached during the official visit by President of Russia Vladimir Putin to the Republic of Kyrgyzstan on February 28. These agreements provide for strengthening bilateral relations through an intensive political dialogue, which reflects the high level of allied relations and strategic partnership between Russia and Kyrgyzstan.
On April 7, Foreign Minister Sergey Lavrov will participate in the meeting of the CIS Foreign Ministers Council in Tashkent.
Russia holds the rotating presidency of the Commonwealth of Independent States this year. On September 9, 2016, President Vladimir Putin approved the concept of Russia’s CIS presidency and an action plan for its implementation. These documents cover all aspects of the multifaceted cooperation within the CIS. The CIS ministerial meeting in Tashkent is one of the key meetings of the top CIS bodies planned for 2017. These events also include a meeting of the CIS Heads of Government Council in Kazan on May 26, and a meeting of the CIS Heads of State Council in Moscow on September 11.
The CIS is playing a crucial unifying role. Russia’s Foreign Policy Concept, which President Putin approved on November 30, 2016, prioritises the development of bilateral and multilateral cooperation with the CIS countries and the strengthening of the CIS integration organisations, in which Russia is involved.
This explains the packed agenda of the upcoming meeting. The ministers will discuss a broad range of issues pertaining to the development of international cooperation and the coordination of foreign policy issues between the CIS countries. They will also exchange opinions on key foreign policy issues.
The upcoming meeting of the CIS Foreign Ministers Council will focus on the adoption of a joint statement condemning religious intolerance and discrimination against Christians, Muslims and members of other religions. They will also review the interim results of their countries’ efforts to implement the decision on adjusting the CIS to modern realities, which was made by the CIS Heads of State Council on September 16, 2016. This decision includes a set of measures to strengthen the CIS status, enhance the efficiency of its agencies and optimise the CIS budgetary expenditures. The ministers will also discuss cooperation between their law enforcement and humanitarian organisations.
By tradition, Foreign Minister Sergey Lavrov will take part in the Assembly on April 8 and share his assessments of global political developments, talk about the priorities in the work of Russian diplomats and our approaches to key issues on the international agenda, as well as the Foreign Ministry’s vision and assessments.
The Foreign Ministry highly values long-term productive cooperation with the Council, which, alongside other international NGOs, provides effective expert support for Russia’s foreign policy, prepares analytical materials at the Foreign Ministry’s request and develops practical recommendations. The Council places high emphasis on developing breakthrough ideas and proposals, many of which materialise into actual projects. The Council on Foreign and Defence Policy is a strong brand.
We have already commented on this issue when we answered your questions. Today, I would like to talk about it in more detail.
Another round of regional consultations on Afghan issues will take place in Moscow on April 14. The talks will focus on security in Afghanistan and its prospects. In our opinion, the main goal of the consultations is to develop a single regional approach with regard to further promotion of the national reconciliation process in that country, while maintaining Kabul's leading role and complying with the earlier reviewed and approved principles on the integration of the armed opposition into peaceful life.
Invitations to participate in consultations were extended to Afghanistan, Central Asian countries, China, India, Iran, Pakistan and the United States. I would like to say that Washington and US officials expressed their interest in attending this event and participating in the international discussion on this subject. We sent them an invitation. Most of the countries have already confirmed their participation. We expect some of our Central Asian partners to provide a response soon. We consider the participation of the Central Asian states important. An agreement on this was reached during the previous meeting of the Moscow format on February 15. Thus, all the neighbours of Afghanistan and the key states of the region will be represented at the upcoming talks. We regret Washington's refusal to take part in the consultations. The United States is an important player in the Afghan settlement, so it joining the peacekeeping efforts of the countries of the region would help to reinforce the message to the Afghan armed opposition regarding the need to stop armed resistance and to start talks.
We are alarmed by the deteriorating situation in southeastern Ukraine. According to SMM OSCE reports, observers registered 500 to 5,000 ceasefire violations a day in March.
Towns and villages in the Donetsk and Lugansk people’s republics were shelled on 14 occasions between March 13 and 26 alone, sometimes with MLRS, which are banned by the Minsk Agreements. Residential buildings and a secondary school in Dokuchayevsk were damaged and 13 people were injured. I stress again that civilian infrastructure was shelled.
The SMM continues to report the presence of heavy weapons along the contact line in violation of the Minsk Package, with 58 units on the side of the Ukrainian Armed Forces against the militias’ 24 units.
OSCE observers also report that the Ukrainian army-controlled stretch of the demarcation line in Zolotoye and a road in Katerinovka are mined.
The Ukrainian Armed Forces continue to shell the Donetsk water filtering station. On March 17, the station was shelled in the presence of SMM observers, Russian officers of the Joint Coordination Control Centre and local repairmen. Clearly, the shelling of such facilities poses a threat of chemical contamination to the area.
As for the situation in other parts of Ukraine, the SMM reports more instances of local radicals vandalising and blockading Russian bank offices in Kiev, Kharkov and Dnepropetrovsk with officials’ blatant connivance.
The SMM also monitored the trade and transport blockade of Donbass. Indicatively, the blockers told OSCE observers that they had found a way to bypass police posts. The Ukrainian authorities are making bewildering contradictory statements suggesting that they have not yet determined whether to support or condemn the blockade, while the Ukrainian National Bank and Finance Ministry have already made forecasts of its negative impact on the national economy.
We call upon the OSCE Special Monitoring Mission to continue its objective observation of the situation in Donbass and other parts of Ukraine in conformity with its mandate, which has been prolonged to March 2018.
With tenacity worthy of a better cause, Kiev continues its policy towards the total de-Russification and forcible Ukrainisation of the country. Following the infamous laws which deprived the Russian-speaking population of Ukraine of the right to receive objective information in their native language, the Kiev authorities intend to actually legalise a ban on the Russian language.
The Verkhovna Rada introduced a draft law On the State Language, which provides for mandatory use of the Ukrainian language in all areas of daily life without exception. Any attempts to establish the official use of more than one language in that country are equated with an attempt to overthrow the political system and are subject to prosecution. I would like to say that we are talking about decisions and actions of the very authorities that came to power not illegally, but on the declaration of their allegiance to European democratic values.
The draft law on media languages adopted on March 23 in the first reading, which prohibits publications in the languages of neighbouring countries, is part of the same approach. Had the Ukrainian authorities tried to learn how the issues of multilingualism are addressed in European countries, they would have realised that they had been heading in the opposite direction all those years they were in power and declared their commitment to European values. Look at how the Scandinavian and Western European countries, as well as the United States and Canada, approach these issues. After all, it’s not about the minorities residing in Ukraine, but the people who have been using this language, which created the common culture of Ukraine, for many centuries. Most importantly, it is not about the people who moved to Ukraine in recent years or even decades, but the indigenous population. Under this document, national TV channels would have to allocate 75 percent of the air time to programming in the Ukrainian language.
Approving such documents would mean actual legalisation of the forcible Ukrainisation of the country, a legitimatised fight not only against the Russian language and culture, but also languages spoken by other ethnic groups residing in Ukraine. This “creative law-making” is nothing more than a tool to limit human rights and crack down on dissent. All international legislative acts and regulations governing human rights issues in the European and North Atlantic space signed by Ukraine as a sovereign state clearly state the inadmissibility of restricting human rights in this sphere or any crackdowns on dissent.
Acting in this way, the Kiev regime not only violates its own constitution, which guarantees “the free development, use and protection of Russian, and other languages of national minorities of Ukraine” (Article 10), but also openly demonstrates disdain for universally recognised human rights protection standards, enshrined, in particular, in the European Charter for Regional or Minority Languages, as well as in the Framework Convention for the Protection of National Minorities. That country is, in fact, about to introduce “language genocide” at the state level.
We realise perfectly well why official Kiev is doing this. It is under heavy pressure from the nationalist ideas of radicals, whom it once encouraged to take such actions, and today it cannot force that genie back into the bottle. Any attempts to use language issues as a way to flirt with radicals can cost Kiev dearly, especially given the highly polarised Ukrainian society. Suffice it to recall that the attempt to repeal the current law On the Foundations of the State Language Policy in 2014 provoked the separation of Crimea from Ukraine and the onset of the armed conflict in Donbass. This is precisely what led to the momentous changes in Ukraine.
The intra-Syrian talks based on UN Security Council Resolution 2254 have been underway in Geneva under the auspices of the UN since March 27. The consultations are being held separately. Special Envoy of the UN Secretary General for Syria Staffan de Mistura and his staff are making efforts to guide the discussion between the Syrian government and the opposition into a constructive course. Russian representatives in Geneva, namely, Deputy Foreign Minister Gennady Gatilov and Special Envoy of the Foreign Minister for the Middle Eastern Settlement and Director of the Foreign Ministry’s Middle East and North Africa Department Sergey Vershinin, are actively involved in this process. Moscow looks forward to the Syrian parties showing their willingness to achieve a compromise on all four baskets of the agreed-upon agenda in order to make headway towards peace and stability in Syria.
We assess the military and political situation in Syria as tense.
The Syrian army continues its anti-terrorist operation in eastern districts of Damascus, which was undertaken in response to the rebels’ attempts to invade the city centre on March 19–22. The extremists from Jabhat al-Nusra who organised this raid suffered significant losses and were forced to retreat into the suburban towns of Jobar and Qaboun and retaliated with rocket and mortar fire on Damascus. The shells exploded in the districts of Tijara and Qusur and in the suburban town of Sayyidah Zaynab. There are casualties among civilians.
The offensive by Nusra and their accomplices in the north of the Hama province, where terrorists created an immediate threat to the administrative centre of the province and the Christian town of Mahardah, was stopped.
Relief efforts are underway following a major bloody provocation undertaken by the terrorists during a counter-offensive by government forces, and the Syrian military are regaining their temporarily lost positions.
We took note of the fact that the terrorist attacks outside Damascus and Hama were synchronised and well prepared. Radicals from Nusra managed to draw into their actions militant formations officially participating in the agreement on the cessation of hostilities.
This kind of propaganda game is unacceptable. Everyone should clearly understand that any actions taken with the participation of Nusra, ISIS, or other Al-Qaeda offshoots, are subject to decisive and unconditional condemnation.
The Syrian government troops are continuing to drive ISIS out from eastern Aleppo. They blocked an ISIS unit outside the town of Deir Hafer in Aleppo and are on an offensive in the direction of the Jirah Airbase controlled by ISIS. An operation is underway seeking to destroy it.
A lightning-fast attack by Kurdish militiamen undertaken with the support of the US special forces made it possible to seize a bridgehead on the right bank of the Euphrates River and drive ISIS from the airbase outside the town of Tabqa. The town itself remains under the control of the terrorists, who clearly stated that air strikes by the US-led coalition may destroy the Euphrates Dam, Syria’s largest hydroelectric power plant, built with the technical assistance of the Soviet Union. Indeed, two security valves in the southern part of the dam were damaged during an air raid on March 26. Military operations in the vicinity of the power plant have been stopped. Engineers were provided with an opportunity to inspect the dam and take proper measures to avert a catastrophe.
In this regard, we urge all participants of the US-led coalition to act responsibly as they fulfill their mission to defeat terrorists in Syria and Iraq in order to prevent civilian casualties and damage to critical civilian infrastructure.
Work is underway to sign local reconciliation deals between the opposing sides in order to avoid unnecessary loss of life and to alleviate the sufferings of the civilian population. In accordance with the plan, the evacuation of rebels and their families from the al-Waer neighbourhood in the city of Homs continues.
On March 29, media reported that, with Qatar's mediation, an agreement had been reached to evacuate the defenders of the Shiite enclaves of al-Foua and Kafraya in Idlib in exchange for the rebels withdrawing from Zabadani, Madaya and the Yarmouk Palestinian refugee camp outside Damascus. We welcome this agreement, which provides for the evacuation of the rebels and of any civilians who wish to leave, the unhindered delivery of humanitarian aid and the adoption of measures to strengthen mutual trust and release prisoners. We hope that the agreements will be fully implemented.
At the same time, I would like to remind everyone that, within the Astana format, Russia has suggested that its participants adopt a provision on a reconciled area, which would identify a clear path towards stopping hostilities, including responsibilities on the part of the parties, which would exclude any rumours about alleged forced relocations. Unfortunately, as is known, the armed opposition representatives refused to come to Astana this time.
We are saddened by the statements issued by Western capitals, by officials and representatives of foreign states with regard to the Syrian settlement, most of which are absolutely devoid of objectivity. I’d like to elaborate on one of them.
Against the backdrop of efforts to promote a political settlement in Syria, which continue in the Astana and Geneva formats, statements released by some of our Western partners arouse dismay and disappointment. We think that they are beyond mere propaganda. We believe that they can be qualified as direct instigation. In this context, we have taken note of French Foreign Minister Jean-Marc Ayrault’s speech at the Arab World Institute on the occasion of the sixth anniversary of the Syrian conflict, during which he made absolutely inappropriate and destructive remarks.
True, we have many differences with our partners, as you well know. We speak at length about them and spell out our position both publicly and, above all, during bilateral contacts. At the same time, a sincere wish to resolve the Syrian crisis should, in our opinion, push all the parties concerned not to fixate on contradictions or criticise each other (often without any proof), but to search for new common points and expand the area of understanding. This is not so hard to do, if there is a wish, because this area is outlined by relevant resolutions, above all, UN Security Council resolution 2254, International Syria Support Group (ISSG) decisions and other jointly adopted documents. They should be regarded as a single set, without distortions or wishful thinking. It is impossible to build an effective counter-terrorism strategy against the seat of international terrorism in Syria based on political pressure on Damascus and its allies. Let me remind you that the Russian military are in Syria and are helping Syrians fight terrorists on legal grounds, unlike our European and American partners.
The position, according to which the removal of the legitimate president of a UN member state is proclaimed a condition for bringing aid to the population of that country, seems paradoxical. One gets the impression that this is a kind of blackmail and that the officials in Paris have stopped understanding humanistic values. From a political standpoint, it is hard to combine the thesis that the Syrians themselves have the right to decide their own future with attempts to force them preemptively to accept humiliating terms: make one choice and get a carrot, make a different choice and get a stick.
On the whole, continuing public talk of the “Bashar al Assad must go” variety fully contradicts our common – I would like to stress that – beliefs that it is up to the Syrians themselves to determine their future and choose the government that will steer them there. Frankly speaking, that slogan virtually torpedoes and undermines any attempts to move forward along the path of intra-Syrian talks and dialogue and to separate the armed Syrian opposition from the ISIS and Nusra terrorists. This is something that Mr Ayrault cannot fail to understand.
We have repeatedly emphasised that Russia is ready for equal and mutually respectful cooperation with all partners interested in a political solution and the liquidation of the terrorist seat in Syria. Those are very serious priorities requiring collective efforts on a solid international legal basis. And here, there is no room for envy, jealousy or unhealthy competition.
The situation around Mosul is continuing to deteriorate. The military operation to free the city, which has been going on for four months now, has not yet achieved its declared goals, specifically eliminating ISIS’s main base in Iraq. Despite the forces and assets used in combat operations, Iraqi government troops, unfortunately (we take note of this), have bogged down in gruelling urban fighting in the western right-bank part of Mosul. Each step forward here comes at great cost. Regular army forces and militias have to breach ISIS’s multi-layered defence involving the use of locals and civilians as a human shield. Unfortunately, these tactics are well known to us.
Meanwhile, according to UN estimates, as many as 500,000 people remain in terrorist-controlled districts. With such density, what kind of “surgical” air strikes (something that our Western partners like to talk about) are possible here? Consider this. Statistics speak for themselves. According to the Office of the UN High Commissioner for Human Rights, between March 17 and 22 alone, at least 307 civilians were killed and 273 injured in western Mosul. And this is only confirmed data reported by the UN. However, what is happening in reality and what are the actual casualty figures? It is terrible to think about the actual figures; the casualty scale has yet to be assessed.
US military representatives had to acknowledge, albeit with the utmost reluctance, the mass casualties among the Iraqis as a result of the air strikes by the US-led anti-ISIS coalition. A few days ago, Lt. Gen. Stephen Townsend, commander of the Combined Joint Task Force, made statements to that effect. It may be recalled that this refers to the March 17 air strike on the al-Jadid district. According to various sources, 200 civilians were killed there. On March 22, a residential building was razed as a result of an air strike against the Rajm al-Hadid district, burying people alive, including children. These are only two tragic episodes that have been widely reported in the media. UN High Commissioner for Human Rights Zeid Ra'ad Al Hussein aptly described the operation to free the main city in northern Iraq as a massacre of civilians, when coalition forces bomb residential districts from the air while ISIS militants kill people on the ground.
The humanitarian situation in Mosul has escalated to the limit. Iraqi President Fuad Masum has compared it to a full-blown disaster. Now is the time to sound the alarm and constantly remind [everybody] that 400,000 residents remain in the city, where food and medical supplies are running out. Experts are warning about the danger of mass famine if the storming of Mosul drags on. Unfortunately, by all indications, this is the most likely scenario.
The position of hundreds of thousands of residents who have fled the city is also unenviable. Their suffering continues even after they escape from that hell. The provision of aid still leaves a lot to be desired, which is also recognised by international agencies.
It is impossible to understand why world media outlets are keeping to mainstream coverage. To say nothing about what is going on in Mosul is simply a crime, as evidenced by reports occasionally filtering through that show the real picture of what is happening in the city.
We have taken note of a statement by UN High Commissioner for Human Rights Zeid Ra’ad Al Hussein, timed to the second anniversary of the Yemeni conflict. He cites civilian casualty statistics, specifying that these are only the figures obtained by his agency. By his count, 4,773 people have been killed and 8,272 injured in these two years, while the actual casualties are much greater. The United Nations does not deny these figures, I stress again. More than that, 21 million Yemenis, or 82 per cent of the population, are in urgent need of humanitarian relief. A nationwide catastrophe is unfolding.
Last month alone brought 106 civilian deaths, mainly in air raids and naval artillery shelling. An incident is mentioned in which 32 Somali refugees and a Yemeni died, ten Somalis were reported missing, and 29 Somalis, including six children, were injured, some of them badly. According to eyewitness accounts, their ship was attacked by the Coalition’s Apache helicopter. The UN High Commissioner mentions a number of other instances of helicopters shelling fishing vessels, and the Khokha marketplace tragedy, where 18 civilians died in an airstrike.
Instances are also reported of indiscriminate strikes by people’s committees associated with the Houthis and former President Ali Abdullah Saleh. They are also reported to impede humanitarian deliveries to Taiz.
A similar statement was made by Stephen O’Brien, UN Under-Secretary-General for Humanitarian Affairs, who said that even appalling casualties do not entirely reflect the scope of the Yemeni humanitarian disaster, with the economy in ruins and seven million people starving.
This is not an industrial accident or a natural calamity that has stricken Yemen but a human-caused disaster. I took notice of a statement by United States Ambassador to the UN Nikki Haley, who said that the United States is “the moral conscience of the world”. If you are really the moral conscience, why do you turn a blind eye to what is happening to people in Yemen? Or is it a new hybrid kind of conscience, which does not send signals to the brain or other vital organs? It is impossible not to see the disaster. I realise that the US media are preoccupied with other problems. The words “Yemen”, “Mosul” and “Syria” do not occur in their front-page news. They are focusing on Russia. We will talk about it later. But can “moral conscience” be mute to such an extent? Surely, it cannot have atrophied completely. This means there is no such conscience at all.
Two years of violence, bloodshed, despair, famine and destruction are more than enough for all sides to see the necessity of an urgent search for a peaceful settlement of the conflict. All this bears out our assessments of the Yemeni situation and the correctness of our repeated appeals for an urgent peaceful settlement.
The international community’s duty is to work towards an immediate cessation of all violence, whatever motivations might be found for it. We are firmly convinced that there is no military solution to the Yemeni conflict. The sides should return to the negotiating table with assistance from UN Special Envoy for Yemen Ismail Ould Cheikh Ahmed and work for a lasting ceasefire and the political settlement of the conflict.
We have taken note of yet another attempt to play the Russian card in the internal political debates in the United States. Personally, I wouldn’t describe this as an attempt but the continuation of a campaign and a new round of the hellish propaganda campaign launched under the previous US administration. The point at issue this time is a fresh bout of hysterics over routine diplomatic contacts of the Russian Embassy’s leaders and staff in Washington.
Some US and other media are again writing about Russia’s alleged meddling in the US presidential election last year. It looks as if you are preparing for a new round of an internal election campaign. I think you should see that it’s time to do some work in-between the election campaigns. As it is, it looks as if the US administration will approach the next election cycle with only one result – artistic demagoguery about Russia meddling in the previous election. I would describe this behaviour by some US journalists and media outlets as a threat to our diplomats. If our diplomats refuse to give interviews on highly specific matters – we understand that requests for such interviews are made to keep the issue of Russia’s alleged meddling in the US elections afloat – fresh batches of “compromising information” will be planted in the media. We see this as dirt throwing and misinformation. We are told about the fake news that appeared in January, which was spearheaded against President-elect Donald Trump and contained allegations about Russia. It was published by BuzzFeed and hinted that Russia should be more actively involved in this information war or they would do everything without us. Actually, this is information blackmail.
I can cite one more example. To avoid generalising, I will provide hard facts. One of the items included allegations concerning our colleague, Russian diplomat Mikhail Kalugin, even though we published a refutation when Mr Kalugin’s name was first mentioned in the items about the alleged Russian spies and agents in Washington. We said that this is disinformation that has nothing in common with reality. However, these allegations continue.
I want to once again make it quite clear that neither Mikhail Kalugin nor any other member of the Russian diplomatic and other agencies in the United States was connected in any way with the US presidential election. We believe it’s time to stop playing these dirty information games.
I would like to say more about Mr Kalugin. We have taken note of a recent item published by a BBC correspondent in Washington. It is a long item that shows no respect for personal data. It includes claims that have no relation to reality and is supplied with many photographs. It is an absurd story that violates BBC principles. As I have said, the item has been published, and I want to comment on it.
This item mentioned Russian diplomat Mikhail Kalugin, who headed the Russian Embassy’s economics division until last August. According to this item, Mr Kalugin is a spy and this confirms Mr Steele’s dossier about the Russian connection in last year’s election campaign in the United States.
I want to say that we have published the necessary refutations. However, more than two months later, the allegation is being repeated in an item that provides photographs, personal data and the photos of the Russian Embassy in order to give more weight to the allegation.
I would like to repeat what I already said at a briefing [in January] that Mikhail Kalugin is absolutely not guilty of the allegations laid against him and the Russian Embassy. He is a Russian diplomat who has worked in the United States for six years. His mission in the United States was to facilitate the Russian and American companies’ business in Russia and the United States. He helped promote bilateral economic relations and, contrary to what the media claimed, he left the United States when his contract ended to assume new responsibilities at the Foreign Ministry. He goes to his office [in Moscow] every day. Contrary to what the BBC claims, when he worked in the United States he regularly met with representatives of the US Department of State, the National Security Council and various US economic departments, including the Department of Commerce, the Department of the Treasury and the Department of Energy. I am saying this now to lay to rest the fake news published by BBC and its Washington correspondent. This is all lies, nothing but lies, fake news and disinformation.
Mikhail Kalugin was also engaged in the public sphere giving lectures and interviews on the prospects of our bilateral relations. You can check this information and conduct your own investigations. By the way, the BBC item claims that State Department staff who dealt with Russia did not come across Kalugin. This is nonsense. However, I really do wonder if the State Department knows anything. Based on my contacts with our American colleagues over the past few years, I can tell you that they only admitted six months after the beginning of the Ukrainian crisis that they had a clearer view on what was happening there than they did at the beginning. For a long time, there was nobody in the State Department with whom we could discuss matters. In the past six months, it was unclear whom we could phone there in case of problems. It is also unclear whom the BBC correspondent talked with. He mentioned reliable sources. We know only too well just how reliable these sources are.
I have told you about the areas where Mikhail Kalugin worked and his contacts. As for the claim that he never went to the State Department or communicated with State Department staff, I can tell the BBC reporter Paul Wood that he simply doesn’t know that in 2014 the US State Department curtailed communications with Russian diplomats, which had been maintained in full in many areas before that. Russian diplomats could only get an appointment with the State Department in the case of an emergency. All other humanitarian and economic contacts were curtailed. The Russian Foreign Ministry holds regular consultations on information issues with the foreign policy departments of all countries, both those with which we maintain trust-based relations and those with which we are poles apart on information matters. We hold consultations, exchange opinions and discuss issues of concern for us and them. I can tell Mr Wood how we pressed the US State Department to talk with us on information matters. Trying to get an appointment to talk with State Department staff was no easy feat. You are writing nonsense, of course, but at least try not to put your head on the block with such items as this one.
So, the US State Department curtailed any contact with us in 2014 as prompted by the Obama administration. The Russia-US Bilateral Presidential Commission was suspended by our American partners. The same happened to other bilateral formats. When coming to any conclusions on a cosmic scale, remember what writer Mikhail Bulgakov said about conclusions that can turn out to be silly on a cosmic scale.
Although all official forms and methods of interaction were curtailed at the initiative of the Obama administration, our diplomats searched for and found ways to keep our bilateral relations afloat. I have said above with which officials and agencies our diplomats cooperated. It is an absolutely normal practice.
And lastly, I would like to present this “tough, arrogant KGB man”, as the BBC reporter described him. Can you imagine this? A “tough, arrogant KGB man” in 2017? Guys, the KGB was closed down long ago. What are you talking about? Mikhail Kalugin goes to his office every day, but today he changed his routine to come to the Foreign Ministry Press Centre. Here he is, this “tough, arrogant KGB man”. He will be available to make comments, and he will tell you about his work. This is a paradox, an information paradox. We have to comment on these rumblings, which are published again and again. There are such problems as Yemen, ISIS, Jabhat al-Nusra, drug trafficking, organised crime, migration, illegal migration and Afghanistan. However, the intellectual power of Washington, including the media and analysts, is busy searching for the Russian connection in all their problems and failures. There will come a time when these cases will be cited in textbooks as drivel, and this horrible period in our history will be sharply criticised in the United States itself. People will come to their senses and see that they wasted their time fighting imaginary dragons. Regrettably, it will only happen later, not now.
It looks as if the experience of American and West European colleagues who have repeatedly expressed fears about a “Russian threat” is having a harmful impact on other countries. The “infection” has caught up with Lithuania. Bad examples are known to be contagious. In a March 24 interview published on the website of the US magazine Foreign Policy the Lithuanian President Dalia Grybauskaite said that Russia posed a threat not only to Lithuania, but to the whole of Europe. Well, perhaps she sees something we don’t. Her fears are prompted by the stationing of Iskander missile systems in Kaliningrad. In this connection Dalia Grybauskaite has called on the US to deploy a permanent military contingent and elements of the US missile defence system in her country.
Russian Foreign Minister Sergey Lavrov has already described these remarks as absurd and groundless, noting that they are politically motivated and create a negative background for bilateral affairs. I have no doubt that this is part of a media campaign, a massive information effort focused on the search for an enemy. The enemy has been found. It is Russia. Nevertheless, Russia has stressed that “we do not see insuperable barriers to the relations between Russia and the Baltic states developing in the spirit of good neighbourliness and mutually beneficial cooperation.” Although our relations with Lithuania have seen various periods, we have never retracted our standing proposals or our wish for comprehensive interaction in all fields.
It is true that such statements made by officials in the spirit of paranoid Russophobia are becoming ever more characteristic of the Baltic states. They harp on about a mythical Russian threat hanging over them like the sword of Damocles. Many Baltic politicians labour under the misapprehension that Russia cherishes imperialist plans and wants to challenge the sovereignty of their countries.
I would like the Baltic countries to calm down, along with all the other countries which consider Russia to be an aggressor. We are doing all we can to oppose any manifestations of international aggression, come out for peaceful settlement of problems and are not out to conquer anyone.
On the other hand, how independent the Baltic countries are today is a big question. A rhetorical question as far as we are concerned.
An act of vandalism against the memorial to the victims of Nazism was perpetrated in a Riga district on March 24. The front of a granite obelisk on the site of a mass burial of 13,000 people was smeared with paint. The inscription about Nazi atrocities committed on the site has also been painted over.
In connection with this incident, our Embassy in Latvia has sent a corresponding note to the country’s Ministry of Foreign Affairs demanding an investigation into this flagrant act and measures to prevent such incidents in the future.
We hope that the Latvian authorities will take all the necessary measures to remove the consequences of this outrageous act and to bring the people responsible to account.
A regular meeting of the Russia-NATO Council (RNC) at the level of permanent representatives is to be held today.
I would like to recall that three meetings of the Council were held last year after a two-year break. The resumption of work on the RNC platform takes on added significance amid the current build-up of military-political tension and the ongoing media campaign. We want the dialogue mechanism of the Council to be used on a regular basis to discuss the issues its participants consider important, topical and necessary.
Among the priority topics the Russian side plans to raise today are predictability of military activities, reducing the risk of escalation as a result of unforeseen military incidents and regional issues. We will also raise the issue of the build-up of NATO military presence and military training activities along the Russian borders.
Question: The US co-chair of the OSCE Minsk Group, Richard Hoagland, recently said that a meeting was being planned in Moscow between the foreign ministers of Azerbaijan and Armenia to prepare the ground for a meeting between the two countries’ presidents. Can the Russian Foreign Ministry confirm this information? If so, what timeline are we talking about? It will shortly be a year since the escalation of the Nagorno Karabakh conflict. In this connection the co-chairs of the Minsk Group have announced that this year must be marked by an encounter at the negotiating table and not on the battlefield. Can the Russian Foreign Ministry comment on the date and give its assessment of the settlement process during the past year?
Maria Zakharova: I am not aware that an early meeting between the Foreign Ministers of Armenia and Azerbaijan is being planned in Moscow. If such information comes to hand I will share it with you. At this point in time such a meeting is not on our schedule. I repeat, I am ready to check this information and let you know.
Question: After yesterday’s meeting of the State Council in Turkey the President and Prime Minister of that country announced the end of the Euphrates Shield operation in the neighbouring state. How does Russia see it and does it have anything to do with the decision (the reference is to bilateral discussions on this matter between Moscow and Ankara)?
About 20 days ago Turkey told Russia that terrorists cannot be used to fight terrorists. Has Russia determined its attitude to the terrorist groups in the region?
Maria Zakharova: It is interesting to hear the Turkish side asking whether terrorists could be used to fight terrorists. This is dialectics. The Russian Federation has an absolutely clear position that terrorists cannot be used or divided into good and bad, moderate or active in order to justify supporting them. This position has repeatedly been articulated by the Russian leadership, reaffirmed in all our basic documents as well as during the course of work on international legal acts. We have a clear-cut position. Certainly, there is the process of a peaceful settlement, which implies “conversion” or an invitation to people who preach the use of force, including terrorist methods, to renounce their ideology and sit down at the negotiating table to put in place the process for a peaceful settlement. These are different things.
On the first question, let me reiterate that flirting with terrorists, still less supporting them in order to pursue one’s own aims or to intervene through terrorist groups in internal political or international conflicts, is simply inadmissible. If we are talking about peaceful political processes, which offer a chance that the people who preach the principles of terrorism to further their ends will renounce these principles, then, on the basis of international law and the relevant norms and documents, that would be another matter. The Syrian crisis is vivid proof that such a concept may work. Only a year ago irreconcilable opposing sides were preaching not just extremism but terrorism, pure and simple, and now they are trying in one way or another to work out a common platform and an approach to trigger a political process and set the situation in Syria on a peaceful track. This is one example, and there are more examples in the world.
As for your question about the end of the military operation, it is up to the military experts to answer it. The Russian Ministry of Defence is in a better position to comment on this aspect. The Foreign Ministry gives political assessments.
We are in contact with Turkey on the Syrian settlement in the framework of bilateral contacts and the responsibility assumed by Moscow, Teheran and Ankara. Dialogue and active work on the issue continue with the Turkish colleagues. We think it is constructive, though not devoid of difficulties.
Question: It looks as if the “hand of Moscow” has reached Poland. Ukraine accuses Russia of being complicit in the blockade between Ukraine and Poland.
Maria Zakharova: I think it would be right and honest if Ukraine, which constantly accuses Russia of throwing its weight about and of instigating many internal political decisions and actions on Ukrainian territory, published a list of persons who, in its opinion, are “Kremlin agents.” Let it make an inventory and analyse those people. Instead they are engaged in vetting (I don’t know if they have dropped this practice). Perhaps they need a second round of vetting at the present stage. Let them say what opposition forces, organisations, people in Kiev’s City Hall or the Poroshenko Administration are working for Moscow, in their opinion. Let them point the finger at Ukrainian citizens whom they suspect of working under orders from Moscow. Making unsubstantiated accusations is really not the way.
A couple of years ago I talked with a colleague from the Ukrainian Foreign Ministry and asked her whether they were aware that what their nationalist radicals were doing in Kiev and in the regions was harming Ukraine. She replied that they had strong suspicions that the radicals were working for Moscow. In other words, we come out against Ukrainian radicals and argue that they are destructive for Ukraine and the Ukrainian people, and Kiev accuses us of this very thing. Let them step back, present proof and say which forces are suspect at the Rada, for example. Is Oleg Liashko a Russian agent too? The leaders of some parties, people who wear nationalist armbands and promote theories that there is no shared history with Russia – I ask you: are they Russian agents too?
One has to start with banal things, with determining the foundations, with taking a long hard look at the ideology inside Ukraine. It is not right to say each time that Russia is to blame for all the bad things that happen on Ukrainian territory. Most importantly, there are laws. The blockade declared by Ukrainian citizens themselves falls under certain internal Ukrainian laws and can be regulated through legislation. There are law-enforcement agencies and laws that can be enforced, and these people can be punished. It is odd when, on the one hand, radicals are encouraged and egged on using the anti-Russian theme and the people in Donetsk and Lugansk are declared enemies, and then, after the radical elements boil over and move from words to actions, to a blockade, Kiev looks on and says that this is “the hand of Moscow.” I believe that Kiev has to do its internal ideological work. I repeat, perhaps it would mark the second round of vetting. I find the word abhorrent, but since they are actively engaged in a witch-hunt, it may help to determine who among them is on which side of the barricades.
Speaking seriously, everything we warned people about two years ago is coming true. We did not say it somewhere on the quiet or during private conversations; we stated clearly that this strategy of putting the stake on radicals and nationalists would lead to a dead end. Nationalism is a beast that constantly demands sacrificial offerings; it needs somebody’s blood to feed on. When the topic of southeastern Ukraine is milked dry and is no longer sufficient, while problems in the country multiply and Donetsk and Lugansk are no longer enough to explain them away, a new target, a new victim, will be needed. They will then go after other ethnic groups and social classes. That is how it all happens.
Ukraine needs to get its act together, whether by staging “another round of vetting,” as I noted sarcastically, or in earnest. Serious work is a hard slog, but it is the only way to deal with mistakes made over many years. What is needed is the renunciation of nationalism and of the use of radical forces, the search for a national consensus, the discarding of methods that tear society apart, including, as I have said, on grounds of language, and an attempt to analyse the interests of Ukraine: to understand what the Ukrainian people is, what its true and intrinsic interests are, who makes it up and how to ensure the rights of all categories of the population. This is a colossal amount of work. Torches and nationalist slogans will not be enough. One would have to work seriously, perhaps bringing in international experts from the organisations I have mentioned, which draw up international provisions on protecting human rights and the rights of ethnic minorities. This is serious and profound work. A lot of time has been lost, and things may keep getting worse.
I repeat, it is the mechanism launched in Ukraine several years ago, and not “the hand of Moscow,” that got Ukraine to where it is today. I repeat, things may get even worse. No one is setting barriers in the way of this absolute ideological collapse; on the contrary, the process is gaining momentum. Afterwards, people will ask who is to blame and look for culprits not in Donbass, Donetsk or Lugansk, but in Kiev, in big cities, in next-door flats and houses. You cannot forever keep people on a diet of stories about a mythical enemy, mythical tanks allegedly flown by air to Ukraine, about “Kremlin agents” and so on. Some day that tale will come to an end, and it will indeed be a tragic end.
Question: Italian newspaper La Stampa carries a contribution on the US administration warning Rome about the major political party Five Star Movement’s direct contacts with Russia, which tries to influence Italy and other European countries at future elections as it implements its interference strategy. Can you comment on this information? Is it a hoax?
Maria Zakharova: I don’t quite see what “a political party has contacts with Russia” may mean. What hard facts are there behind this statement? You have come to the Foreign Ministry Press Centre, and it is possible to say that Italian journalists have contacts with Russian government agencies. Anything can be misinterpreted as you wish. It is also possible to say merely that Italian journalists visit the Foreign Ministry spokesperson’s news briefing and ask her questions.
You know our strategy of non-interference in other countries’ internal affairs, and our approach to this matter. We maintain contacts with national capitals and official governments, work on NGO lines within Russian legal limits and with due respect for the relevant laws of other countries. We have official contacts with many opposition and pro-government parties and movements in keeping with diplomatic traditions.
That was the case before the US presidential election, when Hillary Clinton’s people came to Russia, but no one took any interest in it afterwards. They came repeatedly to talk to officials and had very informal meetings with particular people to discuss diverse matters. Yet the US press takes no interest in this, for some reason. However, Russian Embassy phone calls in Washington (I needn’t say how many people in the US relished these conversations) give reason to accuse the election winner of some kind of ties with Moscow.
I think this is just a mere part of an information campaign. If you have hard facts I can confirm or deny, please cite them. I have no idea how to comment on vague allegations of Italian parties’ ties with Moscow.
Question: How far has the investigation into the murder of the Russian Ambassador to Turkey Andrey Karlov got? There were reports about a Russian girl who was allegedly involved, but they have not been confirmed.
Maria Zakharova: Russian experts from several agencies are involved in the investigation. This is serious interdepartmental work. Contacts with our Turkish partners are maintained primarily via our embassy in Ankara. As you said, new information has come to light, just as in any other investigation. Some pieces of information are confirmed and others are not. I expect to be able to provide you with the latest information very soon.
Question: Has the date for US Secretary of State Rex Tillerson’s visit to Russia been set?
Maria Zakharova: As I said before, this visit is a possibility. However, at this point we don’t know the precise date and have no other information about the visit we could share with the media. If these plans come to fruition and we coordinate a format and date that would be suitable to both parties, we will inform you about this. As I said during the previous briefing, preparations for a visit are not limited to agreeing on the visit and setting the date. Preparations also include the agenda and the choice of issues that would be of concern to both parties, as well as working with experts. In short, a great many factors must first click into place.
Question: During the congressional hearing on Wednesday, American politicians and generals agreed that the counterterrorism operation must continue despite the civilian casualties in Mosul. Why didn’t they say this during the operation in Aleppo?
Maria Zakharova: Can’t you guess, or do I have to tell you? Propaganda is a tool used by all countries, though to varying degrees. It amounts to promoting one’s own interests in the sphere of information. All states are involved in information work and the promotion of their policies. This is normal. However, it’s bad when the media take up the propaganda campaign. Also, the lengths to which our Western colleagues go are unacceptable. They distort facts completely, which is actually very much like disinformation.
As for Aleppo, their goal was to publish material that would convince the public that Russia’s role in settling the Syrian conflict was not constructive or positive but, on the contrary, extremely destructive, which would explain our partners’ political and military failure in Syria. This issue was also used for election purposes, because Hillary Clinton’s team was concerned with foreign policy when she was US Secretary of State. It was therefore clear that her election campaign would be focused on US foreign policy achievements and victories. This is why Russia’s involvement in Syria and its allegedly unconstructive role there was given as much attention as possible.
As I have said, facts about the situation in Mosul are being hushed up to minimize the information damage to the United States. The Mosul operation did not begin yesterday or a month ago; it was launched by the Obama administration almost six months ago. We described it as part of the election campaign. They needed a short victorious war, but the war is neither short nor victorious. The war would have been completely acceptable – after all, it is a war on terror – had it not been timed for the election campaign. It should have been a carefully planned operation with provisions for keeping civilian casualties low, with humanitarian corridors and humanitarian aid, as well as assistance for those who wanted to leave the city. All these considerations were sacrificed to the time factor and the election campaign, though. As a result, we have what we have, that is, what Iraq and the Iraqis have.
Question: The United States has refused to attend the Moscow conference on Afghanistan in April. Will Washington’s absence affect the outcome of the conference?
Maria Zakharova: I have already commented on this too. We sent an invitation to our American colleagues at their request, because they had expressed an interest in this. A while later, they said they would not attend the conference. So the conference will go ahead without US representatives. We wanted as many countries as possible to attend it not because we are after numbers, but because different countries could make different contributions to the common search for a solution to this complicated issue.
I don’t think I need to tell you about the US role in Afghanistan. As an Afghan journalist, you know what the Americans were doing all these years in Afghanistan. I would like to remind you that apart from their interest and political involvement, there is also the factor of the UN Security Council mandate for a US-led counterterrorist operation. In the decade since this operation was launched, the United States and the US-led coalition in Afghanistan never reported to the UN Security Council about their achievements there. The UN Security Council issued the mandate and set the goals, but it has never learned whether these goals were attained and what strategy the United States pursued in that region. We could only judge this from statements made at a national level. There was no documentary proof in the form of a report.
Our American partners expressed a desire to attend the conference, and we duly sent them an invitation, but they have refused to come. I have the impression that, unfortunately, this decision was taken largely because Washington does not yet have a global foreign policy strategy. We are waiting for them to formulate this strategy so that we can interact more actively. We are open to any form of US involvement in the formats where our American partners are traditionally present, including on Syria and Afghanistan.
Question: My question concerns Mosul. So much has been said about a lack of information concerning the operation to liberate Mosul. We know that thanks to the Rudaw TV channel, the world has become aware of heavy casualties in Mosul. A month ago, a female reporter of that channel was killed while covering mass burials around Mosul. How would you comment on the role of the Kurdish media in covering the Mosul operation?
Maria Zakharova: Look, I am going to advertise the Kurdish media free of charge. As you know, the Russian government has repeatedly said that it is necessary to engage Kurdish forces and movements in various processes, given their active role “on the ground” and the fact that these associations, political parties and movements represent a large number of people. They have their own interests. They play a big role in a number of international issues, and, therefore, the role of the media that reflect the standpoint of such a large number of people should indeed be active. We believe that objectivity remains the principal factor here. The most important component today is objectivity and comprehensive coverage. The materials you have mentioned fill the gaps in the information picture. We stand for objectivity in the presentation of materials; they must not be partisan or serve the interests of only one group or political force. A general picture is necessary.
I can repeat that if this is the way the media work, one cannot but applaud them.
Question: The Turkish leadership indirectly, or perhaps even directly, accused Russia of cooperation with terrorism, meaning that Russia cooperates with “terrorists” in the fight against terrorism. The case in point is Russia’s cooperation with the Syrian Kurdish Democratic Union Party (PYD), which has been actively fighting international terrorism and ISIS. Does Russia regard PYD as a terrorist organisation?
Maria Zakharova: I gave a very detailed answer to this question.
Question: Recently, Russia has faced accusations that Russian hackers influenced the election results in the United States. But you somehow missed the recent elections in Bulgaria. Is it because of fraternal friendship?
Maria Zakharova: The Russian hackers, it seems, had a day off on that day.
This topic has indeed become ridiculous. There is no proof whatsoever. These are the same songs sung over and over and the fantasies of mass media. This topic has become a convenient way to excuse someone’s own defeats and failures. Curiously, when the results suit the interests of the mainstream, the Russian hackers did not intervene, and when the results came as a surprise for the mainstream, they were blamed on the “Russian hackers.” It’s very strange logic indeed.
Question: Recently, I have seen your photograph in a gym on Facebook. I would like to ask you what gym you go to.
Maria Zakharova: The Foreign Ministry has its own sports complex. Not just me, but many of my colleagues work out there. It seems to me that we made a series of reports about it. I can give you more details. It opened inside this building in 2011. This is a comfortable and functional building, and it has a small sports complex of its own.
Question: Yesterday, Great Britain launched the official procedure of its withdrawal from the EU. How does the Russian Foreign Ministry see this event? In your opinion, can Russia benefit from Brexit? Foreign Policy magazine, for example, wrote that Brexit is Russia’s victory. Do you agree?
Maria Zakharova: We consider Brexit to be an internal matter for Great Britain and see it in terms of relations between London and Brussels. Naturally, we analyse the potential consequences of this event for Russia, in the economy, for example, or perhaps also in other spheres, including finance. As for political assessments, they are made by our analysts, journalists and political scientists, who study global processes and movements, the future of Europe, a certain strategy and development prospects for countries.
We do not have any special attitude to Brexit because this is an internal affair for Great Britain. It’s an area of responsibility for Britons themselves and their relations with the EU. Of course, we have been watching this process since we live on the same continent and we have relations both with London and Brussels.
Question: Russian Deputy Foreign Minister Vladimir Titov is currently visiting Japan. What is his mission? Has this to do with preparations for Prime Minister Shinzo Abe’s upcoming visit to Russia or the continuation of a recent dialogue by Russian Foreign Minister Sergey Lavrov and Deputy Foreign Minister Igor Morgulov with their Japanese counterparts?
Maria Zakharova: We maintain regular contacts with our Japanese colleagues. We are very glad that a normal diplomatic dialogue has resumed, which has always been characteristic of Russian-Japanese relations. It is normal when a dialogue is conducted not from one visit by the head of state to another, but is maintained regularly at different levels. Russian First Deputy Foreign Minister Vladimir Titov’s visit to Japan is proof of that. This is normal regular contact with our Japanese colleagues in a variety of areas. Unfortunately, much time has been wasted over the past two years. The dialogue was interrupted. Today, there are very many issues that need to be solved. Certainly, there is also an element of preparations for future visits, but this too is simple, routine work aimed at restoring the dialogue, directing it and addressing routine bilateral issues.
Question: The US Congress has proposed putting North Korea on the list of terrorism sponsoring countries. Do you think such sanctions will promote the Korean Peninsula settlement?
Maria Zakharova: You know our position regarding unilateral sanctions – we consider them absolutely non-constructive. In the context of the Korean Peninsula, as in other situations, we have always emphasised that only sanctions imposed by UN Security Council resolutions can be effective. We regard a collective approach to crisis settlement, rather than its exacerbation, as the sole opportunity for using sanction instruments.
We proceed from the assumption that the current situation on the Korean Peninsula is just the case that demands collective efforts to settle the crisis rather than bring it to a head. It is questionable whether the US rhetoric and moves you mention will improve the situation. We suspect the result will be quite the contrary.
Question: US intelligence data show that North Korea is about to carry out new nuclear tests. Russia has said repeatedly that it resolutely objects to continuous nuclear tests and missile launches. Is Russia doing anything to prevent possible nuclear tests? Is it working to influence North Korea or other nations in the region? Does Russia intend to introduce sanctions against North Korea if it goes through with the test?
Maria Zakharova: As for sanctions, I have said that they are introduced by the UN Security Council, not Russia. As far as work is concerned, we cooperate with our interested colleagues at the relevant agencies, and have general discussions of the Korean Peninsula situation in the context of current international efforts. We think that the available efforts and mechanisms can be very effective when implemented. By contrast, when one engages in political creativity that pursues particular domestic political goals instead of following the line of established international institutions and formats, such conduct does not help to address the problem. We have said repeatedly that we deem it necessary to work in the available formats. We have everything for it, and need only goodwill. We have goodwill, and we are ready to cooperate.
Question: What does the Foreign Ministry think about US Secretary of State Rex Tillerson’s statement on the United States being interested in an urgent peaceful settlement of the Nagorno-Karabakh conflict?
Maria Zakharova: I have answered that question already.
Question: The media and other Russian written sources often use the letter е instead of ё. I am raising this issue because you alone can help me. If I do it on my own, it will take me 5-10 years. So I am asking you officially as the representative of one of the key Russian ministries. We would like the media and other agencies to use the letter ё because there is such a letter on the keyboard. We think it will be of great help to students from CIS countries who learn the Russian language.
Maria Zakharova: As far as I know, the two letters can be used interchangeably according to Russian language rules. But then, I am no expert in this field, so just refer to the relevant rules and regulations. If I have an opportunity, I will certainly highlight this matter in relevant formats.
Question: Almost three months have elapsed since the tragic death of Russian Ambassador to Turkey Andrey Karlov. Russia has not yet appointed a new ambassador. Would you specify whether there are any clear prospects?
Maria Zakharova: An ambassadorial appointment requires an internal coordination procedure involving not only the Foreign Ministry but also relevant government agencies. This issue is outside my competence, and I will not comment on it before officially approved information comes out. I can say only that it is under consideration.
The committee met at 1208 in committee room 1.
Bill 55, An Act to amend the Collection Agencies Act, the Consumer Protection Act, 2002 and the Real Estate and Business Brokers Act, 2002 and to make consequential amendments to other Acts / Projet de loi 55, Loi modifiant la Loi sur les agences de recouvrement, la Loi de 2002 sur la protection du consommateur et la Loi de 2002 sur le courtage commercial et immobilier et apportant des modifications corrélatives à d’autres lois.
The Chair (Mr. Garfield Dunlop): Good afternoon, everyone, and welcome to the Standing Committee on the Legislative Assembly. We’re here to discuss clause-by-clause consideration of Bill 55, An Act to amend the Collection Agencies Act, the Consumer Protection Act, 2002 and the Real Estate and Business Brokers Act, 2002 and to make consequential amendments to other Acts.
I’d like to welcome the committee members here today. We will return now to the motions that were stood down at the last meeting. I should also point out to everyone that, under the programming motion, we have three hours today to complete our amendments. If we don’t get those amendments completed, those amendments will be amended as up to date—so if we don’t get a few amended for some reason, they have to stay that way, okay? That’s under the programming motion agreed to by the House leaders.
The Chair (Mr. Garfield Dunlop): Okay, so hold on a second.
The Chair (Mr. Garfield Dunlop): Oh, I’m sorry. We’ll go back to that, yes.
It’s just motion 0.3. Mr. McDonell, can you go ahead with that one? You’ll have to read it into the record again, please.
Mr. Jim McDonell: Okay, so you’re looking at 0.3?
The Chair (Mr. Garfield Dunlop): It’s 0.3, the PC motion, and that part was stood down before, because we had a replacement after that.
Mr. Jim McDonell: We were looking at withdrawing that in favour of the government motion or amendment that’s coming through.
The Clerk of the Committee (Mr. Trevor Day): Okay, so 0.3 is withdrawn?
The Chair (Mr. Garfield Dunlop): You’re withdrawing 0.3?
Mr. Jim McDonell: There’s a government amendment that looks after most of what we were looking at.
The Clerk of the Committee (Mr. Trevor Day): Okay, so that’s withdrawn.
The Chair (Mr. Garfield Dunlop): So that’s withdrawn. Okay.
So then we go to the government motion, Mr. Dhillon? And that’s—let me make sure I got the right one on this.
The Chair (Mr. Garfield Dunlop): Hold on. Just excuse me a sec. Which one is this again?
The Clerk of the Committee (Mr. Trevor Day): Which one are you reading now?
The Clerk of the Committee (Mr. Trevor Day): What number?
Mr. Vic Dhillon: It’s 0.3.0.1R.
The Clerk of the Committee (Mr. Trevor Day): So 0.3.0.1R—this is the replacement. Okay, that’s in the secondary package that everyone received.
The Chair (Mr. Garfield Dunlop): That’s on everybody’s desk? Okay.
Mr. Vic Dhillon: Is that the right one?
The Clerk of the Committee (Mr. Trevor Day): Yes.
The Chair (Mr. Garfield Dunlop): Yes, go ahead.
Mr. Jim McDonell: Just to clarify the number in the top corner that he’s reading, is it 0.3.0.1R? Which one is he reading now?
The Chair (Mr. Garfield Dunlop): He’s reading 0.3.0.1R.
The Chair (Mr. Garfield Dunlop): Okay?
Mr. Vic Dhillon: Thank you. I’ll start all over again.
The Chair (Mr. Garfield Dunlop): Yes, thank you very much. Go ahead.
I think this just combines the previous motions and it’s just coming up with a compromise. I think the words “reasonably necessary” were necessary to improve this motion.
The Chair (Mr. Garfield Dunlop): Okay. Thank you very much. I go now to the official opposition. Any questions on it?
Mr. Jim McDonell: No. We’re fine.
The Chair (Mr. Garfield Dunlop): Okay, then we’ll go to the third party, Jagmeet?
Mr. Jagmeet Singh: I reviewed this and I think it satisfies the concern about disclosure of the source of the funding. My only issue, and I ask both counsels to respond to this—and let’s just get this right at the beginning for the record: Hartung?
Mr. Neil Hartung: That’s right.
Mr. Jagmeet Singh: Got it.
Ms. Cindy Forster: I made him practise.
Mr. Jagmeet Singh: I did. I was reviewing Hansard and I think I called you all sorts of different names. Every time, I changed the counsel’s name, so I felt bad.
Mr. Neil Hartung: As a bureaucrat, it’s my duty to keep pace, so I’m quite happy with that.
Mr. Jagmeet Singh: I think I assisted you in your duties, then.
Could I ask both the legislative counsel and the ministry counsel their opinion on the use of the words “reasonably necessary”? I’m going to propose an amendment, if you agree with me, that I think “reasonably” weakens the word “necessary,” and it opens it up to interpretation, and that just having “all information that is necessary to explain the sources” is stronger. Would you provide your input on whether “reasonably” weakens the term “necessary” and does it open up the opportunity to have a grey area where you have to assess what is reasonable and what is not? Mr. Wood, please.
Mr. Michael Wood: Very often in law, the standard is used of a “reasonable person.” I’m not sure exactly whether there is a huge difference between “that is necessary” and “that is reasonably necessary,” because I suspect—and Mr. Hartung, the ministry counsel, may want to confirm this or modify it—that a court would, if faced with interpreting the phrase “that is necessary,” would take into consideration the circumstances and not view something as “necessary” if it wasn’t “reasonable” in the circumstances. So I don’t see a huge amount of difference there, because, in law, the standard of a “reasonable person” is supposed to be a somewhat objective standard, anyway.
Mr. Jagmeet Singh: Mr. Hartung?
Mr. Neil Hartung: I agree with Mr. Wood. I would also note that the Collection Agencies Act has a registrar who’s responsible for licensing matters. So the word “reasonable” allows that registrar to communicate to the licensees what is determined to be reasonable, whereas if it’s an absolute standard like “necessary,” I think you invite multiple interpretations of what truly is necessary. It’s a grant, almost, of discretion to the person who administers the statute, who is the registrar, to say what they think is reasonable in the circumstances. It would be up to the licensee to try and oppose that in some fashion, likely through a hearing at the Licence Appeal Tribunal or through the imposition of terms and conditions.
Mr. Jagmeet Singh: Okay, I am satisfied with that. I don’t think it’s necessary to add an amendment, so I’m okay with moving to the next step. Those are all my comments. Thank you very much.
The Chair (Mr. Garfield Dunlop): Okay. Any questions, government members?
Mr. Vic Dhillon: No questions.
The Chair (Mr. Garfield Dunlop): Okay. Based on that, then, I’m going to call the vote on 0.3.0.1R. All those in favour of that amendment? That’s carried.
Mr. Vic Dhillon: We’ll be withdrawing the original. I believe the Clerk is aware of that.
Shall schedule 1, section 4, as amended, carry? Carried.
That’s the whole section. That’s carried. We’ll now go to schedule 1, section 9. The PC motion had been withdrawn.
The Clerk of the Committee (Mr. Trevor Day): We’re on 0.8.
Ms. Cindy Forster: Of the package?
The Clerk of the Committee (Mr. Trevor Day): Of the package.
The Chair (Mr. Garfield Dunlop): Of the package, yes. It was stood down, though, wasn’t it?
The Clerk of the Committee (Mr. Trevor Day): It was.
The Chair (Mr. Garfield Dunlop): It was stood down at the previous meeting, so we’re going back to schedule 1, section 9. It’s a PC motion.
The Clerk of the Committee (Mr. Trevor Day): It’s 0.8.
The Chair (Mr. Garfield Dunlop): It’s 0.8. Mr. McDonell, we understand this motion was dependent on an earlier motion that did not pass.
Mr. Jim McDonell: You’re talking about schedule 1, section 9?
The Chair (Mr. Garfield Dunlop): Yes.
Mr. Jim McDonell: It was a housekeeping item, so it belonged to the other one, so we’ll have to withdraw. The other one didn’t pass.
The Chair (Mr. Garfield Dunlop): Okay, so you’re withdrawing this?
The Chair (Mr. Garfield Dunlop): Okay. In that case, then, shall schedule 1, section 9, carry—as amended, carry? No, it’s not amended, is it?
The Chair (Mr. Garfield Dunlop): I’m sorry. It was amended by another motion.
Shall schedule 1, section 9, as amended, carry? Carried.
Mr. Jim McDonell: What number?
The Chair (Mr. Garfield Dunlop): The whole schedule, schedule 1.
Shall all of schedule 1, as amended, carry? Carried.
We’re now going to schedule 2, section 4.
Mr. Jagmeet Singh: Which number is that?
The Chair (Mr. Garfield Dunlop): It’s your motion, 0.11.1. I believe that was withdrawn before, Mr. Singh.
The Clerk of the Committee (Mr. Trevor Day): It was deferred, so it’s on. Mr. Singh.
Mr. Jagmeet Singh: Sure. I’ll move the motion.
The Chair (Mr. Garfield Dunlop): Okay.
“(1.1) The person who contacts the consumer on behalf of the supplier for the purpose of making the confirmation described in clause (1)(a) shall not be the same person who enters into the agreement with the consumer on behalf of the supplier.
The Chair (Mr. Garfield Dunlop): More time to explain that?
Mr. Jagmeet Singh: Sure. It sets out when the 20 days begin, so when the cooling-off period will commence, as well as a requirement that the individual or the person who signs the agreement can’t be the same person who actually confirms the agreement, to add that extra level of consumer protection; and then a clause regarding the concern around a consumer who has made or entered into an agreement, that during the cooling-off period there shouldn’t be any further soliciting that goes on during that period of time. There should be a cooling-off period that also precludes soliciting. Those are the components.
The Chair (Mr. Garfield Dunlop): Okay. We’ll go to the government members. Any questions on it?
Mr. Vic Dhillon: This is a pretty reasonable motion. We tried to work with the ministry on the wording. We weren’t able to come up with the appropriate wording. We somewhat agree with this, but we feel that this would be better dealt with through regulations.
The Chair (Mr. Garfield Dunlop): Okay. Any other questions from the government members?
Okay, the official opposition: Any questions on this?
Mr. Jim McDonell: You guys okay with it? Do it with a regulation.
The Chair (Mr. Garfield Dunlop): Okay. Further questions on this?
Mr. Vic Dhillon: Perhaps the ministry counsel may want to explain.
Mr. Neil Hartung: It does introduce some changes to how the cooling-off period generally works. The general rule for the cooling-off period is, once you receive a copy of the agreement, the cooling-off period starts. This amendment would say that the cooling-off period essentially doesn’t start until the verification call is made, which potentially lengthens the period to an uncertain time frame. That was one of the things that we were struggling with: that you wouldn’t be able to have that certainty as to when the cooling-off period actually begins and finishes. When disclosing to the consumer when they’re going to receive this brand new rental in their house, they won’t be able to say with any degree of certainty that it’s going to be on the 25th or 26th, or it might be on the 30th. For that reason, from sort of a practical, pragmatic perspective of how to implement this amendment, we ran out of time and out of the ability to solve this problem.
Mr. Vic Dhillon: Thank you.
The Chair (Mr. Garfield Dunlop): Mr. McDonell.
Mr. Jim McDonell: Yes, we have a problem with this because it allows the incumbent, I guess, to contact the consumer, but it doesn’t allow the direct seller to contact the consumer. We think that’s a bit of a disconnect. I think that if there are negotiations going on in the background, we don’t believe that—like some of the other agreements we’ve seen by this government and agreed to, the incumbent shouldn’t be allowed to take on aggressive resale tactics. But not to allow the original direct seller to be involved: We think that’s a problem.
The Chair (Mr. Garfield Dunlop): Okay. Any further comments from anyone?
Those in favour of the amendment? Those opposed? That doesn’t carry.
Mrs. Amrit Mangat: It carried?
The Chair (Mr. Garfield Dunlop): It doesn’t carry.
We’ll now go to the next motion. That’s the PC motion. That’s 0.13R, in that same area. Mr. McDonell.
We feel that this is a clearer motion, and it’s really talking about contacting the customer when it’s terminated.
The Chair (Mr. Garfield Dunlop): Okay. Any other comments, Mr. McDonell?
Mr. Jim McDonell: Not at this time.
The Chair (Mr. Garfield Dunlop): Any questions from the third party on this motion, this amendment? No questions?
Any questions from the government members?
Mr. Vic Dhillon: Just that we won’t be supporting this because it hinders fair business practices. Our intention is to strike the right balance, so we will not be supporting this.
The Chair (Mr. Garfield Dunlop): Okay. Mr. McDonell?
Mr. Jim McDonell: The purpose of this is—we’re talking about trying to promote competition. Some of the small suppliers—we’re finding, or hearing about aggressive retention activities that really go against the ability for these direct sellers to actually make a sale. They’re not allowed to contact, according to this legislation, during the 20-day period, so really, you’re never going to see this competition take place that I think this bill is trying to do.
The Chair (Mr. Garfield Dunlop): Okay. Further questions from anyone?
Those in favour of Mr. McDonell’s motion? Those opposed? You’re opposed?
Mr. Jagmeet Singh: No, we’re in favour.
The Chair (Mr. Garfield Dunlop): Okay, you’re in favour.
The Chair (Mr. Garfield Dunlop): We had the hands go up in between here. Let me do this again: Those in favour of Mr. McDonell’s motion?
The Chair (Mr. Garfield Dunlop): And those opposed? Everyone here.
Okay, I’ll be supporting the bill in its original form, so the motion does not pass.
Mr. McDonell, on the next one, will you be withdrawing your previous motion?
Mr. Jim McDonell: The previous one?
Mr. Jim McDonell: We replaced it.
The Chair (Mr. Garfield Dunlop): Okay. That’s withdrawn.
Committee, shall schedule 2, section 4, carry? It’s carried.
We’ll now go to schedule 2, section 14.1. We have an NDP motion: Mr. Singh.
The Chair (Mr. Garfield Dunlop): Pardon me? I’m sorry.
Mr. Jagmeet Singh: I have to clarify something with legislative counsel and yourself, and we might be able to withdraw this motion—one sec.
Mr. Jagmeet Singh: Yes. I’m just asking for a five-minute recess to clarify something, so it’s not encumbering anyone in an awkward way.
The Chair (Mr. Garfield Dunlop): Okay. Can we agree to a five-minute recess, everyone?
Mr. Bas Balkissoon: Okay. Sure.
The Chair (Mr. Garfield Dunlop): Okay, a five-minute recess.
The committee recessed from 1231 to 1236.
The Chair (Mr. Garfield Dunlop): Okay, everyone. Thanks for that recess.
Mr. Singh, we’re back to you again.
Mr. Jagmeet Singh: Thank you very much. On this motion, motion 14.1, I’m not moving this motion because there is another motion that deals with the same matter and it addresses the right section. So I’m not moving this 14.1.
The Chair (Mr. Garfield Dunlop): It’s withdrawn. So we’ll move now to schedule 2, section 5. We have a PC replacement motion, which is 0.15R. Mr. McDonell, go ahead, please.
“(b) the prescribed circumstances exist.
“(iv) shall not make any charge to the consumer in connection with the cancelled agreement except a monthly rental charge prorated for the time from the date of installation of the heater to the date of the cancellation.
“(3) If a supplier supplies a water heater in contravention of subsection (1), the consumer exercises the right to cancel the direct agreement under clause 43(1)(a) or under clause (2)(a) and the consumer incurs charges from a third party that are related to the supplier’s contravention, the supplier is liable to reimburse the consumer for the amount of those charges.
“(4) The consumer may commence an action, in accordance with section 100, to recover the amount described in subsection (3) and may set off the amount against any amount owing to the supplier under any consumer agreement between the consumer and the supplier, other than the direct agreement described in subsection (1).
The Chair (Mr. Garfield Dunlop): Any comments or explanations, Mr. McDonell?
Mr. Jim McDonell: Clause (a), the first one, just adds the waiver in there that allows that to happen. Clause (b) is just housekeeping. So if we install within the 20 days without consent, we’re looking at extending that. For instance, if the heater gets installed on day 19, it doesn’t give much left for the consumer to actually make his—if the cooling-off period only has one day left, there may not be time to actually fulfill that or follow through on it, so that just gives them more time for that to happen.
As we go down through it—just looking after the cost that the consumer pays, that he is reimbursed in full, so he’s not out of pocket for any of these things.
If we go back to the end, the last part, it just allows the minister to designate other goods and special treatment. Right now, it only applies to hot water heaters.
The Chair (Mr. Garfield Dunlop): Questions from the third party?
I think, at the end of the day, though, the cooling-off period where there is no installation is important for consumer protection, and there have been consumer advocacy groups that have said that you shouldn’t be able to waive that cooling-off period. For those reasons, we’re not going to be able to support the amendment.
The Chair (Mr. Garfield Dunlop): Members of the government?
Mr. Vic Dhillon: We’ll be voting against it because, again, this could be better dealt with in regulations.
The Chair (Mr. Garfield Dunlop): Any other questions from anyone? All those in favour of Mr. McDonell’s amendment? Those opposed? That does not carry.
Mr. McDonell, we now go back to your original motion; we have it on the list here as well. Will you be withdrawing that?
Mr. Jim McDonell: We’ll withdraw the old one.
The Chair (Mr. Garfield Dunlop): Okay. Withdrawn.
We’ll now go to the NDP motion 0.15.1: Mr. Singh?
The Chair (Mr. Garfield Dunlop): Any more explanation you’d like on that?
Mr. Jagmeet Singh: Yes. It just provides some protection to the consumer if a supplier contravenes subsection 1. So if the supplier violates this code, there’s a remedy suggested. The remedy is that the supplier would have to return the consumer to their whole condition, so basically put them back in the position that they were in before.
It just adds an extra layer of protection, specifically with third-party charges. If the agreement is with another individual but there are some ancillary charges, some other charges that are also a part of that, those third-party charges are also covered by the person who contravenes the act. If you violate the act, you have to return the person to their whole condition, including any other charges that may have flowed from it.
Mr. Vic Dhillon: Okay. I’ll continue. It’s a fairly good motion that we will support.
The Chair (Mr. Garfield Dunlop): You’ll support? That’s good. I’m glad you’ve got staffers here.
The Chair (Mr. Garfield Dunlop): Okay. In that case, all those in favour of Mr. Singh’s amendment? That’s carried.
The next item is government motion 0.15.1.1: Go ahead, Mr. Dhillon.
Mr. Vic Dhillon: We’ll be withdrawing this motion.
The Chair (Mr. Garfield Dunlop): This is being withdrawn, 0.15.1.1?
The Chair (Mr. Garfield Dunlop): Withdrawn.
That takes us to: Shall schedule 2, section 5, as amended, carry? That’s carried.
Schedule 2, section 5.1—it’s a new section. It’s a PC motion.
Mr. Toby Barrett: It’s found on page 0.16: schedule 2, section 5.1 of the bill.
“‘Application to other kinds of consumer agreements re water heaters etc.
“‘(b) are Internet agreements, remote agreements or any other kinds of agreements that are not direct agreements.
“‘(2) In the event of conflict between subsection (1) and sections 37 to 40 (Internet agreements), subsection (1) prevails.
The Chair (Mr. Garfield Dunlop): Mr. Barrett, we have to rule this out of order. It’s outside the scope of the intention of the bill.
Mr. Jim McDonell: Just a comment on it. We can’t comment on it?
The Chair (Mr. Garfield Dunlop): We’ve ruled it out of order, yes.
Mr. Jagmeet Singh: A question: Is there a way to ask for unanimous consent to open up this section?
The Clerk of the Committee (Mr. Trevor Day): With unanimous consent, the committee can consider a motion ruled out of order.
Mr. Bas Balkissoon: Nobody has had time to look at this in depth.
Mr. Jim McDonell: Yes, seeking unanimous.
The Chair (Mr. Garfield Dunlop): So I’m asking for unanimous consent so he can discuss this—for the committee to consider this motion.
Mr. Jagmeet Singh: Mr. Chair, some questions on that: Can he provide reasons? I’m going to ask for something quite similar. This overlaps with something that I’m going to be asking for later on.
The Clerk of the Committee (Mr. Trevor Day): Unanimous consent first. If he gets it, he moves it and we move on.
Mr. Jagmeet Singh: I see.
Mr. Jim McDonell: So I can ask for unanimous consent that we review this amendment?
The Chair (Mr. Garfield Dunlop): Is there unanimous consent that we review this at all? I’m not getting unanimous consent; no. We’ll move on to the next motion.
Schedule 2, section 5.1, and that is an NDP motion. That’s 0.16.1.
Mr. Jagmeet Singh: Before I move this motion—because once I move it, it will be ruled out of order, so I’m not moving it yet. Just as a friendly discussion with my fellow colleagues here as MPPs, there’s a motion that we may discuss, in a couple of seconds, that talks about opening up the protection provided by this bill, which is for direct agreements, and there’s a defined remote agreement. Remote agreements are basically anything but direct agreements, so it could be telephone—I understand it should apply to Internet as well.
Remote agreements are basically not direct agreements. Direct agreements are door-to-door. My argument is going to be that we should provide the same protections that we provide to people door-to-door to people who are reached through the telephone or reached through other means that are remote. The “remote agreement” definition is included in the act.
Mr. Bas Balkissoon: Chair, you can’t do that.
The Chair (Mr. Garfield Dunlop): We don’t have an amendment on the floor. So why don’t you make the amendment and we’ll ask each of them to make a comment. Okay?
Mr. Jagmeet Singh: As soon as I make the amendment, it can be ruled out of order. Then I won’t be able to discuss this. So if I just could ask a quick question and then I’ll move ahead as you like, Mr. Chair.
Would you agree with that comment, Mr. Wood, that this would apply to water heaters?
Mr. Michael Wood: Am I allowed to answer?
The Chair (Mr. Garfield Dunlop): He’s not allowed to answer. There’s no motion to comment on.
So you can go ahead, if you want; I am going to probably rule it out of order, though.
Mr. Jagmeet Singh: I’m sure I get some points for creativity, though.
In light of the circumstances, would I be able to ask for a very brief recess of two minutes, just to ask a question so that I could make a submission?
The Chair (Mr. Garfield Dunlop): If it’s agreed, everybody?
Mr. Bas Balkissoon: Two minutes? Sure.
The Chair (Mr. Garfield Dunlop): Two-minute recess, fine.
The committee recessed from 1250 to 1255.
The Chair (Mr. Garfield Dunlop): The recess time is up. I’ll go back to Mr. Singh again.
Mr. Jagmeet Singh: That was very helpful. Thank you so much. I’m just going to move it. It will be ruled out of order, and that’s okay. We’ll just leave it at that.
The Chair (Mr. Garfield Dunlop): So it’s withdrawn at this point?
Mr. Jagmeet Singh: I’m just going to read it out and it will be ruled out of order. I’m not going to ask for unanimous consent.
The Chair (Mr. Garfield Dunlop): Thank you very much, Mr. Singh. I’m going to rule it out of order because section 47(1) of the bill is not open.
We’ll now go to schedule 2.
Mr. Bas Balkissoon: Thank you, Chair.
Mr. Bas Balkissoon: Thank you.
The Chair (Mr. Garfield Dunlop): We’ll now go to schedule 2, section 6. Shall schedule 2, section 6, carry? It’s carried.
Shall schedule 2, section 7, carry? Carried.
Finally, shall schedule 2, as amended, carry?
Mr. Jim McDonell: Can we have a recorded vote on that?
The Chair (Mr. Garfield Dunlop): On this one?
The Chair (Mr. Garfield Dunlop): Okay. A recorded vote has been asked for. On this one, we’re asking for a recorded vote.
The Chair (Mr. Garfield Dunlop): The schedule carries, as amended.
Schedule 3—we’ve got a number of amendments here. Amendment 0.17 by the PCs: Mr. McDonell.
Mr. Jim McDonell: We’ll be withdrawing our existing one. We have a revised one in. Do we want to just read the revision?
The Clerk of the Committee (Mr. Trevor Day): Do the revised one.
The idea around this is that the potential purchaser would be able to ask about offers before he actually made a binding offer.
The Chair (Mr. Garfield Dunlop): Any questions from the third party on this?
Mr. Jagmeet Singh: My concern is that I think we’re contemplating—OREA requested that we have an amendment so that they don’t require that brokers or brokerages hold on to offers, because there are certain issues around holding on to offers. They can keep track of the offers or another document, as prescribed, and we’re contemplating an amendment for that. In the case where the brokerage doesn’t have the actual offer, but has another document, how would they be able to then fulfill this inquiry? If they don’t actually have the offer but they have the other document, would that still satisfy the inquiry? Because they don’t have a number of written offers. They may have a number of other written documents.
The Chair (Mr. Garfield Dunlop): Please feel free to respond to that.
Mr. Jim McDonell: There’s a good chance that we may amend this bill. I think there’s a common interest to make sure that any amendments to offers or any counter-offers are made in a simpler form. This would apply to those as well.
We wanted to have it so that somebody can request to know if there are any official offers on a property. Right now, I believe the way the legislation is written, they have to make an offer before they can actually inquire. We’re just making it so they could actually inquire if there are offers before they make a binding offer. I think that’s kind of the practice today, but this is a problem. By putting this in legislation, it just allows them to do that.
Mr. Vic Dhillon: Chair, we will not be supporting this.
The Chair (Mr. Garfield Dunlop): Okay. Any other comments from anyone on Mr. McDonell’s amendment? Those in favour of the amendment?
Mr. Jim McDonell: Recorded vote, please.
The Chair (Mr. Garfield Dunlop): Recorded vote.
The Chair (Mr. Garfield Dunlop): Okay, so that won’t carry. It changes the format. That one is lost.
The NDP motion is next: 0.17.1.
Mr. Jagmeet Singh: Thank you very much, Mr. Chair. Can I just confirm that the government is moving a motion that addresses this same issue?
Mr. Jagmeet Singh: We’re happy with the wording that the government is going to be proposing, so we don’t need to move our motion, then.
The Chair (Mr. Garfield Dunlop): Withdrawn?
Mr. Jagmeet Singh: Yes, I’m not moving it.
The Chair (Mr. Garfield Dunlop): Okay. Government motion number 1.
The Chair (Mr. Garfield Dunlop): Would you like any time to explain that?
Mr. Vic Dhillon: This is supported by OREA. As Mr. Singh stated, this is, I think, worded a bit better and clarifies the issues around it.
The Chair (Mr. Garfield Dunlop): Hold on.
The Clerk of the Committee (Mr. Trevor Day): Are you reading number 2 or number 1?
The Chair (Mr. Garfield Dunlop): It’s government motion number 1.
Mr. Bas Balkissoon: That’s why he read the wrong one.
The Chair (Mr. Garfield Dunlop): Oh, I apologize. We’ll do that again, okay?
Mr. Jagmeet Singh: Yes, because it wasn’t what I was looking at.
The Chair (Mr. Garfield Dunlop): I want him to re-read that one in.
Mr. Vic Dhillon: I will, Chair.
The Chair (Mr. Garfield Dunlop): Yes, you’re right.
Mr. Vic Dhillon: —so I apologize.
The Chair (Mr. Garfield Dunlop): That’s our fault, too. Let’s do number 1 again.
Mr. Vic Dhillon: That’s fine. I move that subsection 35.1(2) of the Real Estate and Business Brokers Act, 2002, as set out in section 1 of schedule 3 to the bill, be amended by adding “or copies of all other prescribed documents related to those offers” after “real estate”.
The Chair (Mr. Garfield Dunlop): Okay. Any further explanation on government motion 1?
Mr. Vic Dhillon: Again, it’s the same as I explained before, Chair.
The Chair (Mr. Garfield Dunlop): Okay. Mr. McDonell? Any questions from the official opposition?
Mr. Jagmeet Singh: Why was it broken up over two motions, 1 and 2, versus keeping it all in 2?
Mr. Bas Balkissoon: They’re different clauses.
Mr. Vic Dhillon: Yes. Could legislative counsel explain?
Mr. Michael Wood: There are two government motions involved here. One affects subsection 35.1(2) of the Real Estate and Business Brokers Act. The second motion affects section 35.1(4), which actually is identical to an NDP motion that follows it. The NDP motion is labelled 2.1, and that seems to be identical to government motion 2.
Mr. Bas Balkissoon: But 1 and 2 are different.
The Chair (Mr. Garfield Dunlop): They’re somewhat different, and number 2 resembles yours, Mr. Singh.
Any questions, then, on government motion 1? Those in favour of it? That’s carried.
Okay, government motion 2: Mr. Dhillon?
The Chair (Mr. Garfield Dunlop): I know you’ve read it once before, but do it again, and we’ll just make sure it’s okay for them.
Mr. Vic Dhillon: Not a problem.
The Chair (Mr. Garfield Dunlop): Okay. We’ve heard your explanation. Would you like to explain any more on that?
The Chair (Mr. Garfield Dunlop): Okay. Any questions from the official opposition on this? Or from Mr. Singh?
Mr. Jagmeet Singh: No, thank you.
The Chair (Mr. Garfield Dunlop): Okay. All those in favour of that? That’s carried.
We’ll now go to 2.1, the NDP motion.
Mr. Jagmeet Singh: I’ll withdraw that.
The Chair (Mr. Garfield Dunlop): It’s identical and it’s out of order.
Mr. Jagmeet Singh: It is identical to the previous one.
The Chair (Mr. Garfield Dunlop): Thank you. Okay, then. Shall schedule 3, section 1, as amended, carry? Carried.
Schedule 3, section 2: Shall schedule 3, section 2, carry? Carried.
Schedule 3, section 3: Shall schedule 3, section 3, carry? Carried.
Okay. We stood down sections 1, 2 and 3, so we’ve got to go back to those for a moment and make sure they all get passed properly here.
Shall sections 1 to 3 carry? Carried. All right.
Shall Bill 55, as amended, carry? Carried.
Shall I report the bill, as amended, to the House?
Mr. Toby Barrett: Chair, just a comment before we report: I know that a number of amendments were not passed, or were not felt worthy of being incorporated within the actual legislation. At least one amendment that was passed was felt to be already in regulation, not legislation, although it did become legislation courtesy of this committee. I think it was the credit counselling debt consolidation amendment, I guess, of the last week, where a credit counsellor has to maintain money in Ontario that they receive.
This difference between legislation and regulation—I know this committee has done a lot of work on these amendments. The only thing I would offer up is if any of these amendments were felt worthy by staff to be reviewed or to be considered as regulation down the road, I’d just like to offer that up, if the committee felt that was appropriate.
The Chair (Mr. Garfield Dunlop): Well, thank you for your advice on it. You’re asking for them to consider it, and we appreciate your advice.
Mr. Toby Barrett: I’m sorry?
Mr. Bas Balkissoon: Staff are all here, and they’re listening.
Mr. Toby Barrett: Okay. I guess that’s maybe good enough for me, is it?
The Chair (Mr. Garfield Dunlop): I guess so.
Mr. Toby Barrett: We don’t need a motion or anything?
The Chair (Mr. Garfield Dunlop): Shall I report the bill, as amended, to the House? Carried.
We are back here next week—at what time?
The Clerk of the Committee (Mr. Trevor Day): Twelve noon.
The Chair (Mr. Garfield Dunlop): Twelve noon for Bill 49.
Thank you very much, everybody, for your time. We’re adjourned.
The committee adjourned at 1309.
2005-04-18 Assigned to CISCO TECHNOLOGY, INC. Assignment of assignors interest (see document for details). Assignors: JOSHI, SATISH; DANNER, RYAN ALAN; DODRILL, LEWIS DEAN; MARTIN, STEVEN J.
A unified web-based voice messaging system provides voice application control between a web browser and an application server via a hypertext transport protocol (HTTP) connection on an Internet Protocol (IP) network. The application server, configured for executing a voice application defined by XML documents, selects an XML document for execution of a corresponding voice application operation based on a determined presence of a user-specific XML document that specifies the corresponding voice application operation. The application server, upon receiving a voice application operation request from a browser serving a user, determines whether a personalized, user-specific XML document exists for the user and for the corresponding voice application operation. If the application server determines the presence of the personalized XML document for a user-specific execution of the corresponding voice application operation, the application server dynamically generates a personalized HTML page having media content and control tags for personalized execution of the voice application operation; however, if the application server determines an absence of the personalized XML document for the user-specific execution of the corresponding voice application operation, the application server dynamically generates a generic HTML page for generic execution of the voice application operation. Hence, a user can personalize any number of voice application operations, enabling a web-based voice application to be completely customized or merely partially customized.
This application is a continuation of application No. 09/567,223, filed May 9, 2000, which issued on May 31, 2005 as U.S. Pat. No. 6,901,431.
This application claims priority from provisional application No. 60/152,316, filed Sep. 3, 1999, the disclosure of which is incorporated in its entirety herein by reference.
The present invention relates to generating and executing voice enabled web applications within a hypertext markup language (HTML) and hypertext transport protocol (HTTP) framework.
The evolution of the public switched telephone network has resulted in a variety of voice applications and services that can be provided to individual subscribers and business subscribers. Such services include voice messaging systems that enable landline or wireless subscribers to record, playback, and forward voice mail messages. However, the ability to provide enhanced services to subscribers of the public switched telephone network is directly affected by the limitations of the public switched telephone network. In particular, the public switched telephone network operates according to a protocol that is specifically designed for the transport of voice signals; hence any modifications necessary to provide enhanced services can only be done by switch vendors that have sufficient know-how of the existing public switched telephone network infrastructure.
An open standards-based Internet protocol (IP) network, such as the World Wide Web, the Internet, or a corporate intranet, provides client-server type application services for clients by enabling the clients to request application services from remote servers using standardized protocols, for example hypertext transport protocol (HTTP). The web server application environment can include web server software, such as Apache, implemented on a computer system attached to the IP network. Web-based applications are composed of HTML pages, logic, and database functions. In addition, the web server may provide logging and monitoring capabilities.
In contrast to the public switched telephone network, the open standards-based IP network has enabled the proliferation of web-based applications written by web application developers using an ever-increasing array of web development tools. Hence, the ever-increasing popularity of web applications and web development tools provides substantial resources for application developers to develop robust web applications in a relatively short time and in an economical manner. However, one important distinction between telephony-based applications and web-based applications is that telephony-based applications are state aware, whereas web-based applications are stateless.
In particular, telephony applications are state aware to ensure that prescribed operations between the telephony application servers and the user telephony devices occur in a prescribed sequence. For example, operations such as call processing operations, voicemail operations, call forwarding, etc., require that specific actions occur in a specific sequence to enable the multiple components of the public switched telephone network to complete the prescribed operations.
The web-based applications running in the IP network, however, are stateless and transient in nature, and do not maintain application state because maintaining application state requires interactive communication between the browser and the back-end database servers accessed by the browser via an HTTP-based web server. Rather, an HTTP server provides asynchronous execution of HTML applications, where a web application, in response to receiving a specific request in the form of a URL from a client, instantiates a program configured for execution of the specific request, sends an HTML web page back to the client, and terminates the program instance that executed the specific request. Storage of application state information in the form of a “cookie” is not practical because some users prefer not to enable cookies on their browser, and because the passing of the large amount of state information that would normally be required for voice-type applications between the browser and the web application would substantially reduce the bandwidth available to the client.
Commonly-assigned, copending application Ser. No. 09/480,485, filed Jan. 11, 2000, entitled Application Server Configured for Dynamically Generating Web Pages for Voice Enabled Web Applications (Attorney Docket 95-409), the disclosure of which is incorporated in its entirety herein by reference, discloses an application server that executes a voice-enabled web application by runtime execution of extensible markup language (XML) documents that define the voice-enabled web application to be executed. The application server includes a runtime environment that establishes an efficient, high-speed connection to a web server. The application server, in response to receiving a user request from a user, accesses a selected XML page that defines at least a part of the voice application to be executed for the user. The XML page may describe any one of a user interface such as dynamic generation of a menu of options or a prompt for a password, an application logic operation, or a function capability such as generating a function call to an external resource. The application server then parses the XML page, and executes the operation described by the XML page, for example dynamically generating an HTML page having voice application control content, or fetching another XML page to continue application processing. In addition, the application server may access an XML page that stores application state information, enabling the application server to be state-aware relative to the user interaction. Hence, the XML page, which can be written using a conventional editor or word processor, defines the application to be executed by the application server within the runtime environment, enabling voice enabled web applications to be generated and executed without the necessity of programming language environments.
Hence, web programmers can write voice-enabled web applications, using the teachings of the above-incorporated application Ser. No. 09/480,485, by writing XML pages that specify respective voice application operations to be performed. The XML documents have a distinct feature of having tags that allow a web browser (or other software) to identify information as being a specific kind or type of information. In particular, commonly-assigned, copending application Ser. No. 09/501,516, filed Feb. 1, 2000 entitled Arrangement for Defining and Processing Voice Enabled Web Applications Using Extensible Markup Language Documents (attorney docket 95-410), the disclosure of which is incorporated in its entirety herein by reference, discloses an arrangement for defining a voice-enabled web application using extensible markup language (XML) documents that define the voice application operations to be performed within the voice application. Each voice application operation can be defined as any one of a user interface operation, a logic operation, or a function operation. Each XML document includes XML tags that specify the user interface operation, the logic operation and/or the function operation to be performed within a corresponding voice application operation, the XML tags being based on prescribed rule sets that specify the executable functions to be performed by the application runtime environment. Each XML document may also reference another XML document to be executed based on the relative position of the XML document within the sequence of voice application operations to be performed. The XML documents are stored for execution of the voice application by an application server in an application runtime environment.
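For illustration only (the tag vocabulary below is invented, not the patent's actual schema), such an application-defining XML document and its parsing might be sketched in Python as:

```python
import xml.etree.ElementTree as ET

# Hypothetical menu document: its tags identify a user-interface
# prompt and tie each caller keypress to the next XML document in
# the sequence of voice application operations.
MENU_XML = """
<menu name="mainmenu">
  <prompt src="greeting.wav">You have new messages.</prompt>
  <option key="1" next="playmsg.xml">Play messages</option>
  <option key="2" next="sendmsg.xml">Send a message</option>
</menu>
"""

root = ET.fromstring(MENU_XML)
# Map each keypress to the XML document to execute next.
next_documents = {o.get("key"): o.get("next") for o in root.iter("option")}
```

An application runtime environment would look up the caller's keypress in `next_documents` to fetch and execute the referenced XML document, continuing the sequence of voice application operations.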
Hence, the XML document described in the above-incorporated application Ser. No. 09/501,516, which can be written using a conventional editor or word processor, defines the application to be executed by the application server within the runtime environment, enabling voice enabled web applications to be generated and executed without the necessity of programming language environments.
Web-based service providers have offered personalized web pages to attract users to their web sites. In particular, web applications today are written using a combination of HTML user interface pages and common gateway interface (CGI) programs, enabling the user interface to be customized through HTML without disrupting the application logic and associated functions contained in the CGI program. Two classes of customization are typically found in personalized web pages: the first is interaction by a user with a web application that provides a presence for the user; the second involves a user interacting with an application that provides assistance for the user. An example of the first class of customization is a web home page that provides a customized presence for the user that others, who may not have any applications of their own, can interact with and select options from. An example of the second class of customization is pages such as “My Yahoo” or “My Netscape” that provide a customized presence for the user to interact with the web application in a manner specific to the user's needs.
The personalized web pages, however, require a client-side data record (i.e., a “cookie”) to be sent between the browser and the web server. In particular, cookies are needed to enable a web server to track a user's status as the user moves from one web page to another; as the user navigates through different web pages, the web server updates the user's cookie, eliminating the necessity for the user to identify himself or herself (by user name and password) for each web page access.
There is a need for an arrangement that enables a user to personalize his or her voice enabled web applications, especially without the necessity of client-side data records such as cookies.
These and other needs are attained by the present invention, where an application server, configured for executing a voice application defined by XML documents, selects an XML document for execution of a corresponding voice application operation based on a determined presence of a user-specific XML document that specifies the corresponding voice application operation. The application server, upon receiving a voice application operation request from a browser serving a user, determines whether a personalized, user-specific XML document exists for the user and for the corresponding voice application operation. If the application server determines the presence of the personalized XML document for a user-specific execution of the corresponding voice application operation, the application server dynamically generates a personalized HTML page having media content and control tags for personalized execution of the voice application operation; however, if the application server determines an absence of the personalized XML document for the user-specific execution of the corresponding voice application operation, the application server dynamically generates a generic HTML page, based on a generic XML page, for generic execution of the voice application operation. Hence, a user can personalize any number of voice application operations, enabling a web-based voice application to be completely customized or merely partially customized.
One aspect of the present invention provides a method in an application server for executing a voice application. The method includes receiving an HTTP request requesting a prescribed voice application operation from a user. The method also includes selectively executing one of a generic XML document that specifies the prescribed voice application operation and a user-specific XML document that specifies the prescribed voice application operation personalized for the identified user, based on a determined presence of the user-specific XML document, for generation of an HTML page having media content corresponding to the prescribed voice application operation. The selective execution of either a generic XML document or a user-specific XML document enables a user to personalize his or her voice application, as desired. Hence, a user can personalize selected XML pages in order to provide a personalized interface, as well as personalized voice application logic and voice application functions such as procedure calls to external databases. Hence, a user can create a voice home page to greet callers with customized options, and/or a customized interface for accessing and retrieving messages from the user's mailbox.
Another aspect of the present invention provides an application server configured for executing a voice application. The application server includes a hypertext transport protocol (HTTP) interface for receiving an HTTP request specifying execution of a prescribed voice application operation for an identified user. An application runtime environment is configured for dynamically generating, in response to the HTTP request, a first hypertext markup language (HTML) document having media content for execution of the voice application operation for the identified user based on execution of a selected XML document, the application runtime environment selecting one of a generic XML document that specifies the prescribed voice application operation and a user-specific XML document that specifies the prescribed voice application operation personalized for the identified user, based on a determined presence of the user-specific XML document. Hence, the application server generates an HTML page for execution of the voice application operation by a browser, based on the determined presence of a user-specific XML document, enabling a user to personalize their application interface and/or voice application functions.
Additional advantages and novel features of the invention will be set forth in part in the description which follows and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The advantages of the present invention may be realized and attained by means of instrumentalities and combinations particularly pointed out in the appended claims.
FIG. 1 is a block diagram illustrating a system enabling personalization of voice enabled web applications according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating development tools usable for personalization of the voice enabled web applications.
FIG. 3 is a diagram illustrating an XML document configured for defining a voice application operation for the application server of FIGS. 1 and 2.
FIG. 4 is a diagram illustrating a browser display of a form for user entry of voice application parameters.
FIG. 5 is a diagram illustrating in detail the application server of FIGS. 1 and 2 according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a system for executing a personalized voice application according to an alternative embodiment of the present invention.
FIGS. 7A and 7B are diagrams illustrating methods for executing a personalized voice application according to respective embodiments of the present invention.
FIG. 1 is a block diagram illustrating an architecture that provides unified voice messaging services and data services via an IP network using browser audio control according to an embodiment of the present invention, reproduced from FIG. 3 of the above-incorporated application Ser. No. 09/501,516. The clients 42 a and 42 b, referred to herein as “fat clients” and “thin clients”, respectively, have the distinct advantage that they can initiate requests using IP protocol to any connected web server 64 to execute part or most of the applications 44 on behalf of the clients. An example of a fat client 42 a is an e-mail application on a PC that knows how to run the application 44 and knows how to run the IP protocols to communicate directly with the messaging server via the packet switched network 50. An example of a thin client 42 b is a PC that has a web browser; in this case, the web browser 56 can use IP protocols such as HTTP to receive and display web pages generated according to hypertext markup language (HTML) from server locations based on uniform resource locators (URLs) input by the user of the PC.
As shown in FIG. 1, each of the clients (tiny clients, skinny clients, thin clients and fat clients) are able to communicate via a single, unified architecture 60 that enables voice communications services between different clients, regardless of whether the client actually has browser capabilities. Hence, the fat client 42 a and the thin client 42 b are able to execute voice enabled web applications without any hardware modification or any modification to the actual browser; rather, the browsers 56 in the clients 42 a and 42 b merely are provided with an executable voice resource configured for providing browser audio control, described below.
The user devices 18 a, 18 b, and 18 c, illustrated as a cordless telephone 18 a, a fax machine 18 b having an attached telephone, and an analog telephone 18 c, are referred to herein as “skinny clients”, defined as devices that are able to interface with a user to provide voice and/or data services (e.g., via a modem) but cannot perform any control of the associated access subnetwork.
The wireless user devices 18 d, 18 e, and 18 f, illustrated as a cellular telephone (e.g., AMPS, TDMA, or CDMA) 18 d, a handheld computing device (e.g., a 3-Com Palm Computing or Windows CE-based handheld device) 18 e, and a pager 18 f, are referred to as tiny clients. “Tiny clients” are distinguishable from skinny clients in that the tiny clients tend to have even less functionality in providing input and output interaction with a user, and rely exclusively on the executable application in an access subnetwork to initiate communications; in addition, tiny clients may not be able to send or receive audio signals such as voice signals at all.
Hence, the skinny clients 18 a, 18 b, and 18 c and the tiny clients 18 d, 18 e, and 18 f access the unified voice messaging services in the unified network 60 via a proxy browser 62, configured for providing an IP and HTTP interface for the skinny clients and the tiny clients. In particular, browsers operate by interpreting tags within a web page supplied via an HTTP connection, and presenting to a user media content information (e.g., text, graphics, streaming video, sound, etc.) based on the browser capabilities; if a browser is unable to interpret a tag, for example because the browser does not have the appropriate executable plug-in resource, then the browser typically will ignore the unknown tag. Hence, the proxy browser 62 can provide to each of the skinny clients and tiny clients the appropriate media content based on the capabilities of the corresponding client, such that the cordless telephone 18 a and telephone 18 c would receive analog audio signals played by the proxy browser 62 and no text information (unless a display is available); the fax machine 18 b and pager 18 f would only receive data/text information, and the cellular telephone 18 d and the handheld computing device 18 e would receive both voice and data information. Hence, the proxy browser 62 interfaces between the IP network and the respective local access devices for the skinny clients and the tiny clients to provide access to the unified messaging network 60.
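The capability-based delivery performed by the proxy browser can be sketched as follows (the client names and capability sets here are illustrative simplifications of the device classes described above):

```python
# Illustrative capability sets per client class; a real proxy browser
# would derive these from the access subnetwork and device type.
CLIENT_CAPABILITIES = {
    "cordless_phone": {"audio"},          # analog audio only
    "fax": {"text"},                      # data/text only
    "pager": {"text"},                    # data/text only
    "cell_phone": {"audio", "text"},      # both voice and data
}

def deliverable(client: str, content: dict) -> dict:
    """Keep only the media content the client can actually present,
    mirroring how a browser ignores tags it cannot interpret."""
    caps = CLIENT_CAPABILITIES[client]
    return {kind: body for kind, body in content.items() if kind in caps}
```

For example, a page carrying both an audio prompt and a text summary would reach a fax machine as text only, while a cellular telephone would receive both.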
The proxy browser 62 and the web browsers 56 within the fat client 42 a and the thin client 42 b execute voice enabled web applications by sending data and requests to a web server 64, and receiving hypertext markup language (HTML) web pages from the web server 64, according to hypertext transport protocol (HTTP). The web server 64 serves as an interface between the browsers and an application server 66 that provides an executable runtime environment for XML voice applications 68. For example, the web server 64 may access the application server 66 across a common gateway interface (CGI), by issuing a function call across an application programming interface (API), or by requesting a published XML document or an audio file requested by one of the browsers 56 or 62. The application server 66, in response to receiving a request from the web server 64, may either supply the requested information in the form of an HTML page having XML tags for audio control by a voice resource within the browser, or may perform processing and return a calculated value to enable the browser 56 or 62 to perform additional processing.
The application server 66 accesses selected stored XML application pages (i.e., pages that define an application) and in response generates new HTML pages having XML tags during runtime, supplying the generated HTML pages having XML tags to the web server 64. Since multiple transactions may need to occur between the browser 56 or 62 and the application server 66, the application server 66 is configured for storing, for each existing user session, a data record, referred to as a “brownie”, that identifies the state of the existing user session; hence, the application server 66 can instantiate a procedure, return the necessary data, and terminate the procedure without the necessity of maintaining the instance running throughout the entire user session.
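A minimal sketch of such a registry of “brownies”, assuming a simple attribute-list record format (the patent does not fix a schema, so the field names here are invented):

```python
import xml.etree.ElementTree as ET

class BrownieRegistry:
    """Per-session XML data records ("brownies") holding application
    state, so each request can be served by a short-lived procedure
    that loads state, acts on it, saves it, and terminates."""

    def __init__(self):
        self._records = {}  # session_id -> serialized XML record

    def save(self, session_id: str, state: dict) -> None:
        root = ET.Element("brownie", {"session": session_id})
        for key, value in state.items():
            ET.SubElement(root, "attr", {"name": key}).text = value
        self._records[session_id] = ET.tostring(root, encoding="unicode")

    def load(self, session_id: str) -> dict:
        root = ET.fromstring(self._records[session_id])
        return {a.get("name"): a.text for a in root.iter("attr")}
```

Because the state lives in the registry rather than in a long-running program instance (or a client-side cookie), each HTTP request can be handled by a fresh, transient procedure.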
Hence, the application server 66 executes voice application operations from a stored XML document based on a transient application state, where the application server 66 terminates the application instance after outputting the generated XML media information to the browser 18 or 42.
According to the disclosed embodiment, users are able to create personalized voice applications, where a user may create a voice homepage to greet callers with customized options. Alternatively, a user may create a customized interface for accessing their mailbox and retrieving their messages. In particular, commonly-assigned application Ser. No. 09/559,637, filed Apr. 28, 2000, entitled Browser-Based Arrangement for Developing Voice Enabled Web Applications Using Extensible Markup Language Documents, issued as U.S. Pat. No. 6,578,000, the disclosure of which is incorporated in its entirety herein by reference, discloses in detail a forms-based methodology for defining voice-enabled web applications using XML documents. A browser-based executable voice application defined by XML documents can be created or modified by users lacking expertise in application development or XML syntax by use of the forms-based representation of the application-defining XML documents. In particular, the application server 66 is configured for providing an HTML forms representation of the application-defining XML documents. The application server 66 is configured for parsing an existing XML document that defines a voice application operation, inserting selected XML tag data that specify application parameters into entry fields of an HTML-based form, and outputting the HTML-based form to the browser 56. The browser 56, upon receiving the HTML document having the form specifying entry fields for application parameters for the XML document, displays the form in a manner that enables a user of the voice application to create or modify voice application operations. After the user has input new application parameters or modified existing application parameters in the entry fields, the user submits the form to a prescribed URL that is recognized by the application server 66.
The application server 66, upon receiving the form from the corresponding web browser 56, can then create or modify the XML document by inserting the input application parameters as XML tag data in accordance with XML syntax. The application server 66 can then store the XML document for later execution for the user.
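The round trip from submitted form fields back to XML tag data might be sketched as follows (the operation and field names are hypothetical):

```python
import xml.etree.ElementTree as ET

def form_to_xml(operation: str, fields: dict) -> str:
    """Insert user-supplied application parameters from an HTML form
    as XML tag data, producing a personalized application-defining
    document in accordance with XML syntax."""
    root = ET.Element(operation)
    for name, value in fields.items():
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")
```

The resulting document string would then be stored in the user's directory for later execution, e.g. `form_to_xml("menu", {"prompt": "Hello", "timeout": "30"})`.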
According to the disclosed embodiment, the forms-based arrangement for defining voice-enabled web applications is extended to enable users to personalize application-defining XML documents. Once the user has created personalized XML documents, the application server 66 can provide a personalized voice application based on detecting a user-specific XML document that specifies the prescribed voice application operation personalized for the identified user. If the application server 66 detects an absence of any user-specific XML document for the corresponding voice application operation, the application server 66 executes a generic XML document for execution of the prescribed voice application operation.
Hence, a user can develop a personalized voice application by accessing forms generated by the application server 66, followed by execution of selected XML documents by the application server 66 in order to provide a user-specific voice homepage to greet the callers with customized options, or alternately a user-customized interface for accessing the user's mailbox. A brief description will first be provided from the above-incorporated application Ser. No. 09/559,637 of defining voice applications to illustrate how a user can personalize a voice application, followed by a description of the methodology for selectively executing a personalized voice application according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating development tools usable for development (e.g., personalization) of the voice enabled web applications. As shown in FIG. 2, the web server 64, the application server 66, and the voice web applications 68 reside within a gateserver 70. The gateserver 70 provides HTTP access for a browser based XML editor tool 56 b that enables a web programmer to design voice applications by editing XML pages. Generic XML pages (i.e., XML documents that are executable for any user) are stored as XML applications and functions 72, for example within a database accessible by the application server 66. The XML pages stored within the XML application and functions database 72 define the actual application operations to be performed by the application server 66 in its application runtime environment. Hence, the application server 66 executes stored XML applications and functions 72, and in response generates dynamic HTML pages having XML tags, also referred to as HTML/XML pages 74. As described in further detail below with respect to FIGS. 5, 6 and 7, personalized (i.e., user-specific) XML documents are stored in user-specific directories separate from the generic XML documents stored in the XML application and functions database 72.
Four types of XML documents are used by the application server 66 to execute web applications: menu documents, activity documents, decision documents, and “brownies”. The menu documents, activity documents, and decision documents are XML documents, stored in the document database 72 or the user-specific directories, that define user interface and boolean-type application logic for a web application, hence are considered “executable” by the application server 66. The brownie document, stored in a separate registry 92 in FIG. 5, is an XML data record used to specify application state and user attribute information for a given XML application during a user session. Hence, the XML documents define user interface logistics and tie services and application server events together in a meaningful way, forming a coherent application or sets of applications. Additional details regarding the definition of executable voice applications using XML documents are described in the above-incorporated application Ser. No. 09/501,516.
Certain development tools having direct access to the application server 66 can be used to establish context information used by the application runtime environment within the application server 66 for execution of application operations based on parsing of XML documents. In particular, development tools such as a graphic based development system 80 a, a forms-based development system 80 b, an editor-based development system 80 c, or an outline-based development system 80 d may be used to define XML tags and procedure calls for the application runtime environment. The development tools 80 may be used to establish an application and resource database 84 to define low-level operations for prescribed XML tags, for example dynamically generating an XML menu page using executable functions specified by a menu rule set in response to detecting a menu tag, performing a logical operation using executable functions specified by a logic rule set in response to a decision tag, or fetching an audio (.wav) file in response to detecting a sound tag.
The development tools 80 may be used to establish an application programming interface (API) library 82 (e.g., a SQL interface) for the application runtime environment, enabling the application server 66 to issue prescribed function calls to established services, such as IMAP, LDAP, or SMTP. The library 82 may be implemented as dynamically linked libraries (DLLs) or application programming interface (API) libraries. If desired, the development tools 80 may also be used to generate an XML application as a stored text file 86, without the use of the forms generated by the application server 66, described below.
A user of the browser 56 typically sends a request to the application server 66 (via the web server 64) for a voice application operation 82, for example using an interface executable by a browser 56 or 62, for accessing new voice mail messages, new facsimile messages, new e-mail messages, and the like. A user of the browser 56 also can send a request to the application server 66 for creating or modifying an XML document defining a voice application operation, via a development tool common gateway interface (CGI). In particular, the web browser 56 posts a user input for an application operation (i.e., an HTTP request) to a first URL for the voice application operation. In contrast, the web browser 56 posts to another URL for accessing the development tool CGI. Accessing the application server via the CGI enables the application server 66 to access a selected XML document, for example the XML document 100 illustrated in FIG. 3, in order to dynamically generate a form 102, illustrated in FIG. 4, that specifies selected application parameters of the XML document 100. Hence, accessing the application server by posting the user input according to a first URL causes execution of the XML document 100, whereas accessing the application server via the CGI causes the application server 66 to generate a form that specifies the contents of the XML document 100.
Hence, accessing the application server 66 via the CGI enables the web browser to perform different operations on the selected XML document 100, described in further detail in the above-incorporated application Ser. No. 09/559,637.
FIG. 4 illustrates the insertion of the application parameters 106 into respective entry fields 108 by the application server 66 for display of the form 102 by the browser 56. As shown in FIG. 4, the application server 66 parses the XML tags 104 a, 104 b, 104 c, . . . 104 g and in response inserts the application parameters 106 a, 106 b, 106 c, . . . 106 g into the respective entry fields 108 a, 108 b, 108 c, . . . 108 g. For example, the application server 66, in response to detecting the XML text tag 104 a, dynamically generates an HTML document that specifies a form 102 having the entry field 108 a and including the corresponding application parameter 106 a; hence, each of the XML tags 104 has a corresponding entry field 108 within the form 102 specified by the HTML page generated by the application server 66, including the XML tag 104 g having an empty application parameter 106 g. Note that XML tags 110 used to define the XML document attributes (and consequently the structure of the form 102) are predefined by one of the developer workstations 80 or the browser based XML editor tool 56 b, and hence do not rely on the form 102.
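The mapping from XML tags to pre-filled form entry fields described above can be pictured with a short sketch; the tag names, helper function, and sample document below are invented for illustration and are not taken from the patent:

```python
import xml.etree.ElementTree as ET

def form_fields(xml_text):
    """Map each child tag of an application-defining XML document to a
    form entry field pre-filled with the tag's current application
    parameter; a tag with no parameter yields an empty entry field."""
    root = ET.fromstring(xml_text)
    return {child.tag: (child.text or "") for child in root}

# A tag carrying text produces a pre-filled field; an empty tag produces
# an empty field, analogous to entry field 108 g in FIG. 4.
doc = "<MENU><PROMPT>main.wav</PROMPT><TIMEOUT></TIMEOUT></MENU>"
```

Posting the form back would reverse this mapping, writing each entry field's value into the corresponding tag of a new user-specific XML document.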
The application server 66 also parses the XML option tags 112 for insertion of menu application parameters 114 into the respective menu entry fields 116. For example, the application server 66 inserts the menu application parameters 114 a 1, 114 a 2, and 114 a 3 into the respective menu entry fields 116 a 1, 116 a 2, and 116 a 3, and inserts the menu application parameters 114 c 1, 114 c 2, and 114 c 3 into the respective menu entry fields 116 c 1, 116 c 2, and 116 c 3 generated by the HTML document in step 206.
The application server 66 also specifies an entry field 118 that enables the browser user to specify the filename 120 of the XML document (i.e., the designation used by the application server 66 when referring to the “current state”). In addition, the application server 66 specifies an addition button 122 that enables users to add menu options 112 to an XML document; hence, if the user enters a new file name within the entry field 124 and presses the addition button 122, the browser 56 posts to a prescribed URL to cause the application server to generate a new XML document having a name specified in the field 124, and to generate another HTML form having an additional menu entry field 116 for the new prompt.
The application server also specifies within the HTML form 102 prescribed URLs associated with command hyperlinks 126, such that posting the form 102 by the browser 56 to a corresponding one of the URLs 126 results in a corresponding operation performed by the application server 66.
Hence, the HTML entry form 102 generated by the application server 66 provides all the fields and command functions necessary for a user to create or modify a new or existing XML document, regardless of whether the XML document is a menu-based XML document or a non-menu XML document.
The above-described arrangement enables a user lacking programming skills or knowledge of XML syntax to personalize his or her voice-enabled web applications defined in XML documents, by accessing default XML documents, modifying the default documents using the form 102, and posting the form 102 back to the application server 66 via the CGI for storage as a personalized XML document. As described below, the application server 66 stores the personalized document separately from the generic XML application documents stored in the XML document database 72.
FIG. 5 is a diagram illustrating in detail the application server 66 according to an embodiment of the present invention. The application server 66 is implemented as a server executing a PHP hypertext processor with XML parsing and processing capabilities, available open source at http://www.php.net. As shown in FIG. 5, the server system 66 includes an XML parser 220 configured for parsing the application-defining XML documents (e.g., XML document 100) stored in the XML document database 72, or the XML documents (i.e., “brownies”) stored in the registry 92 and configured for specifying the state and attributes for respective user sessions. The application server 66 also includes a high speed interface 222 that establishes a high-speed connection between the application server 66 and the web server 64. For example, the PHP hypertext processor includes a high-speed interface for Apache Web servers.
The application server 66 also includes a runtime environment 224 for execution of the parsed XML documents. As described above, the runtime environment 224 may selectively execute any one of user interface operation 98, a logic operation 226, or a procedure call 228 as specified by the parsed XML document by executing a corresponding set of executable functions based on the rule set for the corresponding operation. In particular, the application runtime environment 224 includes a tag implementation module 230 that implements the XML tags parsed by the XML parser 220. The tag implementation module 230 performs relatively low-level operations, for example dynamically generating an XML menu page using executable functions specified by a menu rule set in response to detecting a menu tag, performing a logical operation using executable functions specified by a logic rule set in response to a decision tag, or fetching an audio (.wav) file in response to detecting a sound tag. Hence, the tag implementation module 230 implements the tag operations that are specified within the XML framework of the stored XML documents.
The application server 66 also includes a set of libraries 82 that may be implemented as dynamically linked libraries (DLLs) or application programming interface (API) libraries. The libraries 82 enable the runtime environment 224 to implement the procedures 228 as specified by the appropriate XML document. For example, the application server 66 may issue a function call to one of a plurality of IP protocol compliant remote resources 240, 242, or 244 according to IMAP protocol, LDAP Protocol, or SMTP protocol, respectively, described below. For example, the PHP hypertext processor includes executable routines capable of accessing the IMAP or LDAP services. Note that the mechanisms for accessing the services 240, 242, or 244 should be established within the application server before use of XML documents that reference those services. Once the services 240, 242, or 244 are established, the application runtime environment 224 can perform a function operation by using executable functions specified by a function call rule set.
The arrangement for executing a personalized voice-enabled web application will now be described. As described above with respect to FIGS. 2, 3 and 4, a user is able to personalize his or her voice application by sending an HTTP request to the application server 66, for example via a CGI interface, for generation of an HTML document that specifies the form 102 for modifying application parameters of a prescribed XML document. The application server 66 responds to the HTTP request by accessing application document database 72 for retrieval of the selected generic XML document, and by generating the HTML document having the form 102 with the selected application parameters. Once the user modifies (i.e., personalizes) the application parameters within the form 102 and posts the completed form 102 to a prescribed URL via the CGI interface, the application server 66 generates a new user-specific XML document that specifies the voice application operations as personalized by the corresponding user, and stores the user-specific XML document in a user-specific database.
Hence, the use of user-specific XML documents enables a user to override pre-existing generic XML documents for certain application operations. Hence, a subscriber can create a personalized unified voice, e-mail and fax messaging system for other users attempting to access the subscriber, as well as a personalized user interface for a subscriber accessing his or her mailbox for retrieval of messages.
FIGS. 5 and 6 illustrate alternative arrangements for storing the user-specific XML documents. In particular, FIG. 5 illustrates the arrangement where the application server 66 stores user-specific XML documents in the user's IMAP account within the IMAP message storage 240, where the IMAP message storage 240 includes an XML folder as a subfolder of the user's inbox for storage of the user-specific XML documents. Hence, the IMAP message storage 240 can store greetings, messages (e.g., voice, e-mail or fax), as well as the user-specific XML documents. Alternatively the user-specific XML documents may be stored within the LDAP directory 242 as part of the subscriber profile information for the corresponding user.
FIG. 6 illustrates an alternative arrangement for storing the user-specific XML documents, where the application server 66 outputs via the web server 64 an HTTP put request to another web server 300 configured for storing and retrieving user-specific XML documents from a database 302, for example a SQL database. The database 302 is configured for storing user-specific XML documents in user directories 304, wherein each user directory 304 is identified according to the corresponding user identity (e.g., “user1”). Hence, the application server 66 can cause the web server 64 to output an HTTP put request having a uniform resource locator (URL) that specifies the host name of the custom XML web server 300, the user identity, and the XML document name. The application server 66 also can retrieve a user-specific XML document by outputting an HTTP get request using the same URL to obtain the corresponding user-specific XML document for execution of the prescribed voice application operation for the corresponding user.
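As a rough illustration of the URL scheme just described, the host name, user directory, and document name might be composed as follows; the function name and concrete URL format are assumptions for clarity, since the patent does not specify one:

```python
def custom_xml_url(host, user_id, document_name):
    """Compose the URL identifying a user-specific XML document on the
    custom XML web server 300: host name, then the user's directory
    (e.g., "user1"), then the XML document name. An HTTP put to this
    URL stores the document; an HTTP get retrieves it."""
    return f"http://{host}/{user_id}/{document_name}"
```

Because the personalized and generic documents share the same name, the same URL path component can be used to substitute one for the other at execution time.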
FIG. 7A is a flow diagram illustrating the method of executing a voice application according to the arrangement of FIG. 5. FIG. 7B is a flow diagram illustrating the method of executing a voice application according to the arrangement of FIG. 6. The steps described in FIGS. 7A and 7B can be implemented as executable code stored on a computer readable medium (e.g., a hard disk drive, a floppy drive, a random access memory, a read only memory, an EPROM, a compact disc, etc.).
As shown in FIG. 7A, the application server 66 stores in step 400 the personalized XML document, generated based on the posted form 102, in the corresponding user directory of an external database, such as the IMAP data store 240 or the LDAP directory 242. The application server 66 will later receive an HTTP request in step 402 that specifies a user identity and a voice application operation. For example, the HTTP request may specify initiation of a unified voice messaging routine to enable a calling party to leave a message for the user; alternatively, the HTTP request may specify initiation of a message retrieval routine by the user specified by the user identity.
The application server 66 then checks in step 404 whether the user having generated the HTTP request is already logged into an existing application session, for example by determining whether the URL in the HTTP request specifies a pre-existing valid session identifier for a brownie within the registry 92.
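The session check of step 404 amounts to extracting a session identifier from the request URL and validating it against the registry of brownies; the query parameter name and function below are assumptions made for illustration:

```python
from urllib.parse import urlparse, parse_qs

def valid_session_id(url, registry):
    """Return the session identifier carried in the request URL if it
    names an existing brownie in the registry; None means no valid
    session exists and the user must be logged in first."""
    params = parse_qs(urlparse(url).query)
    sid = params.get("session", [None])[0]
    return sid if sid in registry else None
```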
If the application server 66 determines that the user is not logged in, the application server 66 retrieves in step 406 from the external database (e.g., the IMAP message storage 240 or the LDAP directory 242) an index that specifies all the user-specific XML documents available for the corresponding user. In particular, the index eliminates unnecessary function calls by providing the application server 66 with information necessary to determine whether the user has a corresponding user-specific XML document personalized for the corresponding voice application operation; the application server 66 generates a new brownie having a valid session identifier and user ID, and stores in step 408 the index within the brownie. Hence, the index within the brownie enables the application server 66 to determine the presence of a user-specific XML document for the corresponding voice application operation, without the necessity of repeated access of the IMAP or LDAP external databases.
If in step 404 the application server 66 detects a valid session identifier within the URL, the application server 66 uses the session identifier to obtain the corresponding brownie based on the user identity from the registry 92 in step 410. The application server 66 then parses the XML tags within the brownie to determine in step 412 whether the brownie includes an index that specifies a personalized XML document for the requested voice application operation. If in step 412 the application server 66 detects the presence of the user-specific XML document from the index, the application server 66 accesses in step 414 the personalized XML document from the external database (e.g., the IMAP message storage 240 or the LDAP directory 242) based on the user identifier specified in the brownie, and based on the requested application operation. If in step 412 the application server 66 detects an absence of any personalized XML document for the requested voice application operation, the application server 66 accesses in step 416 the generic XML document from the XML document database 72.
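The resolution logic of steps 412 through 416 reduces to consulting the index stored in the brownie and falling back to the generic document; a minimal sketch, with the data structures assumed for illustration:

```python
def resolve_document(brownie_index, operation, user_docs, generic_docs):
    """Select the XML document to execute for a requested voice
    application operation: the personalized document when the session
    index lists one (step 414), otherwise the generic document from
    the XML document database (step 416)."""
    if operation in brownie_index:
        return user_docs[operation]
    return generic_docs[operation]
```

Keeping the index inside the brownie is what lets this check run without touching the IMAP or LDAP store on every request.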
The application server 66 then executes in step 418 the accessed XML document (e.g., the personalized XML document or the generic XML document) and dynamically generates an HTML page having XML control tags for media control of text, audio files (e.g., .wav files), etc. The HTML page is output by the web server 64 to the web browser 56 or 62 for execution of the voice application operation.
FIG. 7B is a flow diagram illustrating storage and execution of personalized XML documents according to the system of FIG. 6. The application server 66 stores a personalized XML document by posting an HTTP put command in step 450 to the custom XML server 300. In particular, the application server 66 generates a URL that specifies the host name of the custom XML server 300, the user ID of the user having generated the personalized XML document as a folder 304 within the database 302, and the name of the XML document. It should be noted that the name of the personalized XML document and the generic XML document typically are identical, enabling the personalized XML document to simply be substituted for the generic XML document during execution by the application runtime environment 224.
The application server 66 will later receive an HTTP request in step 452, similar to step 402 in FIG. 7A. If the application server 66 determines in step 404 that the user is logged in, the application server 66 accesses the brownie in step 410; otherwise, the application server 66 creates a new brownie for the new user session in step 454.
The application server 66 then parses the brownie to determine the application state, for example the name of the next XML document to be executed. The application server 66 outputs in step 456 an HTTP get request to the custom XML server 300 based on the user identifier specified in the brownie and the application operation name. For example, the application server 66 generates a URL that specifies the host name, the folder 304 for the corresponding user, and the name of the application operation or the XML document name to be retrieved for execution.
The custom XML server 300 accesses the database 302 in response to the HTTP get request, and generates an HTTP response based on whether the requested XML document was found for the corresponding user. For example, the custom XML server 300 outputs an HTTP response that includes the user-specific XML document in response to detecting a match between the user identity for the folder 304 and the user identifier specified within the HTTP get request, and a match between the voice application operation specified in the HTTP get request and a function identifier (i.e., the name of the stored XML document). However, if the custom XML server 300 does not detect a match in either the identified user or the requested voice application operation, the XML server 300 outputs an HTTP response indicating an unavailability of the requested user-specific XML document.
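The custom XML server's matching rule can be sketched as a lookup keyed first by user identity and then by document name; the in-memory dictionary below is a hypothetical stand-in for database 302:

```python
def handle_get(db, user_id, document_name):
    """Return (status, body): the stored user-specific XML document when
    both the user directory and the document name match the request,
    otherwise an unavailability response (modeled here as a 404)."""
    user_dir = db.get(user_id)
    if user_dir is not None and document_name in user_dir:
        return 200, user_dir[document_name]
    return 404, None
```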
The application server 66 determines in step 458 whether the HTTP response from the custom XML server 300 includes the requested XML document. If the HTTP response includes the XML document, the application server accesses the personalized XML document from the HTTP response in step 460 and dynamically generates the HTML page in step 462, similar to step 418. However, if the application server 66 determines that the HTTP response specifies that the requested XML document is not available, the application server 66 accesses in step 464 the generic XML document from the document database 72, and dynamically generates the corresponding HTML page by parsing the generic XML document in step 462.
According to the disclosed embodiment, personalized XML documents are stored and retrieved by an application server configured for execution of voice-enabled web applications defined by XML documents. Hence, users can create a voice homepage to greet callers with customized options, as well as a customized interface for accessing their mailbox for retrieval of messages, without any modification to the existing documents used to define a predetermined sequence of voice application operations.
While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
executing a user-specific extensible markup language (XML) document that specifies the prescribed voice application operation personalized for the user for generation of a hypertext markup language (HTML) page having media content corresponding to the prescribed voice application operation.
2. The method of claim 1, further comprising determining the presence of the user-specific XML document based on a user identity for the user and the requested prescribed voice application operation.
3. The method of claim 2, wherein the determining step includes accessing an external database, configured for storing data for respective users, for retrieval of the user-specific XML document.
determining whether the first XML document for the corresponding user specifies the presence of the user-specific XML document for the requested prescribed voice application operation.
5. The method of claim 4, wherein the step of accessing an external database includes accessing an Internet Message Access Protocol (IMAP) database for retrieval of the user-specific XML document.
6. The method of claim 4, wherein the step of accessing an external database includes accessing a Lightweight Directory Access Protocol (LDAP) database for retrieval of the user-specific XML document.
storing the index into the first XML document.
8. The method of claim 7, wherein the step of determining the presence of the user-specific XML document includes searching the list in the first XML document for the user-specific XML document.
receiving a response to the HTTP get request from the server, the response including one of the user-specific XML document or a denial of the HTTP get request.
10. The method of claim 9, further comprising outputting an HTTP put request to the server for storage of a new user-specific XML document for the corresponding user.
11. The method of claim 10, wherein the step of outputting an HTTP put request includes posting the HTTP put request to a uniform resource locator specifying a host name of the server, and a directory within the server for the user.
12. The method of claim 9, wherein the step of outputting an HTTP get request includes posting the HTTP get request to a uniform resource locator specifying a host name of the server, and a directory within the server for the user.
13. The method of claim 1, wherein the executing step is performed by an application instance executed by the server, the method further comprising, terminating the application instance based on the HTML page having been output to a browser.
means for selectively outputting an HTTP response including the user-specific XML document in response to reception of an HTTP get request having a user identifier that matches the user identity and a function identifier that matches the prescribed voice application operation.
15. The web server of claim 14, further comprising means for outputting a second HTTP response indicating an unavailability of the requested user-specific XML document based on at least one of a determined absence of a match between the user identifier and the user identity, or a determined absence of a match between the function identifier and the prescribed voice application operation.
16. The web server of claim 15, wherein the means for storing stores the user-specific XML document in a database directory according to the user identity.
17. The web server of claim 16, further comprising means for searching the database directory for the user-specific XML document based on the function identifier.
18. The web server of claim 14, wherein the means for selectively outputting is configured for generating the HTTP response by an application instance executed by the web server, the web server configured for terminating the application instance based on the HTTP response having been output to a browser.
an application runtime environment configured for dynamically generating, in response to the HTTP request, a first hypertext markup language (HTML) document having media content for execution of the prescribed voice application operation for the identified user based on execution of a selected extensible markup language (XML) document, the application runtime environment configured for selecting a user-specific XML document that specifies the prescribed voice application operation personalized for the identified user, based on a determined presence of the user-specific XML document.
20. The server of claim 19, wherein the application runtime environment is configured for determining the presence of the user-specific XML document based on accessing a first XML document configured for storing user-specific application state and attribute information, the application runtime environment configured for searching the first XML document for an indicated presence of the user-specific XML document for the corresponding prescribed voice application operation.
21. The server of claim 20, wherein the application runtime environment is configured for accessing an external database in response to detecting the indicated presence within the first XML document.
22. The server of claim 21, wherein the application runtime environment is configured for accessing the external database based on one of Internet Message Access Protocol (IMAP) or Lightweight Directory Access Protocol (LDAP).
23. The server of claim 21, wherein the application runtime environment is configured for retrieving from the external database an index of the user-specific XML documents in response to an initial HTTP request requesting a login of the corresponding user based on the user identity, and storing the index into the first XML document.
24. The server of claim 19, wherein the application runtime environment is configured for determining the presence of the user-specific XML document based on outputting an HTTP get request for the user-specific XML document to a server configured for management of the external database, and receiving a response to the HTTP get request from the server having one of the user-specific XML document or a denial of the HTTP get request.
25. The server of claim 24, wherein the application runtime environment is configured for outputting an HTTP put request to the server for storage of a new user-specific XML document for the corresponding user into the database.
26. The server of claim 19, wherein the application runtime environment is configured for dynamically generating the first hypertext markup language (HTML) document by an application instance executed by the server, the application runtime environment being configured to terminate the application instance based on the HTML document having been output to a browser.
28. The medium of claim 27, further comprising instructions for performing the step of determining the presence of the user-specific XML document based on a user identity for the user and the requested prescribed voice application operation.
29. The medium of claim 28, wherein the determining step includes accessing an external database, configured for storing data for respective users, for retrieval of the user-specific XML document.
31. The medium of claim 30, wherein the step of accessing an external database includes accessing an Internet Message Access Protocol (IMAP) database for retrieval of the user-specific XML document.
32. The medium of claim 30, wherein the step of accessing an external database includes accessing a Lightweight Directory Access Protocol (LDAP) database for retrieval of the user-specific XML document.
34. The medium of claim 33, wherein the step of determining the presence of the user-specific XML document includes searching the list in the first XML document for the user-specific XML document.
36. The medium of claim 35, further comprising instructions for performing the step of outputting an HTTP put request to the server for storage of a new user-specific XML document for the corresponding user.
37. The medium of claim 36, wherein the step of outputting an HTTP put request includes posting the HTTP put request to a uniform resource locator specifying a host name of the server, and a directory within the server for the user.
38. The medium of claim 35, wherein the step of outputting an HTTP get request includes posting the HTTP get request to a uniform resource locator specifying a host name of the server, and a directory within the server for the user.
39. The medium of claim 27, wherein the executing step is performed by an application instance executed by a server, the medium further comprising instructions for terminating the application instance based on the HTML page having been output to a browser.
means for dynamically generating, in response to the HTTP request, a first hypertext markup language (HTML) document having media content for execution of the prescribed voice application operation for the identified user based on execution of a selected extensible markup language (XML) document, the generating means configured for selecting a user-specific XML document that specifies the prescribed voice application operation personalized for the identified user, based on a determined presence of the user-specific XML document.
41. The server of claim 40, wherein the generating means is configured for determining the presence of the user-specific XML document based on accessing a first XML document configured for storing user-specific application state and attribute information, the generating means configured for searching the first XML document for an indicated presence of the user-specific XML document for the corresponding prescribed voice application operation.
42. The server of claim 41, wherein the generating means is configured for accessing an external database in response to detecting the indicated presence within the first XML document.
43. The server of claim 42, wherein the generating means is configured for accessing the external database based on one of Internet Message Access Protocol (IMAP) or Lightweight Directory Access Protocol (LDAP).
44. The server of claim 42, wherein the generating means is configured for retrieving from the external database an index of the user-specific XML documents in response to an initial HTTP request requesting a login of the corresponding user based on the user identity, and storing the index into the first XML document.
45. The server of claim 40, wherein the generating means is configured for determining the presence of the user-specific XML document based on outputting an HTTP get request for the user-specific XML document to a server configured for management of the external database, and receiving a response to the HTTP get request from the server having one of the user-specific XML document or a denial of the HTTP get request.
46. The server of claim 45, wherein the generating means is configured for outputting an HTTP put request to the server for storage of a new user-specific XML document for the corresponding user into the database.
47. The server of claim 40, wherein the means for dynamically generating is configured for dynamically generating the first hypertext markup language (HTML) document by an application instance executed by the server, the means for dynamically generating being configured to terminate the application instance based on the HTML document having been output to a browser.
outputting structure constructed and arranged to selectively output an HTTP response including the user-specific XML document in response to reception of an HTTP get request having a user identifier that matches the user identity and a function identifier that matches the prescribed voice application operation.
49. The web server of claim 48, wherein the outputting structure is constructed and arranged to output a second HTTP response indicating an unavailability of the requested user-specific XML document based on at least one of a determined absence of a match between the user identifier and the user identity, or a determined absence of a match between the function identifier and the prescribed voice application operation.
50. The web server of claim 48, wherein the storing structure is constructed and arranged to store the user-specific XML document in a database directory according to the user identity.
The original Medicare legislation in 1965 stated that: ''"... There must be a regular periodical maintenance and testing program for medical devices and equipment. A qualified individual such as a clinical or biomedical engineer, or other qualified maintenance person must monitor, test, calibrate and maintain the equipment periodically <font color = red>in accordance with the manufacturer's recommendations</font> and Federal and State laws and regulations. ..."'' But beginning in 1989 and as recently as 2011 the corresponding standards of the Joint Commission allowed equipment that was <font color = red>not considered to present a significant physical risk</font> to be excluded from any specific maintenance requirements stating only that PM frequencies should be based on ''"criteria <u>such as</u> manufacturer's recommendations, <font color = red>risk levels</font>, or current hospital experience,"'' and they, in effect, <font color = red>endorsed the original Fennigkoh-Smith risk-based methodology</font>.
1.2 What is maintenance?
1.3 What exactly does the term "PM" mean in the context of medical equipment maintenance?
1.4 What are the causes of medical device failures?
1.5 Which kinds of medical device failures can be hazardous?
1.8 Which kinds of medical equipment failures are PM-preventable?
1.9.1 Question 1. How, and to what extent, does performing PM on medical equipment improve patient safety?
1.9.2 Question 2. What kind of PM program is called for in the current CMS regulation?
1.9.5 Question 5. What changes to current PM work practices would be beneficial?
1.8.2 A new approach to PM prioritization using RCM-based risk criteria.
All potentially PM-critical devices are not necessarily high-risk devices!
In summary - on Question 2 - How much of an impact can PM have on the safety of medical devices?
when it functions as it should, but in an unsafe or otherwise unsatisfactory manner.
It is a truism, similar to the impossibility embedded in the concept of perpetual motion, that there is no such thing as an infallible device. All devices fail in one way or another, at some time or other. The simplest measure of a device's reliability is its failure rate: the number of times that it failed to perform during a particular time period. Since failures are predominantly random, failure performance (reliability) is usually expressed as an average number of failures over a particular time period. However, a more intuitive way of expressing device reliability is in the form of the device's mean time between failures, or MTBF, which is the inverse of the failure rate over a particular period of time. For example, a device that has a failure rate of one failure every 75 years (on average) has a mean time between failures of 75 years.
Mean time between failures (MTBF) is the inverse of the failure rate. For example, a device that has failed twice in nine years is demonstrating a failure rate of 0.22 failures per year and an MTBF of 4.5 years. Average failure rates can also be derived by dividing the total number of device failures occurring during the observation period by the number of device-years making up the total device experience. For example, if a batch of 10 devices experiences two failures during nine years, then the failure rate is 0.022 failures per year and the MTBF is 45 years. The larger the experience base (in device-years), i.e. the greater the number of devices in the sample and the longer the observation period, the closer the observed failure rate will be to the device’s true failure rate.
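The failure-rate and MTBF arithmetic in the two examples above can be sketched in a few lines of Python. The function names are our own illustration; the numbers mirror the examples in the text:

```python
def failure_rate(failures: int, device_years: float) -> float:
    """Average failures per device-year over the observation period."""
    return failures / device_years

def mtbf(failures: int, device_years: float) -> float:
    """Mean time between failures (in years): the inverse of the failure rate."""
    return device_years / failures

# One device, two failures in nine years:
print(round(failure_rate(2, 9), 3))       # 0.222 failures per year
print(mtbf(2, 9))                         # 4.5 years

# Ten devices observed for nine years each (90 device-years), two failures:
print(round(failure_rate(2, 10 * 9), 3))  # 0.022 failures per year
print(mtbf(2, 10 * 9))                    # 45.0 years
```

The second calculation illustrates the point about the experience base: pooling device-years across a batch of identical devices brings the observed rate closer to the device's true failure rate.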
It is generally easier for lay persons to relate to an MTBF because it is an integral period of time, such as 3 years or 30 years: a simple, easily comprehended metric. For example, most people will have little difficulty in considering a device with an MTBF of just one month to have a relatively poor level of reliability and, conversely, considering a device with an MTBF of 50 years to be quite reliable. But when the same comparison is expressed as failure rates, an MTBF of 1 month (= 12 failures per year) versus an MTBF of 50 years (= 0.02 failures per year), the contrast between the two levels of reliability (12 versus 0.02) does not seem quite so striking.
Since, ideally, we would like to separate different kinds of devices into neat compartmentalized categories such as “safe” and “hazardous”, we have to confront the difficulty of setting boundaries, and the consequent gray areas around those boundaries. For example, setting a threshold of, say, 75 years for the MTBF that should be considered safe creates the hard-to-answer question of how much less reliable (and thus less safe) a device with an MTBF of 74 years is than one with an MTBF of 75 years. There is, of course, no simple answer to that question. There are gray areas. It is all relative.
This discussion is made a little more complicated by the fact that there are several different reasons why devices fail, and lumping the failures from all of these different causes into one overall failure rate, or corresponding MTBF, might well raise the objection that this total failure rate does not fairly describe what we think of as either the reliability of the device itself or the effectiveness of the way we maintain it. Section 1.4 below addresses the nature of these different causes of failure and how they can be categorized and used to develop a helpful and meaningful analysis.
There are several adequate dictionary definitions of maintenance but, in the context of maintaining equipment, it is best defined as "the process of keeping the equipment in proper working order, in good physical condition and acceptably safe". The definition used in the highly respected RCM approach to equipment maintenance is “keeping the equipment available for use”. For more about RCM, see HTM ComDoc 14. "An introduction to Reliability-centered Maintenance (RCM): The modern approach to Planned Maintenance".
Corrective maintenance or, as it is more commonly called, repair, is the process of returning a device that is in a failed state (i.e. that is no longer doing what the user wants it to do) to a safe condition and proper working order. This includes correcting any significant hidden failures even though they do not usually disable the primary functions of the device.
Cosmetic repair is the process of restoring a device that is damaged to a safe and cosmetically like-new condition. While cosmetic repairs are generally considered a lower priority because the device may still be functioning within the manufacturer's functional specifications, the device may be damaged in such a way that it is unsafe. For example, a damaged cover may present a sharp edge that could be hazardous to either the patient or a user.
Preventive maintenance. This third component is very important because, from the very beginning, with the earliest machines developed during the industrial revolution, it was widely believed that restoring a device's non-durable parts, as needed, before the end of the device's anticipated lifetime would be beneficial because it would reduce the number of unexpected machine breakdowns. In return for these scheduled PM interventions to restore the device's non-durable parts, the device users expect a lower level of disruption and loss of productivity, as well as some reduction in overall maintenance costs, because the device should experience fewer breakdowns.
Non-durable parts (NDPs), which are sometimes loosely called disposables or disposable parts, are components of the device that are subject to progressive wear or deterioration. They typically include moving parts, such as bearings, drive belts, pulleys, mechanical fasteners and cables, which require periodic cleaning and lubrication, as well as certain non-moving parts, such as electrical batteries, gaskets, flexible tubing and various kinds of filters, which may need to be cleaned, adjusted, refurbished or replaced sometime during the useful lifetime of the device. Which particular parts the device manufacturer considers to be non-durable is indicated by the presence of corresponding device restoration tasks in the manufacturer's recommended PM procedure.
Belief in this traditional device restoration approach to improving machine reliability continues to this day, particularly in certain relatively small industry sectors, even though the findings that started the revolutionary RCM approach to maintenance in the 1970s have caused a considerable amount of rethinking about whether or not intrusive maintenance interventions really do improve the device's overall reliability. Certainly there are still quite a number of medical devices such as ventilators, spirometers and traction machines that are more mechanical than electronic, where the manufacturers still recommend that certain parts be given some kind of periodic restoration (cleaning, refurbishment or replacement). However, we don’t yet have good, independent evidence as to whether or not these manufacturer-recommended PMs, particularly those involving the more intrusive overhauls, are truly beneficial or cost-effective. We have not yet gathered the data on the impact of these recommended interventions on the reliability of these more mechanical devices. That investigation is one of the goals that the Maintenance Practices Task Force (MPTF) has set for itself. We discuss this data gathering challenge in more detail in HTM ComDoc 4.
1.3 What exactly does the term "PM" mean in the context of medical equipment maintenance?
In the special case of maintaining medical equipment, there is a second very important reason besides device restoration for making periodic scheduled interventions. And that is testing the device to detect critical degradation in the functional performance of the device or in its condition with respect to safety. These deteriorations can be quite subtle, and in RCM jargon they are called hidden failures. The term is appropriate because these subtle changes do not completely disable the device's primary functions and so they will usually go unnoticed by the device users.
It is important to detect these subtle deteriorations (hidden failures) because there are certain kinds of medical devices that can cause a patient injury if their performance becomes significantly substandard or their level of safety falls below the relevant requirements. Elsewhere (see HTM ComDoc 3.) we characterize the types of devices that have a theoretical potential to injure a patient if they deteriorate in this way as hidden failure-critical or HF-critical devices. These devices need to be subjected to periodic safety verification tasks. Appropriate safety verification tasks for checking out each particular type of device are typically included as a part of the device manufacturer's recommended PM procedure.
Similarly, we can characterize devices that have a theoretical potential to injure a patient if they simply stop working as life support devices (see HTM ComDoc 3). As the descriptor (life support) implies, it is important to minimize failures of these devices. If these devices have manufacturer-designated non-durable parts (NDPs), they are vulnerable to what the Task Force calls wear-out type failures, and they need to be subjected to appropriate device restoration (DR) tasks to prevent the device from failing. This will eliminate one (but only one) source of device failures. So, a life support device that has manufacturer-designated non-durable parts is vulnerable to wear-out type failures. The test for this is whether or not the device manufacturer's recommended PM procedure includes any device restoration tasks.
One of the recurring obstacles in our discussions of PM over the years has been the use of a number of imprecise and inconsistent terms. Unfortunately there is still no general consensus. So, in an attempt to establish a standardized and more consistent PM terminology, we are proposing (below) some new terms.
We believe that it would be quite difficult to get the entire population of engineers and technicians practicing in the medical equipment maintenance field to change from using the long-established traditional diminutive “PM”. To accommodate this practical issue we are proposing to introduce another term with the same diminutive. The new term, “planned maintenance”, will be used to define the combination of the traditional device restoration tasks (what we have traditionally called “preventive maintenance”) and the performance/safety-oriented safety testing tasks that are more or less unique to the medical field. In this new formulation we are proposing to use the term “device restoration tasks” as a short label for the restoration of the device's non-durable parts. It is a simple and appropriately descriptive term.
We are suggesting this new terminology in full recognition of the fact that there are a number of other competing terms that have evolved over time. For example the term “scheduled maintenance” has been proposed as an alternative to “preventive maintenance” but it is not a very good fit semantically because it implies that the device restoration tasks are always performed according to some kind of clock; either by conventional timing (e.g. every 6 or 12 months) or by a time-of-use clock (e.g. every 1000 hours of use). There is, however, a more modern practice in which the deteriorating part is restored on a more efficient “just-in-time” basis by monitoring the actual condition of the part. In some cases the monitoring is performed by some kind of sensor but more commonly in the medical equipment sector it is simply done by conducting periodic visual inspections. In the RCM approach this “just-in-time” restoration is called predictive maintenance. In addition to this, what we are proposing to call safety verification (SV) tasks have been given the collective name “inspections” by ECRI Institute and others. We prefer the more descriptive term “safety verification” tasks.
So, in summary, in the context of medical equipment maintenance, the contraction “PM” should be understood to mean “planned maintenance”, which is defined as a combination of two different types of tasks: one (device restoration tasks) aimed at preventing wear-out failures, and the other (safety verification tasks) aimed at detecting and then correcting hidden failures; i.e. PM = device restoration (DR) tasks + safety verification (SV) tasks.
1.4 What are the causes of medical device failures?
The first set of causes can be classified as inherent reliability-related failures (IRFs) that are attributable to the design and construction of the device itself, including the inherent reliability of the components used in the device. They typically represent 45 - 55% of the repair calls. This type of failure can be reduced (but not to zero) only by redesigning the device or changing the way it was constructed.
Category IR1 Random failure. A device failure caused by the random failure or malfunction of a component part of the device; a result of the device's inherent unreliability. IR1 calls typically represent between 46% and 52% of all repair calls.
Category IR2 Poor construction. A device failure attributable to poor fabrication or assembly of the device itself.
Category IR3 Poor design. A device failure attributable to poor design of the hardware or processes required to operate the device.
The second set of causes can be classified as process-related failures (PRFs). They typically represent 40 - 50% of the repair calls. Reducing or eliminating these types of failure typically requires some kind of redesign of the system's processes - for example, by using better methods to train the equipment users to operate the equipment (as intended by the manufacturer) or to train them to treat the equipment more carefully. They are not failures that can be prevented by any kind of maintenance activity.
Category PR1 Use error. A device failure attributable to incorrect set-up or operation of the device by the user: the user has not set the device up correctly or does not know how to operate it. PR1 calls typically represent between 13% and 20% of all repair calls. (Note that although this type of “failure” does not represent a complete loss of function, it can have the same effect. For example, an incorrectly set defibrillator can result in a failure to resuscitate the patient.)
Category PR2 Physical damage. A device failure caused by subjecting the device to physical stress outside its design tolerances. PR2 calls typically represent between 6% and 25% of all repair calls.
Category PR3 Discharged battery. A device failure attributable to a failure to recharge a rechargeable battery. PR3 calls typically represent between 7% and 8% of all repair calls.
Category PR4 Accessory problem. A device failure caused by the use of a wrong or defective accessory. PR4 calls typically represent between 3% and 9% of all repair calls.
Category PR5 Environmental stress. A device failure caused by exposing the device to environmental stress outside its design tolerances. PR5 calls typically represent between 1% and 7% of all repair calls.
Category PR6 Tampering. A device failure caused by human interference with an internal control. PR6 calls typically represent less than 1% of all calls.
Category PR7 Network problem. A device system failure caused by an issue within a data network connected to the device's output.
The third set of causes can be classified as maintenance-related failures (MRFs). They typically represent 2 - 4% of the repair calls. These types of failure can be prevented through some kind of maintenance strategy incorporated into the facility’s maintenance program.
Category MR1 PM-preventable failure. A device failure that could have been prevented by more timely restoration or replacement of a manufacturer-designated non-durable part, e.g. a battery failure, a clogged filter, or a build-up of dust. Failures due to trapped cables should not be coded this way. MR1 calls typically represent between 1% and 3% of all repair calls.
Category MR2 Poor set-up. A device failure caused by poor or incomplete initial installation or set-up of the device. MR2 calls typically represent between 1% and 3% of all repair calls.
Category MR3 Needed recalibration. A device failure attributable to improper periodic calibration. MR3 calls typically represent less than 1% of all repair calls.
Category MR4 Re-repair. A device failure attributable to a poor quality previous repair of the device. MR4 calls typically represent less than 1% of all repair calls.
Category MR5 Intrusive PM. A device failure attributable to earlier intrusive maintenance. MR5 calls typically represent much less than 1% of all repair calls.
While the device's overall (effective) reliability corresponds directly to the total number of repair calls, irrespective of what caused them, it is the numbers of maintenance-related failures (MRFs) and inherent reliability-related failures (IRFs) that are of greatest interest to us, as maintainers, at this time. The level of MRFs provides a good measure of the effectiveness of the facility's maintenance program, and the level of IRFs provides an equally good measure of the basic or inherent reliability of the devices in question.
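As a rough illustration of how the three cause groups above might be tallied from categorized repair calls, here is a minimal Python sketch. The category codes follow this document; the sample data and the helper function are hypothetical:

```python
from collections import Counter

def cause_shares(call_categories: list[str]) -> dict[str, float]:
    """Percentage of repair calls per cause group (IR, PR, MR)."""
    groups = Counter(code[:2] for code in call_categories)  # e.g. 'IR1' -> 'IR'
    total = len(call_categories)
    return {group: 100.0 * count / total for group, count in groups.items()}

# 100 hypothetical repair calls, in line with the typical ranges quoted above:
calls = ["IR1"] * 50 + ["PR1"] * 47 + ["MR1"] * 3
print(cause_shares(calls))  # {'IR': 50.0, 'PR': 47.0, 'MR': 3.0}
```

Tracked over time, the MR share would serve as the maintenance-effectiveness indicator described above, and the IR share as the indicator of the devices' inherent reliability.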
1.5 Which kinds of medical device failures can be hazardous?
There are four ways in which medical equipment failures can be hazardous. However, not all of those failures are PM-preventable failures.
If the device is damaged in such a way that it is presenting some kind of direct physical threat to the safety of patients or staff, such as an exposed sharp edge.
For example, the case or enclosure of a piece of equipment might be damaged, say as a result of the item being dropped, in such a way that the damaged casing poses a risk of injury to the patient or user, even though the item still works. Or the protective outer layer of the device's electrical cord might be damaged so that it exposes a live conductor posing the risk of an electric shock. These could be hazardous to the patient, to the device user and possibly others. It is to be expected that damage such as this would be noticed and repaired at the time of its periodic maintenance - so, to the extent that this kind of damage occurs and goes unreported, periodic PM contributes to the levels of overall safety. These are not considered to be PM-preventable failures but periodic PM may shorten the time that individuals are exposed to these potentially hazardous outcomes. Situations such as this appear to be encountered quite rarely.
If the failure is a sudden, total failure.
There are a number of devices that are life-supporting in the sense that a sudden, total failure while they are in use could put the patient’s life at risk. Examples include critical care ventilators, anesthesia units, heart lung machines, intra-aortic balloon pumps, external pacemakers, defibrillators, AEDs, cardiac resuscitators, infant incubators, neonatal monitors, apnea monitors - and in some circumstances - patient monitors, oxygen monitors and pressure cycled ventilators. In addition to spontaneous random failures it is possible that a device could suddenly stop working if a part that is recommended for periodic restoration fails prematurely. This could also occur if the maintenance interval has been set too long. The failure of any device that is attributable to the failure of a critical part that requires timely restoration is considered to be a PM-preventable failure. However, situations such as this appear to be encountered quite rarely.
If the device develops some kind of hidden failure.
There are some devices that have the potential to cause a patient injury if their functional performance falls below a certain critical point in such a way that the deterioration is not obvious to the user. Examples include a defibrillator whose delivered output energy is significantly lower than the level set by the user, or an infusion device that delivers medication at a significantly lower or higher rate than that set by the user. Similarly, there are some devices that have the potential to cause a patient injury if their compliance with a relevant safety specification falls below an acceptable point and this deterioration is not obvious to the user. Examples include an open ground connection in a device that has exposed metal that could conceivably become "live", and a malfunction in devices that have critical alarms. While, strictly speaking, these failures are not totally prevented by periodic PM, the time that patients are exposed to these potentially hazardous outcomes is reduced. Elsewhere (ref ?) we have shown that the exposure of the patient to this possible hazard is reduced from 100% (as it would be with no PM) to a lesser percentage determined by the ratio of the frequency with which the PM testing is performed to the frequency with which the hidden failure occurs. With typical PM intervals in the range of 6 months to 5 years and mean times between failures of these random hidden failures in the range of 50 to 250 years, the patient's exposure will be reduced by 95 - 99%. Hazardous hidden failures appear to be encountered quite infrequently.
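The exposure-reduction estimate described above can be sketched numerically. The snippet below assumes, as a simplification, that the patient's residual exposure is proportional to the ratio of the PM interval to the hidden failure's MTBF; the function name and the exact formula are our own illustration, not the Task Force's published method:

```python
def exposure_reduction_pct(pm_interval_years: float, mtbf_years: float) -> float:
    """Percent reduction in patient exposure versus performing no PM at all,
    assuming residual exposure ~= PM interval / MTBF (an illustrative model)."""
    return 100.0 * (1.0 - pm_interval_years / mtbf_years)

# Annual PM testing against a hidden failure with a 50-year MTBF:
print(round(exposure_reduction_pct(1, 50), 1))     # 98.0 (% reduction)

# Six-monthly PM testing against a 250-year MTBF:
print(round(exposure_reduction_pct(0.5, 250), 1))  # 99.8
```

With the PM intervals and MTBFs quoted in the text, this simple model lands in the same 95 - 99% neighborhood the document cites.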
If the device is used improperly.
Almost all medical devices have the potential to injure patients if they are used improperly. However, this type of failure cannot be prevented or mitigated by conventional planned maintenance, and such failures are not considered to be PM-preventable equipment failures. Accident statistics show that misuse of medical devices represents the most common reason for device-related patient injuries.
For more on this subject see HTM ComDoc 8. "Maximizing medical equipment safety"
the device is no longer in compliance with the relevant safety specifications for the device in question, but this deterioration is also not obvious to the user. These kinds of failures are usually the result of imperceptible random failures in the device's components or subsystems. They are detected through performance or safety tests made during the periodic PMs.
When this more subtle type of failure introduces a significant performance or safety degradation that can be detected only by some kind of performance or safety test, it can constitute a serious safety threat. For example, a heart rate alarm that has malfunctioned so that it no longer goes off at the set limit will remain as a hidden but potentially hazardous failure until the alarm function is checked and the potentially dangerous degradation discovered. The potential seriousness (i.e. level of severity) of hidden failures will depend on the nature of the failure and on how far the performance or safety flaw is out of specification. For example, a significant reduction in the output of a defibrillator has to be considered life-threatening, but a small excess in the electrical leakage current of a laboratory centrifuge, while it should be noted in the test report, is unlikely to constitute a significant hazard or be considered an imminent threat.
Hidden failures are discovered when the performance verification and safety testing tasks are performed during the PM. When they are found they should be described in a note on the PM work order or the PM report and it would be helpful if the description of the findings provided enough information to enable a judgment to be made as to the worst case potential level of severity (LOS 3, LOS 2, LOS 1 or LOS 0 - see Section 1.7 below) of the adverse outcome that would have resulted if the hidden failure had not been discovered.
A particularly important type of hidden failure is one that disables the proper operation of an automatic protection mechanism (APM) that is included as a component of the device. An APM is usually included in the design to provide protection against another possible hidden failure that is itself considered to be capable of a serious or potentially life-threatening adverse consequence.
There is a wide range of possible adverse outcomes from device failures. Some create potential physical harm to the patient (or to the device user). Others can result in additional direct or indirect costs to the facility and thus create an economic or business risk to the organization. We address these economic/business risks in greater detail in HTM ComDoc 9. "Medical devices that may benefit from PM from a business/ economics viewpoint"
In the case of outcomes creating the possibility of physical harm, it is helpful, when conducting any kind of risk analysis or risk assessment, to define a hierarchy of levels of severity (LOS) of possible physical harm to the patient or, in the case of economic harm to the facility, of levels of economic harm to the business.
LOS 3 = Serious, life-threatening injury - The patient (or the user) may lose his or her life.
LOS 2 = Less serious, non life-threatening injury - The patient (or the user) may sustain a direct or indirect injury ranging from minor to serious.
LOS 1 = No injury, but possible disruption of care - The incident may cause a temporary disruption of care, such as requiring one or more patients to be rescheduled, delaying treatment or delaying the acquisition of diagnostic information.
LOS 0 = No discernible injury or possible disruption of care.
There are some devices, such as critical care ventilators and defibrillators, on which the patient's continued well-being may be totally dependent. These are sometimes called life support devices. Any type of failure that causes such a device to stop working completely, or to stop working properly, has the potential to result in an adverse outcome at the highest severity (LOS 3) level. If the device also happens to have one or more non-durable parts that need timely and competent periodic restoration, the device becomes critically vulnerable to a wear-out failure, and it therefore becomes a device that should be given a high priority for PM. The same is true if the device has a hidden failure that could cause a high severity outcome.
1.8 Which kinds of medical equipment failures are PM-preventable?
1. Wear-out failures that could cause the device to stop working completely. These are failures that are caused by a non-durable part not receiving timely, competent restoration.
2. Hidden failures resulting from imperceptible failures of components within the device that do not cause the device to stop working completely but which might reduce the device's performance or safety below a critical level. These are failures that are discovered when performance and safety testing tasks are performed during PMs. Although the PM testing does not totally prevent the possibility that a patient will be exposed to the device while it is in a defective state, the discovery and correction of these hidden failures does shorten the period during which patients are exposed to the failure. This benefit is addressed more completely in Sections 6.3 and 6.4 in HTM ComDoc 6.
The foregoing analysis puts us in a position to answer the first of the five basic questions about PM - some of which have been addressed previously in HTM ComDoc 15.
1.9.1 Question 1. How, and to what extent, does performing PM on medical equipment improve patient safety?
Generally speaking, PM does improve patient safety, but only to the extent that it detects then corrects the two kinds of PM-preventable failures that were identified just above in Section 1.8 (wear-out failures and hidden failures). And the extent of the improvement in patient safety varies for different devices according to the "level of risk" that the device would have presented if those potential failures had not been detected, and then eliminated. According to the modern theories of risk management, the level of risk takes into account both the level of the severity of the adverse outcome of the event and the likelihood that the event will actually occur.
In this case we are specifically concerned about the level of risk posed by PM-preventable failures, so the extent of the improvement in patient safety is determined by a combination of the potential severity of the outcome of the failure (with the higher levels of outcome severity - such as LOS 3 - being more serious than LOS 2, etc), and the likelihood of the failure occurring. The proper measure of this likelihood of the failure occurring is what the Task Force calls the device's PM-related reliability. We discuss this "likelihood of failing from a PM-preventable cause" more in HTM ComDoc 4 "Consideration of the device's PM-related reliability".
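The two-factor combination described above can be sketched in code. This is a minimal illustration only: the 75-year likelihood threshold and the mapping from (severity, likelihood) pairs to risk categories are hypothetical placeholders, not the Task Force's actual Table 13 values.

```python
def pm_risk_level(severity: int, mtbf_years: float) -> str:
    """Illustrative combination of the two risk factors discussed above:
    worst-case outcome severity (LOS 1-3) and the likelihood of a
    PM-preventable failure, expressed here as the device's PM-related
    MTBF in years. Thresholds and categories are hypothetical."""
    likely = mtbf_years < 75  # i.e. more than one failure every 75 years
    if severity == 3 and likely:
        return "high"
    if severity == 3 or (severity == 2 and likely):
        return "moderate"
    return "low"

# A device with potentially life-threatening (LOS 3) failures but a
# demonstrated PM-related MTBF of 100 years poses a reduced risk:
print(pm_risk_level(3, 100))  # moderate
```

The point the sketch makes is the one made in the text: a high-severity device is not automatically a high-risk device; a demonstrated low likelihood of PM-preventable failure lowers the combined risk level.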
The Task Force has investigated both of these factors. Table 4 provides a ranking of the various device types according to the severity of each device's potential PM-preventable failures. For more on this investigation, see HTM ComDoc 3 "Risk assessment: Determining which medical devices are made safer by PM". The device types at the top of the listing in Table 4 (rows 1 through 7) are judged to have potential PM-preventable failures with life-threatening outcomes. The PM-related reliability of each of the top twenty highest-severity device types in Table 4 is currently being investigated, and as the results become available they will be posted to columns C8 and C9 of Table 13. For more on this investigation, see HTM ComDoc 4 "Consideration of the device's PM-related reliability".
The Task Force has set tentative thresholds for what should be considered an acceptable (safe) level of PM-related reliability for the devices in each of the three top categories of potential PM-related risk (namely those labeled high, moderate and low in column C10 of Table 13). Once this table is completed, professionals in charge of medical equipment maintenance programs will be able to identify which devices (by manufacturer and model) should continue to be maintained strictly according to their manufacturer's recommendations and, for the others, what level of PM-related reliability (which corresponds to PM-related safety when the category of severity is taken into account) is typically achieved when the indicated PM interval and procedure are used. The Task Force has also suggested a way in which the level of PM-related patient safety can be monitored on a continuous basis (see Section 1.12 ?).
As can be seen from the summary below, there are several other benefits from performing regular PM besides improving patient safety.
Improving patient safety. … Some devices - but only some - are made safer by performing appropriate PM. Not all failures have the potential to cause a serious injury, and not all failures are PM-preventable.
Regulatory compliance. … As we explain more fully in HTM ComDoc 11, the CMS regulation addressing PM for medical devices has traditionally been that all medical devices must be maintained strictly according to the device manufacturers' recommendations. Even after the regulations were changed in 2013, there is still a requirement that certain devices be subjected to periodic PM. (For more on this see HTM ComDoc 16).
Customer courtesy and/or customer reassurance. … We may choose to perform PM on some devices because a user has asked us to do so, or because we believe that periodically inspecting and cleaning equipment used for patient care creates a reassuring "cared for" appearance that the user staff appreciates. While this is a qualitative rather than a quantitative benefit, it should not be underestimated. These periodic inspections may also be useful by leading to the discovery of unreported broken equipment.
1.9.2 Question 2. What kind of PM program is called for in the current CMS regulation?
The original Medicare legislation in 1965 stated that: "... There must be a regular periodical maintenance and testing program for medical devices and equipment. A qualified individual such as a clinical or biomedical engineer, or other qualified maintenance person must monitor, test, calibrate and maintain the equipment periodically in accordance with the manufacturer's recommendations and Federal and State laws and regulations. ..." But beginning in 1989, and as recently as 2011, the corresponding standards of the Joint Commission allowed equipment that was not considered to present a significant physical risk to be excluded from any specific maintenance requirements, stating only that PM frequencies should be based on "criteria such as manufacturer's recommendations, risk levels, or current hospital experience," and they, in effect, endorsed the original Fennigkoh-Smith risk-based methodology.
This changed in 2011 when CMS issued revised regulations that narrowed the still-official CMS requirement to use the manufacturer's maintenance recommendations from all equipment to just "equipment critical to patient health and safety and any new equipment until a sufficient amount of maintenance history has been acquired." The "risk-based" option that TJC had been allowing was effectively rescinded. The revised CMS requirement specifically stated that for what they were now calling equipment critical to patient health and safety, "Alternative equipment maintenance (AEM) methods are not permitted." However, there was no clear indication of which particular devices they intended to target with this definition of "critical." They seemed to be placing the responsibility for this onto the facility by stating that the "... hospital may adjust its maintenance, inspection, and testing frequency and activities for facility and medical equipment from what is recommended by the manufacturer, based on a risk‐based assessment by qualified personnel".
Faced with some push-back from members of the HTM community, CMS issued a "clarification" memo in 2013 (HTM ComRef 28) in which they tried to address the uncertainty about the precise meaning of the phrase "equipment critical to patient health and safety". The key language in the 2013 memo is quoted in Section 11.3 of HTM ComDoc 11. Suffice it to say that this new language does not sufficiently clarify what the agency intends by the term "critical", and the Task Force's interpretation of the agency's intention is described in Section 11.4 of HTM ComDoc 11. The new regulatory language does, however, introduce a major concession by allowing devices that are not considered to be "critical" to be included in an Alternative Equipment Management (AEM) program, where they can be maintained other than as the manufacturer recommends. As reported also in HTM ComDoc 11, the Task Force summarizes its conclusions about the agency's intention in the form of the following two recommended AEM program inclusion criteria.
The Task Force's suggestions for implementing an efficient risk-based AEM program that will be compliant with these two criteria are contained in a recently-published two-part article in AAMI's BI&T journal (HTM ComRef 35 and HTM ComRef 36). Much of that material is also contained in HTM ComDoc 16 "Implementing a simple RCM-based Alternate Equipment Management (AEM) program."
HTM ComDoc 10. "Alternate Maintenance Strategies and Maintenance Program Optimization" identifies the following four maintenance strategies that are relevant to maintaining medical devices.
The least efficient maintenance strategy in terms of using up scarce technical manpower is (#1) the traditional fixed-interval preventive maintenance strategy. Predictive maintenance (#2) is the next least efficient. It differs from strategy #1 primarily in effectively extending the interval between restorations or replacement of the device's non-durable parts by substituting a visual inspection for the original restoration task. The most efficient strategy is, of course, the light maintenance strategy (#4). The periodic safety verification strategy (#3) is neutral with respect to efficiency because it must be performed on all devices that have a potential high-severity (LOS 3) outcome to a hidden failure. It may also be considered prudent to perform periodic safety verification on all devices that are projected to have a less severe potential (LOS 2) outcome to a hidden failure.
Step 1 Identify which devices can be classified as non-critical devices (see Section 3.8 in HTM ComDoc 3), and change these immediately to a run-to-failure maintenance method (i.e. perform no scheduled PM).
Step 2 Determine the potential PM priority levels of the devices in the facility's medical equipment inventory by consulting the "AEM eligibility based on outcome severity of failure" graphic (see HTM ComDoc 3).
Step 3 Look over the recommendations below that are taken from Section 4.10 of HTM ComDoc 4 and HTM ComRef 36. Then make the changes that you feel comfortable with (see also .... and HTM ComRef 35).
These are potentially hazardous devices with either overt or hidden PM-preventable failures that could cause a life-threatening injury and that are demonstrating PM-related failure rates greater than the currently acceptable level (not more than one failure every 75 years). For these devices, it would be prudent to continue to follow the manufacturer-recommended PM procedure (for both the interval and the scope of the tasks) and to routinely monitor the levels of patient safety being achieved, as described in Section 3.10 of HTM ComDoc 3 and HTM ComRef 35. This should be continued until acceptable evidence exists in the national database (Table 13) that some other procedure with more efficient tasks and/or a longer interval demonstrates the same or better level of PM-related reliability or a comparable level of patient safety.
These are potentially hazardous devices with hidden PM-preventable failures capable of causing a life-threatening injury that are demonstrating PM-related failure rates greater than the currently acceptable level (not more than one failure every 75 years). For these devices, for which the only “maintenance” that the manufacturer recommends is periodic safety verification, it would be prudent to continue to follow the manufacturer-recommended safety verification testing schedule and routinely monitor the levels of patient safety being achieved, as described in Section 3.10 of HTM ComDoc 3 and HTM ComRef 35, until evidence exists in the national database (Table 13) that testing at a longer interval results in the same or better level of PM-related reliability or a comparable level of patient safety.
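The "one failure every 75 years" threshold used in the two paragraphs above is a fleet-level rate, which suggests a simple way to check it from maintenance records. The sketch below is an illustration under assumed data, not a Task Force procedure: it divides accumulated in-service device-years by the count of repair calls coded as PM-preventable.

```python
def pm_related_mtbf(device_years: float, pm_failures: int) -> float:
    """Fleet-level estimate of PM-related MTBF: total in-service
    device-years divided by the number of repair calls coded as
    PM-preventable (e.g. MR1-type calls). Returns infinity when
    no PM-preventable failures have been recorded."""
    return float("inf") if pm_failures == 0 else device_years / pm_failures

# A hypothetical fleet of 40 defibrillators observed for 5 years,
# during which 2 repair calls were coded as PM-preventable:
mtbf = pm_related_mtbf(40 * 5, 2)   # 100 device-years per failure
meets_threshold = mtbf >= 75        # no more than one failure per 75 years
```

The useful property of this estimate is that it pools a whole fleet: no single device will ever be observed for 75 years, but 40 devices over 5 years contribute 200 device-years of evidence.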
When testing for possible hidden failures with potential high-severity outcomes, there is no optimum interval — shorter is always better. However, it has been shown (see Section 6.3 in HTM ComDoc 6.) that for safety verification–related (hidden) failures with MTBF values greater than about 50 years, the increase in the time that the patient would be exposed to potentially hazardous hidden failures if the testing interval was increased from six months to as long as five years is very small.
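The claim above about extending the testing interval can be illustrated with the standard approximation used in reliability engineering for periodically tested hidden failures: the mean fraction of time a device sits in an undetected failed state is roughly T / (2 × MTBF) for test interval T. The function and numbers below are an illustrative sketch under that assumption, not figures taken from HTM ComDoc 6.

```python
def hidden_failure_exposure(mtbf_years: float, test_interval_years: float) -> float:
    """Approximate mean fraction of time a device spends in an
    undetected failed state, assuming a constant failure rate
    (1/MTBF) and perfect detection at each periodic safety test.
    Standard approximation: unavailability ~= T / (2 * MTBF)."""
    return test_interval_years / (2.0 * mtbf_years)

# For an MTBF of 50 years, extending the safety verification interval
# from six months to five years changes the mean exposure fraction:
print(hidden_failure_exposure(50, 0.5))  # 0.005 (0.5% of the time)
print(hidden_failure_exposure(50, 5.0))  # 0.05  (5% of the time)
```

Under this approximation, even the five-year interval leaves the absolute probability of encountering the device in a failed hidden state small, which is the sense in which the increase discussed above can be judged acceptable for highly reliable devices.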
These lower PM-risk devices qualify for inclusion in an AEM program either because of the lower level of severity of the outcomes of potential failures or because they have demonstrated an acceptable level of PM-related reliability. Therefore, they can be maintained using a maintenance procedure or strategy other than that recommended by the manufacturer. They can be transitioned immediately to less stringent PM strategies, such as the cost-efficient light maintenance (run-to-failure) strategy - which is mentioned in Appendix A of the CMS memo (HTM ComRef 28). At the very least, the manufacturer-recommended procedures can be modified (such as by omitting electrical safety checks that the facility has found to be nonproductive), or by extending the testing interval to make it coincide with a more convenient or more efficient routine.
The logical rule here is to explore the national database (Table 13) for evidence of more efficient maintenance procedures. It would be prudent to monitor the levels of patient safety (as described in Section 3.10 of HTM ComDoc 3 and HTM ComRef 35) being achieved by the current procedure (or any of the more efficient procedures, if chosen) for devices categorized as PM priority 2 (moderate PM-risk) devices. Monitoring those in the lower risk categories is much less important but can be undertaken if the facility chooses.
If these devices should fail, there is a negligible or zero additional risk to patient safety. Therefore, in the absence of other regulatory mandates, unless there is a convincing case that periodic PM can be justified through lower maintenance costs, these devices are excellent candidates for the very efficient light maintenance (run-to-failure) strategy. It was by adopting this run-to-failure maintenance strategy in the early 1960s that the civil aviation industry was able to reduce its maintenance costs by 50% while, unexpectedly, also improving the reliability and safety statistics for civilian aircraft by a factor of 200.
To the best of our knowledge, all of the studies reported to date have shown that only a very small percentage of injuries resulting from failures of medical devices are attributable to poor maintenance (see, for example, HTM ComRef 12). And, as we describe in Section 1.4 of HTM ComDoc 1, the great majority of medical device failures can be attributed to one or other of a fairly wide range of other causes. However, if the cause of each device failure is routinely documented in the manner suggested in that same section of HTM ComDoc 1, this information (on which of those causes is currently contributing the most to device failures in a particular facility) can be very helpful in managing device failure prevention activities other than PM, and in monitoring the effectiveness of those efforts.
Give preference during device acquisition to those devices that are reported to have the highest level of inherent reliability. The possible impact of this strategy is unknown at this time but current statistics indicate that the inherent unreliability of the devices themselves accounts for 45-55% of all failures.
Implement additional measures to reduce failures from the list of causes presented immediately below. They are listed in descending order of anticipated effectiveness.
13-20% - User-related issues such as controls or switches that have been set incorrectly. Although this type of failure may not always lead to a complete loss of function, it can have the same effect as actual failure. For example, an incorrectly set defibrillator can jeopardize patient resuscitation. (These Category PR1 calls typically represent between 13-20% of all of the repair calls).
6-25% - Physical damage usually caused by a combination of poor design and user carelessness, such as dropping the device. (These Category PR2 calls typically represent between 6-25% of all of the repair calls).
3-9% - Problems with an accessory, such as patient cables and electrodes. (These Category PR4 calls typically represent between 3-9% of all of the repair calls).
1-7% - Problems resulting from an out-of-specification environmental condition, such as poor control of the ambient temperature. (These Category PR5 calls typically represent between 1-7% of all of the repair calls).
1-4% - Lack of timely PM (i.e. failing to restore [replace or refurbish] a part of the device that requires periodic attention). (These Category MR1 calls typically represent between 1-4% of all of the repair calls).
1-3% - Poor installation or poor initial set-up of the device. (These Category MR2 calls typically represent between 1-3% of all of the repair calls).
<1% - Tampering with internal switches or other controls that are not intended to be user-accessible. (These Category PR6 calls typically represent <1% of all of the repair calls).
Third, consider implementing pre-use inspections or testing to verify that the device is functioning safely immediately prior to use.
Enhanced Risk Management Program. A very beneficial use for some or all of the resources made available by improving the efficiency of the facility's maintenance program would be to implement an enhanced Risk Management Program incorporating some or all of the additional measures described above.
1.9.5 Question 5. What changes to current PM work practices would be beneficial?
There is no question that the most beneficial change to current PM work practices would be for the entire community to standardize the way we perform and report our maintenance activities (see Section 15.3 of HTM ComDoc 15. "Why we need to standardize the format of our maintenance reports").
There are three extremely important benefits that could be realized if a significant number of managers of the HTM community's maintenance programs could be persuaded to standardize on a common format for their maintenance activities and reporting.
A standard coding system for characterizing the way devices fail could provide valuable information that would allow us to analyze the effectiveness of the facility's equipment safety strategies.
A standard format for documenting the findings of the PMs that are performed on all critical devices would enable us to optimize the PM intervals used for the various devices.
The maintenance entity must use some form of coding for repair calls that allows for a separate count of the failures that are attributable to inadequate PM (similar to the MR1 code described in HTM ComDoc ?). Because of its value in maximizing total equipment safety, we also recommend coding at least the three basic causes of total failure described in HTM ComDoc 1 - namely IRFs or inherent reliability-related failures; MRFs or maintenance-related failures; and PRFs or process-related failures. Adopting the full 15-category classification and coding method described in HTM ComDoc 1 and HTM ComDoc 8 is highly desirable because of its value in diagnosing possible non-maintenance remedial actions.
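To show what the recommended three-code scheme buys in practice, here is a minimal sketch that tallies a hypothetical repair-call log. The codes follow the IRF/MRF/PRF naming above; the log data itself is invented for illustration.

```python
from collections import Counter

# Hypothetical repair-call log coded with the three basic cause codes
# described above: IRF (inherent reliability-related failure),
# MRF (maintenance-related failure), PRF (process-related failure).
repair_calls = ["IRF", "PRF", "IRF", "MRF", "IRF", "PRF", "IRF", "PRF"]

tally = Counter(repair_calls)
total = sum(tally.values())
shares = {code: 100.0 * n / total for code, n in tally.items()}

# The MRF share is the separate count the recommendation asks for:
# the fraction of failures attributable to inadequate PM.
print(shares["MRF"])  # 12.5
```

Even this trivial tally supports the monitoring described earlier: the MRF share tracks PM-preventable failures over time, while the IRF and PRF shares point toward non-maintenance remedial actions such as acquisition choices or user training.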
1.8.2 A new approach to PM prioritization using RCM-based risk criteria.
Some devices can deteriorate in such a way that their performance or level of safety falls to such a degree that the device is potentially hazardous to the patient or user (these are called hidden failures because this deterioration is often not obvious to the user). These hazards are detected and corrected during periodic planned maintenance.
To maximize patient safety it is important to ensure that all devices whose failure can put the safety of the patient at risk receive appropriate attention. Restoring or replacing a device’s non-durable parts in a timely manner (using what we call device restoration or DR tasks) will reduce the device’s overall failure rate to some degree (but certainly not to zero). And periodic safety verification or SV tasks will uncover any potentially hazardous hidden failures, hopefully before they can cause a patient injury.
It is important to point out here that not all possible hidden failures are listed in column 5 of Table 3. In many cases there may be a number of possible hidden failures, and the best way of identifying them is to review the test protocols listed in the performance verification and safety testing (PVST) section of the device's generic PM procedure. For example, by looking at this section of the generic PM procedure for a defibrillator-monitor (click on the PM Code in the 3rd column of Table 3 - DEF-01), you can see that Tasks S4 through S7 have been labelled as "Serious failure is potentially Life-threatening". The example cited in the fifth column of Table 3 is that the "hidden failure caused the unit to under-deliver", which would correspond to a PM finding that Task S7 indicated that the delivered energy was significantly less than the energy level selected. According to the extent to which the device is found to be out-of-spec (OOS), the adverse outcome should be judged to be of either LOS 1, LOS 2, or LOS 3 level of severity. In both of these cases (an anticipated overt failure or a hidden failure) the analyses in the tables should include this additional judgment on the outcome and worst-case level of severity of each anticipated failure, entered in the sixth or seventh column of the respective table.
Table 2 and Table 3 illustrate how this concise risk characterization process works. We have used the compounded result of these risk assessments to filter and categorize the subset of the 70+ more complex device types (listed in Table 1) that we believe represent all or most of the device types that are likely to meet either of the Task Force's first three risk criteria. Although this particular subset represents only about 5-10% of the 700 to 1500 different types of medical equipment in modern hospitals, we believe that it represents all of the types of device that are likely to injure a patient, either if they stop working completely or if they develop some kind of significant hidden degradation.
The Task Force has prepared a brief statement documenting why this PM Criticality questionnaire is consistent with established industry standards of practice.
As best we can estimate there are, in round numbers, between 750 and 1500 different types of healthcare-related devices in use in today’s healthcare facilities. An unknown number of these are non-clinical devices such as printers or other device accessories that do not even fall into the formal category of a medical device that is regulated by the FDA. These non-clinical devices are extremely unlikely to be PM-critical. At the other end of the scale there is a group of about 70 device types that are more likely to be PM-critical, either because of their complexity, or for some other reason that was captured in the original Fennigkoh-Smith criteria.
The Task Force believes that a large percentage of the estimated remaining balance of at least 700 device types will prove to be non-critical when they are analyzed. One example is a set of patient scales. When the HTMC generic PM procedure for a set of patient scales (PA.SC-01) is analyzed using the questionnaire process described in section 3.3 of HTM ComDoc 3, responses (1), (2) and (6) are all "no", and so - according to our criteria - a set of patient scales should be classified as a non-critical device.
Based on the preliminary findings shown in Table 2 and Table 3, we believe that a large number of device types can be shown to be non-critical. This is a very important step because it provides a very solid, rational argument for why a very large number of medical devices can be used quite safely without any kind of periodic PM whatsoever. They simply have no high-severity, PM-preventable failure modes and so, by definition, they are non-critical. The evidence for this is that there are simply no tasks listed in the relevant manufacturer's PM procedure that would either prevent a potentially harmful failure from occurring, or detect a potentially harmful hidden failure that had already developed.
This leaves a list of about 70 device types, shown in Table 4, that are potentially PM-critical. However, as we will show in Part 2 of this article, by implementing Step 2 of this new risk analysis, which will draw on aggregated maintenance data from the new community-wide database, we will be able to determine which of these devices should actually be designated as PM-critical (high risk) devices and given periodic PM according to the manufacturer's recommendations. The others are all more reliable, lower risk devices. We anticipate that, when fully implemented, the analysis in Step 2 will reveal devices with risk levels distributed across the full spectrum from high-risk to very low risk devices.
All potentially PM-critical devices are not necessarily high-risk devices!
The level of risk combines two factors: the severity of the adverse outcome of the event, and the likelihood that the event (the PM-preventable device failure) will actually occur.
This required combination of two factors means that devices that have a manufacturer-recommended PM procedure with critical device restoration tasks or safety testing tasks will not necessarily become hazardous just because the manufacturer's procedure is not followed or even utilized at all. If the likelihood of any PM-related failures actually occurring (even if they are critical failures with high-severity outcomes) is very low - with a mean time between failures (MTBFs) of, say, 50-75 years or more - then the corresponding risk of harming the patient is reduced from high to moderate, to low, or even to very low. The actual level of risk at each of the three levels of severity is, in fact, accurately represented by the probability that the device will actually fail, either totally, or by developing some significant degradation. This is why traveling on a commercial airliner is considered to be safe. While there is a theoretical potential for a high-severity outcome if the plane should crash, the likelihood that this will actually happen is very low – so the level of risk when flying on a commercial airliner is also very low, relative to other ways of traveling.
In order to determine which devices have the theoretical potential to cause a patient injury (or some less severe adverse outcome) if the device should fail because its PM was not completed in a timely manner, we first need to be clear about what is achieved by performing the various tasks listed in the manufacturer's recommended PM procedure.
In general, there are two kinds of tasks contained in a medical device's PM procedure. The first kind is a task that restores the device to something close to its original, like-new condition. The Maintenance Practices Task Force calls these device restoration tasks. They are tasks in which components that are subject to deterioration during the useful lifetime of the device, such as batteries, cables, fasteners, gaskets and tubing, are periodically refurbished or replaced. The second kind is some sort of test to detect any hidden degradations in the functional performance or safety of the device that are sufficiently hazardous to require immediate correction. The Task Force calls these safety testing tasks.
It is entirely possible for some manufacturer-model versions of any of the PM-critical device types listed in Table 2 and Table 3 to be classified as low-risk devices if they can be shown to have good reliability (a demonstrated low probability of failing). Table 12 shows the Task Force's tentative definitions of what should be considered acceptable levels of reliability. We will discuss this in more detail in section 3.3 of HTM ComDoc 4.
In summary, all non-critical device types (i.e. those that have no critical PM-related failure modes) are, by definition, inherently safe with respect to needing PM. All PM-critical device types, on the other hand, are potentially high-risk (potentially hazardous) devices unless certain manufacturer-model versions of those device types can be shown to have good reliability (i.e. a low likelihood that the PM-related failures will actually occur), in which case they can be categorized as lower risk devices. See Table 12 for more details on the tentative definitions of the various levels of device risk.
We will describe how to determine which devices are PM-critical/high risk devices in section 4.x of HTM ComDoc 4.
So, if the total failure or critical degradation of the device is highly unlikely to occur, the level of risk associated with using the device is correspondingly small. Devices that are classified in the tables as having potentially life-threatening severity (LOS 3) outcomes from total failure or from critical degradation should more properly be called potentially hazardous or potentially high-risk devices because the actual level of risk at each of the three levels of severity is, in fact, accurately represented by the probability that the device will actually fail, either totally, or by developing some significant degradation.
To quote from the third paragraph of the statement titled “Background” on the Introductory Materials page of the website created by the Maintenance Practices Task Force (MPTF), one of the primary motivations prompting this project, which AAMI began supporting in November 2015, is to address the huge problem created by the failure of the Healthcare Technology Management (HTM) community to establish a “… generally-agreed way of quantifying current levels of maintenance-related medical equipment safety …”.
Much has been written about medical technology, and virtually all of it states that the ultimate, overriding consideration must always be assuring the very highest levels of patient safety. Maximizing patient safety is, of course, a very worthy goal - with which there can be no quarrel - but to paraphrase one of the better maxims of the business world – if you can’t measure it, you can’t manage it. And since virtually all of the regulations and standards governing the HTM business include a requirement, either direct or indirect, to provide levels of patient safety that are “generally acceptable”, this current lack of an accepted metric for medical device safety – and maintenance-related medical equipment safety in particular - makes it impossible to prove how well (or not) we are satisfying this important obligation. This same lack of the proper tools also makes it very difficult to compare the levels of maintenance-related medical equipment safety achieved by different maintenance strategies.
A current manifestation of this quandary is the requirement in the recently amended medical equipment maintenance regulations of The Centers for Medicare & Medicaid Services (CMS) which implies very strongly that the use of any of the now-permitted alternate equipment management (AEM) strategies for maintaining the facility’s medical equipment must keep the equipment just as safe as it would be if the devices were being maintained according to the manufacturer’s recommendations. This is clearly a very reasonable requirement but it is creating practical difficulties for facilities trying to introduce more cost-effective maintenance practices, as well as for the various survey and inspection teams who are responsible for confirming that maintenance practices other than those recommended by the device manufacturer are not exposing patients to higher levels of risk.
Everyone familiar with the standard texts on risk management knows that safety itself is not directly measurable (see, for example, the third chapter in “Of Acceptable Risk: Science and the Determination of Safety” by William Lowrance). The only aspect of safety that is measurable is the actual level of risk created by some specified potential hazard. So when we say that something such as a medical device is safe, what we are really doing is making a judgment relative to some recognized standard that the risk created by one or more particular potential hazards (such as, in this case, the potential for an adverse patient outcome attributable to inadequate device maintenance) is generally acceptable. Devices that are deemed “safe” in this way are really only safe with respect to the specifically identified hazard, or hazards.
While all of the various participants in the HTM business - including the regulating authorities - have cited patient safety as the primary driver within their respective areas of responsibility, there has been a lack of meaningful efforts to establish a rational, scientific basis for making these judgment calls on the level of safety of the patient. This is certainly true of the regulatory framework that is intended to ensure the safety of medical devices in their working lifetime, subsequent to the device having passed through the FDA’s initial device approval process. It has already been pointed out in the just-published AEM Program Guide that some of the accreditation standards based on the CMS regulation (referenced above) contain sloppily incorrect or inconsistent terminology as well as a complete lack of direction on how conformance to what are allegedly the “generally acceptable” levels of patient risk should be demonstrated.
By adopting the widely used and well-respected scientific methodology embedded in reliability-centered maintenance (RCM), the Maintenance Practices Task Force (shortened elsewhere in this report to “the Task Force”, “the MPTF” or just “the TF”) has made significant progress towards solving this fundamental problem. As described in HTM ComDoc 1 and several other related documents on the website, the Task Force has created a practical method for characterizing the level of PM-related risk associated with the different manufacturer-model versions of the most PM-critical medical devices. Each identified level of maintenance-related risk is a combination of two parameters: one representing the worst-case severity of the adverse outcome of a PM-preventable failure of the device (the TF has selected three representative levels: a life-threatening injury, a serious but less than life-threatening injury, or a less serious outcome such as a delayed diagnosis or delayed treatment), and a second quantifying the likelihood of a PM-preventable failure actually occurring (represented by the device’s documented PM-related failure rate).
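The two-parameter characterization described above can be sketched in code. This is only an illustrative model: the severity labels follow the three representative levels named in the text, but the failure-rate threshold and the mapping into “high / moderate / low” categories are hypothetical, since the Task Force’s actual cut-offs are not specified here.

```python
from dataclasses import dataclass

# Severity levels used by the Task Force (worst-case outcome of a
# PM-preventable failure), ordered from most to least severe.
SEVERITY_LEVELS = ("life-threatening injury",
                   "serious injury",
                   "less serious outcome")

@dataclass
class DeviceModel:
    name: str
    severity: str            # one of SEVERITY_LEVELS
    pm_failure_rate: float   # documented PM-preventable failures per device-year

def risk_level(device: DeviceModel, rate_threshold: float = 0.02) -> str:
    """Combine the two parameters into a coarse risk category.

    The 0.02 failures/device-year threshold and the three output
    categories are purely illustrative, not the Task Force's values.
    """
    high_severity = device.severity in SEVERITY_LEVELS[:2]
    high_likelihood = device.pm_failure_rate >= rate_threshold
    if high_severity and high_likelihood:
        return "high"
    if high_severity or high_likelihood:
        return "moderate"
    return "low"

vent = DeviceModel("Ventilator X100", "life-threatening injury", 0.05)
print(risk_level(vent))  # -> high
```

The point of the sketch is that the risk level is a joint function of severity and likelihood, so a device with a severe worst-case outcome but a very low documented failure rate lands in a different category than one where both parameters are elevated.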
The Task Force has also proposed a practical method for establishing what level of PM-related risk should be considered acceptable, another notable step forward. In this context it seems logical to set the standard for acceptable maintenance-related safety at the typical level of PM-related risk achieved when the devices in question are maintained strictly according to the manufacturer’s recommendations. Just what this level is can and will be determined (see project Objectives #3 and #4) by conducting a statistically satisfactory number of tests to determine and document the actual PM-related failure rates demonstrated by a sample drawn from a number of the potentially most critical devices while they are being maintained according to their manufacturer’s recommendations.
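Estimating that baseline failure rate from field data is straightforward arithmetic: PM-preventable failures divided by accumulated device-years of service. The sketch below, with entirely hypothetical numbers, also adds a rough one-sided upper bound treating failure counts as Poisson; a real study would need the “statistically satisfactory” sample sizes the text calls for, and the normal approximation used here is adequate only for reasonably large counts.

```python
import math

def pm_failure_rate(failures: int, device_years: float) -> float:
    """Point estimate of PM-preventable failures per device-year."""
    return failures / device_years

def rate_upper_bound_95(failures: int, device_years: float) -> float:
    """Approximate one-sided 95% upper bound on the rate, treating the
    failure count as Poisson (normal approximation)."""
    rate = failures / device_years
    return rate + 1.645 * math.sqrt(failures) / device_years

# Hypothetical sample: 6 PM-preventable failures observed over 300
# device-years of manufacturer-recommended maintenance.
print(round(pm_failure_rate(6, 300), 3))       # -> 0.02
print(round(rate_upper_bound_95(6, 300), 3))   # -> 0.033
```

A rate documented this way under manufacturer-recommended maintenance would then serve as the benchmark an AEM strategy must match or better.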
Much has been written about medical technology, and virtually all of it cites maximizing patient safety as the ultimate, overriding consideration. This is, of course, a worthy goal with which there can be no quarrel; it is the motherhood and apple pie of healthcare technology management (HTM) and a cherished icon that we all serve dutifully and enthusiastically. Virtually all of the regulations and standards governing the HTM business likewise include either a direct or an indirect obligation to provide acceptable levels of patient safety. The rub comes, however, when we attempt to quantify how well our efforts are measuring up to this rather vague obligation to maximize patient safety.
A recent piece by …. on the debate over medical device service urging …. is a good example.
Safety itself is not measurable. The only aspect of safety that is measurable is the actual level of risk created by some specified potential hazard. So when we say something is safe, what we are really doing is making a judgment that the level of risk posed by one particular potential hazard is acceptable. The device is indeed safe, but only with respect to this one particular hazard (cite Lowrance).
To illustrate this we will use an example from recent investigations (cite ?) into alternative equipment management (AEM) strategies that would keep medical devices just as safe as they would be if they were maintained according to the manufacturer’s recommendations, something now permitted by recent revisions to the medical equipment regulations of the Centers for Medicare & Medicaid Services (CMS) (cite ?). In this example the risk we are concerned with is the risk that the device will fail from a PM-preventable cause.
The key to identifying which device failures can be attributed to a PM-preventable cause (i.e., could have been prevented by a more effective or more timely PM activity) is to examine each of the tasks listed in the manufacturer’s PM procedure. This identifies which of the device’s components need some kind of periodic restoration, such as a filter that needs cleaning or a battery that needs replacing. If a device is presented for repair and the only thing wrong with it can be traced to a component that is scheduled for some kind of restoration during PM, then it is quite likely that the failure can be considered PM-preventable: perhaps the restoration performed during the last PM was ineffective, or perhaps the PM interval is too long. Similarly, the manufacturer’s PM procedure may include testing the performance of the device to detect deteriorations, either in its functional performance or in its compliance with certain safety requirements, that would not be obvious to the user (so-called hidden failures). While these deteriorations have not caused a complete failure, the diminished performance could be putting the patient at risk, so they too should be considered PM-preventable failures. A shorter PM interval would have reduced the length of time the patient was exposed to that risk.
In order to gather reliable information on the frequency with which PM-preventable failures are encountered, it is very important to standardize the techniques and criteria for diagnosing when a user-reported failure is legitimately attributable to inadequate or tardy PM. Similarly, we need to standardize the techniques and criteria for diagnosing failures encountered when the PM itself is performed. Obviously a finding during PM that the device failed one or more of the critical performance or safety tests included in the PM procedure constitutes a PM-preventable failure (and an indicator that the PM interval is too long). The Maintenance Practices Task Force has also proposed that discovering a part scheduled for some kind of restoration during the PM that has already deteriorated to the point where it could have been interfering with the proper operation of the device should likewise be considered a PM-preventable failure; this too is an indicator that the PM interval is too long.
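The classification criteria above amount to a small decision rule, which can be sketched as follows. The field names are hypothetical placeholders, not actual reporting-form fields, and a real standardized form would capture more detail.

```python
def classify_failure(found_during_pm: bool,
                     failed_critical_test: bool,
                     restorable_part_deteriorated: bool,
                     traced_to_pm_restorable_part: bool) -> bool:
    """Return True if a failure should be counted as PM-preventable,
    following the criteria sketched in the text:

    - a user-reported repair traced to a component that the
      manufacturer's PM procedure restores (filter, battery, ...)
      is PM-preventable;
    - a failed critical performance/safety test discovered during PM
      is PM-preventable (the PM interval was too long);
    - a restorable part found already deteriorated at PM is likewise
      PM-preventable (Task Force proposal).
    """
    if found_during_pm:
        return failed_critical_test or restorable_part_deteriorated
    return traced_to_pm_restorable_part

# User-reported failure traced to a clogged filter that PM cleans:
print(classify_failure(False, False, False, True))  # -> True
```

Standardizing on rules like these is what would make failure counts, and therefore the failure rates discussed earlier, comparable across facilities.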
Unfortunately there is still considerable variation in the kind of maintenance data collected throughout the field. While there have been recommendations to standardize on these particular indicators that a failure was PM-preventable, they are not yet in widespread use.
So even though we are often required to characterize something such as a maintenance practice or maintenance “strategy” as safe or unsafe, we generally fail to address the judgment-call nature of this requirement. Although we champion data-driven decisions, an important and laudable step forward, we need to recognize that with respect to safety there are generally no prescribed boundaries separating acceptable (i.e. safe) levels of risk from unacceptable (i.e. unsafe) levels of risk.
The data driving the decisions are the levels of risk relevant to certain specific hazards.
In summary, on Question 2: How much of an impact can PM have on the safety of medical devices?
Back to Main Page or on to HTM ComDoc 2 “Important definitions”
I enjoy you because of all of the hard work on this website. Ellie really loves going through investigation and it’s obvious why. Most of us notice all of the dynamic form you convey very important suggestions on the website and as well as attract contribution from the others on that matter while our daughter has always been understanding a great deal. Take pleasure in the remaining portion of the year. You are doing a great job.
I wanted to create you a very little observation to help say thanks a lot as before for the awesome tips you have featured above. It is open-handed with you in giving unreservedly what many individuals would’ve distributed as an e-book to get some dough for their own end, especially seeing that you might have tried it if you decided. These advice also served to become easy way to be aware that many people have the identical passion really like my personal own to find out much more in respect of this condition. I’m certain there are thousands of more fun instances in the future for individuals who browse through your site.
A lot of thanks for all your valuable work on this web page. Debby takes pleasure in managing internet research and it is obvious why. A lot of people hear all relating to the compelling ways you give invaluable guidelines via this web site and inspire contribution from other people on this issue plus our own simple princess is really starting to learn a lot of things. Take advantage of the remaining portion of the new year. You are always doing a fabulous job.
Needed to create you that little bit of note to say thanks again for the splendid things you have shown on this site. This is really open-handed of people like you in giving unhampered precisely what most people would’ve offered as an e book to help make some money on their own, especially since you might have tried it if you wanted. These suggestions likewise worked to be a fantastic way to be sure that other individuals have a similar passion really like my very own to know great deal more with regard to this condition. I believe there are thousands of more enjoyable periods in the future for individuals that look into your site.
I wish to express some thanks to you for bailing me out of this predicament. As a result of surfing through the world wide web and finding opinions that were not powerful, I believed my life was done. Being alive minus the solutions to the difficulties you’ve fixed all through the website is a serious case, as well as ones which might have in a negative way damaged my career if I hadn’t discovered the blog. Your own skills and kindness in controlling a lot of stuff was very useful. I am not sure what I would have done if I hadn’t come upon such a solution like this. I’m able to at this time look forward to my future. Thanks very much for the professional and results-oriented help. I won’t think twice to propose your web blog to anyone who ought to have assistance on this problem.
My husband and i felt very relieved that John could do his web research while using the precious recommendations he obtained out of your web page. It’s not at all simplistic to simply always be handing out ideas which usually a number of people may have been trying to sell. And we all fully grasp we have got the writer to be grateful to for that. The specific explanations you have made, the easy website navigation, the friendships your site give support to instill – it is mostly powerful, and it’s making our son and the family do think that idea is excellent, which is certainly unbelievably serious. Many thanks for the whole thing!
I am glad for writing to make you understand of the amazing encounter our princess had reading through yuor web blog. She mastered some pieces, with the inclusion of what it’s like to possess an ideal helping style to get other people effortlessly know just exactly selected complex things. You truly surpassed our expected results. Many thanks for distributing the good, dependable, explanatory and also unique tips about the topic to Janet.
My wife and i ended up being now joyful that Louis could do his survey because of the precious recommendations he grabbed using your blog. It’s not at all simplistic to just possibly be handing out points which often many people might have been trying to sell. And we all fully understand we’ve got the writer to give thanks to for this. The type of illustrations you’ve made, the simple site navigation, the friendships you give support to engender – it’s mostly astonishing, and it’s really facilitating our son and us understand this content is brilliant, which is certainly especially indispensable. Many thanks for all the pieces!
I would like to point out my gratitude for your generosity in support of persons who really want guidance on in this idea. Your personal dedication to passing the solution along came to be amazingly significant and has all the time encouraged people just like me to reach their aims. Your own invaluable guideline denotes this much a person like me and still more to my office colleagues. Regards; from each one of us.
Thank you so much for providing individuals with an extraordinarily special possiblity to read in detail from here. It is often so good plus stuffed with amusement for me and my office friends to visit the blog nearly three times in a week to read through the new guides you will have. And indeed, I am just always fascinated considering the unique solutions you give. Certain 1 facts in this post are indeed the best we have all had.
My spouse and i felt lucky that John managed to conclude his researching with the precious recommendations he had out of the web pages. It’s not at all simplistic just to continually be giving out procedures some others have been selling. We really realize we now have the website owner to give thanks to because of that. The type of illustrations you made, the easy blog navigation, the friendships you make it easier to instill – it is everything exceptional, and it’s really letting our son and the family believe that this theme is fun, and that is exceedingly vital. Many thanks for all the pieces!
I would like to convey my gratitude for your generosity for those individuals that require assistance with this concern. Your special dedication to getting the message all-around appears to be especially insightful and has in every case empowered folks just like me to get to their desired goals. The important tutorial signifies a lot to me and still more to my office workers. Thanks a ton; from all of us.
I simply wanted to develop a word in order to thank you for some of the superb tips and tricks you are showing at this website. My time consuming internet lookup has finally been recognized with brilliant suggestions to talk about with my guests. I would assert that many of us readers are very lucky to live in a useful place with many wonderful people with helpful pointers. I feel rather lucky to have discovered the web pages and look forward to so many more pleasurable moments reading here. Thank you again for everything.
A lot of thanks for all your efforts on this site. My mom really loves managing internet research and it is easy to see why. I hear all relating to the compelling tactic you convey very helpful guides through the web blog and as well as improve contribution from people on the idea so our favorite simple princess is undoubtedly starting to learn so much. Take pleasure in the rest of the new year. You are always doing a powerful job. | 2019-04-21T16:40:55Z | http://www.ogura-ortho.com/blog/?p=835 |
In a recent post, we reminded you of deadlines for various awards from the Association for the Study of Food and Society. It looks like we missed one that could be of great interest to SAFN folks who teach: the ASFS Pedagogy Award. Fortunately, the deadline has not yet passed, although it is coming soon: February 15, 2019.
Teaching Food and Culture. Edited by Candice Lowe Swift and Richard Wilk. Walnut Creek, California: Left Coast Press, Inc., 2015. 209 pp. US$39.95, paper. ISBN 978-1-62958-127-9.
In Teaching Food and Culture, Swift and Wilk present a compilation of papers that use food “to transform research into pedagogy,” arguing that food is a productive medium for engaging students in the core themes and topics of anthropology. One of the strengths of this volume is the editors’ commitment to all four subfields of the discipline; moreover, every author demonstrates a commitment to a holistic approach to teaching and research that reflects the trans-disciplinary nature of the study of food. Several authors specifically mention that assignments can be adapted to courses in a range of disciplines including gender studies, communications, public health, religion, economics, and history, giving the volume a broad readership. Chapter 1 presents an overview of the chapters and the goals of the book; Chapter 2 is an interview with the late and notable food scholar Sidney Mintz. The interview took place via email correspondence and consists of Mintz’s thorough responses to three questions posed by the editors of the volume.
Section II of the book, Nutrition and Health, begins with a chapter on “Teaching Obesity: Stigma, Structure, and Self.” The authors of Chapter 3 describe the ways they use the topic of obesity to address key concepts, including poverty, discrimination, and responsibility, in their upper- and lower-division undergraduate courses on anthropology and global health. While they describe the sensitive nature of teaching obesity and the problems that can arise in having students research and debate this topic, more concrete examples of how to avert these problems in the classroom would be beneficial. In Chapter 4, Sept describes how she structures her upper-division archaeology course, Prehistoric Diet and Nutrition. Blending biological anthropology and archaeology, she links studies of genetic change and the development of taste with popular culture trends in food such as the paleo diet. She details a related in-class scenario-building exercise that prepares students for debates on hunting and scavenging. After providing a brief history of the development of nutritional anthropology and the biocultural approach to food in Chapter 5, Wiley outlines the history and social life of milk. A detailed semester-long assignment presented in the appendix guides students through their own single-food project, yet the body of the chapter itself could be strengthened by more classroom examples.
The three chapters of Section III: Food Ethics and the Public offer the most pedagogical insight, with discussion of activities and students’ responses to these approaches. First, Benson (Chapter 6) describes three different assignments he has used to emphasize the role of food in the study of consumption, explaining how they “…[have] students look inside themselves at their own issues of dependency and habituation as well as upward at the powerful institutions that make the myths and realities of consumption” (111). This balance is carefully analyzed in several other chapters, including Chapters 7, 8, and 12, where the notions of linking research and praxis, and of demonstrating how the personal is political, are emphasized. In Chapter 7, Counihan describes her research and teaching that encourage her students to reexamine the places where food is produced, purchased, and consumed. Using Lancaster’s historic farmers market, she provides students with a central research question: “Does Central Market promote a just and community-building system of food production and consumption?” This question guides students through ethnographic research on the intersections of food, gender, class, race, power, economics, and politics. Service learning courses that address these same themes are the focus of Chrzan in Chapter 8. By offering readers a history of her service learning courses, she describes her successes and failures, allowing readers to avoid these pitfalls in their own courses. The active ethnographic requirements of the assignments in these chapters illustrate how students learn to apply anthropology beyond academe in ways that also promote food justice and democracy.
Finally, the chapters in Section IV: Food, Identity, and Consumer Society discuss identity creation and how food and eating can illustrate “otherness.” Sutton and Beriss (Chapter 9) explain how place, identity, and community can be analyzed through an exploration of restaurants, though it seems that this chapter would be better suited to the third section of the volume. Chapters 10 and 11 accentuate the role of language in the study of food. Stross (Chapter 10) presents a narrative of his syllabus and highlights several innovative in-class activities. In Chapter 11, O’Connor explains how she uses food to teach semiotics, with an emphasis on helping students understand the relationship between theory and method. In the final chapter, Van Esterik reviews her decades of research and teaching on food, discussing how her research informed her teaching, which in turn informed new research. She writes poignantly about the emotional reactions experienced by both scholars and students through discourse on family, hunger, health, and disordered eating.
Several authors reflect that their courses on food attract a diversity of students, making teaching both challenging and enjoyable as they learn from their students’ experiences. As students grapple with how to analyze personal experience in an academic context, a familiar challenge in teaching anthropology courses, food becomes a tangible and emotionally charged vehicle for applying anthropological theory. However, this volume could benefit from deeper discussion of how to handle such pedagogical challenges in the classroom. While ethical dilemmas, such as students who struggle personally with food insecurity or eating disorders, are regularly mentioned, precisely how these problems are resolved in the classroom is largely absent (with Chrzan’s chapter a notable exception).
This volume will be of most use to graduate students and professors who are preparing to teach new courses, or wish to infuse their existing courses with new assignments, activities, and articles. Nearly every chapter includes expansive reference lists for readings and films, and many authors list website URLs for resources and classroom activities. A major strength of the volume is that most authors describe a specific assignment used in their course that is subsequently listed in the appendix. These assignments are excellent additions to the volume, providing easily adaptable teaching examples for readers.
This month, the Food Pedagogy Series is pleased to offer a special pair of interviews. Doctoral candidate and instructor Aimee Hosemann was recommended by one of her students, and both Hosemann and the student, Clara Broomfield, agreed to be interviewed about the class. We will hear first from Hosemann, and then we will hear from Broomfield for a student’s perspective on the same course.
In this interview, Hosemann discusses the use of commensality practicums and a class cookbook in her course “Food and Culture” at the University of Texas at Austin. She also reflects on professors’ responsibilities when discussing their own diets, and the challenges of teaching as a doctoral candidate.
LRM: Before we start talking about the course, I would love to hear a little bit more about your work.
AH: My work is linguistic and sociocultural anthropology with a Brazilian indigenous group called the Wanano/Kotiria. I’m specifically interested in women’s expressive practices.
About a year and a half ago, I was drawn into reading a bunch of stuff by vegan ultra athletes. I noticed how many professional athletes were moving into vegan or whole foods plant-based diets. They were telling stories that sounded like religious conversion narratives: they reached a moment of crisis in their lives, and found plant-based diets. They’re very powerful in the same way that religious testimony is. What started as a hobby turned into what I’m going to focus on for the next couple of years.
LRM: Looking at your course description, I noticed that the last statement says you will focus on “how flows of dietary images and discourses shape race and ethnicity, gender, social class, and other identifications.” I found that an interesting phrasing, because it seems that many food-related syllabi invert that—they look at race, ethnicity, gender, and social class shaping dietary practices rather than the other way around. Can you talk about that phrasing in particular, and about your goals for the class more generally?
AH: That’s such a great question! One of the things I’m interested in is how all manner of things have semiotic content that people interpret. Thinking about food as a globalized thing, you can imagine food and images of food moving around in different social networks. When people take those things on, there’s something appealing about those objects. I think about how people respond to these things, and how that shapes some of their ideas about themselves.
I think it would have worked equally well if I had inverted those things, but I guess I’m trying to play with the concept that dietary practices, and talk about dietary practices, are enactments of something—like the discourse-centered approach to language and culture, where language, culture, and society aren’t necessary the same thing, but they are constantly reconstituting each other.
I think that that plays in really well to my general goals for the class: more than anything, I want the students to adopt an anthropological mindset, and learn to think about things, ideas, and people as reflective of, and constituting, networks of relationships. The way that I wrote the description and my goals for the class work hand in hand with each other.
LRM: Does that approach shape the topics you address or the order of the content of the course?
AH: They did, sort of. I tried to think about things that students had ready experience with. So we have readings about coffee, and a discussion about the Paleo diet and the physical anthropology evidence for or against it. Then, there is another structural element to consider, which is that we used Gillian Crowther’s textbook Eating Culture. I looked in that book for inspiration about things that students might really want to know, and things that are my interests—for example, readings toward the end of the semester about vegan sexuality, about lacking food when you’re in a detention center as a migrant, or the cultural and environmental impacts of the BP oil spill. These were things that I was really interested in at this particular moment.
LRM: It sounds like you are working to keep the content fairly current—these are very current issues, migrant detentions, and the BP oil spill.
AH: Yeah, and those two things especially, it’s not too difficult to think about how they would apply to a student body that largely comes from Texas. The oil industry is absolutely integral to a lot of people’s livelihoods here, and then in Texas we have some family detention centers that got a lot of media attention because they were not doing a very good job of housing people in a humane fashion. I really wanted to be able to think about those things, and the lens of food and food practices is a way to sneak at controversial topics.
LRM: Did you feel like that was effective, like you were able to broach more controversial topics successfully this way?
AH: I think it definitely helped. One of the things that I hammered constantly in my class was the need to understand where our food comes from, and how it is interrelated with other things, like immigration. If people want immigration reform, they need to be willing to pay more for their tomatoes. In class, we talked about: How important is a tomato to a particular cuisine? Would you pay for this cuisine that includes tomatoes, and what happens if you only want to be able to pay very cheap prices for your tomato? Who actually paid for that?
LRM: I want to back up a little bit: how big is the course, what kind of students enrolled, etc.?
AH: The class had 46 students. There were a lot of upperclassmen. It was not meant to be a super high-level course, but it assumed that students had some background in anthropology. Anthropology students got the first seats, then people from other departments. There were a number of ethnic and racial backgrounds represented, as well as traditional and non-traditional college students. Many people had worked in some kind of food-related industry, and had some experience with the work of food. That gave us the ability to really talk at a higher level about what the world of food is like.
LRM: 46 students…that’s a pretty large class to think about eating together. Could you talk a little bit about the physical structure of the class?
AH: We were in a relatively small auditorium, with about 80 chairs in it stadium-style. When I was lecturing, I tried to move around the room and get people to move around in their seats to engage. For some of the people down in the front rows, they never saw some of their classmates in the back, so they didn’t always know who was speaking. So I tried to move around as much as possible, and sometimes let them take over the conversation and turn their backs on me, and look upward in the classroom. Our eating sessions helped that, because then people could move around a lot more, and talk to different people.
LRM: You’ve mentioned this “commensality practicum,” and the student who recommended you spoke specifically about this as a fantastic aspect of the course. Can you tell me more about it?
AH: It’s actually something I drew on from my high school newspaper class. We sometimes had what were called “interpersonal skills tests,” which were times to kick back and let the stress melt away for just a while. I always thought that was such a good idea, because we could talk and have fun together, and get to know each other as newspaper staff in a different way.
My class was scheduled at noon, and because it was a food class at noon, there had to be some way to integrate actual food on a reliable basis. I was really taken by the idea of having something like a discussion period every so often so that if there were things they wanted to talk about, they had a chance to do that.
LRM: Did everyone bring food to share, or their own lunches?
AH: Everyone brought food for themselves, or somebody might have a little extra something to share with people who were close by. We had a separate day, when their recipes were due, that people made food to share. They liked that so much that the next week we had class brunch.
LRM: Was there a structured discussion topic for the commensality practicums?
AH: I would start out with an idea. One time, we talked about what kind of structure we might like our class cookbook to take on. Another time, I asked them how they feel about the concept of food as a human right. I would think about something that was in the air, and then ask them to get a conversation going. They would take it from there.
LRM: You mentioned a class cookbook. Can you tell me about that?
AH: Every student had to include a recipe. It could be for anything that they wanted, it just needed to be something that they liked, and I requested that they provide cultural or familial information; pictures if they wanted them; information on special kinds of techniques or shopping, and to really have fun with it. And some of them were absolutely amazing! Some were written bilingually, in the home language and in English. Beautiful photography, beautiful stories. People scanned and took pictures of the original recipe cards. They submitted them through Canvas, our course management site, and I am compiling them into a single document that they will all get electronically.
LRM: It sounds like they were excited about it.
AH: They were very excited about it! Even if they didn’t show it in class, it came out in the writing. They talked about what the food is like, and how meaningful it is to them. I could really feel the excitement in their submissions.
LRM: Could you talk a little about a couple of the concepts you really wanted to get across in the course?
AH: There were a couple that really took on lives of their own in the course. First, “What is the idea of gender in relationship to cooking?” Professional cooking and domestic cooking are valued very differently. They really got engaged in that, especially thinking about it in relation to coming up with recipes that were good enough for class. Often, they went to female family members to ask for things, and it gave many of them a new way to think about what was happening when people were cooking at home for them versus when they were eating at restaurants.
Another was food as a marker of health and a marker of security. Because we were really trying to get to an understanding of health as something that is subjective, and even though there are things that we can say about health that seem like they’re pretty objectively true, that objectivity actually hides a lot of cultural context.
Thinking about food security and how it relates to issues of health, one of the things we discovered in class conversation is that the university has a lot of food available. They could conceivably eat just about any time they want to, but it’s not actually that accessible to them, either because of time or budgetary constraints. The things they want to eat are too expensive or too far away. Even though there is food around, as college students—even at this university that considers itself a Public Ivy—a lot of them are at least temporarily food insecure.
LRM: Can you talk a little about how you bring in linguistic anthropology to teach food?
AH: I love to use linguistic anthropology with food! And I’m a big fan of Jillian Cavanaugh’s work on salami, and her work with the documentary processes around food production. There’s a piece in the Journal of Linguistic Anthropology, “What Words Bring to the Table: The Linguistic Anthropological Toolkit as Applied to the Study of Food,” which details how anthropologists who do linguistics have found themselves doing food, and about how those things meld together. You can’t talk about food without talking, and the way that people talk about food—the how, the why, the when—all of these are just as important to the cuisine as the food itself. We spent a lot of time thinking about how what people say reflects ideas about food and their bodies, and what they have access to, what’s appropriate. That adds a whole level of analysis, and a lot of richness.
LRM: Did students latch on to the importance of language in relation to food?
AH: I think they did. One of the pieces we looked at was Paugh and Izquierdo’s “Why is This a Battle Every Night?: Negotiating Food and Eating in American Dinnertime Interaction” about dinnertime arguments over food. That piece is so rich because the transcripts are just beautiful, and you really get the sense of the dynamic that’s happening. I was able to show them, by talking through these transcripts, how this discussion about food is emergent but also plays onto particular family histories.
LRM: The student who recommended you commented that this course integrates many “culturally relevant internet sources and films.” I wonder if you could talk about those?
AH: What with having a Facebook or Twitter feed, I saw all these interesting things. I’m always looking for interesting snippets to show people the connection between journal articles and real life. Then, I got a Netflix account last semester, and went through their entire holding of food movies. One of the hits of the class was the French movie Haute Cuisine, because it’s such a beautiful depiction of the gender issues between a private home cooking, and high-status chef cooking. The food photography was beautiful, the talk between the characters about food was beautiful, and it just really nicely tied together a lot of things in the class.
LRM: Were there any other films you felt were particularly successful?
AH: A Year in Burgundy was good. We watched that while we were having brunch, and that was really cozy. Both of these movies are very cozy movies. They just make you feel warm, and want to engage with other people, and so that set a really good tone for talking, and appreciating what landscape does for food, what culture does for food. Both of those worked really well. Jiro Dreams of Sushi also went over really well.
I sent out links to a lot of things through Canvas, or through our Facebook group, so that people could look at them on their own time.
LRM: Can you tell me about the Facebook group?
AH: Yeah! It was a student suggestion in the last few weeks of class. Some of the core members of the class got along together really well, and they really wanted to have a way to keep in touch with each other and keep sharing materials. I think about 16 members of the class have joined up now and have been trading videos and having discussions about different things. I’m hoping to use it to keep in touch with anyone who takes a food course with me.
LRM: Do you also incorporate those extra media items into class time?
AH: Yes. Luckily with food, things come up. For example, in Crowther’s textbook, there’s a discussion of Appadurai’s work on gastro-politics, and how being a daughter-in-law in a Tamil family can be a very difficult position around food. Well, the week after we talked about this, a news story came out about a daughter-in-law who was feeling very put upon by her in-laws, and didn’t like them messing around in her marriage. She had been urinating into their tea every day for a year to get back at them. Her mother-in-law was so angry, and wanted her arrested or to sue her for justice—but part of Appadurai’s point with gastro-politics is that, while the mother-in-law thinks she is having particular impacts on her daughter-in-law’s food experiences, the daughter-in-law can also approach this through subversion and claim her own kind of power in relation to her family food situation. That was one that I brought in, but students like to bring things in, too.
LRM: Do you feel like students’ interactions with the world changed as a result of the class?
AH: One of the questions they could answer on their final exam was about something that they learned about food and cultural relationships, and what kind of knowledge gaps they had before the class started. So far, what seems to be very strongly coming through in their answers is that, for a lot of them, they hadn’t really thought of food as a cultural entity, or that it was bound up in other things.
That’s an interesting thing to reflect on, because if you think of food as existing outside of social and linguistic relationships, that says interesting things about your own food history. A lot of students have started thinking about the fact that white bread, or Starbucks Coffee, or other things that seem ubiquitous actually refer to a whole bunch of other things that they didn’t even think about.
LRM: Is there anything in the course that you didn’t feel worked well, or that you won’t continue?
AH: One of the things I want to do is get the class down to a size where I can have them doing journal reflections a few times over the semester. I’ve done that in other classes, and it’s one of the single most highly rated pedagogical things that I’ve done in any class.
LRM: Can you tell me a little about how that works in other classes?
AH: The journals are their own personal reflections on what’s really making them angry, or on questions they have but don’t want to raise in class. So, they write 3-5 journals over the course of the semester, each about 3 pages. They submit them electronically, and then I give them fairly substantial comments so that we have an actual conversation about where they are. At the end of the semester, they have this record of how they’ve changed as human beings.
LRM: You’re teaching a 2-2 schedule, and you’re ABD. Do you have any thoughts or reflections on teaching these classes while also working on a dissertation?
AH: I have a lot of thoughts about that! One of the things that teaching a 2-2 does, very obviously, is slow down your progress on your dissertation in certain ways. But it also is a lesson in time management. You have to figure out very quickly what your work style is. Do you need extended periods of time to work on certain things, or can you work efficiently in short bursts? That’s been really interesting, and it’s been interesting thinking about moving on to a tenure track position–because it’s not exactly going to get any easier from this point on. So, it’s been kind of a baptism by fire, and it really does make me consider, “is this something that I actually want to do?” On the positive side, teaching things that I’m very interested in has been actually really beneficial for my research in a lot of ways. The students get excited about it, and they ask a lot of questions, and we have really good conversations. And seeing people who are just getting introduced to my work, and find it interesting and ask me questions, then gives me new things to write about.
LRM: Do you talk about your own work in your classes?
AH: I do. I trend vegan in my own diet, and there are particularly strong reasons why I feel that way, and I will talk about them. I also have sort of a complicated worldview about food, because I am also in favor of responsibly hunting. I talk about the complexity of that, but I also try to shy away from talking about my own dietary practice too much, until a student asks me directly what I eat.
LRM: These hesitations about not wanting to impose your own dietary views—do you talk to students about that?
LRM: And how did they respond?
AH: They responded that they absolutely did not, under any circumstances, want me to tell them how to eat.
LRM: How would you compare this to the ways some professors might advocate for local, organic, or sustainable foods when teaching about food?
AH: I have heard about some programs where there is a more explicit focus on local and organic food sources, and that push people to shift their dietary practices that way. Evangelizing for a particular diet does seem to be a more widely accepted thing.
However, in another class I talked to students about this. We talked about how, once you start talking about local, organic food sources—never mind even vegan stuff—you’re often dealing with people who are white and upper middle class, and their dietary experience may be very disconnected from some of the students in their classrooms.
LRM: How do students respond to that position?
AH: The students themselves are very critical of a lot of the food discourses that they hear. They understand that people might think them to be good ideas and very socially transformative, but they also understand that there are people who get excluded for structural reasons. They were as openly critical of those kinds of things as I might have been.
LRM: In some of the readings you’ve assigned, you touch on topics of moral judgments of obesity. Do you feel like students’ sensitivity and critiques of local food discourse is extended to the way they understand discourses around obesity, as well?
AH: Oh yeah. There’s a video from Spokane Public TV, “Our Supersized Kids” about childhood obesity. When we watched it, they identified a lot of things that even I hadn’t noticed. For example, while there is talk about the unhealthfulness of obesity, there is also a lot of bullying of kids who are perceived as being unhealthy. A lot of it is framed as their fault. They caused it by virtue of being obese and unhealthy. If they would change themselves, then everyone else would change. That’s a very common logic that underlies a lot of victim blaming. The students were really able to identify those very quickly.
LRM: Aimee, thank you for taking the time to speak with me. This seems like an excellent course, and I am excited you’ve opened your teaching to commentary from a student, as well. It will be wonderful to have varied perspectives on the same course.
Welcome to the inaugural interview of SAFN’s new Food Pedagogy Interview Series. Each month, we will feature a food scholar who teaches a course related to food or nutrition. They will share tips, tricks, and cautionary tales from their classrooms. If you would like to participate, or would like to nominate an excellent instructor for the interview series, please email LaurenRMoore@uky.edu.
2015 kicks off with an interview with Susan Rodgers, Professor of Anthropology at The College of the Holy Cross in Worcester, Massachusetts. Rodgers was the 2013 Carnegie Foundation for the Advancement of Teaching/CASE Massachusetts Professor of the Year. Though Rodgers’ own work focuses on the politics of art and literature in Indonesia, she has developed a challenging and provocative food class for first and second year students at her college. She speaks here about the course, successful components and cautionary tales, and why anthropologists should have high expectations for introductory classes.
SAFN members can access the syllabus Dr. Rodgers discusses here through the SNAC 4 resource page.
Lauren R. Moore: Can you tell me a little about how this course got started?
Susan Rodgers: First of all, I’m not an anthropologist of food. My work and publications are on very different things. I’ve worked with the Angkola Batak people of Indonesia since the mid 1970s on issues of the politics of print literature, and minority arts in Indonesia in general.
I came to Holy Cross to help the school set up a new anthropology program in 1989, after teaching at Ohio University for 11 years. About 7 years ago, the college made me the Garrity Chair, which is a rotating, endowed professorship [during which] you have to design a brand-new course that speaks to issues that the Garrity family was interested in—social justice issues, basically, and fine liberal arts teaching with challenging texts. At the time, I was using a lot of Paul Farmer’s work in a freshman seminar. I was really impressed by how well Paul Farmer’s work teaches to first and second year students, so I decided to create this Food, Body, Power course. It’s an anthro of food course, but undergirded very explicitly with Paul Farmer’s understanding of the structural violence of poverty.
I ask students to read Farmer pretty seriously and then see if his understanding of structural violence can be applied to issues of food insecurity both domestically and worldwide. He himself hasn’t done that yet to any extent. But I imported the theory from Paul Farmer, and based the course around that. So that’s the origin of Food, Body, Power. I had taught a more broad-based Anthro of Food course for several years before this, but Food, Body, Power is an offshoot.
LRM: One of the things that drew me to this syllabus in the SAFN materials was how you’re really tackling complex topics and serious readings in a 100-level class. Does the institutional context at Holy Cross relate to the kind of syllabus you’ve created?
SR: Holy Cross very much makes it possible. Holy Cross is a small, highly selective, liberal arts college. We’re like Vassar and Bates and Williams and Amherst…that range. We do get, in general, very, very good students who expect to work hard. So it doesn’t shake them up when they see, for instance, 5 monographs and a whole bunch of journal articles in an Anthro 101 syllabus. That’s kind of the Holy Cross thing.
But, maybe because of my 11 years teaching at Ohio University, I feel that at almost any four-year institution, we can take our first and second year students very seriously, and pitch a course like this to them. I think they rise to the occasion.
You know, in philosophy, the professors are asking their first year students to read very tough material. They don’t flinch from that. When students take a chemistry course, they’re asked to do some pretty challenging thought-work. So, I feel that this has some translatability.
The difference, if I was teaching back at OU, is the size of a class. Here, our 100-level courses are either capped at 25 or at 19. And of course you can ask the students to write a lot more if you’ve got a class of that size versus teaching to 50 or 75 students or even more. The professor could die grading papers. This is a pretty writing intensive course, as most of mine tend to be. If I was teaching it to a larger class—above 25—I would have to scale down the amount of writing that students do. But some aspects of the current version I think would work really well at any institution.
LRM: Weeks 12 through 14, I see they’re doing group presentations. Can you tell me about those?
SR: I always like to have students do teamwork as they go through the course. First of all, there’s four weeks of a condensed anthro of food course at the beginning. They read many chapters from C. Counihan and P. Van Esterik’s Food and Culture anthology. Then they read Paul Farmer, and then Sidney Mintz’s Sweetness and Power, and Psyche Williams-Forson’s Building Houses Out of Chicken Legs. So, they’ve already done some pretty heavy-duty things. Then, we have a section where I ask them to apply what they’ve learned, à la structural violence and so on, to issues of famine. All the way through the course they’ve been divided into 5-person teams. I have little assignments that they’ll do. After they’ve done all of that, writing essays and essay tests all along, I have those teams really do something, in terms of producing knowledge for the whole class.
They have to meet, pick a serious food insecurity issue from outside the United States, research it together, and then put together a 25-minute lecture on their selected issue. For instance, child stunting in India: what causes it? After they’ve done that lecture, they take that same critical lens and work together in their teams to identify, address, and lecture again on a food insecurity issue in Massachusetts that also has relevance for Worcester. And that’s at the end of the course.
And that, I think, could be translated to almost any institution, because students just thrive when they’re asked to do teamwork…but not just to do it, but to actually lecture in the class. One thing that makes this helpful is our reference librarian, who runs a 50-minute class for us in the computer-assisted classroom about how to find sources. So, I know they’re armed with the ability to find good sources. As a follow up to these lectures, each student picks a paper topic that has been generated by their team reports, and then they (individually) write a 7-page paper on that.
LRM: Can you give an example of a memorable project?
SR: For some reason, one whole class was fascinated with South Sudan. One of the teams did a really good job looking at basic infrastructure problems in the country, like transferring food from one city to another. That team had a couple of economics majors, and they were able to bring their expertise to the class lecture, which was trying to explain why food insecurity is so dire in South Sudan. From our readings, they were already alerted to the problem of how warfare violence can lead to famine, so they brought that in.
LRM: Do they also get excited about the local topics?
SR: One thing I’ve done is ask the Executive Director of the Worcester County Food Bank to come to class and lecture about food insecurity in Worcester County. South Worcester, right down our hill, is one of the most seriously impoverished parts of Worcester. I mentioned it might be something they could look at. That sparked their interest.
One small group last spring did such a good job! They decided to see how food, in a very generic sense, was portrayed in two quite contrastive high schools. One was in a fairly impoverished part of Worcester, and they also picked the public high school in Weston, MA—do you know about Weston?—it’s so prosperous. It’s one of the most over-the-top wealthy parts of Massachusetts. They did it as an experiment. They wanted to see what the school websites told us about food.
In the Weston public high school, oh my goodness. They had a cafeteria that was basically like an organic cafe. It would provide all these different, extremely interesting, sometimes even literally organic meals; very internationalized, sophisticated cuisine; guides for parents as to how to encourage their sons and daughters to eat healthful food and everything. It was a very elaborate, upper middle class take on healthy food and why it’s good for us.
Then, the students were able to contrast that with the almost blank information about food—and relatively little outreach to the parents—in the particular public high school in Worcester. They were also able to follow the weekly menus and look at the tater tots versus the kale salads and so on in the two contrastive high schools. That was really eye opening for the class, I think. We could discuss issues of class privilege and worldview and class-shaped “taste,” in the Q and A part of the students’ lecture.
LRM: Have you had things that haven’t gone as well, that you’ve elected not to do again? Do you have any cautionary tales that have come out of this course?
SR: There is one cautionary tale I could pull from my experience. When I taught the old version of this course, the more generic Anthropology of Food course, I took one class period (of a 3 days/week class), and met outside the classroom, and together we walked down the hill into south Worcester. I asked them to walk around this little strip mall, with a Wendy’s hamburger joint, a cigar shop that has a few vegetables and a lot of snack foods, and a very cheap Chinese restaurant. I asked students to walk around for 40 minutes with a field notebook, and observe the food scene. The next class period, two days later, we talked about it.
That kind of fell flat because the students really needed more background on Worcester before that would make sense to them. I think in theory it was a great exercise, but we just can’t assume that they really know much about the local community in terms of SES and class and history.
It’s very important, if you’re going to understand the food scene down at the bottom of our hill, you’ve got to understand the history of the Irish American immigration to that very spot, and the movement of the Irish Americans out to the suburbs, and the ethnic composition and poverty issues now in that area. I hadn’t told them much about that. If I were to bring that back, I would really nest it within a couple lectures—and maybe students’ own web investigations—on Worcester and social class.
LRM: That’s a good point. One of the things I’ve found when talking about food with students, it can easily devolve into class-based stereotypes or normative judgments. I wonder if that’s something you face or if you have any strategies for overcoming it?
SR: I think probably anybody who teaches almost any topic in anthropology encounters this. One of the ways I deal with this is with the readings during the first four weeks of class. For example, this article makes such a hit. It’s really tough, and as the teacher you really have to walk through it point by point, but Alice Julier’s wonderful article, “The Political Economy of Obesity: The Fat Pay All,” really makes students think about their own social class positionality.
What Julier ends up saying is that obesity works for the elite in America. It provides us a population of workers who the upper-middle class can look down on, make fun of, and underpay. Obesity also works in a sense of blaming and shaming people who aren’t at fault for their problems of overweight. They should be dealt with as people who are being victimized by the social structure, but the way pop culture works is that we can’t see those social structure dimensions, and we look at the personal and think it’s psychological.
Julier sets all that out, and then I take a whole 50-minute period to discuss that one article after the students have read it carefully with reader’s guides—I always give them a reader’s guide. Then, we can talk about social class, and food overabundance, and body and power. Certain of the articles I use in the first four weeks, introducing the topic of anthro of food, can serve that purpose of making the students aware of social class dimensions to food production and consumption, and then they carry that through the whole course.
LRM: You said you give reader’s guides. Can you tell me a little bit about those?
SR: I’ve found that students need a little guidance before they plunge into a tough article or book. It makes them more serious readers if they have a list of say, 5 dimensions of a chapter to look at beforehand. So, using Julier’s article, it would be something like “What does Julier want us to understand about how social class operates in America?” I don’t want to overdetermine what they look at. Not simply asking them to summarize an aspect of a text, but having a question that kind of comes at them a little bit at a slant, that the author himself or herself would be able to answer.
A lot of my colleagues in this department have found that, if you give the students a reader’s guide before they dive into reading an assignment, it makes for much better class discussion. Also, they sort of need it. When I was in college, I don’t think any of my professors gave me a reader’s guide, but I find that students appreciate some guidance from the professor. They need a bit of help, kind of a map. You really want to ask them provocative questions that are kind of fun to think about, so there’s a technique to writing reader’s guides.
LRM: It sounds like reader’s guides are something you do in a lot of your classes. I wonder if there’s anything you do when teaching a food-related course that differs from the way that you approach other, non-food courses?
SR: One thing I probably do more in my food course than I do in my other range of 100- and 200-level courses that seems to work well, is when there’s a really interesting article in the morning New York Times or in the Washington Post or any serious newspaper, I’ll pull off a copy. And I’ll actually make a photocopy of it for every student in the class. I pick out really well-written current stories related to the topic of that day’s lecture, and I’ll actually ask them to take 10 minutes in class and sit there and read it silently to themselves, and then relate it to the chapter or the article that we’re dealing with on the syllabus that day. That seems to really interest them a lot. Then they go out and begin to be more serious newspaper readers themselves, which is an important lesson.
There’s another thing that’s distinctive to Food, Body, Power that works really well in the food class: autobiographical reflections. When I teach Anne Allison’s wonderful “Japanese Mothers and Obentos: The Lunch-Box as Ideological State Apparatus,” after I make sure they understand what her argument is, we relate it to their memories of the way their family prepared lunches for them at age 5 or 6. Everybody scribbles notes, and we describe it, then we do Anne Allison’s analysis and look for the structural message underneath.
One thing that all of us, including me, say is that our parents would prepare our wonderful, nutritionally balanced meal, send us off to 1st grade, and then we’d trade things… a tuna fish salad sandwich for something yummier, for example. Once we all admit we traded away our nutritious lunch, we ask: what does that really tell you about American culture? Then they discover, well, individual choice is really valorized, standing up to authority is valorized. You can do more of that biographical work in a food course than some others.
LRM: This is a writing intensive course. Could you tell me a little bit about the writing assignments?
SR: This course has four 5-page response essays. They’re not research papers… the somewhat longer essay they write at the end is more of a research exercise, but the 5-page response essays are directed to the syllabus readings. It’s to make sure that they not only understand a set of articles, but have a critical perspective on them. The best way to demonstrate that is writing. Often I’ll ask them to pair two of the articles, and what they’re doing in an exercise like that is not only showing me that they’ve read those articles in real depth of understanding, but also synthesizing them into something that’s distinctly their own. I want them to take on the voice of an anthropologist.
Another thing with having regularly spaced essays: it means that they’re really keeping up with the readings. It takes a whole lot of grading time. With 25 students, all these essays, and in-class essay exams, it’s a lot of grading. But I find it eliminates the problem of students showing up to class and not having read. If it means more grading time for me, that’s okay, because I really want them to keep up with the syllabus and to read these texts with some seriousness.
One of the goals of college education is to become a better, more precise, and maybe more creative writer. I tell them this quite explicitly before they write their first essay: I’m really interested in excellent writing, and I’m happy to work on drafts in my office hours and help students become better writers. So that’s undergirding everything.
LRM: Do you have any final thoughts or suggestions for other teachers?
SR: I would say they should not underestimate their students. Even for first and second year students, you can have a complex syllabus.
Paul Farmer does work very well as a theoretical framework that catches younger students’ attention. A cautionary note, though: students tend to fall in love with Paul Farmer’s work very rapidly, and you have to help them draw back a little bit and be a little critical of his ethically engaged anthropology – what Nancy Scheper-Hughes calls “anthropology with its feet on the ground” – and of Farmer’s notion of structural violence and his hopefulness about structural change. Students glom onto that and want to run with it, so you have to incorporate some critiques.
Students, they’re college students. They’re serious adults. I think our syllabi should challenge them at that level. Often they can rise to the occasion. But you’ve got to have structures in place to make sure you don’t lose a student along the way. Make sure students who don’t understand the readings come to office hours, that sort of thing. Very time intensive, all these nice things I’m saying!
You want to make sure that once they’ve taken the course, and back they go to their normal life, they never think about food in a simple way ever again. Hopefully they’ll keep that anthropological vision of the social complexity of food. With the power element of my syllabus, I hope they think of issues of social class and social inequality, which they’re going to confront when they’re 30 years old and reading the newspaper, or maybe being a boss in a corporation and hopefully being attentive to adequate salaries for their workers.
The anthropology of food… It seems like such a fun topic. It lures them in. Then you hit them with this heavy-duty economic anthropology and political anthropology, and really pretty sophisticated theory, which they begin to like. And then, hopefully, they’ll use it in their other classes, and in their larger life.
I want to really change their vision of the world, maybe more in this course than in any other course of mine. In this course, I’m not worried if these students never take another anthro course. This is not only for anthro majors. You get students into it by the title, and it could be their one anthropology course. It has allowed the students to talk as a group and reconfigure their understanding of food and body and power. That’s an impact. That’s kind of a public anthropology impact on citizenship, I think.
LRM: Thank you so much for your time, and for inaugurating the SAFN food pedagogy interview series!
According to the World Health Organization, 28,616 people contracted Ebola and 11,310 lives were lost during the Ebola epidemic. After so many lives lost, and as the hopeful but understandably tentative countdown of Ebola-free days continues once again in West Africa, it is imperative that we take a moment to consider what we learned from this devastating and tragic epidemic.
I spoke with Dr. Ali S. Khan, former senior administrator for the Centers for Disease Control and Prevention, former Assistant Surgeon General, and current Dean of the University of Nebraska College of Public Health. He noted initially that there is always the risk of importation of cases; that is how it started, he reminds us. He elaborated further that the epidemic “changed the response from the WHO and caused a change in political focus by the nations involved that will affect future outbreaks and ensure native capabilities, as well as link them to the global response.” He also noted that new medical counter measures, such as vaccines and related therapeutics, were also the result of the Ebola impact. When asked about what we learned, he did not hesitate. “The first thing was a new vaccine that permits a novel prevention strategy using ring vaccination to prevent spread and new cases. The second is the new monoclonals and antivirals for treatment.” He also noted the better understanding of the viral progression and clinical diseases that will influence options for acute treatment and follow up of convalescents.
Ebola has provided us with a plethora of opportunities to learn about the disease, its treatment and control, and the control of other infectious illnesses, through our attempts to prevent its spread and through our failures and successes. We gained valuable treatment modalities and tactics that will likely be used in future outbreaks of Ebola, as well as of many other infectious diseases.
Ebola taught us other things too. It has been some time since global health has taken center stage. Ebola changed that. During the epidemic, one could not watch the news or go through a day without hearing an update on the latest development in the Ebola crisis. Although other infectious diseases like Plague, Polio, AIDS, SARS, H1N1, Cholera, and now Zika have captured the world’s attention, few diseases have made such an intense impact, nor caused the uproar and fervor that Ebola elicited. Ebola reminded us that global health is public health and affects us all, and as such, deserves to be a priority for national and international focus and funding for everything from vaccine development and research, to capacity for response locally, nationally, and internationally. Global health has teetered on the edge of public awareness, and remained a quiet player in the competition of priorities in national budgets. Today, it is abundantly clear how vital this sector is to the health, safety, success, and even the survival of each nation and of the world.
Another effect from the Ebola crisis was the opportunity to educate people about public health and the transmission of infectious disease. Through education, public health officials were able to promote behaviors that ensured the safety and health of the public. It is stunning that in this day and age, we persist in so many behaviors that put us and those we interact with at risk. The discrepancy in what we say we will do, and what we are actually willing to commit to and take action on, looms large. Persisting low vaccination rates and the prevalence of infectious diseases such as sexually transmitted diseases, measles, pertussis and influenza show this. Ebola offers yet another opportunity to demonstrate the connection between our behaviors and our risks and disease.
Ebola also showed us that many nations continue to lack sufficient financing, infrastructure, facilities, support and medical staff to treat their own populations. Endemic conditions like malaria, and neglected tropical diseases like Guinea worm disease, Yaws, Leishmaniasis, Filariasis, and Helminths, as well as other conditions continue to affect millions globally. Maternal and childhood morbidity and mortality rates remain deplorable as well. And millions of children around the world continue to suffer and die of malnutrition and disease before they reach the age of five. This is unacceptable, especially because proper treatment and cures for these conditions exist. Ebola also highlighted the need for treatments for chronic non-infectious conditions as well.
Moreover, Ebola clearly demonstrated the enormous need that remains for sufficiently trained medical professionals and healthcare staff to provide adequate care for many populations throughout the world. The loss of so many extraordinary and heroic staff that dedicated their lives to helping others in need under the most daunting and challenging of circumstances was devastating to those whom they served, and must not be in vain.
Additionally, Ebola provided us with yet another chance to relearn lessons about the role of safety in giving aid to others in need. We learned that we cannot just rush in with aid, but must recall the basics that every first responder and medical student must learn: Ensure scene safety before giving care, and first do no harm. Ebola showed us the necessity to strategize and prepare to give care by utilizing personal protective equipment. It also reminded us very quickly that we could indeed do harm, and worsen the epidemic when we acted without first assessing the situation and ensuring proper protection and preparation.
So, it remains to be seen just how much we will learn from Ebola. Will we learn from our mistakes? Will we take the global view in the future, or the narrow one? Will we truly live by the motto of the Three Musketeers and be "one for all and all for one", or persist in "it's all about me"? Only time will tell.
As revolting as it sounds, there are places in the world where the chances of consuming one’s neighbours’ faeces are quite high if one is not vigilant regarding sanitation and hygiene. That being the condition of many areas in low and lower-middle income countries does not mean that high and higher-middle income countries are exempt from any environmental conditions that are harmful to health.
But what is environmental health? The World Health Organization (WHO) defines the term as “all the physical, chemical, and biological factors external to a person, and all the related factors impacting behaviours”. It excludes, however, genetics and the social and cultural environment.
In low-income settings, concerns for environmental health may arise in the context of sanitation and hygiene, as well as indoor and outdoor pollution. In high-income countries, many chronic diseases, like diabetes and cardiovascular disease, are associated with sedentary lifestyles. While these might be attributed to behaviour, one must consider that such behaviours can arise from changes in the environment. Over 80% of communicable and non-communicable diseases can be attributed to environmental hazards. Overall, conservative estimates indicate that about one quarter of the total global burden of disease is attributable to environmental causes (WHO, 2011). Furthermore, the biggest killers of children under 5 years are all environment-related diseases, including diarrhoea, respiratory infections, and malaria.
Other diseases of concern are helminthic infections, trachoma (a bacterial eye infection), Chagas disease, leishmaniasis, onchocerciasis, and dengue fever, all of which are associated with impoverished conditions and can be mitigated by improving sanitation, hygiene, and housing. Although conflicts and natural disasters might be catastrophic for any country, struggling economies tend to suffer more because disasters worsen the poor conditions which directly affect sanitation and hygiene practices, creating conducive conditions for various infectious diseases and ultimately feeding into the vicious cycle of poverty.
Many interventions are underway to address these conditions, including Water, Sanitation and Hygiene (WASH) initiatives, Integrated Vector Management, the Programme on Household Air Pollution, the International Programme on Chemical Safety, the Health and Environment Linkages Initiative, and the Intersun Programme for the effects of UV radiation. Acknowledgement of the effects of the environment has grown. One of the Millennium Development Goals (MDGs) was “to ensure environmental sustainability.” The Sustainable Development Goals (SDGs) are more extensive and thorough in placing focus on the environment. Goal 1 is to end poverty, goal 6 is to make provision of clean water and sanitation possible, and goal 13 is to combat climate change and its impacts, such as floods and drought (United Nations, 2014).
It is encouraging to see steps being taken to control environmental hazards; however, the journey to measuring and eradicating such conditions still remains a challenge, which will hopefully be overcome through future endeavours.
Technology is progressively becoming a bigger part of our lives. This holds true in high-income countries and in low- and middle-income countries. By 2012, three quarters of the world’s population had gained access to mobile phones, pushing mobile communications to a new level. Of the over 6 billion mobile subscriptions in use worldwide in 2012, 5 billion of them were in developing countries. The Pew Research Center’s Spring 2014 Global Attitudes survey indicated that 84% of people owned a mobile phone in the 32 emerging and developing nations polled. Internet access is also increasing in low- and middle-income countries. The 2014 Pew Research Center survey indicated that the Internet was at least occasionally used by a median of 44% of people living in the polled countries.
The increase in Internet and mobile phone access has significant implications for how infectious diseases can be better tracked around the world. Traditional methods of data collection are robust and validated, but they rely on established sources like governments, hospitals, environmental monitoring, or census data, and thus suffer from limitations such as latency, high cost and financial barriers to care. An example of a traditional infectious disease data collection method is the US Centers for Disease Control and Prevention’s (CDC) influenza-like illness (ILI) surveillance system. This system has been the primary method of measuring national influenza activity for decades, but differences in laboratory practices and in the patient populations seen by different providers make straightforward comparisons between regions challenging. On an international scale, the WHO receives infectious disease reports from its technical institutions and organizations. However, these data are limited to areas within the WHO’s reach and may not capture outbreaks until they reach a large enough scale.
Compared to traditional global infectious disease data collection methods, crowdsourcing allows researchers to gather data in near real-time, as individuals are diagnosed or, in some instances, even before diagnosis. Furthermore, getting individuals involved in infectious disease reporting helps people become more aware of and involved in their own health. Crowdsourcing infectious disease data provides previously hard-to-gather information about disease dynamics, such as contact patterns and the impact of the social environment. Crowd-sourced data does have some limitations, including data validation and low specificity.
Internet-based applications have resulted in new crowd-sourced infectious disease tracking websites. One example is HealthMap, a freely available website (and mobile app) developed by Boston Children’s Hospital which brings together informal online sources of infectious disease monitoring and surveillance. HealthMap crowd-sources data from libraries, governments, international travelers, online news aggregators, eyewitness reports, expert-curated discussions, and validated official reports to generate a comprehensive worldwide view of global infectious diseases. With HealthMap you can get a worldwide view of what is happening and also filter by twelve disease categories to see what is happening in your local area.
Another crowd-sourced infectious disease tracking platform was Google’s Flu Trends, along with its companion Dengue Trends. Google used search pattern data to estimate the incidence of influenza and dengue in various parts of the world. Flu Trends was designed to be a syndromic influenza surveillance system complementary to established methods, such as the CDC’s surveillance. Google shut down Flu Trends after 2014 due to various concerns about the validity of the data. As an initial venture into using big data to predict infectious diseases, Flu (and Dengue) Trends provided information that researchers can use to improve future big data efforts.
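The core idea behind Flu Trends can be sketched in a few lines: fit a simple model mapping the weekly frequency of flu-related search queries to officially reported ILI rates, then use current search data to "nowcast" activity ahead of official reports. The sketch below uses invented illustrative numbers and a single-predictor least-squares fit; Google's published approach was considerably more elaborate (a logit-scale model over an automatically selected set of queries).

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Illustrative weekly data (invented for this sketch): fraction of searches
# matching flu-related queries, and the reported ILI rate for the same weeks.
search_frac = [0.010, 0.014, 0.022, 0.030, 0.026, 0.018, 0.012]
ili_rate    = [0.011, 0.016, 0.025, 0.034, 0.029, 0.020, 0.013]

a, b = fit_line(search_frac, ili_rate)

# "Nowcast" the current week from search data alone, days ahead of
# official surveillance reports.
this_week_searches = 0.020
estimated_ili = a * this_week_searches + b
print(f"estimated ILI rate: {estimated_ili:.3f}")
```

The gap between search behaviour and actual illness (media coverage can spike searches without any epidemic) is exactly the validity concern that led Google to retire the system.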
With the increase of mobile phone access around the world, organizations have started using short message service (SMS), also known as text messaging, as a method of infectious disease reporting and surveillance. Text messaging can be used for infectious disease reporting and surveillance in emergency situations where regular communication channels have been disrupted. After the 2008 earthquake in Sichuan province, China, regular public health communication channels were damaged. The Chinese Center for Disease Control and Prevention distributed solar-powered mobile phones to local health-care agencies in affected areas. The phones were pre-loaded with the necessary software, and one week after delivery the number of reports being filed returned to pre-earthquake levels. Mobile phone reporting accounted for as much as 52.9% of total cases reported in the affected areas during roughly the two months after the earthquake.
Text-message infectious disease reporting and surveillance is also useful in non-emergency settings. In many malaria-endemic areas of Africa, health system infrastructure is poor, which results in a communication gap between health services managers, health care workers, and patients. With the rapid expansion and affordability of mobile phone services, text-messaging systems can improve malaria control. Text messages containing surveillance information, supply tracking information and information on patients’ proper use of antimalarial medications can be sent by malaria control managers out in the field to health system managers. Text messages can also be sent by health workers to patients to remind them of medication adherence and for post-treatment review. Many text-message-based interventions exist, but there is currently a lack of peer-reviewed studies to determine their true efficacy.
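As a concrete illustration of what a minimal SMS reporting pipeline involves, the sketch below parses structured case-report texts into per-location tallies. The message format (`DISEASE COUNT LOCATION`) and the sample inbox are hypothetical, invented for this example; real systems like those described above define their own reporting codes and add validation, deduplication, and acknowledgement replies.

```python
import re

# Hypothetical report format: "<DISEASE> <CASES> <LOCATION>",
# e.g. "MALARIA 3 Kilifi" (invented for this sketch).
REPORT_RE = re.compile(r"^(?P<disease>[A-Z]+)\s+(?P<cases>\d+)\s+(?P<place>\w+)$")

def tally_reports(messages):
    """Aggregate incoming SMS case reports into (disease, location) counts,
    skipping any text that does not match the expected format."""
    counts = {}
    for msg in messages:
        m = REPORT_RE.match(msg.strip())
        if m:  # basic validation: malformed texts are simply dropped
            key = (m["disease"], m["place"])
            counts[key] = counts.get(key, 0) + int(m["cases"])
    return counts

inbox = ["MALARIA 3 Kilifi", "MALARIA 1 Kilifi", "CHOLERA 2 Mombasa", "hello??"]
print(tally_reports(inbox))
# {('MALARIA', 'Kilifi'): 4, ('CHOLERA', 'Mombasa'): 2}
```

Dropping malformed messages silently, as done here, is the simplest policy; a production system would instead reply to the sender asking for a corrected report, since each text may represent real cases.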
Increasing global access to the Internet and mobile phones is changing the way infectious diseases are reported and how surveillance is conducted. Moving towards crowd-sourced infectious disease reporting extends geographical reach to underserved populations where outbreaks might otherwise go undetected for long periods. While crowdsourcing such data does have limitations, more companies than ever are working on using big data and crowd-sourced data in a reliable way to inform the world about the presence of infectious diseases.
Rabies is a neglected viral disease that is found on all continents except Antarctica and is endemic in 150 countries and territories. While rabies can be found almost everywhere, 95% of cases occur in Africa and Asia. Rabies is almost always fatal following the onset of symptoms. However, rabies is vaccine-preventable and can be eliminated. The World Health Organization (WHO) in conjunction with the Food and Agriculture Organization of the United Nations (FAO), the World Organization for Animal Health (OIE), and the Global Alliance for Rabies Control is raising awareness about rabies. September 28th is World Rabies Day and this year’s theme is “End Rabies Together”.
Rabies is usually transmitted to humans from the deep bite or scratch of an infected animal. Domestic dogs are responsible for more than 99% of human rabies cases throughout the world. According to the WHO, “while infected domestic dogs cause human rabies deaths in Africa and Asia; in the Americas, Australia and Europe, bats are the primary source of human rabies infections.” Children are disproportionately affected by rabies. Forty percent of people who are bitten by suspected rabid animals are children under 15 years of age.
No tests are available to determine if a person is infected with rabies before they show clinical symptoms. Once a person begins to show clinical symptoms of rabies, the disease is almost always fatal. There have been a few cases of people developing rabies symptoms and surviving, with the use of the Milwaukee Protocol. In 2004, a Wisconsin teenager was bitten by an infected bat. She did not seek medical treatment and did not receive post-exposure prophylaxis (PEP). Dr. Willoughby, an infectious disease specialist at the Children’s Hospital of Wisconsin near Milwaukee, tried an experimental treatment that included an induced coma and antiviral medication. The teen survived with few lasting complications. However, many experts caution that the Milwaukee Protocol is not the cure for rabies, at least not yet. The first 43 human rabies cases in which doctors attempted to replicate the Milwaukee Protocol resulted in only five survivors. Admittedly, five survivors is pretty good for a nearly always fatal disease, but not enough to say that the Milwaukee Protocol is a cure for human rabies.
Vaccinating dogs is the most cost-effective way to prevent human rabies deaths because it results in a decrease in the global deaths attributable to rabies and a decrease in the need for post-exposure prophylaxis (PEP). Post-exposure prophylaxis is the administration of rabies immunoglobulin and rabies vaccine to an exposed person immediately after exposure, in order to prevent infection. Timely PEP can prevent the onset of rabies symptoms and death. However, PEP is expensive and not widely available in many of the resource-poor settings with a high rabies burden. Eighty percent of dog-mediated rabies deaths occur in rural areas that lack awareness about, and access to, PEP.
Rabies elimination is achievable for many of the countries with a high burden of dog-mediated rabies cases. Achieving a dog vaccination rate of at least 70% is accepted as the most effective way to prevent human rabies deaths. Rabies transmitted by dogs has been eliminated in many parts of Latin America, including Chile, Costa Rica, Panama, Uruguay, most of Argentina, the states of Sao Paulo and Rio de Janeiro in Brazil, and large parts of Mexico. A Bill and Melinda Gates Foundation project, led by WHO, has made great strides against human rabies cases in the Philippines, South Africa and Tanzania. Furthermore, many countries in the WHO South-East Asia Region have begun elimination campaigns with the goal of meeting the 2020 target for regional rabies elimination. Bangladesh, for example, launched an elimination program in 2010 and saw human rabies deaths decrease by 50% during 2010-2013.
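One way to see where a target like 70% comes from is the classic herd-immunity threshold from epidemiology, p_c = 1 - 1/R0, where R0 is the basic reproduction number. This derivation is mine, not the post's, and the assumption that R0 for canine rabies is roughly 1-2 comes from field estimates, not from anything stated above.

```python
def critical_coverage(r0):
    """Fraction of the dog population that must be immune to halt
    sustained transmission: the herd-immunity threshold 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# With R0 between about 1.2 and 2 (assumed range), the bare threshold
# is well below the 70% campaign target.
for r0 in (1.2, 1.5, 2.0):
    print(f"R0 = {r0}: threshold = {critical_coverage(r0):.0%}")
```

Even at R0 = 2 the bare threshold is only 50%, so a 70% campaign target builds in a margin for imperfect vaccine uptake, newly born puppies, and rapid turnover in dog populations.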
While there are still challenges in achieving a high vaccination rate in some areas of the world, such as vaccine availability and community support, some countries have been able to achieve rabies elimination. Events like World Rabies Day help draw attention to the high burden of rabies in resource poor settings and help to highlight the work being done to eliminate rabies.
This is my final post of a three part series on climate change and health. The first post looked at how climate change will influence the onset and severity of droughts in some areas. The second post examined how some regions are predicted to see an increase in droughts, which would decrease food supply; thus, increasing nutrient deficiencies in those areas. This post will briefly discuss the influence of climate change on waterborne diseases.
Changes in climate, including increases in temperature and changes in rainfall patterns, may lead to an increase in waterborne and water-related diseases transmitted by insect vectors that breed in water (Shuman, 2010). Higher temperatures are often needed for some insects to complete their life cycle. This is the case for mosquitoes, as they live in warm, aquatic habitats (Shuman, 2010). With an increase in temperature and more flooding, there will be an increase in mosquitoes (Shuman, 2010), and thus there may be an increase in the transmission of dengue and malaria (Ramasamy & Surendran, 2011). These warm, aquatic habitats will also be ideal for snails, which transmit schistosomiasis (Ramasamy & Surendran, 2011). Furthermore, with a rise in sea levels, there is likely to be an increase in saline levels (Ramasamy & Surendran, 2011). Certain types of mosquitoes and snails have a high tolerance for salt water and are thus able to breed in water with high salt concentrations (Ramasamy & Surendran, 2011).
The relationship between climate change and health is complex because there are many different contributing factors and there is limited scientific evidence for many regions, several of which are under-resourced (New York Times, 2015). Furthermore, high-resource areas have not been impacted in the same way, due to advantages as simple as air conditioning (New York Times, 2015). Thus, more scientific evidence is needed to determine other ways in which climate change could influence the health of a population. More recognition also needs to be given to this issue so that contingency plans can be made for possible outbreaks of the diseases discussed in this blog post.
Shuman, E. K. (2010). Global climate change and infectious diseases. The New England Journal of Medicine, 362(12), 1061-1063.
Ramasamy, R., & Surendran, S. (2011). Possible impact of rising sea levels on vector-borne infectious diseases. BMC Infectious Diseases, 11(18).
The day Machu Picchu was discovered in 1911.
The day Apollo XI returned to the Earth after the first successful mission of taking humans to the moon in 1969.
Yet, in Nigeria, that day in 2014 will always be marked as the day Patrick Sawyer—the index patient of Ebola—died and set an outbreak in motion in one of the most populated cities in Africa. Patrick Sawyer was a Liberian-American citizen and a diplomat who violated his Ebola quarantine to travel to Nigeria for an ECOWAS convention. His collapse at the airport, coupled with an ongoing strike by Nigerian doctors in public hospitals, landed him at a private hospital in Obalende, where he infected eight other people.
Patrick Sawyer’s death marked the beginning of an Ebola epidemic in Lagos, a city of 21 million. Lagos is a major economic hub in Africa and one of its biggest cities. An uncontrolled Ebola epidemic would have a far-reaching economic impact beyond the borders of the city, its country, and even its continent.
A recent study has shown that Ebola virus remains active in a dead body for more than a week. Add to this that the body is most infectious in the hours before death, and it is a "virus bomb" waiting to happen if handled incorrectly. West Africa, especially Nigeria, has a strong funeral culture. This Ebola-infected Liberian diplomat’s body was transported and incinerated in accordance with the WHO and CDC protocol. This feat was achieved despite immense political and diplomatic pressure to return the body for funeral rites. It represents one of the many cases of collaboration and "clinical system governance" that are at the heart of the successful containment of Ebola in Nigeria. It is one of the many stories that I'm hoping to highlight in my research on the role of the private sector in Nigeria’s successful Ebola containment.
As part of my research, I am looking at 10 different economic sectors to understand how the Ebola outbreak impacted the private sector and how the private sector dealt with the challenges that the Ebola outbreak posed. My hope is that this research will lead to lessons for the private sector on how, in times of an epidemic, they can help the government to mitigate the disease’s economic impact. I also hope that the resulting report will help governments engage with the private sector more effectively in times of emergencies.
With many outbreaks, especially of highly fatal diseases such as Ebola, fear is the biggest demon. This fear has led to the crippling of the economies of Ebola-affected countries. This fear has cost Sierra Leone, Guinea, and Liberia 12% of their GDP in foregone income and unraveled years of progress made by these countries. However, this fear is not a phenomenon limited to West Africa. I had a very personal encounter with it recently, when I was quarantined for a few hours in the United States (despite Nigeria having been declared Ebola free since October 2014).
It has been a humbling experience so far, as I try to understand how this fear and the hysteria around Ebola can lead to significant behavioral changes—some of them necessary but some extreme. Everyone I speak to has a story to share. Some people tell of how they bought more than two bus tickets to prevent sitting next to other people. Others tell of hospitals resembling "ghost buildings" as people avoided hospitals and doctors like the plague. Many tell of the "Ebola elbow-shake" that replaced the usual handshake or hug. The reality is that although the Ebola outbreak infected 21 people in Nigeria, it actually affected the lives of 21 million people in Lagos alone, in one way or another. I have come to realize that there is a thin line between precaution and hysteria. Maintaining the equilibrium between the two is the key to controlling the disease and mitigating its economic impact.
As I wrap up my interviews, a few questions resonate with me time and time again from these sessions.
These are the questions that keep me going. Although my report may not be able to answer all of the aforementioned questions, I do hope it will at least get policy makers, students, and advocacy groups talking about how countries can be better prepared for the next big outbreak and how public-private collaboration can lead a country out of an epidemic and on a path of recovery.
To end on a positive note, 24th July, 2015 also marked one year since the last polio case in Nigeria—an achievement that clearly shows what collaboration in global health can achieve.
Neglected tropical diseases (NTDs) are a group of diseases with different causative pathogens that largely affect poor and marginalized populations in low-resource settings and have profound, intergenerational effects on human health and socioeconomic development. The WHO has prioritized 17 NTDs that are endemic in 149 countries, some of which, such as dengue, Chagas disease, and leishmaniasis, are epidemic-prone.
NTDs can impede physical and cognitive development, prevent children from pursuing education, frequently contribute to maternal and child morbidity and mortality, and cause physical disabilities and stigma that can make it difficult to earn a livelihood. Largely eliminated in developed, high-resource countries and frequently neglected in favor of better-known global public-health issues, these preventable and relatively inexpensive-to-treat diseases put at peril the lives of more than a billion people worldwide, including half a billion children. Several reasons have been postulated to explain the neglect of these diseases: an underestimation of their contribution to mortality, owing to the asymptomatic presentation and lengthy incubation periods characteristic of many of them; a greater focus on HIV, malaria and TB because of their higher mortality; and a lack of interest among pharmaceutical companies in developing (unprofitable) treatments.
Progress has been made in recent times in combating these diseases and several international measures have been taken. Resolution WHA66.12 adopted at the sixty-sixth World Health Assembly in May 2013 highlighted strategies necessary to accelerate the work to overcome the global impact of neglected tropical diseases. Previously in January 2012 at the “London Declaration”, representatives of governments, pharmaceutical companies and donor organizations convened to make commitments to control or eliminate at least 10 of these diseases by 2020. They proposed a public-private collaboration to ensure the supply of necessary drugs, improve drug access, advance R&D, provide endemic countries with funding and to continue identifying remaining gaps.
Pharmaceutical Companies - In 2013, drug companies met 100% of drug requests, donating more than 1 billion treatments. On the R&D front, clinical trials for some NTDs have been started. In addition, several drug companies have enabled access to their compound libraries.
Governments - Fifty-five countries had requested drug donations by the end of 2012, compared to 37 in 2011. Also, over 70 countries have developed national NTD plans. Within a year of the Declaration, Oman went from endemic trachoma to elimination, and by 2014, Colombia had eliminated onchocerciasis.
Donors - NTDs have become more visible on the development and aid agenda, especially with the £245 million earmarked in 2012 by DFID for NTD programs. Other donors have since followed suit.
However, despite these strides, challenges remain, as treatments are not reaching everyone in need. Although 700 million people received mass drug administration (MDA) for one or more NTDs in 2012, only 36% of people in need worldwide received all the drugs they needed. There is also the anticipated challenge that environmental and climate change poses for NTDs, with dengue identified as a disease of the future due to increased urbanization and changes in temperature, rainfall and humidity.
The spotlight needs to remain on NTDs and their contributions to ill-health and poverty for efforts to be sustained.
To sustain these efforts, greater advocacy has to be made for integrating NTD control into other community and even national level programming, without losing them in the crowd. Some anthelminthic drugs for preventive chemotherapy are on the WHO Model List of Essential Medicines and their distribution has been effective and economical. However, to succeed at NTD elimination, we have to look beyond mass drug administration to the removal of the primary risk factors for NTDs (poverty and exposure) by ensuring access to clean water and basic sanitation, improving vector control, integrating NTDs into poverty reduction schemes and vice versa, and building stronger, equitable health systems in endemic areas. There needs to be a consensus as to how to ensure this. At present, it seems there is a gap between elimination objectives and how to incorporate them into other health and development initiatives such as water and sanitation, nutrition and education programs. It has long been established that helminth parasite infection contributes to anemia and malnutrition in children. The presence of other protozoan, bacterial and viral diseases also contribute to school absenteeism. Guinea worm disease (dracunculiasis) can be recurrent when there is no access to safe drinking water.
There is also a need to maintain a surveillance and information system for NTDs in light of increasing migration and displacements. Another way to ensure that the spotlight is kept on NTDs is research that provides evidence of interactions and co-infections with other diseases. For example, epidemiological studies from sub-Saharan Africa have shown that genital infection with Schistosoma haematobium may increase the risk for HIV infection in young women (Mbah et al, 2013). Understanding that neglected diseases can make the “big three” diseases (malaria, HIV and tuberculosis) more deadly and can undermine the gains that have been made in health, nutrition and education is important (Hotez et al, 2006).
Erroneously overstating the progress made in controlling and eliminating NTDs can have a detrimental effect on funding and public perceptions of their importance. Thus, there is a need for increased synergy between stakeholders. Achievements in polio eradication do not equal achievements in human African trypanosomiasis eradication. While some NTDs can be managed with specific drugs, some, such as dengue, do not have a specific drug. Therefore, while keeping the spotlight on NTDs collectively, it is important to emphasize their diversity and to also keep in mind the subgroup of NTDs categorized as emerging or reemerging infectious diseases, which are deemed a serious threat and have not been adequately examined in terms of their unique risk characteristics (Mackey et al, 2014).
Lastly, it is important to keep the heat on NTDs in the UN’s post-2015 sustainable development agenda by advocating that proposed goals support efforts to monitor, control and eliminate NTDs. As highlighted by the Ebola crisis, strengthening health systems is paramount. Nevertheless, the future looks optimistic regarding NTDs. Encouraging is the inclusion of neglected and poverty-related diseases on the agenda of the 2015 G7 Summit, which will be held in Germany in June.
Holmes, Peter. "Neglected tropical diseases in the post-2015 health agenda." The Lancet 383.9931 (2014): 1803.
Feasey, Nick, et al. "Neglected tropical diseases." British Medical Bulletin 93.1 (2010): 179-200.
Mbah, Martial L. Ndeffo, et al. "Cost-effectiveness of a community-based intervention for reducing the transmission of Schistosoma haematobium and HIV in Africa." Proceedings of the National Academy of Sciences 110.19 (2013): 7952-7957.
Hotez, Peter J., et al. "Incorporating a rapid-impact package for neglected tropical diseases with programs for HIV/AIDS, tuberculosis, and malaria." PLoS Medicine 3.5 (2006): e102.
Mackey, Tim K., et al. "Emerging and reemerging neglected tropical diseases: a review of key characteristics, risk factors, and the policy and innovation environment." Clinical Microbiology Reviews 27.4 (2014): 949-979.
World Health Organization. Investing to overcome the global impact of neglected tropical diseases: third WHO report on neglected tropical diseases. 2015.
As 2014 draws to a close and we review what has happened over this past year, we also look forward to 2015 and all of its challenges. Numerous organisations and commentators have written of the global health challenges that lie over the horizon for 2015. From my own experience of working on the continent, I have identified the following challenges for Africa in 2015.
Some of these issues and challenges overlap and influence one another. They do not stand alone; one can exacerbate another.
Africa faces endemic poverty, food insecurity and pervasive underdevelopment, with almost all countries lacking the human, economic and institutional capacities to effectively develop and manage their water resources sustainably. North Africa has 92% coverage and is on track to meet its 94% target before 2015. Sub-Saharan Africa, however, presents a contrasting case, being home to 40% of the 783 million people without access to an improved source of drinking water. This is a serious concern because of the associated massive health burden, as many people who lack basic sanitation engage in unsanitary practices like open defecation and unsafe solid waste and wastewater disposal. The practice of open defecation is the primary cause of faecal-oral transmission of disease, with children being the most vulnerable. Hence, as I have previously written, this poor sanitation causes numerous waterborne diseases and causes diarrhoea leading to dehydration, which is still a major cause of death in children in Sub-Saharan Africa.
Africa has faced the emergence of new pandemics and the resurgence of old diseases. While Africa has 10% of the world population, it bears 25% of the global disease burden and has only 3% of the global health workforce. Of the estimated global shortage of four million health workers, one million are immediately required in Africa.
Community Health Workers (CHWs) deliver life-saving health care services where they are needed most: in poor rural communities. Across the central belt of sub-Saharan Africa, 10 to 20 percent of children die before the age of 5. Maternal death rates are high. Many people suffer unnecessarily from preventable and treatable diseases, from malaria and diarrhoea to TB and HIV/AIDS. Many people have little or no access to the most fundamental aspects of primary healthcare. Many countries are struggling to make progress toward the health-related MDGs partly because so many people are poor and live in rural areas beyond the reach of primary health care and even CHWs.
These workers are most effective when supported by a clinically skilled health workforce and deployed within the context of an appropriately financed primary health care system. With this statement we can already see where the problems lie, as there is a huge lack of skilled medical workers and the necessary infrastructure, which is further compounded by a lack of government spending. Furthermore, in some regions of the continent, CHW numbers have been reduced as a result of war, poor political will and Ebola.
The Ebola crisis, which claimed its first victim in Guinea just over a year ago, is likely to last until the end of 2015, according to the WHO and Peter Piot, a scientist who helped to discover the virus in 1976. The virus is still spreading in Sierra Leone, especially in the north and west.
The economies of West Africa have been severely damaged: people have lost their jobs as a result of Ebola, children have been unable to attend school, and there are widespread food shortages, which will be further compounded by the inability to plant crops. The outbreak has done untold damage to health systems in Guinea, Liberia and Sierra Leone. Hundreds of doctors, nurses and CHWs have died on the front line, and these were countries that could ill afford to lose medical staff; they were severely understaffed to begin with.
The outlook is bleak: growing political instability could cause a resurgence in Ebola, and current governments could also be weakened by how they are attempting to manage the outbreak.
Countries that are politically unstable will experience problems raising investment capital, and donor organisations also battle to get a foothold in these countries. This affects their GDP and economic growth, which filters down to government spending where it is needed most, e.g. with respect to CHWs.
Studies on political instability have found that incomplete democratization, low openness to international trade, and high infant mortality are the three strongest predictors of political instability. Questions to consider, then: how are these three predictors related to each other, and does the spread of infectious disease lead to political instability?
Poverty and poor health worldwide are inextricably linked. The causes of poor health for millions globally are rooted in political, social and economic injustices. Poverty is both a cause and a consequence of poor health: poverty increases the chances of poor health, which in turn traps communities in poverty. Several mechanisms prevent poor people from climbing out of poverty, notably the population explosion, malnutrition, disease, and the state of education in developing countries and its inability to reduce poverty or to abet development. These are further compounded by corruption, the international economy, the influence of wealth in politics, and the causes of political instability and the emergence of dictators.
The new poverty line is defined as living on the equivalent of $1.25 a day. By that measure, based on the latest data available (2005), 1.4 billion people live on or below that line. Furthermore, almost half the world, over three billion people, live on less than $2.50 a day, and at least 80% of humanity lives on less than $10 a day.
Not many people probably paid much attention to public health, much less global public health, before Ebola arrived in the US and Spain. Despite the focus on Ebola, there were other global infectious disease developments in 2014.
A major threat to humans worldwide is the emergence of antibiotic resistance. According to the Infectious Diseases Society of America, the CDC, the WHO, the European Union, and President Obama, the problem of antibiotic resistance has reached crisis level. This is due to the overuse of antibiotics worldwide and to major pharmaceutical companies having largely abandoned antibiotic development because it doesn't make enough money to justify the expense. This is a major problem because we could end up going back to death rates akin to the pre-antibiotic era, when something as simple as a minor cut could be deadly. Also worth mentioning is the huge use of antibiotics in agricultural animals: agricultural use accounts for 80% of antibiotic use in the US, and that continued usage gives bacteria more exposure to antibiotics and more opportunity to develop resistance.
In case you didn't hear: only 25 years after the discovery of the hepatitis C virus (HCV), we now have a treatment that cures 95% of the people who take the pill once a day for 8-12 weeks. Let that soak in for a minute, because this is huge. HCV affects something like 250 million people around the world, and now we can not just suppress the virus but actually clear it from someone's body. Unfortunately, the cost of this treatment is currently $74,000 or more per person, basically putting this cure out of reach in middle- and low-income countries. Also in 2014, the world reached the tipping point for HIV/AIDS. That means that for the first time in the 30-plus-year epidemic, the number of people newly infected was less than the number of HIV-positive people who got access to HIV medicines. While not every individual country has reached this milestone, and we still have a way to go to get everyone access to life-saving medication, this tipping point shows that with continued effort the end of HIV/AIDS may be nearer than we thought.
Vaccines have been around for a long time, and humanity has tried to create vaccines for all sorts of diseases. Work is being done on vaccine platforms that don't involve a needle, such as embedding the vaccine's active components in a microneedle array (a small disk with several microscopic points that dissolve when embedded in the skin). There is also an effort to create a universal influenza vaccine. A universal vaccine would target viral proteins that are conserved between the different strains of influenza and don't mutate very often, so the vaccine could be effective no matter which strains are circulating each influenza season.
I just want to touch on a few of the epidemics you may not have heard much about this year. There was an epidemic of enterovirus D68 this year that caused more severe disease than expected, as enterovirus infections generally cause only mild respiratory symptoms in kids. A mosquito-borne disease, chikungunya, has been sweeping the Caribbean, causing fever and severe joint pain. Guinea worm, affecting people living in Africa and Asia, grows inside the body and then erupts from anywhere in the body, causing severe and debilitating pain. Guinea worm is on target to be the second disease eradicated in human history (after smallpox), and it is being eradicated not with expensive medicines but through inexpensive yet challenging-to-implement behavioral change.
Values are critical in shaping the global health (GH) dialogue and landscape. Values and the actions that arise from them (virtues) underlie the policies that ensure universal access to necessary health services, adequate responses to health emergencies and resource allocation. Similarly, the values of health governing bodies can create chasms between people and their health necessities. This truth has been unfolding poignantly on an international level during the handling of the Ebola virus disease (EVD) outbreak in West Africa.
What values did the actions or inaction of the international GH community endorse in handling the current EVD outbreak? Although the uniqueness of the outbreak in terms of location and challenges in diagnosis should be considered, many experts agree that the greatest force contributing to the rapid spread of EVD was inaction (1-4). In June 2014, signs that EVD was spiraling out of control throughout Guinea were flashing brightly, but the response from the international community remained slow. The exception was Doctors Without Borders (MSF), whose staff was already on the ground, helped to diagnose the first case and pleaded for a more robust response from international health governing bodies (3).
Criticisms of health regulatory bodies grew stronger when EVD entered rich countries, which appeared to produce a marked increase in global support efforts. It is hard to say unequivocally whether this heightened interest and commitment was inevitable or whether the cases in the US and Europe were the impetus. But it is fair to say that many mistakes were made in prioritizing EVD eradication and surveillance. It may also be accurate to say that the major economies responded when EVD was perceived as an immediate threat to their economies. This, I believe, is inevitable in a GH system that is built upon a market-driven approach.
Can a GH agenda that is framed around economics prioritize the eradication of emerging diseases and neglected diseases of poverty? Although there are compelling arguments for why high-income countries should help to combat EVD and similar diseases, it is unlikely that great achievements will be made without a values shift (5).
A market-driven approach inherently prioritizes the needs of a few over the needs of many. This model enables the interests of major economies to outweigh the greater good of the whole, if left unchecked. The most important consequence of this approach is that it undermines international health regulatory bodies, whose actions and budgets are heavily influenced by the larger economies. This is a problem which, when combined with poor health systems, harmful microbes and permeable borders, will inevitably lead to threats to local communities and global security. More importantly, with the movement of people forming a major characteristic of this era, the market-driven approach is an unsustainable value upon which to build GH interventions.
There are many points worth considering (schematic above). Major questions moving forward should consider how to create a GH model that is more oriented toward equity, security and creativity. Resolutions that create a space in which poor nation states help to set the GH agenda, without being threatened by the loss of aid from larger economies, must be discussed. Additionally, ways in which the GH dialogue can be re-framed to include stakeholders that currently operate based on the virtues stated above should be considered. For example, is there a way to ensure a more official decision-making role for organizations like MSF?
What is next for GH governance and what will the values shift towards? EVD 2014 is a strong indicator of the limits of theoretical values, political indifference and passivity in achieving health and well-being for all. But the stories emerging from West Africa provide an opportunity for EVD 2014 to serve as a “meaning making” event in GH. It provides an impetus for changing priorities from passive verbiage of values of human dignity to a model of creativity, equity and accountability which proactively contextualizes GH policies, innovation and interventions.
1. Gostin LO, Friedman EA (2014). Ebola: a crisis in global health leadership. The Lancet, 384: 1323-1324.
2. Cohen J (2014). Ebola vaccine: little and late. Science, 345(6203): 1441-1442.
4. Farrar JJ, Piot P (2014). The Ebola emergency - immediate action, ongoing strategy. N Engl J Med, 371(16): 1545-1546.
5. Rid A, Emanuel EJ (2014). Why should high-income countries help combat Ebola? JAMA, 312(13): 1297-1298.
Benjamin Crump created a riddle, wrapped in a mystery, inside an enigma; but perhaps there is a key. That key is TRUTH.
Imagine if you will, a constructed narrative, designed with an intentional and hurried specific purpose, and replete with a hidden agenda.
If your ordinary mind can take you into the land of Machiavellian construct then you are a far better-minded cynic than I.
It is only when you are deep within that manipulative place you can begin to understand how challenging it is to get inside this riddle. It is like a basketball made of individual rubber bands intertwined to form a specific shape.
As you take apart the rubber bands the new shape takes on an entirely divergent form, and you realize what you once saw from a distance is not a basketball after all. Daryl Parks and Benjamin Crump are master band weavers but what you are about to read will help you unlock the mystery.
I must take an author's liberty to thank the many people here at the Last Refuge who provided literally hundreds of hours of painstaking research to carefully untangle the cyber enigma codes. And so we begin, again.
Only this time we begin the final analysis.
It is important prior to going any further in this thread that you have read Update #9 and Update #10 part 1 in their entirety. Trying to understand this thread post alone would be like trying to understand the US Constitution without first having read the Declaration of Independence. In addition the prior two segments contain over 200 sourced and referenced citations. In the interest of already cumbersome readability it would be impossible to duplicate all the citations into one digestible article.
From the outset of the Trayvon Martin shooting case two things have been consistent. First, the statements recorded, written, and re-enacted by George Zimmerman; and second, the Sanford Police account that after a two-week investigation there was no cause for arrest. The constancy of these two points is factually irrefutable.
So what changed? Well, for one, the narrative and discoveries of the Trayvon contingent of attorneys and media specialists. And, more importantly, the climate that led officials to change the prosecutorial determination amid relentless media and political pressure.
Despite these two substantive changes (one from an obviously, and expectedly, biased party in Team Trayvon; the other from an optically worried political class), the facts, or known truths, have never changed from before March 5th. Those facts have remained constant.
Yet, George Zimmerman now sits in jail. So, if the facts of the case did not change since before he was cited with an affidavit for probable cause for arrest, then why is he now incarcerated?
Brutally short answer: political fear, worry about riots and public reaction via racial animosity, and, most worrisome of all, "lies"; or in today's politically correct lingo, "mistruths".
As a specific outcome from this debriefing, on the same date, February 28th, in the afternoon, Tracy Martin called his sister-in-law, Patricia Jones, herself an attorney, for help. She in turn contacted Tyrone Williams, another attorney, who knew how to contact Benjamin Crump from the law firm Parks and Crump. Parks and Crump both specialize in personal injury/wrongful death with an emphasis on civil rights cases.
Both Tyrone Williams and Patricia Jones reached out to Benjamin Crump, who was in court in Tallahassee. The outcome of these contacts was that Crump was put directly in contact with Tracy Martin to discuss the shooting on Wednesday, February 29th.
On Wednesday, February 29th, Trayvon's body was returned to Fort Lauderdale via funeral director Richard Kurtz. The viewing and visitation were held on Friday evening, March 2nd, with a memorial service and interment the following day, March 3rd. Through this timeline Trayvon's mother, Sybrina Fulton, age 46, had not left her North Miami home.
48 hours after being contacted by Tracy Martin, with no arrest of George Zimmerman yet made, Benjamin Crump decided to take the case. On March 1st, Crump enlisted the help of Sanford attorney Natalie Jackson, a former Naval Intelligence Officer and director of a Women's Trial Group.
Together Jackson and Crump formulated a media strategy, and on Monday March 5th Jackson brought in Ryan Julison, a publicist who had worked with her on a number of high-profile cases. Julison pitched the story to a long list of media contacts.
Eventually, on Wednesday March 7th, Reuters published a story titled “Family of Florida Boy Killed by Neighborhood Watch Seeks Arrest.” The next day, March 8th, CBS News aired a segment on “This Morning,” and by 10 a.m. a crowd of reporters gathered at Natalie Jackson’s law office for a news conference with Ben Crump and Tracy Martin.
The team was now assembled; the firestorm media blitz was about to begin.
In the same timeframe, George Zimmerman was being questioned and investigated by Sanford police. In addition to a six-hour unrepresented questioning session on the night of Sunday, February 26th, the police followed up with a crime scene re-enactment with Zimmerman on Monday, February 27th. Then, at the conclusion of the re-enactment, three detectives grilled Zimmerman, again unrepresented, at police headquarters in their most thorough and hostile questioning. They told Zimmerman they didn't believe him, and tried unsuccessfully to poke holes in his story.
As has been reported, Zimmerman told police officials that he lost sight of Martin and went around a townhouse to see where he was. Then he claimed Martin confronted him and punched him in the face, breaking his nose, and knocking him down.
According to a Daily Beast source, Zimmerman told police that when he was on the ground, Martin straddled him, striking him, and then tried to smother him.
Zimmerman told police that Martin’s last words after the shooting were, “Okay, you got it.” He said the phrase twice, then turned and fell face-down on the ground.
Zimmerman told police he didn’t realize that Martin was seriously injured, and that he lunged to get on top of him after the teenager fell to the ground. Moments later, a police officer from Sanford arrived, placed him in handcuffs and took his gun.
Following the 10am March 8th strategically structured press conference outside Natalie Jackson’s law office with Ben Crump and Tracy Martin, the media interest picked up exponentially. Ryan Julison, the publicist, who pitched the media narrative had done a masterful job of drawing in the attention.
Unfortunately, it was from those initial story-line pitches that several wrongful conclusions were drawn. The most important inaccuracy among them was that George Zimmerman was white and the shooting was racially motivated.
But the Julison, Crump, Jackson, Martin, Fulton team et al needed to bait the media hook, so they did not aggressively correct the factual inaccuracies that eventually worked their way into the Institutional Legacy Media narrative.
Pandora’s racial box was open and there was no going back now. By March 23rd the race-baiting narrative reached a boil over point when President Obama took to the Rose Garden podium and publicly stated if he had a son he’d look like Trayvon Martin.
Four days after Team Trayvon's strategic press conference, on Monday, March 12th, Sanford Police Chief Bill Lee told reporters he lacked the necessary probable cause to arrest Zimmerman. Lee also contended that under Florida's "Stand Your Ground" law, and even under common self-defense law, police could not arrest Zimmerman without evidence contradicting his story. No such contradictory evidence had been found from the beginning of the investigation on February 26th through March 12th.
Immediately Reverend Al Sharpton took up Trayvon Martin’s cause on his MSNBC show, and was soon followed by Jesse Jackson and NAACP President Ben Jealous.
The forces to pressure the Police Department, and the prosecutor’s office into an arrest were all assembled. The only thing that was lacking was justifiable evidence to do it.
There was a slight sense of necessary desperation on the part of Team Trayvon. They had coordinated with the local and national NAACP, there were online petitions, and Al Sharpton, Jesse Jackson, and Ben Jealous were all on board, but they needed heavier emotional artillery.
Enter Trayvon's mother, Sybrina Fulton. That Friday night, March 16th, Benjamin Crump arranged a meeting inside Sanford Mayor Jeff Triplett's office. Triplett invited Trayvon Martin's parents and their entire legal team into his office to listen to each of the recorded 911 calls, along with the original non-emergency call from George Zimmerman.
No-one was there to represent George Zimmerman’s interests, only Tracy Martin, Sybrina Fulton, the legal team led by Parks and Crump, the media publicity team, and civil rights activists. Mayor Triplett played the calls on his computer.
After playing the calls for Team Trayvon, Mayor Triplett overrode the prior decision of Police Chief Lee not to release the tapes, and instead publicly released them to Martin, Fulton, Parks, Crump, Jackson, Julison et al for use in associated media coverage and releases.
The financial motivation began to visibly peek just above the surface for those attuned to the previous efforts in the Martin Lee Anderson case, also coordinated by Parks and Crump along with Al Sharpton and Jesse Jackson. $7.2 Million Motivations.
But still the factual evidence of the case had not changed, and there was still no actual reason, or evidence, to proceed with an arrest.
Heavier artillery was needed. They needed more substance to continue the pressure. More research and strategy considerations would reveal a new strategic maneuver.
So, accordingly, on Sunday night, March 18th, Tracy Martin decided to investigate his son Trayvon's cell phone use. As reported by Benjamin Crump, it was on this night that Tracy discovered Trayvon had been on the phone with a girlfriend. The girlfriend was previously unknown to them and did not attend the funeral services, yet was apparently on the phone with Trayvon for 400 minutes on the day he was shot.
We took another step in this — what has been a daily journey for the past three and a half weeks.
He called me late Sunday night [March 18th] and told me that he had called the young lady, and he told me, and I was just utterly shocked when he told me the time that they talked. They had talked all that day, about 400 minutes, starting that morning to the afternoon. Like many teenagers do, they talked on the phones.
Well, what George Zimmerman said to the police about him being suspicious and up to no good is completely contradicted by this phone log, showing, all day, he was just talking to his friends. And in fact, he was talking to this young lady when he went to the 7-11 and when he came back from the 7-11.
I’m going to get into that in detail because her testimony, her testimony that is shown on these phone logs, connects the dots. Completely connects the dots of this whole thing.
In fact, she couldn't even go to his wake she was so sick. Her mother had to take her to the hospital. She spent the night in the hospital. She was one of the most special people in the world to you. And we all were teenagers, so we can imagine how that is when you think somebody's really special, and you call it puppy love or whatever you want to call it. Then suddenly and tragically, this is taken away and you have, unfortunately, a first-hand account of it.
[…] Now, details. That day Trayvon Martin, 17 years old, three weeks, weighed about 140 to 150 pounds soaking wet, as his mother says, and that’s with his shoes on, leaves to go to the store to get some snacks before the NBA all-star game is about to start.
His little step-brother asked for him to bring some Skittles back and something to drink. He is talking to the young lady as he walks to the store. The phone records show — you get copies of these phone records, they will show you the times the calls were made and how long he was on the phone. And it is without any doubt that he's on the phone the entire time during the day, especially when he is going to that store and coming back.
You will see that he goes to the store talking to her. And then when he comes back he’s talking to her. This is what she relays. And I’ll share with you some of the audio. We’re going to turn this over to the Department of Justice and their investigation because the family does not trust the Sanford Police Department in anything to do with the investigation.
She relays how he went to the store. When he came out from the store, he said it was starting to rain, he was going to try to make it home before it rained. Then he tells her it starts raining hard. He runs into the apartment complex and runs to the first building he sees to try to get out of the rain. He was trying to get shelter. So he tries to get out of the rain.
And unbeknownst to him, he is being watched. He is a kid trying to get home from the store and get out of the rain. That’s it. Nothing else. So, he stands under that apartment building for a few minutes, the rain kind of dies down. He then goes, and he has his hoodie on because it’s raining and he goes back to walking. And he goes back to talking to her again. You’ll see the phone calls when it came in at 6:54. He then says, I think this dude is following me. And she talks about how he kind of slows down and he’s trying to look in the car like, I think this dude is following me. And she tells him, baby, be careful, just run home. She tells him that.
[…] This young lady connects the dots. She connects the dots. She completely blows Zimmerman’s absurd defense claim out of the water. She says that Trayvon says he’s going to try to lose him. He’s running trying to lose him. He tells her, I think I lost him. So, he’s walking and then she says that he says very simply, oh, he’s right behind me. He’s right behind me again. And so she says “run.” He says, I’m not going to run. I’m going to walk fast. At that point, she says Trayvon — she hears Trayvon say, why are you following me.
She hears the other boy say, what are you doing around here. And again, Trayvon says, why are you following me. And that’s when she says again he said, what are you doing around here. Trayvon is pushed. The reason she concludes, because his voice changes like something interrupted his speech.
Then the other thing, she believes the earplug fell out of his ear. She can hear faint noises but no longer has the contact. She hears an altercation going and she says, then suddenly, somebody must have hit the phone and it went out because that’s the last she hears.
[…] Arrest George Zimmerman for the killing of Trayvon Martin in cold blood today. Arrest this killer. He killed this child in cold blood. Right now, he is free as a jay bird, he’s allowed to go and come as he please while Trayvon Martin is in a grave.
Do you sense the urgency, weight, and importance that Crump is placing on this "girlfriend"? He identified her as "DeeDee", describing the name as an alias, then stated that she had given a testimonial, and later a sworn statement.
Essentially Benjamin Crump based the entire construct of the Trayvon narrative of events squarely on the shoulders of “DeeDee“.
Before proceeding, it is highly important that you pay attention to the dates and times associated with this press conference and with the content outlined within it. Tracy Martin "discovered the phone records" late Sunday evening the 18th; he called Crump "very late" that night. The press conference was Tuesday morning, March 20th, at 11:30am. There was only one day, Monday the 19th, between the discovery and the conference.
Monday, March 19th, was a school day. Benjamin Crump would have needed to talk to, interview, and retrieve a testimonial from a 15-year-old girlfriend, described as one of the most special people in the world to Trayvon. She was in Fort Lauderdale; Crump was in Sanford, Florida.
She was filled with "puppy love" and "traumatized beyond anything anyone could imagine", so devastated she had to be taken to the hospital and could not even attend the visitation or funeral. So special that Trayvon and DeeDee spent 400 minutes, or 6 hours and 40 minutes, on the phone on one day, Sunday, February 26th, the day he was shot. His last day alive.
Their relationship was "so special" that after hearing the event unfold on the phone she did nothing. She never called Trayvon again, she never called his mom or dad worried, she never spoke to anyone about it. Nothing. How is it possible they were so especially close, yet Tracy didn't even know who she was? No-one did. Not Tracy, not Sybrina, no-one.
She was so important to Trayvon, yet she refused to cooperate, talk to police, or give sworn statements to state attorneys or police, even when offered representation. From March 20th until sometime after April 4th, she refused to cooperate; then, when she did give a statement, she would give it only to federal authorities. Does that part make sense to you?
Oh, it gets worse. But more on that in a moment. For now just understand how important to the narrative DeeDee actually is.
Fast forward to Thursday, March 22nd: there was still no cause for arrest, and nothing of substantive, verifiable evidence had changed. DeeDee was still refusing to be interviewed by investigators, and the media interest was boiling.
Subsequently, on March 22nd, driven by the relentless 24/7 media blitz of family attorneys Parks and Crump, along with pressure applied by Al Sharpton, Jesse Jackson and the NAACP, the Sanford city commission voted "no confidence" in Police Chief Lee by a 3-2 margin.
Police Chief Bill Lee then announced he was temporarily stepping down. Lee told a news conference that while he stood by the Sanford Police Department, he was stepping aside to remove any possibility of distraction caused by him.
Some news agencies even began to sell the story, reporting that Sanford's lead investigator, Chris Serino, wanted Zimmerman charged with manslaughter that night but that Wolfinger's office put a stop to it.
The city of Sanford issued a statement saying that this is completely untrue.
Police did prepare an incident report that night listing "manslaughter" as the possible crime being investigated, but in every case in which an officer prepares an incident report, he or she fills in that spot with some crime and statute number to allow the agency to properly report crime statistics to the FBI.
So it would be prudent for the media and Trayvon supporters to stop with the whole "the lead investigator wanted to charge him but it was shuffled under the rug" narrative. It really only further diminishes the search for truth in this case.
And yet Zimmerman was arrested and charged by Special Prosecutor Angela Corey with Second Degree Murder. They got their arrest. Or as Crump would say, they got to “first base”.
Getting to "second base", or "financial scoring position", requires them to get past the immunity hearing. If Zimmerman successfully argues the immunity hearing and the judge finds self-defense was reasonably warranted, then the entire case is wiped out and Zimmerman will be released.
Throughout the horrible story of Trayvon Martin there have been two parallel goals. The Trayvon family sought truth, and the family's attorneys sought justice. But not the type of justice you would assume.
Benjamin Crump, Esq., was seeking monetary justice. Wrongful-death-type financial justice.
Initially, I believe, Benjamin Crump was contacted because Tracy Martin and Sybrina Fulton just wanted honest answers. However, the motivation of Parks and Crump, while it may contain some semblance of this objective, is more aptly framed around financial interests and a broader social/financial justice. This provides them prestige and influence under racially driven, civil-rights-type auspices. Legal civil-rights credibility.
Tracy Martin and Sybrina Fulton became, perhaps unwittingly, tools toward the end goals of a much larger objective. Tracy and Sybrina stood to gain success in their original goal of knowledge, and then Crump added another benefit, not initially considered: financial reward.
Yet, the factual evidence stood in their way of both goals.
On one hand you see distraught parents forced to face the reality of a troubled teenage son, simultaneously faced with guilt from their complicit failure as parents to provide Trayvon the internal moral compass and value system to succeed. Defining him as a victim helped them avoid confronting the mirror.
On the other hand, you have a self-defense claim from Zimmerman which would not only wipe out any chance for Parks and Crump to achieve financial justice, but would also place the burden of guilt back upon an absentee mother and father, forcing them to accept the failure to develop a moral compass within their son.
So under section "2" an arrest implies "probable cause", but the hurdle of immunity still needs to be overcome. Once the arrest is established, they then need to get beyond the immunity hearing. Once past the immunity hearing, a civil action is possible. For the purpose of "monetary justice" it takes only an arrest (first base), winning the immunity hearing (second base), and then proceeding to trial; a subsequent conviction is not necessary.
Without an arrest that leads to trial, there is no implied probable cause, which could lead to compensatory and punitive damages for wrongful death. They need an arrest and a subsequent trial. They DO NOT need a conviction to achieve PAYDAY.
Remember this specific point as you contemplate the Crump strategy.
Again, the problem lay with the factual evidence not leading to arrest. So media evidence was needed. Media evidence need not be real; it merely needs to appear real.
On Monday March 12th the absence of evidence was again noted by investigators. By Friday night March 16th the 911 calls became a media strategy to manufacture the framing for an arrest.
But, the investigators had already determined the 911 calls did not contradict Zimmerman, to the contrary when added to the eye-witness accounts they supported Zimmerman’s explanation of events.
Crump needed something else to change the arrest narrative in their direction.
That something became “DeeDee”, Trayvon’s “girlfriend”, and she became an audio-witness. Again it is important to review the CNN transcript from Crump’s press conference disclosing DeeDee.
There is only ONE problem. The framing is FAKE.
Yep, CONTRIVED – MANUFACTURED – PHONY – FAKE !!!
Does “DeeDee” exist? Yes, when you follow the twitter feed from Trayvon Martin it appears so, yet despite the scrubbing of one on-line social identity, after another, several internet blogs, and researchers were able to trace profile steps.
Early on, quick action was taken by the media team hired by Parks and Crump to hide the on-line social identity of Trayvon Martin. In addition, to help create the optical illusion of “angelic Trayvon”, they filed motions to seal school records, police records, and, more alarmingly, the coroner’s report.
It appears the initial goal was to construct a media campaign, and a media image. That required creating a specifically controlled media driven personality profile of Trayvon Martin.
However, multiple independent blog researchers quickly began to question the narrative being sold by the Institutional Legacy Media. During that research a divergent picture, arguably the real picture, the honest picture, of Trayvon Martin began to unfold. The Daily Caller used web caches to capture 150+ pages from the twitter feed of Trayvon, which included multiple twitter handles, and the picture is far less angelic.
Another site, Wagist, began exhaustive research of Trayvon Martin to reveal many more details.
*Note* If you visit those twitter feeds prepare to enter a world of unbridled vulgarity, profanity, and very, shall we say, “salty” language.
Within that archived twitter feed, and those details, “DeeDee” is defined by Trayvon’s social identity @NO_LIMIT_NIGGA, under the Twitter Handle @iAdoree_Dee.
However “@iAdoree_Dee” as an on-line identity, was painstakingly scrubbed. The Social Media Twitter account deleted, and the on-line identity deleted/scrubbed in all manner of social networks. Perhaps, for obvious reasons, once again the truth presents a risk to a discovered false narrative.
It is quite amazing how much people will broadcast of themselves, and their daily, even hourly, activity into the internet. Personally I never understood the need for such public broadcasts; who cares? But when it comes to issues like “who is he/she”, the sheer volume of what people intentionally broadcast about themselves into the internet becomes a resource. And then the sheer size of the YouTube user base comes into play also.
There are multiple traces of the profile for “@iAdoree_Dee” all over the web, and all publicly visible. Several blogs, like the Daily Caller, have become proficient at using “caches” or retrieval searches to find deleted public information. Even when an identity is changed or deleted, using various search engines and cache pages it can be located.
*Note: Shairaaa_x3 is identified as the cousin of “DeeDee”, aka: @iAdoree_Dee.
So “DeeDee” who was “@iAdoree_Dee”, or someone on her behalf, removed the on-line identity. However, because of multiple intertwined social networks we were able to discover who she transformed into.
“DeeDee” who twittered by @iAdoree_Dee is now twittering by the name @x_FashionObsess.
On-Line riddle solved “DeeDee” is currently @x_FashionObsess (who was formerly @iAdoree_Dee).
Why is this important? Well, now you know who DeeDee is. So now we can take a historic look at what activities were taking place in the life of DeeDee on and around the time frames that Benjamin Crump describes in his story. The great thing about 15 year old kids these days is they feel the need to tell everyone what they are doing at all times.
Additionally, Facebook has become so, well, yesterday. Duh. It’s all about Twitter now. Twitter, Twitter, Twitter….. 24/7 Twittering. *Note to self, never let the kids do this.
Do you interpret her day to be filled with 6 hours and 40 minutes of conversation with Trayvon prior to the 7:30pm start of the NBA All-Star Game? “Out and about with the family” etc. Highly, highly questionable. Impossible even.
Now let’s take a close look at the notification that “DeeDee” @x_FashionObsess posted to her twitter account and subsequent social media sites that Trayvon was dead.
DeeDee was responding to the death of Trayvon as a really close friend. A self-described “best friend”. Indeed they were very close and talked often, but they were not “dating”, and there was no “puppy love”. They had previously dated, but the relationship was now one of friendship, great friendship. And she sweetly acknowledged Trayvon’s death with “RIP Bestfriend You Will Be Missed”.
Not devastated, not uncontrollable, and not manic with grief. She was confronting the loss of a close personal and platonic friend.
So let’s really begin to expose the manipulative lying of Benjamin Crump, again referencing how he introduced DeeDee and what he said of her whereabouts on the days of Trayvon’s funeral. As previously outlined Trayvon’s viewing was the evening of Friday March 2nd with a memorial service and interment the following day Saturday March 3rd.
No, it does not appear DeeDee @x_FashionObsess went to the hospital on Friday March 2nd, or Saturday March 3rd. Benjamin Crump specifically and intentionally LIED on this one.
We’ll get to the motive behind the lies in a moment, but first we just wanted to point out another potential issue that might, just might, absolve Tracy Martin of an initial mistake.
As you can see, DeeDee, aka @x_FashionObsess, is a prolific tweeter. In this string of twitter communication (thread) below you’ll note a long phone call with Trayvon on Friday 2/24; my hunch is this was possibly when he was heading to Sanford with his Dad. It is a long call because she tweets while on the phone.
The twitter sign for frustration is >_< (squinting eyes)…. so something in this call was not going well between DeeDee and Trayvon [@NO_LIMIT_NIGGA]. But I digress.
Perhaps this is the call on 2/24 that Tracy Martin confused with the day Trayvon was shot (2/26), who knows? Another option is that this might have led the way for some creative use of “phone records” on the dates before sharing them with the media.
DeeDee @x_FashionObsess and Trayvon @NO_LIMIT_NIGGA did not talk for 400 minutes on 2/26 the day he was shot.
DeeDee @x_FashionObsess did not go to the hospital on 3/2 and 3/3 and that was NOT the reason for not attending the viewing or memorial.
DeeDee @x_FashionObsess and Trayvon @NO_LIMIT_NIGGA were not Boyfriend/Girlfriend. They were close, platonic best friends.
DeeDee @x_FashionObsess was not devastated, destroyed, or an emotional wreck. She was sad that her best friend was shot. She notified others, including her cousin who is also a prolific tweeter, but the cousin did not know immediately who Trayvon was.
DeeDee @x_FashionObsess did not miss school.
DeeDee @x_FashionObsess did not contact anyone fraught with worry, because apparently there was no reason to.
Did DeeDee actually hear anything that night? Who knows. However, the mere fact that Benjamin Crump can be proven to have falsely constructed more than 80% of his press conference in order to plant misleading misinformation in the media sure brings the entire narrative into question.
You can also go back to Sunday March 18th, the date that Tracy Martin supposedly “found” DeeDee via computer search of Trayvon’s phone records, and follow DeeDee through Tuesday morning 3/20, the time of Benjamin Crump’s big press conference where he proclaimed the “sworn affidavit”. NOTHING THERE EITHER. Went to school 3/19, regular day… blah, blah, blah.
Curiouser and Curiouser…. Follow Benjamin Crump and Alice into the “Rabbit Holes”.
The constructed and manipulated evidence to support the claims of Benjamin Crump and consequently Tracy Martin and Sybrina Fulton appear highly manufactured. There wasn’t enough evidence there for the legal authorities to act, so they created a tempest in a teacup media storm replete with false information to create, non-legal “media evidence”.
Media evidence does not need to be “real”, it merely needs to appear real, so as to pressure officials into bending to the demands of public opinion.
I don’t know how else to describe it. It’s very similar to the Tawana Brawley case with Al Sharpton, only this time it’s Benjamin Crump playing the role of Reverend Al.
Perhaps one of the reasons, maybe the primary reason, the special prosecutor Angela Corey didn’t go to the Grand Jury was because she would have either had to introduce DeeDee’s testimony, or avoid it completely. The probable cause affidavit does not outline anything that Benjamin Crump sold to the media during his March 20 press conference. Nothing.
In addition, the lead prosecution investigator Dale Gilbreath testified under oath during the bond hearing they had found no evidence to dispute George Zimmerman’s account of how the confrontation with Trayvon Martin started.
If investigator Gilbreath held an affidavit from “DeeDee” containing the narrative of hearing the fight start, the physical confrontation, the words exchanged, and Trayvon falling to the ground, he would have stated it under direct questioning.
Therefore one can only reasonably conclude no such sworn statement outlining the confrontation itself, from DeeDee or anybody else, actually exists.
If State Prosecutor Corey included the Benjamin Crump narration, and the obvious manipulations, within the affidavit, DeeDee would have been forced to bear witness at trial and O’Mara would be able to deconstruct or pull the lies out. Perhaps, that is why no Grand Jury was actually used.
I doubt with considerable, and reasonable, certainty they even intend to introduce her into any legal proceedings.
Corey and Crump wanted the media evidence from “DeeDee’s” narrative to exist to get the arrest and probable cause. But they do not plan to use any “DeeDee” legal evidence, because it doesn’t actually exist.
Neither Mark O’Mara nor George Zimmerman is actually disputing that passage within the affidavit about Zimmerman following Trayvon, because Zimmerman openly admits he did follow him. He followed him to keep an eye on him because he thought his behavior was suspicious. This is not disputed. He gave statements to that effect directly to Sanford Detectives repeatedly. So the actual legal and prosecutorial value of that “witness” (DeeDee) statement contained within the affidavit is nil. It’s a moot point.
If you witnessed the Bond Hearing Friday April 20th you saw how weak the prosecution case is.
It would appear Zimmerman was intentionally overcharged with 2nd Degree Murder, based on constructed, non-legally binding, media-evidence in an effort to get him to plead down to a lesser charge and avoid trial.
Either a plea, or a trial, provide the same “probable cause” and culpability benefit to Benjamin Crump in a civil wrongful death lawsuit against various interests.
One thing is certain, you will not see anyone named DeeDee anywhere near a court room, EVER.
I’m assuming that Trayvon was completely sober the night he was killed. I’m also assuming his parents have the toxicology tests taken during the autopsy, since they would naturally want to know what happened to their boy.
Doing some great work here. This guy says he was close to Trayvon, and judging by his tweets I believe him. He also seems to be one of the first of Trayvon’s friends, that I can find, that found out about Trayvon being dead. Check this tweet he made on March 22, 4:20PM. He says “Tray must have of had 2 hoes, cause that girlfriend we know he went wit aint known he died until ah week after”. Now I’m just speculating here, but it may be that he heard about this DeeDee on TV and is surprised at the fact that they are calling her Trayvon’s girlfriend, because he knows Trayvon’s girlfriend and he knows that isn’t her. We know this Daisha Brianne found out about Trayvon by 5:44 pm Feb. 27, so we know this can’t be Trayvon’s girlfriend.
It’s unfortunate whoever broke the news on Trayvon’s twitter first didn’t take the time to investigate all the people he was tweeting with, we might have learned a lot more before they started closing their accounts.
What’s funny about the tweets I’ve been going through is that if there was in fact a person who was so very concerned about the safety of Trayvon that night, they didn’t seem concerned enough to tweet it to any of his friends, because none of them seem to have known anything until the next day when it was going around that he was dead, and that includes Daisha Brianne.
By the way, scrolling down the twitter pages takes forever and I found Firefox to be the best to do it as IE and Chrome both slow down my computer so much so that I can barely scroll back up through the tweets. Twitter really needs to put a search thing in there so we can go right to the date.
Question: I have read repeatedly that Trayvon Martin was originally tagged as a “John Doe.” However, the police report written up that was released a couple of weeks ago shows the victim’s ID as Trayvon Benjamin Martin. http://wesawthat.files.wordpress.com/2012/04/twin_lakes_shooting_initial_report.pdf Do you understand why this is so?
According to Tracy Martin himself via Reuters, Trayvon was initially found without any identification, nor did anyone know who he was, he was transported to the coroner as a John Doe and tagged accordingly. Feb 26th.
The following day (Mon-27th) at around 8am, from pictures of Trayvon in the morgue provided by investigators, and from descriptions of the clothing he was last known to be wearing, Tracy Martin gave a positive ID of Trayvon. The following day (Tue-28th) Tracy met Detective Serino at SPD HQ and requested transport of Trayvon to a funeral home. Wed (29th) the funeral director picked up the body and transported it back to Ft. Lauderdale.
The initial Police Report was more than likely amended with confirmation of identity.
In its broadest sense, monetary policy includes all actions of governments, central banks, and other public authorities that influence the quantity of money and bank credit. It therefore embraces policies relating to such things as choice of the nation’s monetary standard; determination of the value of the monetary unit in terms of a metal or foreign currencies; determination of the types and amounts of the government’s own monetary issues; establishment of a central banking system and determination of its powers and rules for its operation; and policies concerning the establishment and regulation of commercial banks and other related financial institutions. A few even extend the meaning of monetary policy to include official actions affecting not only the quantity of money but also its rate of expenditure, thus embracing government tax, expenditure, lending, and debt management policies.
It has become customary, however, to define monetary policy in a more restricted sense and to exclude from it choices relating to the broad legal and institutional framework of the monetary and banking system. This narrower concept will be employed here. Monetary policy in this sense refers to regulation of the supply of money and bank credit for the promotion of selected objectives.
Like all economic policies, monetary policy has three interrelated elements: selection of objectives, implementation, and at least an implicit theory of the relationships between actions and effects. All three elements present problems of choice and are continuing subjects of controversy.
Monetary policy can be directed toward achieving many different objectives. For example, the supply of money can be regulated to provide the government with cheap or even costless funds, to maintain interest rates at some selected level, to regulate the exchange rate on the nation’s currency, to protect the nation’s gold and other international reserves, to stabilize domestic price levels, to promote continuously high levels of employment, and so on. Such multiple objectives are unlikely to be fully compatible at all times. Rational policy making therefore requires identification of the various objectives, analysis of the extent to which they are or can be made compatible, and choices from among those that conflict with one another. A later section will stress changes in the objectives of monetary policy and some of the problems of reconciling them.
The role played by monetary policy in promoting selected economic objectives depends greatly on the nature of the economic system and on attitudes toward the use of other methods of regulation. This role is usually secondary in economies characterized by government operation of most economic enterprises and government control of resource allocation, distribution of output, and prices of inputs and outputs. Even in these economies monetary policy is not trivial. An excessive supply of money can create excessive demand and inflationary pressures, which are evidenced in black markets, hoarding, and bare shelves. On the other hand, a deficient supply of money can impede the flow of production and trade. Yet the major function of monetary policy in such economies is that of passive accommodation, that is, to provide the amount of money needed to facilitate the operation of other government controls; it is not to serve as a prime regulator.
Monetary policy usually plays a more positive regulatory role in economic systems that rely heavily on market forces to organize and direct processes of production and distribution. In such economies, decisions of business firms relating to rates of output, amounts of labor employed, rates of capital formation, and so on, are strongly influenced by relationships between costs and actual and prospective demands for output. If aggregate demands are deficient, firms will not find it profitable to employ all available labor, to utilize fully existing capacity, or to purchase all the new capital goods that could be produced. On the other hand, excessive aggregate demands for output are inflationary. A major function of monetary policy, therefore, is to regulate the behavior of aggregate demand for output in order to elicit a more favorable performance by the economy. This function is shared with fiscal policy in many countries and in many different combinations or “mixes.” Although the deliberate use of fiscal policy for this purpose has increased considerably in recent decades, monetary policy continues to be a major instrument.
Primary responsibility for administering monetary policies is usually entrusted to central banks, although there are varying degrees of government control of central banks and their policies. Central banks regulate the money supply and influence the supply of credit in two principal separate but closely related capacities: as controllers of their own issues of money and as regulators of the amount of money created by commercial banks. Both are important, but their relative importance depends in part on the stage of financial development of the country and on the types of money employed. In countries where bank deposits have not yet come to be widely used, notes issued by the central bank often constitute a major part of the money supply. In such cases the central bank may regulate the money supply largely by controlling directly its own note issues. However, in countries that have reached a later stage of financial development, central bank notes constitute a smaller part of the money supply; deposits at commercial banks are the major component, and the actions of commercial banks directly account for a large part of the fluctuations of the money supply. In such countries, the central bank is primarily a regulator of the commercial banks, although control of its own money creation remains important and is a part of the process.
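The control over commercial-bank money creation described above is often summarized by the textbook deposit-multiplier relation. The following sketch is added here for illustration and is not part of the original article; it assumes a uniform required reserve ratio $r$ and no cash drain, deliberate simplifications of actual banking systems.

```latex
% Deposit-multiplier sketch: maximum deposit expansion supported by
% a change in bank reserves, assuming reserve ratio r and no cash
% leakage (a simplification).
\Delta D = \frac{\Delta R}{r}
% Example: with r = 0.10, a one-unit increase in reserves can
% support up to 1/0.10 = 10 units of additional deposits.
```

In this stylized view, the central bank regulates the deposit component of the money supply indirectly, by adding to or draining the reserves on which the multiplier operates.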
The terms “monetary policy” and “credit policy” are often used interchangeably or with only slightly different shades of meaning. This has come about primarily because in most modern systems the creation and destruction of money by central and commercial banks are so closely intertwined with their expansion and contraction of credit. They typically create and issue money (currency and deposits) by making loans or purchasing securities, usually debt obligations. Thus, one side of the transaction is the issue of money; the other is the provision of funds to borrowers or sellers of securities, which tends to lower interest rates. Central and commercial banks typically withdraw money (currency and deposits) by decreasing their outstanding loans or by selling securities, usually debt obligations. Thus there is both a decrease in the supply of money and a decrease in the funds available to borrowers and to purchasers of the securities sold by the banks, which tends to increase interest rates.
Those who speak of monetary policy tend to focus on the behavior of the stock of money, while those who speak of credit policy tend to focus on the quantity of loan funds available from the central and commercial banks. Such differences in focus need not lead to differences in either analysis or conclusions. Yet they sometimes do. Those who focus on the stock of money are more likely to stress “real balance effects” on both consumption and investment spending, while those who focus on credit are likely to put more stress on the direct effects on interest rates, the availability of funds, and investment. Monetary theory has made considerable progress in reconciling and integrating these approaches, but much remains to be done.
The third element in monetary policy is at least an implicit theory of the relationships between actions and effects. If its actions are to promote its objectives, the monetary authority needs some theory as to the nature, direction, magnitude, and timing of the responses. The relevant responses are numerous and on several levels. For example, they include the response of the supply of money and credit; the response of aggregate demand for output; and the responses of real output, employment, and prices. There are still disagreements among both economists and central bankers on many of these theoretical and empirical issues, and these disagreements underlie many continuing controversies over the proper nature and scope of monetary policy. Some of these will be treated in a later section.
Monetary policy, in the modern sense of deliberate and continuous management of the money supply to promote selected social and economic objectives, is largely a product of the twentieth century, especially the decades since World War I. In the earlier period, when most countries were on either a gold or a bimetallic standard, the primary and overriding objective of monetary policy was to maintain redeemability of the nation’s money in the primary metal, both domestically and internationally. A decline of the nation’s metallic reserves to dangerously low levels, or any other threat to redeemability, became a signal for monetary and credit restriction, whatever might be its other economic effects. When redeemability seemed secure, monetary policy was used to promote other objectives—to deal with panics, crises, and other credit stringencies and even to expand money somewhat when business was depressed. But such intervention was sporadic rather than continuous and its purposes limited rather than ambitious. The international gold standard of the pre-1914 period was not purely automatic, but it was managed only marginally.
Many forces have contributed to the change and growth of monetary policy since World War I. One set of forces includes the breakdown of the international gold standard and other changes and crises in monetary systems—inflation during and following World War I and the long period of suspension of gold redeemability in most countries, the changed and insecure nature of the gold and gold exchange standards re-established in the 1920s, the renewed breakdown of gold standards during the great depression of the 1930s, and world-wide inflation during and following World War II. All these had profound effects on attitudes toward monetary policy. Both countries that had too little gold and those that had too much shifted to the view that the state of their gold reserves was no longer an adequate guide to policy and that new objectives and guides should be developed. Monetary actions became increasingly less sporadic and limited and more continuous and ambitious in scope.
The objectives of monetary policy have also been powerfully influenced by changes in attitudes concerning the responsibilities of central banks and governments for the performance of the economy. The 1920s witnessed growing demands that some central agency reduce instability of price levels and business activity. These demands were strengthened immeasurably by the economic catastrophe of the 1930s and by fears that World War II would be followed by another world-wide depression. Within a few years after that war the governments of almost all Western nations had formally assumed responsibility for promoting continuously high levels of employment and output. And within a few more years almost all of these governments had signified their intentions to promote economic growth. Monetary policy is required, in some cases by government and in others by the force of public opinion and pressure, to contribute to such objectives.
Although often phrased in different terms, it is now common for monetary authorities to state four major or basic objectives of monetary policy: (1) continuously high levels of employment and output, (2) the highest sustainable rate of economic growth, (3) relatively stable domestic price levels, and (4) maintenance of a stable exchange rate for the nation’s currency and protection of its international reserve position. In some countries monetary policy is also influenced by other considerations, such as a desire to maintain low interest rates to facilitate government finance or other favored types of economic activity.
Some of the most basic problems of monetary policy relate to the compatibility of such multiple objectives. Can all these be achieved simultaneously and to an acceptable degree even if a nation has precise control of the behavior of aggregate demand for output? Of course, the answer depends in part on the ambitiousness of the goals; perfection in all respects is hardly to be expected.
The answer also depends to an important extent on the responses of output, employment, money wage rates, and prices to changes in aggregate demand. The most favorable case is that in which the supply of output is completely elastic at existing price levels up to the point of “full employment” and capacity output. In such cases, increases of demand would elicit only increases in output until the economy reached its maximum capacity to produce. Price inflation would appear only when demand became excessive relative to productive capacity.
Problems of reconciling objectives relating to output, employment, and price level stability arise, however, when the supply of output does not respond in such a favorable manner to increases of demand—when prices rise before the economy has neared its capacity to produce. Even in the face of considerable amounts of unemployment, average money wage rates may rise faster than average output per man-hour, thereby tending to raise costs of production. And for this, or other reasons, business firms may raise the prices of their products even though considerable amounts of excess capacity persist. Under such conditions it may be impossible to achieve all objectives, to acceptable degrees, solely by controlling aggregate demand. Levels of demand sufficient to elicit “full employment” and capacity output may bring inflation, while levels of demand low enough to assure stability of price levels may leave large amounts of unemployment and unused capacity.
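The cost pressure described above—money wage rates rising faster than output per man-hour—can be made explicit with the standard unit-labor-cost approximation. This formula is added here for illustration and does not appear in the original text.

```latex
% Growth of unit labor cost is approximately wage growth minus
% growth in output per man-hour (labor productivity).
\frac{\Delta ULC}{ULC} \approx \frac{\Delta w}{w} - \frac{\Delta q}{q}
% Example: wages rising 5 percent while productivity rises 2 percent
% implies unit labor costs rising roughly 3 percent, pushing prices
% upward even while unemployment and excess capacity persist.
```

This is why demand management alone may fail: the cost push operates at levels of demand well short of full employment.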
Because of such difficulties, many economists and other observers have come to believe that objectives relating to output, employment, and price levels can be reconciled satisfactorily only if regulation of aggregate demand through monetary and fiscal policies is supplemented by measures designed to elicit more favorable responses by the economy. These measures are of several types, which can only be listed here: (1) reform of wage-making processes in order to avoid inflationary increases of money wage rates, (2) decrease of monopoly power in industry, and (3) increase of regional and occupational mobility of labor.
The above discussion related to possible conflicts among a nation’s multiple domestic objectives. One, or more, of these domestic objectives may also conflict with the nation’s international objectives of maintaining a stable exchange rate for its currency and of protecting its international reserve position. Fortunately, domestic and international objectives do not always conflict. For example, a nation may have a deficit in its balance of payments primarily because of excessive domestic demands and rising prices. In such cases, restrictive monetary policies may be appropriate for both domestic and international reasons. On the other hand, a nation may have a surplus in its balance of payments primarily because of unemployment and depressed output and incomes at home, which depress its demands for imports. In this case an expansionary monetary policy will promote both its domestic and international objectives.
Cases do arise, however, in which domestic objectives and the objectives of maintaining stable exchange rates and a balance in international payments come into conflict. For example, a nation may have a large and persistent surplus in its balance of payments while demands for its output are so large as to bring actual or threatened inflation. An expansionary monetary policy, aimed at reducing the surplus in its balance of payments, would increase inflationary pressures at home; while a restrictive policy, aimed at inhibiting domestic inflation, would continue, and perhaps even increase, the surplus in its balance of payments. A nation faced with this situation may be compelled to sacrifice its domestic objective of preventing inflation or to increase the exchange rate on its currency in order to decrease the value of its exports relative to its imports.
Considered by most countries to be even more serious is the situation in which there is a large and persistent deficit in the balance of payments combined with actual or threatened excess unemployment at home. Employing expansionary monetary and fiscal policies to increase domestic demand and eradicate excess unemployment would tend to widen the deficit in the nation’s balance of payments and to drain away its international reserves. But employing restrictive policies to eradicate the deficit in its balance of payments would increase unemployment at home. The nation may be forced to sacrifice its domestic objectives relating to employment, output, and growth or to lower the exchange rate on its currency.
The preceding sections dealt with some of the problems that would be encountered in promoting multiple economic objectives simultaneously, even if the monetary authority possessed precise control over the behavior of aggregate demand for output. But it is unsafe to assume without analysis that the monetary authority, or even the monetary authority together with the fiscal authorities, can control aggregate demand precisely. The monetary authority has no direct control over aggregate demand for output or over any of its major components, such as demands for consumption, for investment or capital formation, for government use, or for export. Its powers are largely confined to regulation of the supply of money and credit. Even at this level its controls may lack precision. Presumably the central bank can accurately control its own creation and destruction of money; but its control of the creation and destruction of money and credit by the commercial banking system, exercised largely through its control over the reserve position of the banks, may be less accurate. And even if the monetary authority has precise control of the money supply, aggregate demand for output may not respond in a uniform or precisely predictable manner; the income velocity, or rate of expenditure, of money may fluctuate. Thus there are many links in the chain of causation from central bank action to the reaction of aggregate demand and many possibilities of slippage.
The effectiveness of monetary policy as a regulator of aggregate demand does not depend on the existence of some fixed relationship between the supply of money and aggregate demand. It requires only that changes in the money supply influence aggregate demand in the desired direction and in a predictable way and that the monetary authority have power to change the money supply to the extent required to offset adverse variations in the income velocity of money. However, the possibility of control of aggregate demand does suffer to the extent that changes in money supply fail to affect aggregate demand, that the power of the monetary authority to change the money supply is limited, and that the relationship between the money supply and aggregate demand is unpredictable.
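The slippage described above is often summarized by the equation of exchange, a standard identity in this literature, though the text does not write it out; the symbols below are ours, not the author's:

```latex
% Equation of exchange (standard identity):
%   M = money supply, V = income velocity of money,
%   P = price level, Y = real output, so PY = nominal aggregate demand
M V = P Y
% To hold nominal demand PY at a target when velocity drifts from V to V',
% the monetary authority must move the money supply from M to
M' = M \cdot \frac{V}{V'}
```

On this accounting, a fall in velocity need not defeat policy; it only requires a proportionally larger change in the money supply, which is exactly the sense in which the text says control "suffers" when the authority's power to change the money supply is limited or the relationship is unpredictable.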
Few economists doubt the ability of monetary policy, in the absence of strong cyclical forces, to regulate effectively the secular behavior of both the money supply and aggregate demand for output. Secular changes in the velocity of money are usually gradual and can be allowed for in determining the appropriate rate of change of the money supply. There is much less agreement, however, concerning the effectiveness of monetary policy alone for offsetting cyclical forces and stabilizing aggregate demand over the various phases of the business cycle.
Monetary policy meets its most severe test in dealing with the strong forces that cause recessions or depressions. Consider the extreme case in which an economy has slipped into a severe depression with widespread unemployment and unused capacity. Under such conditions businessmen are likely to view the future pessimistically and to see few opportunities for investment in capital facilities that promise favorable rates of return. Their demand functions for output to be used for capital formation may be so low that only extremely low interest rates, perhaps rates approaching zero, would induce them to invest enough to lift the economy back toward full-employment levels.
But monetary policy may be incapable of depressing interest rates, and especially long-term rates, to such low levels. The monetary authority may encounter difficulties in increasing the money supply under such conditions because the banks prefer to hold excess reserves rather than lend and take risks. Interest rates, and especially long-term rates, may fall only sluggishly, even in the face of large increases in the money supply. One reason for this is the fear of default by borrowers under depression conditions. John Maynard Keynes suggested another reason—his famous “liquidity trap.” He argued that there was some long-term rate of interest, not far below that previously prevailing, that the public considered “normal,” in the sense that it would again prevail. No one would hold securities at lower yields because of fear of capital losses when interest rates returned to their normal levels. Below this normal rate the public would increase its holdings of money balances indefinitely rather than lend at a lower rate.
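The "fear of capital losses" in Keynes's liquidity-trap argument rests on the inverse relationship between bond prices and yields. A minimal sketch in Python, using a perpetuity (consol) whose price is simply coupon divided by yield; the numbers are illustrative assumptions, not from the text:

```python
# Why holders fear buying long-term bonds below the "normal" rate:
# a consol paying a fixed coupon C forever is priced at P = C / r,
# so a later rise of r back toward "normal" inflicts a capital loss.

def consol_price(coupon: float, rate: float) -> float:
    """Price of a perpetuity paying `coupon` per year at yield `rate`."""
    return coupon / rate

# Buy at an abnormally low 2% yield, then yields return to a 4% "normal":
price_now = consol_price(4.0, 0.02)    # 200.0
price_later = consol_price(4.0, 0.04)  # 100.0
loss = price_now - price_later         # 100.0 capital loss per bond
```

With the prospective loss this large, holding money instead of bonds at the low rate is the rational choice Keynes described, which is why the public absorbs money balances indefinitely rather than lend below the normal rate.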
Thus monetary policy may be incapable of lowering interest rates enough to offset the decline of investment demand functions, and recovery may be delayed until something increases the expected profitability of private investment or until the government adopts expansionary fiscal policies.
In how many cases would a well-conceived and well-executed monetary policy prove incapable of dealing with depressive forces? On this there is still lack of agreement among economists. Some have argued that experience during the great depression proved the ineffectiveness of monetary policy. This experience is hardly relevant to the present question, however, because the monetary policies of that period were hardly exemplary. To protect gold standards or for other reasons, many countries actually followed deflationary monetary policies for a considerable period. Expansionary policies were in many cases initiated only after a long delay, during which excess capacity had become widespread, expectations had deteriorated, and the entire financial system had come under serious strain. It may well be that in this and other recessions an ambitious expansionary monetary policy introduced promptly after the downturn would have proved effective in arresting the decline of aggregate demand. However, many economists—including some who are optimists about the effectiveness of monetary policy—believe that monetary policy alone may not be potent enough to offset strong depressive forces and that expansionary fiscal policies should also be employed under such conditions.
It is generally conceded that well-conceived monetary policies can be considerably more effective in restricting increases in aggregate demand during the prosperity phases of business cycles than in combating severe depressions. However, such prosperity periods are usually characterized by increases in aggregate demand relative to the money supply. This increase in the income velocity of money, or “economizing of money balances relative to expenditures,” reflects several forces that usually accompany prosperity—greater optimism on the part of both households and business firms concerning their future receipts of income, which decreases the amounts of money held against contingencies; more profitable opportunities for investing idle balances held by business firms; and rising interest rates. Theorists have tended to stress, perhaps to overstress, the role played by rising interest rates. The rise of investment demand during prosperity tends to raise interest rates, and the rise of rates is accentuated by a restrictive monetary policy. In turn, the availability of higher yields on other assets induces both business firms and households to economize their holdings of money balances that yield no interest.
Such increases of velocity—induced in part, but only in part, by restrictive monetary policy—do constitute a slippage in the operation of monetary policy. This does not mean that monetary policy is rendered ineffective; it means only that larger restrictive actions are required to achieve any specified amount of restriction of aggregate demand. Of course, the monetary authority may be unable or unwilling to restrict money to the required extent. For example, it may be inhibited by inadequacy of the control instruments currently at its disposal, fear that further restriction would precipitate a recession, dislike of high interest rates, or charges that credit restriction discriminates against both new and small business firms. However, these are not limitations on the capability of monetary policy to restrict aggregate demand. They are only considerations affecting the willingness of the monetary authority to use its powers of restriction.
The effectiveness of monetary policy as a countercyclical instrument depends heavily on the quickness of policy action and the quickness of response of the economy. Ideally, policy actions would be taken as soon as adverse developments appeared, or even in anticipation of such developments; and there would be an immediate and full response of aggregate demand and of such policy objectives as employment and output. Under such ideal conditions a high degree of stability might be maintained continuously. In practice, of course, such ideal performance is not realized. Economists have long recognized three lags in monetary policy: (1) the recognition lag—the interval between the time when a need for action develops and the time the need is recognized; (2) the administrative lag—the interval between recognition and the actual policy action; and (3) the operational lag—the interval between policy action and the time that the policy objectives, such as output and employment, respond fully.
The view that these lags are short and stable enough for countercyclical action to be reliable has been challenged by some economists, notably by Milton Friedman. These economists contend that the responses to a given monetary action are distributed over time and that the full effects are realized only after a lag of considerably more than a year. Because of this, monetary actions taken to counter cyclical fluctuations may actually produce, or at least accentuate, these fluctuations. For example, expansionary policy actions taken to counter recession may have little effect for several months and then achieve their full expansionary effects on aggregate demand only when the economy is in its next boom phase. And actions taken to restrict aggregate demand during a boom may in fact precipitate and accentuate an ensuing depression.
For this and other reasons, members of this school oppose flexible countercyclical monetary policies. They believe that a greater degree of stability will be achieved by a monetary policy aimed at a steady growth of the money supply, regardless of cyclical conditions. This growth should be at an annual rate approximating the growth rate of real gross national product.
This whole question, which is obviously crucial for countercyclical monetary policy, remains unresolved and controversial. Friedman’s theoretical and statistical arguments have been strongly challenged but not wholly refuted. Much more research is needed on both the magnitude and timing of responses to monetary policy actions. The same applies to the various types of fiscal policy actions.
Nations face complex problems in determining the relative roles to be played by monetary policies and by the various types of government expenditure and tax policies in promoting the economic objectives described earlier. Only a few of the considerations determining these relative roles can be mentioned here. One is, of course, the whole set of cultural, institutional, and political conditions determining the actual availability of these policy instruments. For example, in some countries it is in fact acceptable to use government tax and expenditure policies in a timely and flexible manner. Other governments are not yet in this position. Still others may find it possible to reduce taxes or increase expenditures to support aggregate demand but not to restrict it by fiscal measures. There can also be comparable differences in the actual availability of monetary policy instruments.
Also relevant are judgments concerning the relative effectiveness of monetary and fiscal policies in achieving some desired behavior of aggregate demand. For example, an expansionary fiscal policy may be judged to be necessary to promote quick recovery from depression conditions but to be no more effective than monetary policy in restricting increases of demand.
The optimum mix of monetary and fiscal policies also depends in part on the nature of economic objectives and on their relative priorities. Suppose that it is possible to achieve some selected level of aggregate demand with various combinations of monetary and fiscal policies—with, say, some restrictive fiscal policy and some expansionary monetary policy or with some expansionary fiscal policy and some restrictive monetary policy. This level of aggregate demand can reflect various combinations of consumption and capital formation. If the objective is only to achieve some selected level of total output and employment, without regard to the distribution of output between consumption and capital formation, many different combinations of monetary and fiscal policies may be equally acceptable. But this may cease to be true if promotion of economic growth through a higher rate of capital formation is also an objective. For this purpose a restrictive fiscal policy and an easy monetary policy may be most appropriate. Large taxes relative to government expenditures for current purposes can be used to force the nation to consume a smaller part, and to save a larger part, of its total income; and an easy monetary policy, instituted to lower interest rates, can encourage the use of savings for capital formation.
A somewhat different case is that in which a nation wishes to raise aggregate demand for its output while it faces an undesired deficit in its balance of payments. Both expansionary fiscal policies and expansionary monetary policies tend to increase the deficit in the balance of payments to the extent that they succeed in raising aggregate demand, which in turn increases imports. But an expansionary monetary policy, which lowers interest rates, will also tend to increase capital outflows or at least to reduce capital inflows. In such a situation, an optimum policy mix may require more expansionary fiscal policies to raise domestic demand, together with a less expansionary monetary policy to support interest rates and attract capital inflows or at least to retard capital outflows.
These are but a few of the many considerations that determine the relative roles of monetary and fiscal policies. These relative roles have changed markedly in recent decades and are likely to continue to change with changes in the nature and relative priorities of economic objectives, with changes in attitudes toward the flexible use of fiscal policies for stabilization purposes, and with changes in our knowledge concerning the magnitudes and timing of responses to various types of both monetary and fiscal actions.
Commission on Money and Credit 1961 Money and Credit: Their Influence on Jobs, Prices and Growth. Englewood Cliffs, N.J.: Prentice-Hall.
Culbertson, J. M. 1960 Friedman on the Lag in Effect of Monetary Policy. Journal of Political Economy 68:617–621.
Culbertson, J. M. 1961 The Lag in Effect of Monetary Policy: Reply. Journal of Political Economy 69:467–477.
Friedman, Milton 1961 The Lag in Effect of Monetary Policy. Journal of Political Economy 69:447–466.
Great Britain, Committee on the Working of the Monetary System 1959 Report. Papers by Command, Cmnd. 827. London: H. M. Stationery Office. → Known as the Radcliffe Report.
Scammell, W. M. (1957) 1962 International Monetary Policy. 2d ed. London: Macmillan; New York: St. Martin’s.
Yeager, Leland B. (editor) 1962 In Search of a Monetary Constitution. Cambridge, Mass.: Harvard Univ. Press.
"Monetary Policy." International Encyclopedia of the Social Sciences. . Encyclopedia.com. 8 Apr. 2019 <https://www.encyclopedia.com>.
Monetary policy is the management of money, credit, and interest rates by a country’s central bank. Unfortunately, this short definition is clearly inadequate. What is money? What is credit? What is an interest rate? What is a central bank and how does it control them? And, most importantly, why should anyone care? The purpose of this entry is to answer these questions (for more detail, see the relevant chapters of Stephen G. Cecchetti 2006).
Money is an asset that is generally accepted as payment for goods and services or repayment of debt; money acts as a unit of account and serves as a store of value. That is, people use money to pay for things (it is a means of payment); quote prices in dollars, euros, yen, or the units of their currency (it is a unit of account); and use money to move purchasing power over time (it is a store of value). Credit is the borrowing and lending of resources. Some people have more resources than they currently need (they are savers) while others have profitable opportunities that they cannot fund (they are investors). Credit flows from the savers to the investors. And an interest rate is the cost of borrowing and the reward for lending. Since lenders could have done something else with their resources, they require compensation—interest is rent paid by borrowers.
The U.S. Federal Reserve System, the Bank of Japan, and the Bank of England are all central banks. Nearly every country in the world has a central bank. It is easiest to understand a central bank by looking at what it does (for a history of money and central banks, see Glyn Davies 2002). A modern central bank both provides an array of services to commercial banks (it is the bankers’ bank) and manages the government’s finances (it is the government’s bank). While not universally true, we will assume that only banks and governments have accounts at central banks. As the bank for bankers, the central bank holds deposit accounts and operates a system for interbank payments that enables commercial banks (the ones the public uses) to transfer balances in these accounts to one another. The central bank is also in a unique position to provide loans to commercial banks during times of crisis—more on this shortly.
Like any individual or business, the government needs a bank to make and receive payments, and the central bank does that job: it keeps an account for the government. In addition, the government gives the central bank the right to print money—that is, the paper currency that people use in everyday life.
At its most basic level, printing money is a very profitable business. A $100 bill costs only a few cents to print, but it can be exchanged for $100 worth of goods and services. It is logical then that national governments create a monopoly on printing money and use the revenue it generates to benefit the general public. Also, government officials know that losing control of the money printing presses means losing control of inflation.
The fact that the central bank has the license to issue money makes it unique. If individuals want to make a purchase, they need to have the resources to do it. So, for example, someone using a debit card to purchase groceries will have to have sufficient balances in a commercial bank account to cover it. If the grocery purchaser does not have sufficient resources of his own, he will need the financial assistance of someone who is willing to make him a loan. The central bank is different. If the central bank wants to buy something—say a government-issued bond—it can just create the liabilities to do it. Essentially it can issue the money. Importantly, the central bank can expand the size of its balance sheet at will. No one else can do this.
The central bank uses its ability to expand (and contract) its assets and liabilities to implement monetary policy. Figure 1 is a simple version of the central bank’s balance sheet, stripped of a number of incidental items (like buildings and gold). When looking at any balance sheet, the most important thing to remember is that assets equal liabilities, so any change in one side must be matched by a change in the other. When a central bank purchases a government security, increasing its assets, this is normally matched by an increase in commercial bank reserve liabilities. Banks hold these reserves both because they are required by law and in order to make interbank payments.
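Figure 1 is not reproduced here, but the double-entry bookkeeping it depicts is simple, and the open-market purchase described above can be sketched with illustrative numbers (the dollar figures and field names below are assumptions, not from the text):

```python
# Minimal sketch of the central bank balance-sheet identity:
# assets always equal liabilities, so a security purchase on the
# asset side is matched by an equal rise in commercial bank reserves.

central_bank = {
    "assets": {"securities": 500.0},
    "liabilities": {"currency": 400.0, "bank_reserves": 100.0},
}

def open_market_purchase(bank: dict, amount: float) -> None:
    """Buy government securities, paying by crediting sellers' banks
    with reserve balances at the central bank."""
    bank["assets"]["securities"] += amount
    bank["liabilities"]["bank_reserves"] += amount

open_market_purchase(central_bank, 50.0)

# Both sides rose by 50: the identity still holds.
assert sum(central_bank["assets"].values()) == sum(central_bank["liabilities"].values())
```

A sale works the same way in reverse, which is how the central bank contracts its balance sheet; either way, no one else in the economy can expand both sides of its own balance sheet at will like this.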
(For a discussion of the use of the money supply as a target, see Laurence Meyer [2001a]. For a detailed discussion of the monetary policy of the European Central Bank, see Otmar Issing et al. [2001].)
It is important to note that some central banks decide to use their ability to control the size of their balance sheet to target something other than interest rates. The natural alternative is the exchange value of their currency—that is, the number of dollars it takes to purchase the currency issued by another central bank. But, by the beginning of the twenty-first century, this had become increasingly rare. A central bank cannot control the total quantity of money and credit in the economy directly, and no modern central bank tries.
Finally, in addition to the size of their balance sheet, central banks have two additional tools. During times of financial stress, the central bank stands ready to provide loans to banks that are illiquid (so they cannot make payments) but still solvent (so their net worth is positive). Policymakers set interest rates on these loans. When the lending is done properly, this eliminates financial systemwide panics. In addition, central banks in many countries are given the power to set requirements governing how banks hold their assets. So, for example, they may require a certain level of reserve deposits, or prohibit the holding of common stock.
The central bank is part of the government. Whenever an agency of the government involves itself in the economy, people need to ask why. What makes individuals incapable of doing what they have entrusted to the government? In the case of national defense and pollution regulation, the reasons are obvious. Most people will not voluntarily contribute their resources to the army, nor will a country’s citizens spontaneously clean up their own air.
The rationale for the existence of a central bank is equally clear. While economic and financial systems may be fairly stable most of the time, when left on their own they are prone to episodes of extreme volatility. The historical record is filled with examples of failure, such as the Great Depression of the 1930s, when the American banking system collapsed and economic activity plummeted.
Central bankers adjust interest rates to reduce the volatility of the economic and financial systems by pursuing a number of objectives. The three most important are: (1) low and stable inflation; (2) high and stable real growth, together with high employment; and (3) stable financial markets. Let’s look at each of these in turn.
The rationale for keeping the economy inflation-free is straightforward. Standards, everyone agrees, should be standard. A pound should always weigh a pound, a measuring cup should always hold a cup, a yardstick should always measure a yard, and one dollar should always have the same purchasing power. Maintaining price stability enhances money’s usefulness both as a unit of account and as a store of value.
Prices are central to everything that happens in a market-based economy. They provide the information individuals and firms need to ensure that resources are allocated to their best uses. When a seller can raise the price of a product, that is supposed to signal that demand has increased, so producing more is worthwhile. Inflation degrades the information content of prices, reducing the efficient operation of the economy.
Turning to growth, central bankers work to dampen the fluctuations of the business cycle. Booms are good; recessions are not. In recessions, people lose their jobs and businesses fail. Without a steady income, people struggle to make their auto, credit card, and mortgage payments. Consumers pull back, hurting businesses that rely on them to buy products. Reduced sales lead to more layoffs, and so on. The longer the downturn goes on, the worse it gets.
Finally, there is financial stability. The financial system is like plumbing: when it works, it is taken for granted, but when it does not work, watch out. If people lose faith in banks and financial markets, they will rush to low-risk alternatives, and the flow of resources from savers to borrowers will stop. Getting a car loan or a home mortgage becomes impossible, as does selling a bond to maintain or expand a business. When the financial system collapses, economic activity also collapses.
Central banks use their ability to control their balance sheet to manipulate short-term interest rates in order to keep inflation low and stable, growth high and stable, and the financial system stable. But what makes monetary policymakers successful? Today, there is a clear consensus that to succeed a central bank must be: (1) independent of political pressure; (2) accountable to the public; (3) transparent in its policy actions; and (4) clear in its communications with financial markets and the public.
Independence is the most important of these elements. Successful monetary policy requires a long time horizon. The impact of today’s decisions will not be felt for a while—not for several years, in most instances. Democratically elected politicians are not a patient bunch; their time horizon extends only to the next election. Politicians are encouraged to do everything they can for their constituents before the next election—including manipulating interest rates to bring short-term prosperity at the expense of long-term stability. The temptation to forsake long-term goals for short-term gains is simply impossible to resist. Given the ability to choose, politicians will keep interest rates too low, raising output and employment quickly (before the election), but resulting in inflation later (after the election).
Knowing these tendencies, governments have moved responsibility for monetary policy into a separate, largely apolitical, institution. To insulate policymakers from the daily pressures faced by politicians, governments must give central bankers control over their budgets, authority to make irreversible decisions, and long-term appointments.
There is a major problem with central bank independence: It is inconsistent with representative democracy. Politicians answer to the voters; by design, independent central bankers do not. How can people have faith in the financial system if there are no checks on what the central bankers are doing? The economy will not operate efficiently unless policymakers are trusted to do the right thing.
The solution to this problem is twofold. First, politicians establish the goals for the independent central bankers, and second, monetary policymakers publicly report their progress in achieving those goals. Explicit goals foster accountability and disclosure requirements create transparency. While central bankers are powerful, elected representatives tell them what to do and then monitor their progress.
The institutional means for assuring accountability and transparency differ from one country to the next. In some countries, such as the United Kingdom and Chile, the government establishes an explicit numerical target for inflation. In others, such as the United States, the central bank is asked to deliver price stability as one of a number of objectives (for a discussion of the structure of central bank objectives, see Laurence Meyer [2001b]).
In the early 1980s, nearly two out of three of the countries in the world were experiencing inflation in excess of 10 percent per year. By the early twenty-first century, the figure was one in six. Two decades ago nearly one country in three was contracting; by 2005, five in six countries were growing at a rate in excess of 2 percent per year. But not only has inflation been lower and output higher, both inflation and output appear to be more stable. And careful empirical analysis shows that monetary policy is a likely source of this low, stable inflation and high, stable growth (see Cecchetti, Flores-Lagunes, and Krause 2006).
Central bankers’ success can be traced to their ability to control interest rates. And their ability to manipulate interest rates relies on their control of the size of their balance sheet. This, in turn, requires that banks and individuals actually demand central bank liabilities. That is, people have to want to hold the currency issued by central banks, and commercial banks have to demand reserves. Won’t the day come when no one wants this stuff anymore? And when that happens, won’t monetary policy disappear?
The answer is almost surely no. While it is true that the creation of a secure and anonymous substitute for paper currency will ultimately cause dollar bills and euro notes to disappear, reserves are different. The central bank operates an interbank payments system based on reserves. It does this to ensure that, even during periods of crisis, banks can continue to make payments. And to ensure that commercial banks use their payments system, the central bank offers cheap access to this system—that is, it subsidizes the cost of the system’s operation. So long as banks want reserves, there will be monetary policy (for a discussion of the challenges facing monetary policymakers, see Gordon Sellon and Chairmaine Buskas [1999] and Laurence Meyer [2001c]).
Cecchetti, Stephen G. 2006. Money, Banking, and Financial Markets. New York: McGraw Hill–Irwin.
Cecchetti, Stephen G., Alfonso Flores-Lagunes, and Stefan Krause. 2006. Has Monetary Policy Become More Efficient? A Cross-Country Analysis. Economic Journal 116 (4): 408–433.
Davies, Glyn. 2002. The History of Money from Ancient Times to the Present Day. 3rd ed. Cardiff, U.K.: University of Wales Press.
Issing, Otmar, Ignazio Angeloni, Vitor Gaspar, and Oreste Tristani. 2001. Monetary Policy in the Euro Area: Strategy and Decision-Making at the European Central Bank. Cambridge, U.K.: Cambridge University Press.
Meyer, Laurence H. 2001a. Does Money Matter? The 2001 Homer Jones Memorial Lecture, Washington University, Saint Louis, Missouri, March 28. http://www.federalreserve.gov/boarddocs/speeches/2001/20010328/default.htm.
Meyer, Laurence H. 2001b. Inflation Targets and Inflation Targeting. Remarks at the University of California at San Diego Economics Roundtable, San Diego, California, July 17. http://www.federalreserve.gov/boarddocs/speeches/2001/20010717/default.htm.
Meyer, Laurence H. 2001c. The Future of Money and of Monetary Policy. Remarks at the Distinguished Lecture Program, Swarthmore College, Swarthmore, Pennsylvania, December 5. http://www.federalreserve.gov/boarddocs/speeches/2001/20011205/default.htm.
Sellon, Gordon H., Jr., and Chairmaine R. Buskas, eds. 1999. New Challenges for Monetary Policy: A Symposium Sponsored by the Federal Reserve Bank of Kansas City. Kansas City, MO: Federal Reserve Bank of Kansas City.
"Policy, Monetary." International Encyclopedia of the Social Sciences. . Encyclopedia.com. 8 Apr. 2019 <https://www.encyclopedia.com>.
The central agency that conducts monetary policy in the United States is the Federal Reserve System (the Fed). It was founded by the U.S. Congress in 1913 under the Federal Reserve Act. The Fed is a highly independent agency that is insulated from day-to-day political pressures, accountable only to Congress. It is a federal system, consisting of a board of governors, twelve regional Federal Reserve Banks (FRBs) and their twenty-five branches, the Federal Open Market Committee (FOMC), the Federal Advisory Council and other advisory and working committees, and 2,900 member banks, mostly national banks. By law, all federally chartered banks, that is, national banks, are automatic members of the system. State-chartered banks may elect to become members.
The seven-member board of governors, headquartered in Washington, D.C., is the core agency of the Fed, overseeing the entire operation of U.S. monetary policy. The FRBs are the operating arms of the system and are located in twelve major cities, one in each of the twelve Federal Reserve Districts around the nation. The twelve-member FOMC is the most important policy-making entity of the system. The voting members of the committee are the seven members of the board, the president of the FRB of New York, and four of the other eleven FRB presidents, each serving one year on a rotating basis. The other seven nonvoting FRB presidents still attend the meetings and participate fully in policy deliberations.
Monetary policy, one of the most influential of government policies, aims at affecting the economy through the Fed's management of money and interest rates. The narrowest definition of money is M1, which includes currency, checking account deposits, and traveler's checks. Time deposits, savings deposits, money market deposits, and other financial assets can be added to M1 to define other monetary measures, such as M2 and M3. Interest rates are simply the costs of borrowing. The Fed conducts monetary policy through bank reserves, which are the portion of the deposits that banks and other depository institutions are required to hold either as vault cash or as deposits with their home FRBs. Excess reserves are the reserves in excess of the amount required. These additional funds can be transacted in the reserves market (the federal funds market) to allow overnight borrowing between depository institutions to meet short-term needs in reserves. The rate at which such private borrowings are charged is the federal funds rate.
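The reserve accounting described above is straightforward arithmetic, sketched here with an illustrative 10 percent requirement and hypothetical dollar amounts (neither figure comes from the text):

```python
# Required vs. excess reserves, as defined above: a bank must hold a
# fraction of its deposits as reserves; anything beyond that is excess
# and can be lent overnight in the federal funds market.

def required_reserves(deposits: float, ratio: float = 0.10) -> float:
    """Reserves the bank must hold against its deposits."""
    return deposits * ratio

def excess_reserves(total_reserves: float, deposits: float,
                    ratio: float = 0.10) -> float:
    """Reserves held beyond the requirement (lendable in the funds market)."""
    return total_reserves - required_reserves(deposits, ratio)

# A bank holding $12 million of reserves against $100 million of deposits:
req = required_reserves(100.0)      # 10.0 ($ millions) must be held
exc = excess_reserves(12.0, 100.0)  # 2.0 ($ millions) available overnight
```

Open-market operations work on exactly this margin: by adding or draining reserves, the Fed changes how much excess the system holds, which moves the federal funds rate banks charge one another.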
Monetary policy is closely linked with the reserves market. With its policy tools, the Fed can control the reserves available in the market, affect the federal funds rate, and subsequently trigger a chain of reactions that influence other short-term interest rates, foreign-exchange rates, long-term interest rates, and the amount of money and credit in the economy. These changes will then bring about adjustments in consumption, affect saving and investment decisions, and eventually influence employment, output, and prices.
The long-term goals of monetary policy are to promote full employment and stable prices and to moderate long-term interest rates. Most economists believe price stability should be the primary objective, since a stable level of prices is key to sustained output and employment, as well as to maintaining moderate long-term interest rates. Relatively speaking, it is easier for central banks to control inflation (i.e., the continual rise in the price level) than to influence employment directly, because the latter is affected by such real factors as technology and consumer tastes. Moreover, historical evidence indicates a strong positive correlation between inflation and the amount of money.
While the financial markets react quickly to changes in monetary policy, it generally takes months or even years for such policy to affect employment and growth, and thus to reach the Fed's long-term goals. The Fed, therefore, needs to be forward-looking and to make timely policy adjustments based on forecasted as well as actual data on such variables as wages and prices, inflation, unemployment, output growth, foreign trade, interest rates, exchange rates, money and credit, and conditions in the markets for bonds and stocks.
Since the early 1980s, the Fed has been relying on the overnight federal funds rate as the guide to its position in monetary policy. The Fed has at its disposal three major monetary policy tools: reserve requirements, the discount rate, and open-market operations.
Under the Monetary Control Act of 1980, all depository institutions, including commercial banks and savings and loans, are subject to the same reserve requirements, regardless of their Fed member status. As of October 2005, the structure of reserve requirements was 0 percent for all checkable deposits up to $7 million (the exemption), 3 percent for such deposits from above $7 million to $47.6 million (the low-reserve tranche), and 10 percent for the amount above $47.6 million. Both the exemption and the low-reserve tranche are subject to annual adjustment by statute to reflect changes in reservable liabilities at all depository institutions. No reserves are required for nonpersonal time deposits and Eurocurrency liabilities.
Reserve requirements affect the so-called multiple money creation. Suppose, for example, the reserve requirement ratio is 10 percent. A bank that receives a $100 deposit (Bank 1) can lend out $90. Bank 1 can then issue a $90 check to a borrower, who deposits it in Bank 2, which can then lend out $81. As the process continues, it will eventually involve a total of $1,000 ($100 + $90 + $81 + $72.90 + … = $1,000) in deposits. The initial deposit of $100 is thus multiplied tenfold. With a lower (higher) ratio, the multiple involved is larger (smaller), and more (less) money can be created.
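The deposit-creation series above can be checked numerically. A minimal sketch (the function name and round count are illustrative, not part of the source):

```python
# Multiple money creation: each bank keeps a fraction `reserve_ratio` of a
# new deposit as required reserves and lends out the rest. Summing the
# resulting geometric series 100 + 90 + 81 + ... gives total deposits.

def total_deposits(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the deposit-creation series for a given reserve ratio."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # next bank receives what was lent out
    return total

# With a 10 percent requirement, a $100 deposit supports $1,000 in total
# deposits, matching the closed form initial_deposit / reserve_ratio.
print(round(total_deposits(100, 0.10)))  # 1000
print(round(100 / 0.10))                 # 1000
```

The closed form follows because the series is geometric with ratio (1 − reserve ratio), so it converges to the initial deposit divided by the reserve ratio.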
Reserve requirements are not used as often as the other policy tools. Since funds grow in multiples, it is difficult to administer small adjustments in reserves with this tool. Also, banks always have the option of entering the federal funds market for reserves, further limiting the role of reserve requirements. Except for the yearly adjustments of the exemption and the low-reserve tranche, the last change in the reserve requirements was in April 1992, when the upper ratio was reduced from 12 to 10 percent.
Banks and other depository institutions may acquire loans through the discount window at their home FRB to meet their short-term needs against, for example, unexpected large withdrawals of deposits. The interest rate charged on such loans is the discount rate. A reduction of the rate encourages more borrowing, and through money creation, bank deposits increase and reserves increase. A rate hike works in the opposite direction. Since it is more efficient, however, to adjust reserves through open-market operations (discussed below), the amount of discount window lending has been unimportant, accounting for only a small fraction of total reserves. Perhaps a more meaningful function served by the discount rate is to signal the Fed's stance on monetary policy, similar to the role of the federal funds rate.
By law, each FRB sets its discount rate every two weeks, subject to the approval of the board of governors. The gradual nationalization of the credit market over the years, however, has resulted in a uniform discount rate. A major revision in the discount window programs took effect in January 2003 to enhance the Fed's lending function. The FRBs began to offer three discount window programs to depository institutions: the primary credit program for financially sound institutions, the secondary credit program for institutions not eligible for primary credit, and the seasonal credit program for small depository institutions that have seasonal fluctuations in funding needs.
Discount-rate adjustments, usually going hand in hand with changes in the federal funds rate, have been dictated by cyclical conditions of the economy, and the frequency of adjustments has varied. For instance, the discount rate moved up from a low of 3 percent in May 1994 to 6 percent in January 2001 to counter possible overheating and inflation from the robust economic growth since the mid-1990s. The rate was then lowered twelve times, to a bare 0.75 percent in two years, to help the economy recover from its 2001 recession. From June 2004 to September 2005, the primary (4.75 percent) and secondary (5.25 percent) credit rates were both raised eleven times to cool off the economy and especially the overheated housing market.
The most important and flexible tool of monetary policy is open-market operations, that is, trading U.S. government securities in the open market. In 2004 the Fed made $7.55 trillion of purchases and $7.51 trillion of sales of Treasury securities (mostly short-term Treasury bills). As of June 2005, the Fed held $721.92 billion of U.S. Treasury securities, roughly 9.2 percent of the total federal debt outstanding.
The FOMC directs open-market operations (and also advises about reserve requirements and discount-rate policies). The day-to-day operations are determined and executed by the Domestic Trading Desk (the Desk) at the FRB of New York. Since 1980 the FOMC has met regularly eight times a year in Washington, D.C. At each of these meetings, it votes on an intermeeting target federal funds rate, based on the current and prospective conditions of the economy. Until the next meeting, the Desk will manage reserve conditions through open-market operations to maintain the federal funds rate around the given target level. When buying securities from a bank, the Fed makes the payment by increasing the bank's reserves at the Fed. More reserves will then be available in the federal funds market and the federal funds rate falls. By selling securities to a bank, the Fed receives payment in reserves from the bank. Supply of reserves falls and the funds rate rises.
The Fed has two basic approaches in running open-market operations. When a shortage or surplus in reserves is likely to persist, the Fed may undertake outright purchases or sales, creating a long-term impact on the supply of reserves. Nevertheless, many reserve movements are temporary. The Fed can then take a defensive position and engage in transactions that impose only temporary effects on the level of reserves. A repurchase agreement (a repo) allows the Fed to purchase securities with the agreement that the seller will buy them back within a short period, sometimes overnight and mostly within seven days. The repo creates a temporary increase in reserves, which vanishes when the term expires. If the Fed wishes to drain reserves temporarily from the banking system, it can adopt a matched sale-purchase transaction (a reverse repo), under which the buyer agrees to sell the securities back to the Fed, usually in fewer than seven days.
"Monetary Policy." Encyclopedia of Business and Finance, 2nd ed.. . Encyclopedia.com. 8 Apr. 2019 <https://www.encyclopedia.com>.
The size of the money supply (the amount of money in circulation) is one of the most powerful influences on an economy. In general, when more money is circulating in an economy, there is more demand for goods and services, so businesses produce more, and more people have jobs. By contrast, when the money supply shrinks, there is less demand for goods and services, businesses restrict their activities, and fewer people have jobs. Monetary policy is the government practice of adjusting the money supply in order to bring about a change in the economy.
In developed, capitalist economies (in which businesses are generally owned by private individuals rather than the government), there are central banks that regulate the banking industry and oversee the country’s money supply. For instance, the United States’ central bank, the Federal Reserve System (often called the Fed), keeps watch over the U.S. economy and makes adjustments to the money supply in the hope of reaching certain economic goals. These goals usually include making sure there are enough jobs for people who want them, guarding against inflation (the general rising of prices), minimizing the damage caused by cycles of economic boom and bust, and otherwise promoting the long-term health of the economy.
Monetary policy does not adjust the money supply by changing the amount of currency (government-issued bills and coins) in circulation. Much of a country’s money supply is actually paperless money created by bank loans. When banks loan money to consumers and businesses, they pump far more money into the economy than actually exists in the form of currency. This money takes the form of balances in individual checking and other bank accounts.
Prior to the Great Depression (a decline in the world economy that began in 1929 and lasted through much of the following decade), countries did not have well-defined economic policies. The so-called classical economists, who at the time dominated economic thought in capitalist countries, believed that economies regulated themselves through market forces (such as supply and demand) and should remain free from government intervention. In capitalist countries there was actually some amount of regulation by the government; in the United States, for instance, the Federal Reserve System had been established in 1913 in response to financial panics that caused many banks to fail. But individual banks still had more control over the money supply than the government did. The problems facing the economy during the Depression, however, could not be solved by market forces. Roughly one-third of the American labor force was out of work and so did not have any wages to spend on goods and services (in economic terms, there was a drop in demand). As a result, companies had no incentive to produce goods and services, which (to complete the circle) meant that they could not hire new workers. The U.S. economy was in a deep hole, and classical economic principles offered no ideas for how to get out of it.
Against this backdrop the British economist John Maynard Keynes (1883–1946) published The General Theory of Employment, Interest, and Money (1936), which revolutionized the study of economics as well as the relationship between government and the economy. Among other ideas, Keynes argued that the government could compensate for the loss of demand (the desire to purchase goods and services) that characterized the Depression (and that characterized milder forms of economic downturn, called recessions). He said that governments could do this by spending money on public works projects (such as building roads and dams) and by providing relief payments to people who were out of work. Additionally, governments could use tax policy to affect demand: when the government reduces the amount of money that people pay in taxes, people can spend more of their money on goods and services. The reverse can be expected to happen when governments raise taxes. This use of government spending and taxes to regulate the economy is called fiscal policy.
Keynes also suggested that governments could adjust the money supply to manage demand in the economy, thus laying the groundwork for the Fed’s more active role in the economy in the years after World War II (which ended in 1945). For decades after the Depression, Keynes’s ideas dominated economic thought. U.S. presidents and Congressional leaders attempted to steer the economy through difficulties using fiscal policy, while the Fed attempted to control inflation and unemployment through monetary policy.
To understand the Fed’s (or any central bank’s) monetary policy, it is necessary to understand the basics of the banking system.
Banks take in money from some customers (called depositors) and lend it out to other customers. People who borrow money from a bank pay interest (a fee for the use of that money), and this interest is the chief source of profits for most banks. Banks therefore typically want to make as many loans as possible at any given time. If they loaned all the deposited money out, however, depositors might worry that they would not be able to get their money back in cash. To give the public confidence in the banking system, every time a bank receives a deposit from a customer, it must set aside a portion of that money and keep it in the bank’s reserves. In the United States the Fed decides what the size of that portion will be. Because very few depositors will ask for the majority of their money in cash at any given time, the Fed only requires banks to set aside a small fraction of those deposits. The excess can be loaned out to other customers.
For instance, imagine for simplicity’s sake that the Fed currently requires banks to set aside 10 percent of all deposits before making loans. If you deposited $10,000 in your bank, then your bank would have to set aside $1,000 of that money to make sure that it can meet depositors’ needs. It could then lend the remaining $9,000 of your account balance to someone who wanted it and who qualified for a loan.
The effect this has on the money supply is tremendous. Notice that the bank has turned your initial deposit of $10,000 into $19,000. This is possible because your $10,000 exists only on paper, as a bank balance. You have full use of your bank balance, and anytime you take money out of your account, you will put it to work in the economy. If you write a $500 check to your landlord, she will deposit that check in her bank account, and her bank will set aside a portion of that money for its reserves and lend out the rest. Meanwhile, the person who borrowed $9,000 from your bank will similarly use that money to pay for goods and services. The businesses that sell these goods and services will then deposit their profits in their own banks, which will use those deposits to finance more loans.
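The running example — a $10,000 deposit under a 10 percent reserve requirement — can be traced round by round. A small sketch (names are illustrative):

```python
# Trace the example: your $10,000 deposit plus the $9,000 loan made from it
# puts $19,000 of money to work; repeating the deposit-and-lend cycle
# approaches $100,000 in total deposits.

def trace_money_creation(initial, reserve_ratio, rounds):
    """Return the deposit created in each round of the lending cycle."""
    deposits = []
    amount = float(initial)
    for _ in range(rounds):
        deposits.append(amount)
        amount *= (1 - reserve_ratio)  # portion re-lent and re-deposited
    return deposits

first_two = trace_money_creation(10_000, 0.10, 2)
print(sum(first_two))   # 19000.0 -- your balance plus the first loan
print(10_000 / 0.10)    # 100000.0 -- limit of the full process
```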
Anytime a bank can add to its reserves, it can (and probably will) make more loans. When a bank makes loans, it increases the country’s money supply. Therefore, the Fed changes the money supply by changing the amount of money banks have in reserve. It has three tools for affecting reserve amounts.
First, the Fed can simply change reserve requirements. If, as in the above example, current Fed requirements specify that 10 percent of deposits must be set aside, and the Fed wants to restrict the money supply, it might instruct banks to begin setting aside 12 percent of deposits. Because that extra 2 percent represents money that must be kept in reserve rather than loaned out and thereby allowed to multiply, this would have an immediate and drastic effect on the money supply. Conversely, lowering reserve requirements to 8 percent would cause an enormous increase in the amount of money circulating through the banking system. Because the effects of changing reserve requirements are so broad, the Fed does not use this monetary policy tool very often.
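The sensitivity described above follows from the deposit multiplier, 1 divided by the reserve ratio. A quick illustration of how the 8, 10, and 12 percent requirements mentioned in the text move the maximum deposits supported by the same $10,000:

```python
# The deposit multiplier is 1 / reserve_ratio; small changes in the
# requirement swing the money the banking system can support.

for ratio in (0.08, 0.10, 0.12):
    multiplier = 1 / ratio
    print(f"{ratio:.0%} requirement -> multiplier {multiplier:.1f}, "
          f"max deposits ${10_000 * multiplier:,.0f}")
```

For example, moving from a 10 percent to a 12 percent requirement cuts the multiplier from 10 to about 8.3, shrinking the deposits a given reserve base can support by roughly a sixth.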
The Fed’s second monetary policy tool is to change an interest rate called the discount rate. The Fed provides banks with a special service called the discount window. The discount window is not literally a window but rather an outlet for borrowing money. If a bank suddenly found that it did not have enough money to meet the minimum reserve requirements, it could use the Fed’s discount window to cover its shortfall. Just as an individual pays a fee called interest to borrow from a bank, so does a bank when it borrows from the Fed. If the interest rate charged at the discount window is high, banks are not very likely to borrow money from the Fed in this way. If the interest rate charged at the discount window is low, banks are more likely to borrow and, by extension, make loans. In reality, many banks worry that borrowing from the discount window will signal to the Fed that they are having financial difficulties; therefore, most banks choose to borrow money from each other to cover reserve shortfalls.
The third tool the Fed uses is open-market operations. This is the process of buying or selling government securities (low-risk, government-backed investments in the form of Treasury bills, Treasury notes, and Treasury bonds) on the open market. In other words, the Fed acts just like any other investor in the financial markets, contacting a dealer of securities to purchase or sell its securities, depending on how it wants to affect the money supply. If the Fed buys securities, it injects money into the economy, because money is coming out of the Fed's own account and being placed in the bank accounts of securities dealers, where it will multiply according to the loan process outlined above. If the Fed sells securities, however, securities dealers write checks to the Fed, which means that the money represented by those checks leaves the commercial banking system, diminishing the money supply. Open-market operations are by far the Fed's most commonly used monetary policy tool today.
Starting in the 1990s there was a great deal of media coverage of the Fed’s actions in regard to interest rates, but few people understand what the Fed really does or what interest rates newscasters are talking about when they mention the Fed.
While the Fed controls the discount rate directly, the discount rate is not the most important interest rate to the wider economy. As noted above, banks usually borrow from one another when they are short on reserves. The interest rate at which they borrow this money is called the federal funds rate. Because this represents one of the main ways, other than deposits, that banks get their money in today’s economy, changes in the federal funds rate have a large effect on the money supply.
Therefore, while the Fed’s tools for monetary policy do not strictly include the ability to set the federal funds rate, in actuality it does so by announcing its goals for that rate. If the Fed’s chairman says that he would like to see the federal funds rate drop by 0.5 percent, the federal funds rate will drop by 0.5 percent. This happens because the Fed backs up its target for the federal funds rate with open-market operations that change the money supply. When more money is in circulation, banks charge less for the use of borrowed money. If the Fed wants a drop of 0.5 percent, then, it increases the money supply by buying enough government securities to bring about that amount of fluctuation.
All other interest rates in the economy tend to be based on the federal funds rate. For instance, the prime interest rate (the rate banks use to determine how much interest to charge people who take out home or business loans), is usually about 3 percentage points higher than the federal funds rate.
"Monetary Policy." Everyday Finance: Economics, Personal Money Management, and Entrepreneurship. . Encyclopedia.com. 8 Apr. 2019 <https://www.encyclopedia.com>.
In the United States, heterodox proposals for monetary manipulation tend to flourish in times of economic crisis. The farm lobbies, in particular, have been disposed to back such measures when seeking relief from agricultural distress. They had done so in the 1870s when supporting the Greenback movement to expand the currency issue. They did so again in the 1890s when rallying behind the Populists and then William Jennings Bryan's campaigns for "free silver."
In 1932, this tradition took on a renewed vitality. The argument for inflationary policies to pump up farm prices was now articulated in more sophisticated form. Through the research of Cornell University agricultural economist George F. Warren and his collaborator, F. A. Pearson, doctrines that could formerly be dismissed as the work of "cranks" and "amateurs" were given at least a pseudoscientific veneer. From their base at the state of New York's land grant college, Warren and Pearson enjoyed proximity and visibility to the state's political establishment. And they had won converts to their views among some who would later occupy high positions in President Franklin D. Roosevelt's administrations—most notably, Henry Morgenthau, Jr., a future secretary of the treasury.
Warren and Pearson rested their arguments on elaborate statistical investigations of the behavior of commodity prices, on the one hand, and the price of gold, on the other. Their findings suggested that there was a high positive correlation between the two. It thus seemed to follow that the answer to depressed farm prices could be found in raising the price of gold. This approach to policy, however, would be incompatible with a U.S. commitment to gold convertibility of the dollar at a fixed parity.
Another version of this line of argument was supplied by Yale University's Irving Fisher, an economist recognized for his analytic ingenuity, though one who was also regarded as a bit suspect for his eccentricities (such as his ardent advocacy of prohibition and the eugenics movement) and for his unfortunate pronouncement in September 1929 that the stock market had reached a permanently high plateau. Fisher's empirical studies in the mid-1920s had indicated that the general price level—with a lag of seven months or so—led changes in the volume of aggregate economic activity. More specifically, a rising price level stimulated the volume of trade, and a declining price level depressed it. Since 1930, the American economy had experienced severe deflation: It was thus not surprising that the Depression had deepened. By 1932, Fisher was convinced that the remedy for this condition was to be found in "reflating" the general price level back to its pre-Depression elevation. When the targeted price level had been reached, the price level should be stabilized and the economy would thereafter enjoy stability. He insisted that monetary expansion—when no longer constrained by the gold standard—could produce the needed reflation. Raising the price of gold should be one of the measures deployed for this purpose.
The state of the American financial system when Roosevelt was inaugurated in March 1933 provided a moment of opportunity when suspension of the dollar's gold convertibility was both necessary and acceptable. Between his election in November 1932 and his assumption of the presidency, the nation had experienced unprecedented runs on banks and drains on the country's gold reserves serious enough to threaten their exhaustion. In the face of this crisis, Roosevelt was obliged to declare a "bank holiday" and to suspend gold convertibility, which he did by executive order as his first substantive official act. Measures taken in the months immediately thereafter effectively nationalized the monetary gold stock by outlawing private holdings.
Rupturing the tie to gold meant that economic policymakers had a much freer hand to experiment. Congress further widened the president's range of options with the passage of an amendment to the Agricultural Adjustment Act of 1933 (known as the Thomas Amendment, in recognition of the Oklahoma senator who sponsored it). This legislation conveyed discretionary power to the president to: (1) issue up to $3 billion in greenbacks (a currency without metallic backing); (2) establish the gold content of the dollar with the restriction that it could not be reduced by more than 50 percent; and (3) fix the value of silver and provide for its unlimited coinage and establish bimetallism. It was not clear, however, which of these powers (if any) would be exercised.
On October 22, 1933, Roosevelt announced that he had ordered a government agency to buy gold "at prices determined from time to time," that "this was a policy and not an expedient," and that this action was "not to be used merely to offset a temporary fall in prices." (The presence of Warren and of James Harvey Rogers—a Yale economist who shared Fisher's views—when this initiative was launched indicated that reflation of the price level was the objective of the exercise.) On each business day in the ensuing weeks, Roosevelt met with Morgenthau to fix the day's buying price. When price-elevating bidding was terminated in January 1934, the price of gold had reached $35 per ounce, at which point it was pegged. Before the country left the gold standard, its official price had been $20.67. Despite this activity, the general price level had not risen as the advocates of the gold purchase program had predicted.
In early 1934, the Roosevelt administration was confronted with mounting political pressures—particularly from senators representing silver-mining constituencies—to do something to raise the price of silver. There was a fundamental difference between the gold purchase program mounted in the autumn of 1933 and the silver purchase program that was later adopted. The former was an instance of a deliberate policy of preference that allegedly had some analytic mooring. The latter was undertaken reluctantly in response to congressional pressures that were difficult to contain. Administration officials counted it as a success that they had at least managed to forestall enactment of legislation that would mandate purchase of prescribed quantities of silver. The agreement struck with Congress in May 1934 instead set out a general goal: Treasury purchases should aim at an accumulation in which silver amounted to one-third of the value of the gold stock. However, no timetable for this outcome was specified. Though the Department of the Treasury was slow to implement this policy, it managed to spend $1.6 billion on silver acquisitions between 1934 and 1941.
Between them, gold and silver acquisitions substantially augmented the nation's monetary base and made major contributions to the swelling of excess reserves in commercial banks. By contrast, the Federal Reserve's contribution to monetary ease in 1933 and 1934 was slight. The Federal Reserve—without enthusiasm—did acquire a modest quantity of government securities between May and November 1933 and then suspended open market operations until 1937. The 1933 purchases appear to have been motivated by the Board's fear that, in the absence of some activity on its part, the administration might be provoked to issue greenbacks. The discount rate, which stood at 3.5 percent in March 1933, was reduced by seven of the twelve District Banks and, in New York, it fell to 2 percent.
The Federal Reserve's role began to change in 1935 with passage of a Banking Act that reorganized its structure. This legislation was largely the handiwork of Marriner Eccles, a Utah banker whose views on depression-fighting called for enlarged government spending financed through deficits, who had been recruited to Washington to serve as its chairman. The Banking Act of 1935 was designed to serve three purposes: (1) to change the composition of the governing body by displacing two ex officio members—the secretary of the treasury and the comptroller of the currency—and by restyling the Federal Reserve Board as the Board of Governors of the Federal Reserve System; (2) to restructure the Open Market Committee by placing its decisive weight with the Board of Governors in Washington by reducing the voting strength of the Federal Reserve District Banks; and (3) to increase the power of the central Board over the determination of discount rates and to widen its discretionary latitude over required reserve ratios.
Eccles did not delay long in using his new authority over required reserve ratios. It was then believed that the Board's capacity to restrain lending by commercial banks would be compromised when they held abnormally large sums in excess reserves, as appeared to be the case in 1936 and early 1937. Accordingly, the Board of Governors acted to increase its leverage by exercising its newly conveyed power to double required reserve ratios. Board action was taken in two steps: (1) required reserve ratios were raised half the distance toward the legal maximum in August 1936; and (2) increases to the full limit allowed by law were ordered in the spring of 1937. All of this was seen as precautionary and not as a retreat from monetary ease. After all, the discount rate in New York in September 1937 was 1 percent and it was set at 1.5 percent by the other District Banks. Eccles insisted that the "supply of money to finance increased production [was] ample."
The Board's decisions on this matter have been faulted on grounds that they provoked the recession of 1937 and 1938, which set in when the economy was operating well below its full employment capacity. Two latter-day commentators, Milton Friedman and Anna Jacobson Schwartz, have assigned major responsibility for this sharp downturn to the Federal Reserve's actions in doubling required reserve ratios. Their argument rests on the view that excess reserves, which the Board held to be needlessly excessive, were, in fact, desired as liquidity cushions in circumstances of depression. Hence, the Board's intervention in shrinking them led banks to constrain lending activities. A different interpretation—favored by New Deal contemporaries—held that the recession had been triggered by a turnaround in government's fiscal impact on the economy: that is, from being expansionary in 1936 to contractionary in 1937.
The administration's policy response to the recession—when announced in April 1938—emphasized fiscal stimulants in a "spend-lend program." Then, for the first time, Roosevelt embraced deficit financing as a positive good, rather than an unavoidable evil. The Federal Reserve participated by lowering required reserve ratios by one-third. Subsequently the volume of excess reserves again grew. It was not until November 1941, however, that the Board once more set required reserve ratios at the maximum level allowed by law.
See Also: BANK PANICS (1930–1933); ECCLES, MARRINER; ECONOMY, AMERICAN; FEDERAL RESERVE SYSTEM; GOLD STANDARD; MORGENTHAU, HENRY T., JR.; RECESSION OF 1937.
Barber, William J. Designs within Disorder: Franklin D. Roosevelt, the Economists, and the Shaping of American Economic Policy, 1933–1945. 1996.
Blum, John Morton. From the Morgenthau Diaries, Vol. 1: Years of Crisis, 1928–1938; Vol. 2: Years of Urgency, 1938–1941; Vol. 3: Years of War, 1941–1945. 1959–1967.
Chandler, Lester V. America's Greatest Depression, 1929–1941. 1970.
Eccles, Marriner S. Beckoning Frontiers: Public and Personal Recollections. 1951.
Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867–1960. 1963.
Johnson, G. Griffith. The Treasury and Monetary Policy, 1932–1938. 1939.
"Monetary Policy." Encyclopedia of the Great Depression. Encyclopedia.com. 8 Apr. 2019 <https://www.encyclopedia.com>.
A Proposal to the Health and Care Professions Council for the inclusion of Cognitive Behavioural Therapy education within the current curriculum to enhance the skills of Physiotherapy students.
The aim of this wiki page is to present to the Health and Care Professions Council (HCPC) the importance of including Cognitive Behavioural Therapy (CBT) training in physiotherapy education. This will be illustrated through a synthesis of the current evidence base, the provision of case studies and the construction of a sample module.
Critically appraise the importance of CBT in physiotherapy practice for the benefit of current physiotherapy students.
Justify and synthesise evidence for the inclusion of CBT education for physiotherapy students to the HCPC.
Formulate an appropriate CBT module that enhances the ability of physiotherapy students to address the psychosocial aspects of patient care in various settings.
Demonstrate the construction of an accessible social media resource that enhances the delivery of the CBT module material.
In the early 1960s, the psychiatrist Aaron Beck developed cognitive therapy after investigating the psychoanalytic concepts of depression. During his studies, he discovered that depressed patients spontaneously experienced automatic negative thoughts, which fell into three categories: negative thoughts about themselves, the world, and the future. After spending time with these patients, Beck recognised that these automatic negative thoughts were closely related to the individual's emotions. He began to notice rapid improvements amongst these individuals after helping them identify, evaluate and respond to their maladaptive thinking and behavioural patterns. To test the effects of this form of cognitive therapy, a randomised controlled study was conducted in depressed patients. Results showed cognitive therapy to be as effective as imipramine, an antidepressant. These findings were a milestone: a form of talk therapy had held its own against a pharmacological treatment. Today, CBT has been shown to be effective in numerous clinical trials across a range of disorders.
CBT stems from the cognitive model of psychopathology. This theory looks at how individuals' perceptions and thoughts about situations influence their emotional, behavioural and physiological reactions. For example, when individuals are stressed, their thoughts tend to be distorted and dysfunctional. If individuals learn to identify, address and correct these thoughts, their stress levels tend to decrease leading to more functional behaviour.
CBT teaches individuals to confront their irrational thoughts in a more realistic and adaptive manner, so that they experience improvements in their emotional state and behaviour. CBT can include a number of cognitive and behavioural techniques, including self-instruction and adaptive coping strategies. It involves six overlapping phases that can be adapted to a diverse set of populations with various disorders. The phases represent the different theoretical components of the multidimensional treatment. Even though CBT follows a logical sequence, treatment should be flexible and individualised to the patient's needs.
This phase involves assessing information given from the patient and family through a series of self-reported measures and observational procedures to identify the degree of psychosocial impairment.
Information provided determines the most appropriate course of action.
Patients are often asked to maintain a self-report diary.
Seeks to help patients challenge and question their maladaptive thoughts (e.g. “I am a failure in life because I am in pain”).
Collaboratively set goals with the patient.
Therapist uses various cognitive and behavioural strategies to teach patients how to deal with obstacles in their day to day lives.
Collaboratively focus on problem solving strategies i.e. relaxation techniques/pacing/graded exposure/coping strategies.
Patients are given homework to help reinforce the skills that they have learned.
Patients review homework, practise the skills that have been taught, and consider potential problematic situations that may arise.
Patients evaluate their progress and attribute success to their own coping efforts.
All aspects of therapy are reviewed.
Therapist monitors and evaluates patient's application of CBT to their life.
How and Why Does CBT Fit Into Physiotherapy Practice?
Current physiotherapy education stems from the International Classification of Functioning, Disability and Health (ICF) model. The incorporation of CBT into physiotherapy practice will enhance delivery of the bio-psychosocial model, providing a more holistic approach to patient-centred care and a more comprehensive and successful journey for both patient and practitioner. The correct implementation of CBT by physiotherapists, within their scope of practice, will increase the success of treatment and improve overall outcomes for patients.
The fundamental principles of both CBT and physiotherapy are comparable and integrate cohesively as shown in Table 1.
The addition of CBT to a physiotherapist's skill set can help patients identify and change negative thought patterns that are detrimental to successful rehabilitation. This allows patients to regain an internal locus of control, which can positively influence their specific problems. Physiotherapists are in a prime position to help manage and modify a patient's maladaptive thoughts. A physiotherapy assessment begins with a subjective examination, which gives the physiotherapist an opportunity to gauge whether CBT would be an appropriate tool for the patient. Appropriate tools to identify psychosocial risk factors (i.e. yellow flags) would enable physiotherapist and patient to collaboratively target these problems when setting SMART goals. A treatment plan can then be seamlessly adapted with both the physical and psychosocial conditions in mind. This may also help to reduce the impact of any negative stigma patients may feel about requesting and obtaining psychological support.
Physiotherapists are often the first point of health care contact for patients, which places them in a prime position to treat the patient holistically. For complex patients with psychosocial issues at the root of the problem list, the aim of treatment can be directed appropriately through collaboration between physiotherapist and patient. This is likely to reduce rates of relapse into previous maladaptive behaviours and reduce re-admission rates.
The amalgamation of CBT into the current physiotherapy curriculum would equip physiotherapy students with the skills to identify and manage patients presenting with yellow flags early, thus reducing the need for referral to a clinical psychologist. Ultimately, physiotherapists tackling subtle psychosocial issues early on may decrease contact time across the multidisciplinary team and decrease health costs. This has the potential to increase the success rate of treatment and reduce readmissions, as patients learn to self-manage their behaviours.
In addition to the principles of physiotherapy and CBT integrating seamlessly, there exist gaps in current physiotherapy training.
Additionally, the focus of continuing professional development (CPD) continues to enforce the biomedical model of assessment and treatment, with minimal CPD workshops that address the psychosocial approach. A CBT module within the physiotherapy curriculum can help further develop a physiotherapy student to become a more well-rounded and competent clinician.
There is empirical evidence that CBT is effective in improving conditions such as anxiety, depression, post-traumatic stress disorder, eating disorders and chronic pain. In the United Kingdom, the National Institute for Health and Clinical Excellence (NICE) recommends CBT as the treatment of choice for a number of the mental health illnesses previously mentioned. In addition, there is a growing body of evidence for the effectiveness of CBT within physiotherapy, producing significant improvements in function, pain experience and coping strategies for patients with back pain, chronic pain and fibromyalgia.
There has been an increase in the demand for interventions that may prevent the development of persistent pain problems.
In 1997, a review of 10 trials of early interventions for acute back pain in primary care settings was carried out. These programmes dealt with the fear and anxiety often associated with acute pain, and produced positive results over various control conditions. A study conducted in 1998 also found that a cognitive-behavioural programme for patients with acute back pain significantly reduced worry and disability at follow-up, suggesting that preventative measures may be viable.
In 2001, a randomised controlled trial was published which aimed to investigate the preventative effects of a CBT group intervention for people reporting neck or back pain. The participants had experienced four or more episodes of relatively intense spinal pain during the past year but had not been out of work for more than 30 days; the aim was therefore to prevent a non-patient population from developing a more serious pain problem and entering a chronic stage. The experimental group participated in a six-session structured programme in which individuals met in groups of 6–10, once a week for two hours. The CBT group showed more stable improvements than the control group, with fewer sick days, as well as a decrease in fear avoidance and an increase in the number of pain-free days, suggesting that early preventative measures may be helpful.
With regard to the issue of absenteeism, musculoskeletal disorders (MSDs) are among the most commonly reported work-related illnesses. There is now general agreement among the various occupational health guidelines for the management of MSDs: identify psychosocial obstacles to recovery, advise that MSDs are self-limiting conditions, and encourage and support remaining at work or an early return to work (RTW). A study was conducted in 2006 in a large pharmaceutical company in the UK in which occupational health nurses (OHNs) were trained to deliver an intervention to workers absent due to various MSDs, including low back pain (LBP) and upper limb disorders. The training package included education about pain and pain mechanisms, tackling negative beliefs and attitudes, and reinforcing the importance of keeping active and an early RTW. Results showed a decrease in absent days at one site compared with the control site, where workers were simply seen by the OHN on RTW. In summary, this study adds to emerging evidence that absence from work can be reduced by providing information and support to employees.
CBT has also been used successfully with angina patients. The Heart Manual is a six-week cognitive behavioural rehabilitation tool designed to correct misconceptions about the cause of Myocardial Infarction (MI). In addition it helps patients develop strategies for dealing with stress in order to neutralise enduring misunderstandings. The Heart Manual is one way of providing educational and psychological support for post MI patients, although it will not meet the needs of a minority who require additional help. An initial randomised controlled trial evaluating the Heart Manual found that those receiving the manual had improved emotional states, fewer GP contacts and hospital readmissions at six months post MI. Subsequent studies have found significantly fewer readmissions in the 77 treated patients and improvement in emotional state and sense of control at six months.
As previously mentioned, CBT can also play a role in the treatment of various mental health conditions. A study published in 2002 aimed to test the effectiveness of added CBT in accelerating remission from acute psychotic symptoms in early schizophrenia. A 5-week CBT programme plus routine care was compared with supportive counselling plus routine care and with routine care alone, in a multi-centre trial randomising 315 people with DSM-IV schizophrenia and related disorders in their first (83%) or second acute admission. Linear regression over 70 days showed predicted trends towards faster improvement in the CBT group. The authors concluded that CBT shows transient advantages over routine care alone or supportive counselling in speeding remission from acute symptoms in early schizophrenia.
Does CBT Work For All Patient Populations?
As mentioned previously, CBT is applicable in a wide range of situations, beyond the initial problem for which the patient may seek treatment. Although it has been specialised and adapted for use within a number of specific disorders ranging from depression to psychosis, CBT has also become increasingly popular for a wide variety of chronic pain conditions, particularly chronic LBP. Despite this, there exists a patient population that is less likely to respond to CBT, and some research has shown a CBT approach to be only as effective at reducing pain levels as traditional interventions. Perhaps what is required is a more systematic approach: matching CBT to the patient populations most likely to respond positively to it.
The Keele STarT Back Screening Tool (SBST) is designed to address this mismatch. A sample musculoskeletal (MSK) screening tool can be downloaded here. The SBST categorises patients with LBP into three subgroups based on their prognosis: low risk of chronicity, medium risk with physical obstacles to recovery, and high risk with psychological obstacles to recovery. The practice of physiotherapy revolves around patient-centred care, and the choice to utilise CBT as an intervention stems from prior CBT training and therapist intuition/clinical reasoning; a tool such as the SBST can standardise that choice across patients. The SBST is valid and repeatable, and consists of 9 items covering referred pain, co-morbid pain, disability, bothersomeness, catastrophising, fear avoidance, anxiety and depression. The latter 5 items combine to form a subscore of psychosocial factors that indicates the appropriateness of CBT as an intervention. The SBST is currently being adapted to wider MSK conditions, with trials occurring in NHS 24 in Scotland.
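The subgrouping logic described above can be sketched in a few lines. This is a hedged illustration of the published scoring rules as we understand them (binary item scores, a psychosocial subscale formed from the last five items, and cut-offs of 3 on the total and 4 on the subscale); it is not a reproduction of the licensed instrument:

```python
# Hedged sketch of Keele STarT Back subgrouping. The binary scoring and
# cut-offs (total <= 3 -> low risk; psychosocial subscale >= 4 -> high
# risk) follow the published tool as we understand it; this is an
# illustration, not the licensed SBST.

def sbst_subgroup(item_scores):
    """item_scores: nine 0/1 scores; the last five items form the
    psychosocial subscale."""
    if len(item_scores) != 9 or any(s not in (0, 1) for s in item_scores):
        raise ValueError("expected nine binary item scores")
    total = sum(item_scores)
    psychosocial = sum(item_scores[4:])  # the five psychosocial items
    if total <= 3:
        return "low risk"
    return "high risk" if psychosocial >= 4 else "medium risk"
```

For example, a patient scoring positive on four physical items but none of the psychosocial items would be triaged as medium risk under this sketch.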
Targeting the patient subgroups most likely to be receptive to CBT can help improve outcomes and reduce costs. A trial of the SBST conducted by Hill et al. (2011) demonstrated increased health benefits along with reduced cost of care: with the SBST and therapists trained to deliver targeted interventions for each of the three subgroups, there was a direct mean saving of £34.39 per patient and an indirect productivity saving of £675 per patient compared with patients receiving current care. Pain-related productivity and societal losses manifest through sick days and repeat health care visits. A randomised controlled trial conducted in 2005 found that CBT in addition to physiotherapy reduced the mean number of pain-related health care visits from 6 to 1, and reduced the percentage of sick days from 9–14% to 2–5%, comparing groups that received minimal treatment and CBT. This type of evidence suggests that therapeutic interventions which take into account the biopsychosocial model of patient care may reduce both disability and the cost of care.
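As a back-of-envelope illustration of how those per-patient figures scale, the sketch below projects savings for a hypothetical cohort. The per-patient means are the ones reported by Hill et al. (2011) above; the cohort size of 1,000 is an assumption chosen purely for illustration:

```python
# Back-of-envelope projection from the per-patient mean savings reported
# by Hill et al. (2011). The 1,000-patient cohort is a hypothetical
# assumption, not a figure from the trial.

DIRECT_SAVING_GBP = 34.39     # mean direct health-care saving per patient
INDIRECT_SAVING_GBP = 675.00  # mean indirect productivity saving per patient

def projected_savings(n_patients):
    direct = n_patients * DIRECT_SAVING_GBP
    indirect = n_patients * INDIRECT_SAVING_GBP
    return direct, indirect, direct + indirect

direct, indirect, total = projected_savings(1000)
# for 1,000 patients: roughly £34,390 direct plus £675,000 indirect
```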
The evidence suggests the effectiveness of CBT is improved when it is directed at the right patient populations. Tools like the SBST need to be used in conjunction with sound clinical reasoning, in a patient-centred approach, to target those who are likely to benefit. With adapted versions of the SBST encompassing other MSK conditions currently being trialled with NHS 24, newly trained physiotherapists would benefit from CBT training to make effective use of this information in practice. Physiotherapists are evidence-based practitioners, and there exists not only a need for further training in CBT principles, but a desire from practising physiotherapists to expand their knowledge of them.
Reassurance to family members of those affected by chronic and acute conditions is essential in the treatment and recovery of the patient.
Programmes designed to include families in the care of relatives with chronic conditions can be implemented, particularly in the terminal setting. These programmes can guide family members with goal setting, supportive communication techniques and provide them with the tools to assist in monitoring clinical symptoms and medications.
For those with career threatening injuries (e.g. professional athletes or manual workers), coping with potential loss of income can be extremely stressful for both themselves and their families.
In order for families to adopt a supportive role there often needs to be a change in cognition. Unrealistic and irrational thoughts regarding a loved one's prognosis may be detrimental to the treatment process; therefore, where possible, such beliefs should be addressed to reduce the potential for maladaptive behaviours. For those with acute conditions that may result in loss of earnings or of sense of self, CBT may help to prevent anxiety and cognitive distortion (e.g. catastrophising), as well as increase adherence to the rehabilitation protocol.
When those working in palliative care settings have been interviewed with regards to work place stressors, more stressors were related to difficulty with colleagues, work environment, and occupational roles than with the interaction with patients and their families.
Seeking support from colleagues is often preferred and more accessible than the official support models in place for those working in high-stress areas of health provision.
With an insight into the cognitive and behavioural components of our own actions we can develop higher self-monitoring traits along with increased empathy. This in turn may lead to further understanding of fellow professionals within the MDT thus enabling us to defuse any potentially volatile situations. Furthermore, many of the environments in which physiotherapy skills are required tend to be highly stressful and emotional. As a result we may be required to engage in supportive behaviour and cognitive reasoning with colleagues.
The Health and Safety Executive recognises that there are many factors in the workplace that contribute to strains on NHS professional’s mental health. These include: excessive demands, lack of control, lack of support, poor working relationships, role ambiguity and organisational change.
The 2009 Boorman Review reported that the NHS loses 10 million working days annually due to sickness, costing an estimated £555 million, with mental health problems and MSDs being the primary causes. Combined, they are the leading cause of health-related early retirement in the NHS.
The Work Foundation estimates that presenteeism due to poor mental health leads to a loss of working time nearly 1.5 times that caused by sickness absence due to mental health in the United Kingdom.
By having an understanding of one's own cognitive state, AHPs may be able to overcome the inherent stressors of their jobs. It has been documented that self-directed CBT can reduce an individual's own stress, anxiety, depression and cognitive dissonance. As CBT incorporates the introspection of thought processes from cognitive therapy and the goal of behavioural change from behavioural therapy, it can be a useful tool for physiotherapists in their own development as competent and holistic professionals. Enhanced insight into maladaptive thoughts may lead to a reduction in mental health issues, likely resulting in a decrease in work days lost in the NHS.
It is essential that the patient views therapy as teamwork.
It is important for the therapist to provide empathy, warmth and genuine regard through listening and understanding the patients' true feelings.
Ensure the patient understands and agrees with modes of therapy utilised.
Encourage the patient to take an active role in their recovery by providing therapy homework.
Elicit SMART goals from the start to ensure the patient understands what they are working towards.
The therapist should aim to teach the patient skills and techniques of how to be their own therapist.
Patients are usually treated for 6-14 sessions during which the therapist aims to provide relief, resolve patients' most pressing problems and teach them skills to avoid relapse.
In order to maximise efficiency and effectiveness each session should be structured.
CBT uses various techniques in order to cater to the individuals' needs.
Patients can have hundreds of automatic thoughts every day, so it is important that the therapist teaches the patient to identify the key cognitions and how to respond to them.
however over the last 18 months the pain has become constant and he is finding it much harder to work. The main problems at work are bending down, working in cramped places (e.g. under sinks), carrying tools and driving for longer than half an hour. His only partial relief is a long, hot bath at the end of the day, which does not so much help the pain as help him relax. He is self-employed and has a family to support, so tends to “push on” to get the work done. Towards the end of the day he says he's “good for nothing except lying on the sofa”, and he spends the weekend resting and recuperating for the upcoming week. His GP has ruled out the need for further investigations but has prescribed various painkillers, which proved ineffective. A year ago he attended physiotherapy, where he was given exercises and received manual manipulation, neither of which helped. He found the exercises very painful and therefore avoided them, deeming them detrimental.
All active movements of the lumbar spine are reduced to ¾ of normal range. Neurological examination is normal. Palpation of the lumbar spine reveals pain and tenderness at all levels. Straight leg raise (SLR) was 70 degrees bilaterally.
Education regarding the boom/bust model.
A 26-year-old professional rugby player presents 8 months after his first right ACL repair. He sustained a traumatic injury in a game when a player tackled his right knee into valgus; he heard a “pop”, had pain with immediate swelling, and went to A&E. An MRI revealed a triad injury of the right knee (ACL rupture, MCL tear, medial meniscal tear). He had an arthroscopy 4 days after the incident. Since then, he reports he has received daily physiotherapy but has not returned to rugby. He is experiencing ongoing weakness in his right knee and feels very fearful of re-injury and of returning to the full-contact nature of the sport.
Graded exposure to progressively increase the amount of full sprints and cutting movements.
Gradually increasing minutes played in real game.
Thought re-evaluation/re-modelling of maladaptive thoughts and fear avoidance behaviours.
A 45-year-old woman with end stage lung cancer lives at home with her husband and 12-year-old daughter. The patient has been managing personal ADLs until recently and is struggling with fatigue and nausea resulting in declining motivation. She has been diagnosed with depression but does not wish to take medication for this as she feels she ‘is already taking enough pills’. She has started to decline physiotherapy sessions because she ‘doesn’t see the point’. As a result of her low motivation and compliance, her exercise tolerance is declining.
Her husband feels he is unable to cope with the role of primary caregiver and the inevitability of single parenthood through bereavement. Seeing the impending loss of a close family unit, the MDT are beginning to struggle to maintain their professional composure.
Thought remodelling to aid the patient's acceptance of the situation, along with an understanding of the value of maximising remaining quality of life for her and her family.
Recognising the stage of grief the husband may presently be in, and allowing time for acceptance of the situation so that he can adopt his new role.
Provide support to colleagues and self through open lines of communication and introspection to highlight irrational thoughts regarding loss.
h up secretions, has been taught the Active Cycle of Breathing Technique (ACBT) but does not feel confident that it works, and therefore does not comply with the prescribed technique.
Remodelling of beliefs towards ACBT and smoking.
Exploration of strategies/problem solving with quitting smoking. Education regarding smoking.
Thought monitoring to examine the grounds on which he bases his reluctance to exercise.
Why do current students and newly qualified physiotherapists need CBT training?
A growing body of evidence has highlighted the important role cognitive factors can play in an individual's health, contributing to disability and influencing treatment response. Research has shown that treatment outcomes are better in people who strongly believe in their internal control over illness. By gaining a greater understanding of how to identify and evaluate maladaptive patterns in patients, physiotherapists can apply CBT to improve overall treatment success.
The overall aim of this workshop is to prepare physiotherapists with an overview of CBT and provide them with the tools and knowledge for the application of this approach.
Students will complete approximately 10 hours of CBT training through a combination of self-directed online tutorials and a practical element.
8 online YouTube videos: Self-directed.
4 practical classes (1 hour each): individuals will take part in four 1-hour practical classes where they will gain insight into CBT principles, role-play various scenarios and implement several strategies.
The module is spread over an 8-week period.
To become familiar with the theory of CBT.
To build upon the individual’s existing knowledge of CBT.
To provide the opportunity for practice of various CBT strategies.
To enhance individual confidence when using CBT in clinical practice.
Training will take place between the students' 2nd and 3rd placements in order to better prepare them for their final placements. Having previously been on placement, students will have a better understanding and appreciation of how and when CBT can be applied, and to whom it may be of benefit.
Completion of CBT training will be a requirement on the students' passport in order to graduate successfully.
Completion of CBT training will contribute to the students' CPD portfolio.
The online component of the CBT module for physiotherapy students will be available via the video hosting website: YouTube. YouTube is a free to use, internationally available format that allows videos to be searched for, saved, discussed and easily linked to others. YouTube is accessible on an array of platforms such as laptops, tablets and smartphones, which is vital in the ever-changing landscape of personal computing. YouTube commands one of the biggest audiences in terms of internet traffic in the world with an estimated 100 million unique viewers each month.
As a result of YouTube's search and subscription features it will be easy for students to find the “CBT Physio” videos and subscribe to the channel to remain up to date. Individual users can also receive email notifications of new videos, and a companion Twitter account provides a further channel for updates. Twitter is the second most accessed social media site (behind Facebook), with over 6 million external sites linking to it.
This online self-directed study, presented by CBT Physio, explores the application of CBT principles into physiotherapy practice.
Step 1: By going to www.youtube.com the user will be able to freely access the video sharing website.
Step 2: Entering "CBT physio" will initiate a search of all relevant videos.
Step 3: The top result is our sample clip.
Step 4: The user can easily subscribe to the channel to receive updates and subsequently be notified when new videos are published.
Step 5: The channel will then be added to the user's list of subscriptions.
Step 6: Below the video is a comment section for both the user and publisher to discuss the video content.
Step 7: From the channels home page the user can access the corresponding Twitter account.
Step 8: The Twitter account can be followed with a click of a button ensuring the user is updated and informed of any activity.
Step 9: Any updates from the Module are then fed directly to the user's Twitter timeline with direct links to the video.
This Physiopedia page aims to engage the HCPC board members on the importance of including CBT within the current physiotherapy curriculum. The evidence demonstrates that CBT can benefit all aspects of the patient journey, which incorporates not only the patient but family members and the MDT as well. Current physiotherapy education attempts to root its practice in the ICF model; the integration of a CBT module into the current curriculum would underline the importance of combining the biomedical and psychosocial models of healthcare. Numerous benefits of CBT have been demonstrated throughout this proposal, including enhancing the patient journey, facilitating a more efficient practice and ultimately minimising health care costs. The sample module presented here demonstrates the simplicity and feasibility of implementing CBT training.
2016-08-09 Assigned to UPTAKE MEDICAL TECHNOLOGY INC. reassignment UPTAKE MEDICAL TECHNOLOGY INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: UPTAKE MEDICAL CORP.
Methods are described for treating intraluminal locations, such as in a patient's lung. The device is a catheter having an elongated shaft with an inner lumen, preferably defined by an inner tubular member, formed of heat resistant polymeric materials such as polyimide. High temperature vapor is directed through the inner lumen into the intraluminal location to treat tissue at and/or distal to the location. The outer surface of the catheter is also formed of heat resistant material. An enlarged or enlargeable member, such as a balloon, is provided on a distal portion of the catheter shaft to prevent proximal flow of the high temperature vapor upon discharge from the catheter.
This application is related to application Ser. No. ______ concurrently filed Nov. 13, 2006, entitled High Pressure and High Temperature Vapor Catheters and Systems, which is incorporated by reference herein in its entirety.
This invention relates to medical devices, systems and methods, and in particular to intrabronchial catheters, systems and methods for delivering a high pressure, high temperature vapor to one or more tissue targets in a patient's lungs.
Heating therapies are increasingly used in various medical disciplines, including cardiology, dermatology, orthopedics and oncology, as well as a number of other medical specialties. In general, the manifold clinical effects of superphysiological tissue temperatures result from underlying molecular and cellular responses, including expression of heat-shock proteins, cell death, protein denaturation, tissue coagulation and ablation. Associated with these heat-induced cellular alterations and responses are dramatic changes in tissue structure, function and properties that can be exploited for a desired therapeutic outcome such as tissue injury, shrinkage, modification, destruction and/or removal.
Heating techniques in the lung pose several technical challenges because lung tissue is more aerated than most tissues and because of its vascularization. Accordingly, new heating methods, devices and systems for rapid, controllable, effective and efficient heating of lung tissue are needed. The present invention is directed at meeting these as well as other needs.
The present invention relates to novel methods for treating an intraluminal location by volumetrically heating one or more target tissues, particularly tissue in a patient's lungs. Preferably, the one or more target tissues are heated to superphysiological temperatures (temperatures of at least 45° C.) by dispersing a high temperature vapor into an airway (e.g. an intrabronchial location) that ventilates the one or more target tissues. Because of the physiological characteristics of the airways, the vapor can be delivered focally or regionally, dependent largely on where in the airways the vapor is dispersed. The target tissue is heated without causing pneumothorax.
In a first aspect of the invention, a method of treating a patient's lungs comprises providing an elongated device, such as a catheter, which has an inner lumen formed of heat resistant materials and configured to deliver heated vapor to a port in a distal portion of the catheter; advancing the catheter within the patient to a desired location therein, such as a lung; and delivering heated vapor through the inner lumen. Preferably, the catheter has an inflatable member on a distal portion of the catheter, and the inflatable member is inflated before heated vapor is delivered through the inner lumen, to prevent proximal flow of the heated vapor.
FIG. 7B is a transverse cross-sectional view of catheter illustrated in FIG. 7 taken along lines 7B-7B.
FIG. 1 illustrates a human respiratory system 10. The respiratory system 10 resides within the thorax 12 that occupies a space defined by the chest wall 14 and the diaphragm 16. The human respiratory system 10 includes left lung lobes 44 and 46 and right lung lobes 48, 50, and 52.
The respiratory system 10 further includes trachea 18; left and right main stem bronchi 20 and 22 (primary, or first generation); and lobar bronchial branches 24, 26, 28, 30, and 32 (second generation). Segmental and sub-segmental branches further bifurcate off the lobar bronchial branches (third and fourth generation). Each bronchial branch and sub-branch communicates with a different portion of a lung lobe, either the entire lung lobe or a portion thereof. As used herein, the term “air passageway” or “airway” means a bronchial branch of any generation, including the bronchioles and terminal bronchioles.
FIG. 2 is a perspective view of the airway anatomy emphasizing the upper right lung lobe 48. In addition to the bronchial branches illustrated in FIG. 1, FIG. 2 shows sub-segmental bronchial branches (fourth generation) that provide air circulation (i.e. ventilation) to the upper right lung lobe 48. The bronchial segments branch into six generations and the bronchioles branch into approximately another three to eight generations or orders. Each airway generation has a smaller diameter than its predecessor, with the inside diameter of a generation varying depending on the particular bronchial branch, and further varying between individuals. A typical lobar bronchus providing air circulation to the upper right lung lobe 48 has an internal diameter of approximately 1 cm. Typical segmental bronchi have internal diameters of about 4 to about 7 mm.
The airways of the lungs branch much like the roots of a tree and anatomically constitute an extensive network of air flow conduits that reach all lung areas and tissues. The airways have extensive branching that distally communicates with the parenchyma alveoli where gas exchange occurs. Because of these physiological characteristics of the airways, a medium, such as a vapor, delivered through an airway can be delivered focally or more regionally depending largely on the airway location at which the medium is delivered or dispersed.
While not illustrated, a clear, thin, shiny covering, known as the serous coat or pleura, covers the lungs. The inner, visceral layer of the pleura is attached to the lungs and the outer, parietal layer is attached to the chest wall 14. Both layers are held in place by a film of pleural fluid, in a manner similar to two wet glass microscope slides stuck together. Essentially, the pleural membrane around each lung forms a continuous sac that encloses the lung and also forms a lining for the thoracic cavity 12. The space between the pleural membranes lining the thoracic cavity 12 and the pleural membranes enclosing the lungs is referred to as the pleural cavity. If the airtight seal around the lungs created by the pleural membranes is breached (via a puncture, tear, or other damage), air can enter the sac and cause the lungs to collapse.
FIG. 3 illustrates generally a procedure in accordance with the present invention. FIG. 3 shows a bronchoscope 100 having a working channel into which an energy delivery catheter 200 is inserted. Bronchoscope 100 is inserted into a patient's lungs while the proximal portion of energy delivery catheter 200 remains outside of the patient. Energy delivery catheter 200 is adapted to operatively couple to an energy generator 300, as further discussed below.
Though not illustrated, patients can be intubated with a double-lumen endobronchial tube during the procedure, which allows for selective ventilation or deflation of the right and left lungs. Depending on the location or locations of the target lung tissues to be treated, it may be preferable to stop ventilation of the target lung tissue. Also, while not illustrated, in an alternative embodiment the procedure can be performed minimally invasively, with energy catheter 200 introduced percutaneously through the chest wall and advanced to an appropriate location with the aid of an introducer or guide sheath (with or without introduction into an airway).
FIG. 4 is a schematic diagram of one embodiment of the present invention wherein energy generator 300 is configured as a vapor generator. Preferably, the vapor generator is configured to deliver a controlled dose of vapor to one or more target lung tissues. Generally, vapor generator 300 is adapted to convert a biocompatible liquid 301 (e.g. saline, sterile water or other biocompatible liquid) into a wet or dry vapor, which is then delivered to one or more target tissues. A wet vapor refers to a vapor that contains vaporous forms of the liquid as well as a non-negligible proportion of minute liquid droplets carried over with, and held in suspension in, the vapor. A dry vapor refers to a vapor containing little or no liquid droplets. In general, vapor generator 300 is configured to have a liquid capacity of about 1000 to 2500 cc and to generate a vapor having a pressure between about 5-50 psig and a temperature between about 100-175° C.
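The operating envelope quoted above (liquid capacity, vapor pressure, vapor temperature) can be captured in a simple range check. The sketch below is illustrative only; the function and constant names are assumptions and are not part of the disclosed device:

```python
# Illustrative sketch of the generator's stated operating envelope.
# Names and structure are hypothetical; only the numeric ranges come
# from the text (capacity 1000-2500 cc, 5-50 psig, 100-175 deg C).

GENERATOR_LIMITS = {
    "capacity_cc": (1000, 2500),
    "pressure_psig": (5, 50),
    "temp_c": (100, 175),
}

def setpoint_in_envelope(pressure_psig: float, temp_c: float) -> bool:
    """Return True if a requested setpoint lies within the stated envelope."""
    lo_p, hi_p = GENERATOR_LIMITS["pressure_psig"]
    lo_t, hi_t = GENERATOR_LIMITS["temp_c"]
    return lo_p <= pressure_psig <= hi_p and lo_t <= temp_c <= hi_t
```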
Vapor generator 300 is preferably configured as a self-contained, medical-grade generator unit comprising at least a controller (not shown), a vaporizing unit 302, a fluid inlet 304, a vapor outlet 306 and a connective handle (not shown). The vaporizing unit 302 comprises a fluid chamber for containing a fluid 301, preferably a biocompatible, sterile fluid, in a liquid state. Vapor outlet 306 is coupled to one or more pipes or tubes 310, which in turn are in fluid communication with a vapor lumen of a hub assembly or other adapter, which in turn is adapted to operatively couple to the proximal end of energy delivery catheter 200, several embodiments of which are further described below. Vapor flow from vapor generator 300 to a catheter (and specifically to a vapor lumen of said catheter) is depicted as vapor flow circuit 314, with the direction of vapor flow indicated by the arrows in FIG. 4.
Vaporizing unit 302 is configured to heat and vaporize a liquid contained in a fluid chamber (not shown). Other components can be incorporated into the biocompatible liquid 301 or mixed into the vapor. For example, these components can be used to control perioperative and/or post-procedural pain, enhance tissue fibrosis, and/or control infection. Other constituents can be incorporated for the purpose of regulating vapor temperatures, and thus controlling the extent and speed of tissue heating; for example, in one implementation, carbon dioxide, helium, or other gases can be mixed with the vapor to decrease vapor temperatures.
Vaporizing unit 302 further comprises fluid inlet 304, which is provided to allow liquid 301 to be added to the fluid chamber as needed. The fluid chamber can be configured to accommodate or vaporize sufficient liquid as needed to apply vapor to one or more target tissues. Liquid in vaporizing unit 302 is heated and vaporized, and the vapor flows into vapor outlet 306. A number of hollow, thermally conductive pipes 310 fluidly connect vapor outlet 306 and the handle, which in turn is adapted to operatively couple to a variety of energy delivery catheters, further described below. Preferably, there is little or no vapor-to-liquid transition during movement of the vapor through vapor flow circuit 314. Vapor flow through vapor flow circuit 314 is unidirectional (in the direction of the arrows in FIG. 4). Accordingly, one or more isolation valves 320 are incorporated in vapor flow circuit 314. Isolation valves 320, which are normally open during use of generator 300, minimize vapor flow in a direction opposite that of vapor flow circuit 314.
A priming line 330, branching from main vapor flow circuit 314, is provided to minimize or prevent undesirable liquid-state water formation during vapor flow through vapor flow circuit 314. Pressure and temperature changes along vapor flow circuit 314 can affect whether the vapor is sustainable in a vapor state or condenses back into a liquid. Priming line 330 is provided to equalize temperatures and/or pressures along vapor flow circuit 314 in order to minimize or prevent undesirable liquid-state transition of the vapor during its progression through vapor flow circuit 314. In one embodiment, an initial “purge” or “priming” procedure can be performed prior to delivery of a therapeutic vapor dose in order to preheat flow circuit 314, thus maintaining a constant temperature and pressure in the main vapor flow circuit prior to delivery of vapor to the target lung tissue.
As shown in FIG. 4, priming line 330 terminates at evaporator 332, which is adapted to house undesirable liquid in a collection unit (not shown) located within generator 300. In one embodiment, the collection unit is adapted to house the liquid until a user or clinician is able to empty it. Alternatively, evaporator 332 is configured to evaporate and expel the undesirable liquid into the ambient air. Baffle plates (not shown) or other like means can be incorporated in evaporator 332 to facilitate maximal vapor-to-liquid transition. It should be understood that other suitable evaporator configurations could be included to facilitate vapor-to-liquid transition during a priming procedure of flow circuit 314.
A number of sensors, operatively connected to a controller, can be incorporated into vapor generator 300. For example, a number of sensors can be provided in the liquid chamber or at any point along vapor flow circuit 314. Water level sensors, adapted to monitor the water level in the liquid chamber, can be included. These water level sensors are configured as upper and lower security sensors to sense or indicate when the liquid level in the fluid chamber is above or below a set fluid level. For instance, if the water level in the fluid chamber falls below the level of a lower water control sensor, the controller can be configured to interrupt the operation of vapor generator 300.
In yet another embodiment, pressure sensors, or manometers, can be included in vaporizing unit 302, or at various points along vapor flow circuit 314, to measure liquid or vapor pressures at various discrete locations and/or to measure vapor pressures within a defined segment of circuit 314. One or more control valves 320 can also be installed at various points in vapor flow circuit 314 to control vapor flow, for instance to regulate or increase vapor flow rates in vapor flow circuit 314. In yet another embodiment, a safety valve 322 can be incorporated into the liquid chamber of vaporizing unit 302 and coupled to a vapor overflow line 340, should the need to vent vaporizing unit 302 arise during operation of generator 300.
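The interlock behavior described in the preceding paragraphs — interrupting generator operation when the liquid level leaves its band or the pressure exceeds a safe limit — might be sketched as a controller check. Sensor names, thresholds, and fault codes below are hypothetical; the patent does not specify an implementation:

```python
# Hypothetical controller interlock sketch. Thresholds and fault codes are
# illustrative; the text only states that the controller interrupts
# operation when level or pressure sensors read out of range.

def check_interlocks(level_cc: float, pressure_psig: float,
                     level_band: tuple = (1000, 2500),
                     max_pressure: float = 50) -> list:
    """Return the list of faults that should interrupt generator operation."""
    faults = []
    if level_cc < level_band[0]:
        faults.append("LEVEL_LOW")      # below the lower water control sensor
    elif level_cc > level_band[1]:
        faults.append("LEVEL_HIGH")     # above the upper security sensor
    if pressure_psig > max_pressure:
        faults.append("OVERPRESSURE")   # a safety valve would vent here
    return faults
```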
FIG. 5 illustrates one embodiment of a user interface 360 of vapor generator 300. As illustrated, user interface 360 comprises various visual readouts intended to provide clinical users with information about treatment parameters of interest, such as pressure, temperature and/or duration of vapor delivery. Vapor generator 300 can also be adapted to incorporate one or more auditory alerts, in addition to, or in lieu of, the visual indicators provided on user interface 360. These auditory alerts are designed to alert a clinical user, for example when vapor delivery is complete, when the liquid chamber must be refilled, or the like. As will be recognized by those in the art, other components, while not shown, can be incorporated, including any of the following: a keyboard; a real-time imaging system display (such as CT, fluoroscopy, or ultrasound); a memory system; and/or one or more recording systems.
FIG. 6 illustrates yet another aspect of the invention, in particular a vapor catheter 200 embodying various features of the present invention. Generally, catheter 200 is adapted to operatively connect to a control handle of vapor generator 300 via hub assembly 202. Catheter 200 includes elongate shaft 204 defined by proximal section 206 and distal section 208. Elongated shaft 204 is formed with at least one lumen (such as a vapor, inflation, sensing, imaging, guide wire, vacuum lumen) extending from proximal section 206 to distal section 208 of shaft 204. Starting at proximal section 206, catheter 200 comprises strain relief member 201.
Elongated shaft 204 further comprises at least one occlusive member 210 disposed at distal section 208 and distal tip 210 having at least one distal port 212. In one embodiment, the at least one distal port 212 is configured as a vapor outlet port. In yet another embodiment, the vapor outlet port may also be used as an aspiration port while the catheter is coupled to a vacuum source (not shown), in order to aspirate mucus, fluids, and other debris from an airway through which catheter 200 is advanced prior to vapor delivery. Alternatively, catheter 200 can be configured to include a separate vacuum lumen and aspiration ports as needed. Distal tip 210 can be adapted into a variety of shapes depending on the specific clinical need and application. For example, distal tip 210 can be adapted to be atraumatic in order to minimize airway damage during delivery.
The dimensions of the catheter are determined largely by the size of the airway lumen through which the catheter must pass in order to reach an airway location appropriate for treatment of the one or more target tissues. An airway location appropriate for treatment of a target lung tissue depends on the volume of the target tissue and the proximity of the catheter tip to the target tissue. Generally, catheter 200 is low profile to facilitate placement of catheter distal tip 210 as close as practicable to both proximally and peripherally located target lung tissue, i.e. to facilitate the catheter's advancement into smaller and deeper airways. In addition, the low profile of catheter 200 also ensures that the catheter can be delivered to the lungs and airways through a working channel of a bronchoscope, including, for example, the working channels of ultra-thin bronchoscopes. Preferably, catheter 200 is slidably advanced and retracted from a bronchoscope working channel. The overall length and diameter of catheter 200 can be varied and adapted according to: the specific clinical application; the size of the airway to be navigated; and/or the location of the one or more target tissues.
Occlusive member or members 210 are similarly configured to provide the smallest possible size when deflated, to facilitate ready retraction of catheter 200 back into the working channel of a bronchoscope following completion of a treatment procedure involving delivery of one or more vapor doses to one or more target tissues. The one or more occlusive members 210 are provided to obstruct proximal vapor flow and/or to seat catheter 200 in the patient's airway during vapor delivery without slipping.
Obstruction of an airway by occlusive member 210 prevents retrograde flow of vapor to tissues located outside of the desired target tissues. Because of the physiological characteristics of the airways, in particular the fact that the airways ventilate and communicate with specific lung parenchyma or tissues, the airway location at which vapor is delivered or dispersed (e.g. the bronchial, sub-segmental, or main bronchi) determines whether tissue heating is focal or regional. In addition to the location of the catheter distal tip, other factors that determine the tissue heating pattern (i.e. the volume of tissue heated, or the size of the thermal lesion created) include: the duration of vapor delivery; the vapor flow rate; and the vapor content (dry vs. wet; vapor alone vs. a vapor cocktail). Preferably, the one or more occlusive members 210 are compliant to ensure: adequate seating; airway obstruction; and/or complete collapse following deflation.
Catheter 200 can be fabricated from a variety of suitable materials and formed by any process such as extrusion, blow molding, or other methods well known in the art. In general, catheter 200 and its various components are fabricated from materials that are relatively flexible (for advancement into tortuous airways), yet have good pushability characteristics and are durable enough to withstand the high temperatures and pressures of the vapor delivered using catheter 200.
Catheter 200 and elongated shaft 204 can be formed of braided polyimide, silicone, or reinforced silicone tubing. These materials are relatively flexible, yet have good pushability characteristics, and are able to withstand the high temperatures and pressures of vapor flow. Suitable materials should be adapted to withstand vapor pressures of up to 80 psig at temperatures up to 170° C. Specific suitable materials include, for example, various braided polyimide tubing available from IW High Performance Conductors, Inc. (See www.iwghpc.com/MedicalProducts/Tubing.html.) Similarly, the one or more occlusive members 210 are preferably fabricated from materials having pressure and temperature tolerances similar to those of elongated shaft 204, but which are preferably also compliant, such as Dow Corning Q74720 silicone. As an added feature, catheter 200 and elongated shaft 204 can further be adapted to have varying flexibility and stiffness characteristics along the length of shaft 204, based on clinical requirements and desired advantages. While not shown, various sensing members, including for example pressure, temperature and flow sensors known in the art, can be incorporated into catheter 200. For example, catheter 200 can be adapted to include a sensing lumen for advancement or connection of various sensory devices such as pressure, temperature and flow sensors.
Turning now to FIG. 7, illustrated is a preferred embodiment of a vapor catheter 400. FIG. 7 is a longitudinal cross-sectional view of elongate shaft 404, while FIGS. 7A and 7B show transverse cross-sectional views of elongate shaft 404 taken along lines 7A-7A and lines 7B-7B, respectively.
In this preferred embodiment, catheter 400 comprises an elongated catheter shaft 404 having an outer tubular member 406 and an inner tubular member 408 disposed within outer tubular member 406. Inner tubular member 408 defines a vapor lumen 410 adapted to receive a vapor and which is in fluid communication with a vapor flow circuit 314 of generator 300. The coaxial relationship between outer tubular member 406 and inner tubular member 408 defines annular inflation lumen 412. Vapor lumen 410 terminates at vapor port 424.
As shown in FIG. 7, structural members 422 are disposed between inner tubular member 408 and outer tubular member 406 at distal vapor port 424 to seal inflation lumen 412 and provide structural integrity at the catheter tip. Structural members 422 are preferably made of stainless steel, nickel titanium alloys, gold, gold plated materials or other radiopaque materials, to provide catheter tip visibility under fluoroscopy and/or provide sufficient echogenicity so that the catheter tip is detectable using ultrasonography. Hub assembly 426 (or other adaptor) at the proximal end of catheter 400 is configured to direct an inflation fluid (such as a liquid or air) into inflation lumen 412 as well as provide access to vapor lumen 410.
FIG. 7B illustrates inflation balloon 414 in an inflated or expanded configuration. Inflation balloon 414 inflates to a cylindrical cross section equal to that of a target airway in order to obstruct the airway and prevent proximal or retrograde vapor flow. This inflated configuration is achieved at an inflation pressure within the working pressure range of balloon 414. Inflation balloon 414 has a working length, which is sufficiently long to provide adequate seating in a target airway without slippage during or prior to vapor delivery.
Suitable dimensions for the vapor catheter 400 in accordance with the present invention include an outer tubular member 406 which has an outer diameter of about 0.05 to about 0.16 inches, usually about 0.065 inches and an inner diameter of about 0.04 to about 0.15 inches, usually about 0.059 inches. The wall thickness of outer tubular member 406 and inner tubular member 408 can vary from about 0.001 to about 0.005 inches, typically about 0.003 inches. The inner tubular member 408 typically has an outer diameter of about 0.04 to about 0.15 inches, usually about 0.054 inches and an inner diameter of about 0.03 to about 0.14 inches, usually about 0.048 inches.
The overall working length of catheter 400 may range from about 50 to about 150 cm, typically about 110 to about 120 cm. Preferably, inflation balloon 414 has a total length of about 5 to about 20 mm and a working length of about 1 to about 18 mm, preferably about 4 to about 8 mm. Inflation balloon 414 has an inflated working outer diameter of about 4 to about 20 mm, preferably about 4 to about 8 mm, within the working pressure range of inflation balloon 414. In a preferred embodiment, outer tubular member 406 and inner tubular member 408 are braided polyimide tubular members from IWG High Performance Conductors. Specifically, the braided polyimide tubular member comprises braided stainless steel, with the braid comprising rectangular or round stainless steel wires. Preferably, the braided stainless steel has about 90 picks per inch. The individual stainless steel strands may be coated with heat resistant polyimide and then braided or otherwise formed into a tubular member, or the stainless steel wires or strands may be braided or otherwise formed into a tubular product and the braided surfaces of the tubular product coated with a heat resistant polyimide.
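As a consistency check on the typical dimensions quoted above, the wall thicknesses and the radial width of annular inflation lumen 412 follow directly from the stated diameters (all values in inches; variable names are illustrative):

```python
# Arithmetic check on the typical dimensions quoted above (inches).

outer_od, outer_id = 0.065, 0.059   # outer tubular member 406
inner_od, inner_id = 0.054, 0.048   # inner tubular member 408

outer_wall = round((outer_od - outer_id) / 2, 4)   # 0.003, matching the text
inner_wall = round((inner_od - inner_id) / 2, 4)   # 0.003, matching the text
annular_gap = round((outer_id - inner_od) / 2, 4)  # radial width of lumen 412
```

With the typical values quoted, the radial gap between the two coaxial tubular members works out to 0.0025 inch, which is the space forming annular inflation lumen 412.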
As will be appreciated by those skilled in the art, the catheters and generators of the present invention can be used to heat one or more target lung tissues to treat a variety of lung diseases and conditions, including but not limited to: lung tumors, solitary pulmonary nodules, lung abscesses, and tuberculosis, as well as a variety of other diseases and disorders. In one embodiment, a procedure for inducing lung volume reduction (as a treatment for emphysema) involves advancing catheter 400 into a segmental or sub-segmental airway and delivering a controlled vapor dose. As will be appreciated by those skilled in the art, the vapor carries most of the energy required to convert the liquid in the vapor generator into a vapor. Upon dispersion of the vapor into the airways, the vapor penetrates into the interstitial channels between the cells and distributes thermal energy over a relatively large volume of tissue, permitting tissue heating to be accomplished quickly, usually within a few seconds or minutes. Vapor heating of target lung tissue is intended to cause tissue injury, shrinkage and/or ablation, in order to cause volumetric reduction of one or more target lung tissues. Lung volume reduction is immediate and/or occurs over several weeks or months.
Depending on the extent of the volumetric reduction (complete or partial reduction of a lobe) desired, catheter 400 is navigated into one or more airways, preferably into the segmental or sub-segmental airways, and the vapor is delivered into as many airways as needed during a single procedure to effect the therapeutically optimal extent of lung volume reduction. In a preferred embodiment, a vapor generator configured to create a vapor having a vapor pressure between about 5-50 psig, at a temperature between about 100° and 170° C. within vapor generator 300, is employed. The vapor catheter is delivered into the sub-segmental airways that communicate with the left and right upper lobes, and vapor is delivered for a period of 1-20 seconds in each of these airways, to effect volumetric reduction of the left and right upper lobes. Preferably, energy delivery to a target lung tissue is achieved without attendant pleural heating sufficient to cause damage to the pleura or a pneumothorax.
As will be appreciated by one skilled in the art, various imaging techniques (in addition to or in lieu of conventional bronchoscopic imaging) can be employed before, during and after a vapor treatment procedure. Real time fluoroscopy can be used to confirm the depth of catheter 400 inside a patient's lung as well as to confirm the position of the catheter in a desired airway. In yet another embodiment, real-time CT guided electromagnetic navigational systems, such as the SuperDimension®/Bronchus system, can be employed to accurately guide catheters of the present invention to the desired tissue targets, especially to bring the catheters close to peripherally located target tissues. In one embodiment of the invention, the present invention can be adapted to work through a working channel of a locatable guide or guide catheter of the SuperDimension CT navigational system.
A medical kit for performing volumetric vapor heating of one or more target lung tissues comprises a packaged, sterile liquid or liquid composition and a high temperature vapor delivery catheter. Other embodiments of said medical kits can comprise instructions for use, syringes, and the like.
The invention has been discussed in terms of certain embodiments. One of skill in the art, however, will recognize that various modifications may be made without departing from the scope of the invention. For example, numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Moreover, while certain features may be shown or discussed in relation to a particular embodiment, such individual features may be used in the various other embodiments of the invention. In addition, while not described in detail, it is understood that other energy modalities, such as RF, laser, microwave, cryogenic fluid, a resistive heating source, ultrasound and other energy delivery mechanisms, can be employed in conjunction with or instead of vapor for volumetric heating of a target lung volume.
c. delivering heated vapor through the inner lumen of the catheter.
2. The method of claim 1 wherein the catheter has an inner tubular member formed at least in part of heat resistant material which defines at least in part the inner lumen.
3. The method of claim 2 wherein the heat resistant material is polyimide.
4. The method of claim 2 wherein the catheter has an outer tubular member which is formed of heat resistant material and which is disposed about the inner tubular member forming an inner lumen therebetween.
5. The method of claim 4 wherein the heat resistant material of the outer tubular member is polyimide.
6. The method of claim 1 wherein the catheter has an enlarged or enlargeable member on a distal portion of the catheter to prevent proximal flow of the heated vapor upon discharge from the catheter.
7. The method of claim 6 wherein the enlarged or enlargeable member is inflatable.
8. The method of claim 7 wherein the enlarged or enlargeable member is inflated before heated vapor is delivered through the inner lumen of the catheter.
9. The method of claim 7 wherein the enlarged or enlargeable member is formed of compliant polymeric material.
10. The method of claim 9 wherein the polymeric material is a silicone.
11. The method of claim 1 wherein the intraluminal location is within a patient's lung.
12. The method of claim 11 wherein lung tissue distal to the distal end of the catheter is heated to a temperature above 45° C.
13. The method of claim 1 wherein the heated vapor is at a temperature of at least 45° C. when exiting the catheter.
14. A method of heating a target lung tissue volume, comprising directing high temperature vapor from an intrabronchial location to the target lung tissue volume to heat the tissue volume to a temperature above 45° C. without causing a pneumothorax.
15. The method of claim 14 wherein the high temperature vapor is delivered to the intrabronchial location through an inner lumen of a catheter formed at least in part of heat resistant material.
16. The method of claim 15 wherein the heat resistant material is polyimide.
17. The method of claim 16 wherein the catheter has an inflatable member on a distal portion thereof which has an inflated configuration to prevent proximal flow of the high temperature vapor when discharged from the catheter.
Section 1.1. Name. The name of this corporation is COMPOSITE REPAIR USERS GROUP (the “Corporation”).
Section 1.2. Purposes. The purposes for which the Corporation is organized are as set forth in its Certificate of Formation.
Section 1.3. Powers. The Corporation is a nonprofit corporation and shall have all of the powers, duties, authorizations and responsibilities as provided in the Texas Business Organizations Code (the “BOC”); provided, however, the Corporation shall neither have nor exercise any power, nor engage directly or indirectly in any activity, that would invalidate its status as a corporation that is exempt from Federal income tax as an organization described in Section 501(c)(6) of the Code.
Section 1.4. Offices. The Corporation may have, in addition to its registered office, offices at such places within or outside the State of Texas as the Board of Directors may from time to time determine or as the activities of the Corporation may require.
Section 2.1. Classes and Qualifications of Membership. There shall be three classes of membership: (1) Operators; (2) Composite Manufacturers; and (3) General Interest. The Board of Directors shall determine and set forth in separate documents the qualifications, dues, and other conditions of the classes of members. All members shall be subject to all policies and procedures established by the Board of Directors from time to time. All members shall have some degree of interest or involvement in the proper use of composite materials that are used to structurally repair pipelines, piping, and other pressure containing equipment (“Composite Systems”). Applications for membership shall be in such form and manner as prescribed by the Board of Directors and shall be accompanied by the full amount of the current dues (if any).
Section 2.2. Voting Rights. The members shall have the right to elect Directors and officers in accordance with the procedures set forth in these bylaws. The members shall also have the right to vote on matters presented to the members by the Board of Directors, in the sole discretion of the Board of Directors. The members shall not have the right to vote on any matter not expressly described in these Bylaws or submitted to the members by the Board of Directors.
Section 2.3. Annual and Regular Meetings. An annual meeting of the membership shall be held each year, at such time and place as shall be determined by the Board of Directors of the Corporation and communicated to the Corporation’s members. Currently, the annual meeting is held each September. At such annual meeting, the voting members shall receive updates on the Corporation’s activities and transact such business as shall be included in the notice and agenda for the meeting. Written notice of the place, date and time of each annual meeting of membership shall be delivered not less than thirty (30) nor more than sixty (60) days before the date of such meeting, either personally, by hand delivery, by mail, by facsimile transmission or by email to the Corporation’s members, at such members’ address as it appears on the books of the Corporation at the time such notice is given. The Board of Directors may establish regular meetings of the members, in addition to the annual meeting, in its discretion.
Section 2.4. Special Meetings. Special meetings of the membership may be called by the Chair of the Corporation, by the Board of Directors, or upon request of twenty percent (20%) of the voting members. Written notice of the place, date, time and purpose of each special meeting of the membership shall be given to the Corporation’s members not less than seven (7) nor more than sixty (60) days prior to the date thereof. No business shall be transacted at a special meeting of the membership except as stated in the notice of such meeting.
Section 2.5. Place of Meetings. Meetings of the membership shall be held at such places as may from time to time be determined by the Board of Directors or as may be specified in the respective notices or waivers of notice thereof.
Section 2.6. Record Date. Only those individuals who are members of the Corporation at least ten (10) days immediately prior to the day upon which the Corporation transmits notice of any meeting to its members shall be entitled to receive notice of such meeting.
Section 2.7. Quorum and Manner of Acting. The presence of at least twenty percent (20%) of the voting members shall be necessary and sufficient to constitute a quorum for the transaction of business at such meeting. A majority of the votes cast at a meeting at which a quorum is present shall constitute the action of the members.
1. The ballot shall set forth each proposed action and shall provide an opportunity to vote either for or against each proposed action.
2. The number of ballots received by the Corporation must equal or exceed the quorum that would have been required had there been a meeting (i.e., the Corporation must have received a valid ballot from twenty percent (20%) or more of its voting members).
3. Unless otherwise indicated in these Bylaws, a majority of the affirmative votes cast by ballot shall constitute the action of the members with respect to each matter on the ballot.
4. All solicitations for votes by written ballots shall indicate the number of responses needed to meet the quorum requirement, state the percentage of approvals necessary to approve each matter, and specify the time by which a ballot must be received by the Corporation in order to be counted.
5. To the fullest extent allowed by law, the election process may be completed by written ballots delivered to members and received from members by electronic mail or by an internet or other electronic-communications-based protocol as determined by the Board of Directors.
Section 2.9. Dues. Annual membership dues, if any, shall be in an amount set by the Board of Directors. Notification of upcoming annual dues shall be sent in December each year and dues must be paid no later than March of the following year. If dues are assessed, failure to pay dues by May shall be cause for removal from membership pursuant to Section 2.10. Annual membership dues shall not be prorated.
a) Failure to pay annual membership dues, if any, by May of each year.
b) Willful action or conduct detrimental to the interests of the Corporation, or to its programs, policies, objectives or the harmonious relationship of its members, as determined by the Board of Directors.
Section 3.1. General Powers; Delegation. The activities, property and affairs of the Corporation shall be managed by its Board of Directors, who may exercise all such powers of the Corporation and do all such lawful acts and things as are permitted by law, by the Certificate of Formation or by these Bylaws. In fulfillment of these responsibilities, the Board of Directors is to communicate and make periodic reports to the members concerning the activities of the Corporation.
Section 3.2. Number and Terms of Directors. The Board of Directors shall consist of not less than three (3) and not more than twenty (20) individuals, as may be determined by the Board of Directors from time to time, provided that the number of Directors shall not be decreased to less than three (3) and that no decrease in the number of Directors shall have the effect of shortening the term of any incumbent Director. Directors shall hold office for two-year terms and until their successors are chosen and qualified, or until their respective earlier deaths, resignations, retirements, disqualification or removals from office. Directors may be elected to up to two succeeding terms. Following three consecutive two-year terms, a Director must leave the Board for a minimum of one year, before being elected to subsequent terms.
Section 3.3. Qualifications and Elections of Directors.
1. Qualifications. To be eligible to serve as a Director, an individual must be a member of the Corporation.
3. Classes and Elections. Each of the three classes of membership described in Section 2.1 shall have the right to elect one-third of the Board of Director positions. Director elections shall be held every two years, on odd years, at the annual meeting of the members in September. Members are entitled to a single vote for each open Board position that is within the member’s class. Members may vote for prospective Directors nominated by the Corporation’s officers. For non-officer positions only, members may also vote for write-in candidates. Write-in candidates must be from the applicable member class.
Section 3.4. Filling of Vacancies. Any vacancy occurring in the Board of Directors resulting from the death, resignation, retirement, disqualification or removal from office of any Director shall be filled by majority vote of the remaining members of the Board of Directors for the unexpired term. Any Director elected or appointed to fill a vacancy shall hold office for the unexpired term of his or her predecessor in office, or until such Director’s earlier death, resignation, retirement, disqualification or removal from office.
Section 3.5. Removal. Any Director may be removed, either with or without cause, by a two-thirds (2/3) vote of the voting members of the Corporation, at any regular or special meeting of the members called expressly for that purpose. Any Director may resign at any time. Such resignation shall be made in writing and shall take effect at the time specified therein or, if no time is specified therein, at the time of its receipt by the Chair of the Corporation. No acceptance of a resignation shall be necessary to make it effective. Further, any Director who has an unexcused absence from three consecutive, regularly scheduled meetings of the Board of Directors will be automatically terminated from the Board. The Chair will notify such a Director of their termination within thirty (30) days of the third missed meeting.
Section 3.6. Place of Meeting. Meetings of the Board of Directors shall be held at such places as may from time to time be fixed by the Board of Directors or as shall be specified or fixed in the respective notices or waivers of notice thereof.
Section 3.7. Annual Meetings. An annual meeting of the Board of Directors, of which no notice shall be necessary, shall be held each year immediately following the annual meeting of the membership, and at the same place. At such annual meeting, the Directors shall transact any and all business as may properly come before the meeting. Newly-elected Directors shall be installed at the conclusion of the Corporation’s annual meeting and shall begin performance of their duties at the immediately following annual meeting of the Board of Directors.
Section 3.8. Regular Meetings. Regular meetings of the Board of Directors shall be held at such times and places as may be fixed from time to time by resolution adopted by the Board and communicated by written notice to all Directors. Except as otherwise provided by law, by the Certificate of Formation or by these Bylaws, any and all business may be transacted at any regular meeting.
Section 3.9. Special Meetings. Special meetings of the Board of Directors may be called by the Chair or by a majority of the Directors then in office, upon not less than three (3) nor more than sixty (60) days’ notice to each Director, either personally, by hand delivery, or by mail or by facsimile transmission. The time, day, place and purpose for which the special meeting is called shall be stated in the notice. Any Director may waive notice of any meeting by a written statement executed either before or after the meeting. Attendance and participation at a meeting without objection to notice shall also constitute a waiver of notice.
Section 3.10. Quorum and Manner of Acting. At all meetings of the Board of Directors the presence of a majority of the Directors then in office shall be necessary and sufficient to constitute a quorum for the transaction of business, except as otherwise provided by law, by the Certificate of Formation or by these Bylaws. The act of a majority of the Directors present in person or by proxy at a meeting at which a quorum is present shall be the act of the Board of Directors unless the act of a greater number is required by law, by the Certificate of Formation or by these Bylaws, in which case the act of such greater number shall be requisite to constitute the act of the Board. Any Director not present at a meeting shall be permitted to vote by proxy by providing written notice of such desire with the Secretary prior to the meeting and designating who shall hold the proxy. Any such proxy shall only be valid for the meeting in question (i.e. the next meeting). If a quorum shall not be present at any meeting of the Directors, the Directors present thereat may adjourn the meeting from time to time, without notice other than announcement at the meeting, until a quorum shall be present. At any such adjourned meeting at which a quorum shall later be present, any business may be transacted which might have been transacted at the meeting as originally convened.
Section 3.11. Written Consent of Directors. Any action required or permitted to be taken at any meeting of the Board of Directors or any committee may be taken without a meeting if a consent in writing setting forth the action to be taken shall be signed by all of the Directors or all of the members of the committee, as the case may be. Such consent must be filed with the minutes of proceedings of the Board of Directors or of the committee. Such consent shall have the same force and effect as a unanimous vote, and may be stated as such in any document.
Section 3.12. Electronic Meetings. Subject to the provisions of applicable law and these Bylaws regarding notice of meetings, members of the Board of Directors or members of any committee designated by such Board may, unless otherwise restricted by statute, by the Certificate of Formation or by these Bylaws, participate in and hold any meeting of such Board of Directors or committee by using conference telephone or similar communications equipment, or another suitable electronic communications system, including videoconferencing technology or the Internet, or any combination, if the telephone or other equipment system permits each person participating in the meeting to communicate with all other persons participating in the meeting. If voting is to take place at the meeting, reasonable measures must be implemented to verify that every person voting at the meeting by means of remote communications is sufficiently identified and a record must be kept of any vote or other action taken. Participation in a meeting pursuant to this Section 3.12 shall constitute presence in person at such meeting, except when a person participates in the meeting for the express purpose of objecting to the transaction of any business on the ground that the meeting was not lawfully called or convened.
Section 4.1. Designation. The Board of Directors by resolution adopted by a majority of the Directors in office may establish or discontinue any advisory board or committee. The Board of Directors may establish the number of persons on such boards or committees. The designation of such advisory boards or committees shall not operate to relieve the Board of Directors, or any individual Director, of any responsibility imposed on the Board or such Director by law.
Section 4.2. Membership. Except as otherwise provided in such resolution or these bylaws, members of each such advisory board or committee need not be Directors of the Corporation nor members of the Corporation. The Board of Directors shall appoint one or more members to such advisory boards or committees. Additional members may be added by the advisory boards or committees with the approval of the Board of Directors. Any member of any advisory board or committee may be removed by the Board of Directors whenever in the Board of Director’s judgment the best interests of the Corporation shall be served by such removal. Any person who is a member of any advisory board or committee and not a member of the Corporation shall be entitled to vote on advisory board or committee action only.
Section 4.3. Term of Office. Each member of an advisory board or committee shall continue as such until the next annual meeting of the Directors of the Corporation and until such member’s successor is appointed, unless the board or committee is sooner terminated, or unless such member is removed from such board or committee or shall cease to qualify as a member thereof.
Section 4.4. Chairman. Unless otherwise designated by these Bylaws, one or more members of each advisory board or committee shall be appointed chairman, or co‑chairman, by the person or persons authorized to appoint the members thereof.
Section 4.5. Vacancies. Vacancies in the membership of any advisory board or committee may be filled by the remaining members of such advisory board or committee with the approval of the Board of Directors.
Section 4.6. Quorum; Manner of Acting. Unless otherwise provided in the resolution of the Board of Directors designating an advisory board or committee, a majority of the whole board or committee shall constitute a quorum, and the act of the majority of the members present at a meeting at which a quorum is present shall be the act of the board or committee.
Section 4.7. Rules. Each advisory board or committee may adopt rules for its own government not inconsistent with these Bylaws or with rules adopted by the Board of Directors. Any such adopted rules shall be subject to alteration by the Board of Directors whenever in the Board of Director’s judgment the best interests of the Corporation shall be served by such alteration. Each advisory board or committee shall keep minutes of proceedings and provide same to the Corporation’s Secretary.
Section 4.8. Funds. Any funds generated by or otherwise restricted for use by or for any committee or advisory board for activities or programs of such committee or advisory board shall belong to the Corporation and be subject to the oversight and control of the Corporation’s Treasurer. The expenditure of any such funds shall require approval of the Treasurer of the Corporation.
Section 4.9. Nominating Committee. The Corporation shall have as a standing committee, a Nominating Committee, whose responsibility shall be taking nominations for Director positions and verifying the willingness of such nominees to serve as Directors. The Nominating Committee shall have a chair plus at least two (2) additional members who shall be appointed by the Board of Directors.
Action may be taken by use of signed written consents by the number of members, Directors, officers or committee members whose vote would be necessary to take action at a meeting at which all such persons entitled to vote were present and voted. Each written consent must bear the date and signature of each person signing it. A consent signed by less than all of the Directors, officers, or committee members is not effective to take the intended action unless consents signed by the required number of persons are delivered to the Corporation within sixty (60) days after the date of the earliest dated consent delivered to the Corporation. Delivery must be made by hand, or by certified or registered mail, return receipt requested. The delivery may be made to the Corporation’s registered office, registered agent, principal place of business, transfer agent, registrar, exchange agent or officer or agent having custody of books in which the relevant proceedings are recorded. If the delivery is made to the Corporation’s principal place of business, the consent must be addressed to the Chair or principal executive officer.
The Corporation will give prompt notice of the action taken to persons who do not sign the consents. If the action taken requires documents to be filed with the Secretary of State, the filed documents will indicate that the written consent procedures have been properly followed.
Any photographic, photostatic, facsimile, or similarly reliable reproduction of a consent in writing signed by a Director, officer, or committee member may be substituted or used instead of the original writing for any purpose for which the original writing could be used, if the reproduction is a complete reproduction of the entire original writing.
Section 6.1. Manner of Giving Notice. Whenever, under the provisions of any law, the Certificate of Formation or these Bylaws, notice is required to be given to any member, Director, or committee member of the Corporation, and no provision is made as to how such notice shall be given, it shall not be construed to require personal notice, but any such notice may be given in writing by hand delivery, by facsimile transmission, by electronic mail transmission, or by mail, postage prepaid, addressed to such member, Director, or committee member at such person’s address as it appears on the records of the Corporation. Any notice required or permitted to be given by mail shall be deemed to be delivered at the time when the same shall be thus deposited in the United States mails, as aforesaid. Any notice required or permitted to be given by facsimile transmission shall be deemed to be delivered upon successful transmission or electronic mail transmission of such facsimile or electronic mail.
Section 6.2. Waiver of Notice. Whenever any notice is required to be given to any member, Director, or committee member of the Corporation under the provisions of any law, the Certificate of Formation or these Bylaws, a waiver thereof in writing signed by the person or persons entitled to such notice, whether signed before or after the time stated therein, shall be deemed equivalent to the giving of such notice.
Section 7.1. Elected Officers. The elected officers of the Corporation shall include a Chair, a Vice-Chair, a Secretary, a Treasurer, and a Public Relations/Meeting Coordinator. Additional non-elected officers of the Corporation include a Founder (also referred to as the Chair Emeritus), and a Past Chair. The position of Founder shall be held by Chris Alexander, until his death, resignation, or removal for cause in accordance with Section 7.7. The Past Chair position shall be automatically filled by the immediately preceding elected Chair. The officers shall have such duties as are described in these Bylaws and/or those duties assigned by the Board of Directors from time to time.
Section 7.2. Election. All officers shall be members of the Board of Directors. The Corporation’s existing officers shall nominate candidates to the elected officer positions, which shall be presented to the members for approval. The officers and/or the Board of Directors may, in their discretion, nominate only a single candidate for each officer position. The Board of Directors may elect any other such officers deemed necessary by the Board of Directors to carry out the exempt purposes of the Corporation.
Section 7.3. Two or More Offices. Any two (2) or more offices may be held by the same person, except that the Chair and Secretary shall not be the same person.
Section 7.4. Compensation. The compensation, if any, of all officers of the Corporation shall be fixed from time to time by the Board of Directors. In the event the Corporation has employees, the Board of Directors may fix compensation for such employees or may from time to time delegate to an Executive Director/Director of Operations (if any) the authority to fix the compensation of any or all of the other employees and agents of the Corporation. Any officer, employee or agent of the Corporation (including an officer, employee or agent who is a “disqualified person” with respect to the Corporation within the meaning of Section 4946 of the Internal Revenue Code and the regulations promulgated thereunder) shall be entitled to compensation and reimbursement of reasonable expenses (including reasonable advances for expenses anticipated in the immediate future) for the performance of “personal services” as defined in the Treasury Regulation Section 51.4942(d)-3(c) which are reasonable and necessary to carry out the exempt purposes of the Corporation, provided that such compensation and reimbursement of reasonable expenses shall not be excessive.
Section 7.5. Term of Office; Removal; Filling of Vacancies. Each elected officer of the Corporation shall hold office from the time of his or her appointment as officer by the Board of Directors for a term of two (2) years or until such officer’s successor is chosen and qualified in such officer’s stead or until such officer’s earlier death, resignation, retirement, disqualification or removal from office. In the event a Director who has only one (1) year remaining on his or her term as Director is elected to an office, such person’s second year in office shall be contingent upon his or her re-election to the Board of Directors, and, in the event such person is not re-elected to the Board of Directors, the Board of Directors shall treat the vacancy as if occurring as a result of an expired term and shall fill such position for a new two (2) year term in accordance with Section 7.2.
Section 7.6. Resignation. Any officer may resign at any time by giving written notice to the Chair. Such resignation shall take effect at the time specified in the notice, or if no time is specified, then immediately.
Section 7.7. Removal. Any officer may be removed from such office for cause, as defined hereinafter, by a two-thirds (2/3) vote of the Board of Directors at any regular or special meeting of the Directors called expressly for that purpose. “For cause” shall mean failure to complete the duties and/or responsibilities of the individual’s office; willful actions or conduct detrimental to the interests of the Corporation, or to its programs, policies, objectives or the harmonious relationship of its members as determined by the Board of Directors, or removal from voting membership of the Corporation.
Section 7.8. Vacancies. A vacancy in any office shall be filled by the Board of Directors for the unexpired term.
Section 7.9. Chair. The Chair shall be the chief executive officer of the Corporation and, subject to the provisions of these Bylaws, shall have general supervision of the day-to-day activities and affairs of the Corporation and shall have general and active control thereof. The Chair shall have general authority to execute, in the name of the Corporation, checks, promissory notes, bonds, leases, deeds, notices, contracts and, unless the Board of Directors shall order otherwise by resolution, any other papers and instruments as the ordinary conduct of the Corporation’s business may require and to affix the corporate seal thereto; to cause the employment or appointment of such employees and agents of the Corporation as the proper conduct of operations may require and to fix their compensation; to remove or suspend any employee or agent; and in general to exercise all the powers usually appertaining to the office of chief executive officer of a corporation, except as otherwise provided by law, the Certificate of Formation or these Bylaws. The Chair shall attend and participate in all meetings of the Board of Directors, advisory boards and committees without vote. The Chair shall have such other powers and duties as the Board of Directors may determine from time to time. In the absence or disability of the Chair, the duties of such office shall be performed and the powers may be exercised by the Vice Chair, unless otherwise determined by the Board of Directors. The Chair shall serve as Chairman of the Board of Directors.
Section 7.10. Vice Chair (if any). The Vice Chair, if any, shall generally assist the Chair and shall have such powers and perform such duties and services as shall from time to time be prescribed or delegated to such office by the Chair or the Board of Directors.
Section 7.11. Secretary. The Secretary shall see that notice is given of all annual and special meetings of the Board of Directors and shall keep and attest true records of all proceedings at all meetings of the Board. The Secretary shall have charge of the corporate seal and shall have authority to attest any and all instruments of writing to which the same may be affixed. The Secretary shall keep and account for all books, documents, papers and records of the Corporation, except those for which some other officer or agent is properly accountable. The Secretary shall generally perform all duties usually appertaining to the office of secretary of a corporation. In the absence or disability of the Secretary, the duties of such office shall be performed and the powers may be exercised as determined by the Board of Directors.
Section 7.12. Treasurer. The Treasurer shall be the chief accounting and financial officer of the Corporation and shall have active control of and shall be responsible for all matters pertaining to the accounts and finances of the Corporation and shall direct the manner of certifying the same; shall supervise the manner of keeping all vouchers for payments by the Corporation and all other documents relating to such payments; shall receive, audit and consolidate all operating and financial statements of the Corporation and its various departments; shall have supervision of the books of account of the Corporation, their arrangements and classification; shall supervise the accounting and auditing practices of the Corporation and shall have charge of all matters relating to taxation. The Treasurer shall have the care and custody of all monies, funds and securities of the Corporation; shall deposit or cause to be deposited all such funds in and with such depositories as the Board of Directors shall from time to time direct or as shall be selected in accordance with procedures established by the Board; shall advise upon all terms of credit granted by the Corporation; shall be responsible for the collection of all its accounts and shall cause to be kept full and accurate accounts of all receipts, disbursements and contributions of the Corporation. The Treasurer shall have the power to endorse for deposit or collection or otherwise all checks, drafts, notes, bills of exchange or other commercial papers payable to the Corporation, and to give proper receipts or discharges for all payments to the Corporation. The Treasurer shall generally perform all duties usually appertaining to the office of treasurer of a corporation. In the absence or disability of the Treasurer, the duties of such office shall be performed and the powers may be exercised as determined by the Board of Directors.
Section 7.13. Additional Powers and Duties. In addition to the foregoing specially enumerated duties, services and powers, the several elected and appointed officers of the Corporation shall perform such other duties and services and exercise such further powers as may be provided by law, the Certificate of Formation or these Bylaws, or as the Board of Directors may from time to time determine or as may be assigned by any competent superior officer.
Section 7.14. Administration of Day to Day Matters. Pursuant to the provisions of this Section 7.14 and the then-applicable administrative agreement (if any) or such other agreement designating the accomplishment of administrative tasks, the day-to-day matters of the Corporation, other than those matters specifically assigned to an officer or committee by the Board of Directors, may be administered by an Administrative Agent. The Administrative Agent shall be entitled to reasonable compensation for services rendered and shall operate under the budget developed by the Board of Directors for each fiscal year. To the extent of any conflict between any such administrative agreement and these Bylaws, these Bylaws shall control.
Section 8.1. Contracts. The Board of Directors may authorize any officer or officers, or agent or agents, of the Corporation, in addition to the officers so authorized by these Bylaws, to enter into any contract or execute and deliver any instrument in the name of and on behalf of the Corporation, and such authority may be general or confined to specific instances.
Section 8.2. Checks, Drafts or Orders for Payment. All checks, drafts or orders for the payment of money, notes or other evidences of indebtedness issued in the name of the Corporation shall be signed by such officer or officers, or agent or agents, of the Corporation and in such manner as shall from time to time be determined by resolution of the Board of Directors. In the absence of such determination, such instruments shall be signed by the Chair of the Corporation.
Section 8.3. Deposits. All funds of the Corporation shall be deposited from time to time to the credit of the Corporation in such banks, trust companies or other depositories as the Board of Directors may select or as may be selected in accordance with procedures established by the Board.
Section 9.1. Dividends Prohibited. No part of the net income of the Corporation shall inure to the benefit of any private individual and no dividend shall be paid and no part of the income of the Corporation shall be distributed to its members, Directors, or officers. The Corporation may pay compensation in a reasonable amount to its officers for services rendered and may compensate and reimburse its officers as provided in Section 7.5 of Article Seven hereof.
Section 9.2. Loans to Directors Prohibited. No loans shall be made by the Corporation to its Directors, and any Directors voting for or assenting to the making of any such loan, and any officer participating in the making thereof, shall be jointly and severally liable to the Corporation for the amount of such loan until repayment thereof.
Section 9.3. Fiscal Year. The fiscal year of the Corporation shall be fixed by resolution of the Board of Directors.
Section 9.4. Seal. The Corporation’s seal, if any, shall be in such form as shall be adopted and approved from time to time by the Board of Directors. The seal may be used by causing it, or a facsimile thereof, to be impressed, affixed, imprinted or in any manner reproduced.
Section 9.5. Gender. Words of either gender used in these Bylaws shall be construed to include the other gender, unless the context requires otherwise.
Section 9.6. Invalid Provisions. If any part of these Bylaws shall be held invalid or inoperative for any reason, the remaining parts, so far as is possible and reasonable, shall remain valid and operative.
Section 9.7. Headings. The headings used in these Bylaws are for convenience only and do not constitute matter to be construed in the interpretation of these Bylaws.
Section 10.1. Amendments. These Bylaws may be amended or repealed, or new bylaws may be adopted by a two-thirds (2/3) vote of a quorum of the Board of Directors of the Corporation, at any regular or special meeting of the Directors called expressly for that purpose. The notice of the meeting shall set forth a summary of the proposed amendments. These Bylaws may not be amended or repealed by, nor may new bylaws be adopted by, the Board of Directors.
Section 11.1. Indemnification. To the maximum extent permitted or required by Chapter 8 of the Texas Business Organizations Code, as it now exists or as it may be amended in the future, the Corporation shall indemnify any person who was, is, or is threatened to be made a named defendant or respondent in a proceeding (as hereinafter defined) because the person (i) is or was a Director or officer of the Corporation or (ii) while a Director or officer of the Corporation, is or was serving at the request of the Corporation as a Director, officer, partner, venturer, proprietor, trustee, employee, agent or similar functionary of another foreign or domestic corporation, partnership, joint venture, sole proprietorship, trust, employee benefit plan or other enterprise, against all expenses (other than taxes (including taxes imposed by Chapter 42 of the Internal Revenue Code), penalties, or expenses of correction), including attorneys’ fees, to the fullest extent that a corporation may grant indemnification to a trustee under the Texas Business Organizations Code, as the same exists or may hereafter be amended. In addition to any indemnification to which a person may be entitled pursuant to the foregoing sentence of this Article, the Corporation shall indemnify a foundation manager (as defined in Section 4946(b) of the Internal Revenue Code) for Compensatory Expenses (as hereinafter defined) incurred by or imposed upon such person to the extent, and only to the extent, that when such payment or reimbursement is added to any other compensation paid to such person, such person’s total compensation from the Corporation is reasonable under Chapter 42 of the Internal Revenue Code. 
As used herein, a Compensatory Expense shall mean (a) any penalty, tax (including a tax imposed by Chapter 42 of the Internal Revenue Code), or expense of correction that is owed by a person; (b) any expense not reasonably incurred by the person in connection with a proceeding arising out of a person’s performance of services on behalf of the Corporation; or (c) any expense resulting from an act or failure to act with respect to which a person has acted willfully and without reasonable cause.
The rights conferred by this Article shall be contract rights and shall include the right to be paid by the Corporation expenses incurred in defending any such proceeding in advance of its final disposition to the maximum extent permitted under the Texas Business Organizations Code, as the same exists or may hereafter be amended. If a claim for indemnification or advancement of expenses hereunder is not paid in full by the Corporation within ninety (90) days after a written claim has been received by the Corporation, the claimant may at any time thereafter bring suit against the Corporation to recover the unpaid amount of the claim, and if successful in whole or in part, the claimant shall be entitled to also be paid the expenses of prosecuting such claim. It shall be a defense to any such action that such indemnification or advancement of costs of defense is not permitted under the Texas Business Organizations Code, but the burden of proving such defense shall be on the Corporation. Neither the failure of the Corporation (including its Board of Directors or any committee thereof or special legal counsel) to have made its determination prior to the commencement of such action that indemnification of, or advancement of costs of defense to, the claimant is permissible in the circumstances nor an actual determination by the Corporation (including its Board of Directors or any committee thereof, or special legal counsel) that such indemnification or advancement is not permissible shall be a defense to the action or create a presumption that such indemnification or advancement is not permissible.
In the event of the death of any person having a right of indemnification under the foregoing provisions, such right shall inure to the benefit of such person’s heirs, executors, administrators and personal representatives. The rights conferred above shall not be exclusive of any other right which any person may have or hereafter acquire under any statute, bylaw, resolution of Directors or members, agreement or otherwise.
The Corporation may additionally indemnify any person covered by the grant of mandatory indemnification contained in this Article to such further extent as is permitted by law and may indemnify any other person to the fullest extent permitted by law. The Corporation may purchase and maintain insurance or a similar arrangement (including, but not limited to, a trust fund, self-insurance, a security interest or lien on the assets of the Corporation, or a letter of credit, guaranty or surety arrangement) on behalf of any person who is serving the Corporation (or another entity at the request of the Corporation) against any liability asserted against such person and incurred by such person in such a capacity or arising out of the status as such a person, whether or not the Corporation would have the power to indemnify such person against that liability under this Article or by statute. Notwithstanding the other provisions of this Article, the Corporation may not indemnify or maintain insurance or a similar arrangement on behalf of any person, if such indemnification or maintenance of insurance or similar arrangement would subject the Corporation to income tax under the Internal Revenue Code or subject such person to excise tax under the Internal Revenue Code. For purposes of this Article, the term “expenses” includes court costs and attorneys’ fees, and the term “proceeding” means any threatened, pending or completed action, suit or proceeding, whether civil, criminal, administrative, arbitrative or investigative, any appeal in such action, suit or proceeding, and any inquiry or investigation that could lead to such an action, suit or proceeding.
Section 12.1. Nonprofit Operation. The Corporation is organized and operated primarily for the purposes set forth under Article One of these Bylaws. It is to be operated in such a way that it does not result in the accrual of distributable profits, realization of private gain resulting from payment of compensation in excess of a reasonable allowance for salary or other compensation for services rendered or realization of any other form of private gain.
Section 12.2. Distribution of Assets. The Corporation pledges its assets for use in performing the Corporation’s charitable functions. In the event the Corporation is to be terminated, after all liabilities and obligations of the Corporation are paid or provision is made therefor, the Corporation’s Board of Directors shall distribute the remaining assets of the Corporation as they shall determine but only for purposes consistent with the purposes of the Corporation or to such organization or organizations organized and operated exclusively for charitable, religious, or educational purposes and which are exempt under Section 501(c)(3) of the Code. Any of such assets not so disposed of shall be disposed of by a court of competent jurisdiction of the county in which the principal office of the Corporation is then located, to one or more organizations exempt under Section 501(c)(3) of the Code in a manner which best accomplishes the purposes of the Corporation. No Director or officer of the Corporation and no private individual will be entitled to share in the distribution of any assets of the Corporation in the event of its termination.
Section 12.3. Decision Making Authority. The Corporation’s voting members shall have the sole and exclusive right to vote on and make decisions regarding or in any way involving the dissolution, merger and consolidation of the Corporation and decisions regarding the sale of substantially all of the Corporation’s assets.
Today while walking to the Wiehle-Reston East Metro station I noticed a VDOT contractor dumping snow on the Wiehle Ave trail leading to the station. The same crew had just blocked a previously cleared trail on Sunrise Valley Dr leading to the Wiehle trail. Reston Association crews have been out clearing most of the 50 miles of Reston trails only to have VDOT crews block them.
Reston Association has budgeted funds to maintain their trails, including having a fleet of snow plows. Fairfax County needs to figure out how to make areas around Metro stations and schools safe for their residents.
The first photo shows the once cleared trail on Sunrise Valley Dr.
The next photo shows the front loader dumping huge piles of snow on the Wiehle Ave trail.
At the entrance to the Wiehle Station at Reston Station Blvd and Wiehle Ave, the Wiehle trail is blocked and I saw two pedestrians walking in the road in the short time I was there.
The Lake Thoreau trail along Sunrise Valley Dr was cleared by Reston Association crews earlier this week. Today the VDOT crews blocked it, which has happened multiple times in the past.
The Lake Thoreau trail near South Lakes Village Center blocked by snow from the road. This trail was passable earlier. I was forced into busy South Lakes Dr to get around the pile.
Update 30 Jan: The same situation occurred in West Springfield as reported by NBC4 in their story Plow Crew Dumped Snow Onto Cleared Sidewalk in VA. After a resident notified NBC4 about the problem, VDOT crews were out to clear the sidewalk. That was on Greeley Blvd, a two-lane road. The snow being cleared was for on-road parking. VDOT must have lots of money to clear free on-road parking; maybe that money should be used for major transportation trails.
Today Greater Greater Washington published Pedestrian deaths tripled in Fairfax County. Bad road design didn't help. The article references an NBC4 report on the topic, including footage of the Gallows/Route 29 intersection near the Mosaic District. The footage also shows a motorist running a red light when turning right and you hear another motorist honking at a pedestrian crossing in a crosswalk. The report was filmed before the big snow storm; conditions are much worse out there now. See our earlier report about a walkable Mosaic District.
Eleven people on foot died in crashes in Fairfax County in 2015. That continues a rising trend since 2012, when the number was just four. What's going on?
NBC4 reporter Adam Tuss talked to some people about what's going on. A leading hypothesis in the story is that more people are walking around. That seems likely, but one element is missing: how poorly Fairfax's roads are designed for walking.
A number of people in the story talk about newcomers. One driver says, "I definitely worry about people who aren't from here," who try to cross when they don't have the light or not at a crosswalk. The subtext sure sounded like, "... people aren't familiar with the way we haven't designed roads for pedestrians in Fairfax County."
Just look at this intersection where Tuss is standing, the corner of Gallows Road and Route 29. It's about 0.6 miles from the Dunn Loring Metro station. And it's huge.
If you've tried to walk or ride anywhere in the county you know that VDOT and Fairfax County do not clear snow from sidewalks and trails. To make matters worse, VDOT snow removal crews usually dump huge piles of snow at the intersections, forcing pedestrians to walk in the road.
Fairfax County does not have a sidewalk/trail snow removal policy. The Board of Supervisors has discussed the issue many, many times. They've held snow summits. What's the outcome of all this talk? A web page that tells residents to Take Your Snow and Shovel It. While we agree that it's good to encourage residents to clear snow from sidewalks, this is not a very impressive outcome of all the talk.
It seems that every time it snows lately, the excuse is that it's a once in a lifetime storm and the county and VDOT can't be expected to make it safe for pedestrians and cyclists to get around. We don't accept that anymore. Maybe there should be a moratorium on transportation spending for new projects until the state and county can figure out how to maintain our existing infrastructure.
We also need a snow removal ordinance. Arlington and Alexandria require residents to clear snow on their sidewalks and the county needs to do the same, with exceptions for those who physically can't remove the snow and a county-organized volunteer program to help others.
In a recently published study, Boston University researchers surveyed 89 US mayors of medium and big cities (populations over 100,000) and found that these public officials express strong support for bike-friendly policies and bicycling infrastructure. Not surprisingly, however, aging and underfunded physical infrastructure was the most pressing challenge the mayors faced, and these problems continue to outrank bicycling, pedestrian, and parks and recreation needs in the cities’ funding priorities.
The study explains that “big ticket” infrastructure needs, including roads, mass transit, and water, wastewater and stormwater infrastructure, dominate the mayors’ plans and efforts.
Figure 4: What are the three areas you would prioritize if you could allocate a significant amount of new money?
Still, when asked to name more modestly priced infrastructure priorities, bike and pedestrian infrastructure and parks were the most frequently cited by the mayors. According to the survey, the mayors included bike-friendly policies along with biking infrastructure as funding priorities.
Figure 5: Please think about “small” infrastructure projects. That is, projects with costs equal to a small portion of your city’s annual capital budget. If your city were given an unrestricted grant to pay for any ONE such “small” infrastructure project, what would you spend it on?
Just as encouraging, 70 percent of mayors support improved bike accessibility, even at the expense of parking and driving lanes. A mere 15 percent disagreed or strongly disagreed.
Figure 6: Mayors' support for accommodating bicycles even if it means sacrificing driving lanes and/or parking.
The study points out that as cities grow and national governments devolve new powers to local officials, mayors and other local leaders have become increasingly important in the U.S. and around the world. FABB continues to advocate strongly with Fairfax County officials to help keep the county in step with what other areas are doing in developing the adaptive policies needed to make urban and surrounding suburban areas more connected and more economically successful, healthy, and livable.
Today we drove by sections of the W&OD Trail in Reston. At Old Reston Ave (first photo below), looking west, it looked like a snow plow or blower had made a pass up to Old Reston Ave. To me this looks like the tracks of a Reston Association plow. Maybe they were just using the W&OD Trail to get from Reston Parkway to Old Reston Ave.
As you can see below, to the east there were only footprints in the snow with a pile of snow at the trail entrance.
The next photo was taken at Newton Square looking west. Someone had cleared a narrow track on the short stretch of trail from Newton Square to the next service road (across from BAE).
In the photo below, looking east, only one or two tracks of footprints were visible.
At Wiehle Ave there was no sign of any plowing/blowing in either direction. The same was true at Sunrise Valley Dr. It's going to be a long time before the trail is passable in Reston.
At Sunset Hills Road we saw several people walking in the road, which is very dangerous given that in several areas only one lane in each direction is clear, which was true for most four lane roads. Most intersections had mountains of snow blocking the trails and sidewalks.
Gary McMullin regularly commutes from his home in Fairfax to the Burke Centre Virginia Railway Express station. When he spoke to FABB on Bike to Work Day, Gary said he rides for the exercise and the pure enjoyment of biking. Gary told FABB that the thing he likes most about riding is how relaxing it is. He added, however, that more bike paths and lanes would make riding safer and even more relaxing.
FABB agrees—and studies show—that better bike infrastructure helps overcome safety concerns that discourage many people who would like to ride bikes. European cities that provide significant infrastructure for the bicycling transportation alternative have demonstrated that when cycling feels safe, especially for children and the elderly, more people ride. And, these European cities have shown that as the roads become safer for cyclists, there is less danger for motorists and pedestrians.
VDOT recently released the I-66 Outside the Beltway Tier 2 Environmental Assessment (EA) documents. The project includes a major regional trail parallel to I-66 from Gallows Road to the Fairfax County border as well as new bike and ped facilities on rebuilt bridges. We are asking cyclists to contact VDOT by February 4 to support bike access along and across I-66. Please reference "I-66 Tier 2 Revised EA" in the subject.
See the Fairfax County portion of the Parallel Trail map. The trail would extend the Custis Trail located inside the Beltway. See the FABB I-66 Info Page for other supporting documents.
The Federal Highway Administration requires the preparation of an EA when VDOT conducts a significant project like the expansion of the I-66 corridor. According to the document, "This Tier 2 Revised EA is being made available for a 15-day public review and comment period. All comments received on the Tier 2 Revised EA will be considered and substantive comments will be addressed prior to finalizing the Tier 2 EA process and prior to a Tier 2 NEPA decision by FHWA." Comments are due February 4.
From VDOT: "Submit your written comments on the Tier 2 Revised EA by February 4, 2016 to Ms. Susan Shaw, PE, by email to Transform66@VDOT.Virginia.gov or via the online comment form on the project's website or join the discussion. Please reference "I-66 Tier 2 Revised EA" in the subject line for all correspondence."
Earlier VDOT released the Requests for Proposals that included Technical Requirements which contained proposed locations where the parallel trail can be accessed from adjacent neighborhoods. We will post our analysis of those points soon.
Accommodating bicycles strategically along and crossing the I-66 corridor could provide substantial benefits to the transportation system, benefits to the environment, and improvement to people’s quality of life. Bicycling is a healthy, efficient, and affordable transportation alternative. It can dramatically enhance people’s access to transit and related facilities, connect neighboring communities, improve people’s health, and reduce people’s reliance on a personal automobile for short and moderate-distance trips.
Fairfax County and Prince William County have invested substantially in bicycling infrastructure, and along with transportation-related associations, the counties have also invested in programs to encourage and expand bicycling. Currently, I-66 is a barrier to bicycle network connectivity in many locations outside the Beltway (I-495). The modification of bridges, ramp termini, the freeway mainline, and some local roadways offers the opportunity for bicycling networks along the corridor (existing and planned) to become better connected.
Folks who ride a bike to work will tell you that there’s simply no better way to commute. It’s often the fastest way to get to work, it offers a wide array of route options, and, unlike driving or taking public transportation, doing it regularly is actually good for you.
But the numbers from Capital Bikeshare tell a stark, though not shocking, story: People ride to work much less often in the winter. Even as peak usage in warm months has climbed from about 140,000 in 2011 to over 360,000 last year, January or February usage has remained around 110,000.
Of course, those numbers don’t reflect people who own their own bikes and who might be considered more dedicated to cycling and likely to ride when the weather gets cold. However, the most recent annual report by Strava, a popular fitness app for cyclists and runners, also showed that bike rides classified as “commuting” declined 63.3 percent in the winter.
Happy 2016! We hope everyone is staying warm on their bike rides over the past few days; very different than the shorts weather we experienced over the holidays. I'd like to take this opportunity to give you a quick update on FABB’s plans and activities for 2016.
During 2015, FABB embarked on a path to change its status from a sponsored project of the Washington Area Bicyclist Association (WABA) to an independent non-profit. As part of that process, the decision was made to rename FABB the Fairfax Alliance for Better Bicycling, to reflect our goal of building an even stronger advocacy organization. FABB’s increased focus on alliance-building fulfills a need to take advantage of the combined strength of Fairfax County’s many other bicycling-focused organizations to achieve our common goals, by enhancing our collaboration with County government as well as with the private corporations that show a rising interest in supporting bicycling.
We are still in the process of filing the applications to be granted 501(c)3 status, so changes to FABB’s activities and the effort to publicize our new status and goals will occur gradually. We are extremely grateful to WABA for sponsoring FABB over the past several years and look forward to continuing to work with Greg Billings and his team to promote better bicycling throughout the Greater Washington region.
Bruce continues in the acting Executive Director role, but expect to see a very active and engaged Board of Directors.
However, we can't do this alone and are eager to engage more Fairfax cyclists and bicycling supporters than we have in the past. To improve this outreach FABB will be rotating its monthly general meetings among each of the Supervisor districts in Fairfax County. With the FABB Board now handling organizational “housekeeping”, our goal is to sharpen the focus of FABB meetings and make them more interesting and useful by discussing advocacy opportunities related to the district hosting the meeting along with other topics of interest to area cyclists and cycling advocates, such as VDOT meetings and Fairfax Board of Supervisors actions.
The meeting will be held at the Mosaic Community Space in the Mosaic District (directions below).
We will keep this meeting to three topics: (1) an open discussion with citizens about Providence district issues, (2) working w/ a consultant to engage those in attendance in creating a promotional video about FABB and (3) a quick discussion of FABB's 2016 goals.
Please join us afterwards for a beer at nearby MatchBox around 8:45pm to toast recent cycling related successes in Fairfax - like Tuesday's announcement of Capital Bike Share coming to Reston AND Tysons.
Be safe and see you on Wednesday.
Directions to the Mosaic Community Space: Located near the intersection of Gallows Road and Lee Hwy. Coming from Lee Hwy, head south on Eskridge Rd then turn left on Merrifield Town Center and continue past Brine and DGS Deli to a right on Merrifield Cinema Dr. The meeting room is located just around the corner. See the map of FABB Meeting Room Locations. If you have problems finding the room you can call 703-328-9619.
On Bike to Work Day 2015 Christopher Anello talked to FABB about his biking habits and views.
Christopher said he mostly rides from his home in Fairfax City to the Burke Centre Virginia Railway Express station as part of his daily commute. He travels by bike to avoid driving (and to avoid adding to the region’s hydrocarbon pollution) but also because he finds his rides a fun way to start and end his workday.
Asked what he most likes about riding, Christopher said he enjoys the exercise, is glad to help protect the environment, and appreciates that his bike commute is relatively fast in comparison to driving on the region’s clogged highways. He would like to see more bike lanes in Fairfax County to make riding less dangerous.
Yesterday Northern Virginia Regional Commission (NVRC) released an interactive map of pedestrian and bike crashes from 2012 through November 2015.
Clusters of bike crashes occurred along major roads such as Routes 1, 7, 50, 123 (especially in Vienna), 236, and 237. Several crashes occurred in the Merrifield area, including two pedestrian fatalities, highlighting the importance of providing good bike and ped facilities in a rapidly growing area around the Dunn Loring Metro station.
Only one bicycle fatality is shown, on Columbia Pike, where Elizabeth Shattuck was killed. Not shown is the location on Sunrise Valley Drive in Herndon where Andrew Gooden was killed on August 31, 2015.
The data were also used to create a heat map.
The Northern Virginia Regional Commission (NVRC) has released an online map indicating the location of bike and pedestrian injuries, fatalities and property damage throughout the region for calendar years 2012 through mid-November 2015. In total there were 3238 incidents that either led to an injury, property damage or fatality. Specifically, there were 3116 injuries, 95 fatalities and 27 instances of property damage. The data for the maps was provided by the Virginia Department of Motor Vehicles.
View the map with each incident plotted. View a heat map of the region plotting these incidents.
Today the Board of Supervisors approved $1.7 million for bike sharing in Reston and Tysons. We were very encouraged that Tysons was included in the most recent plan.
In Reston the county has been moving forward with bike station planning and preparing to purchase the stations and bikes.
FABB has promoted bike sharing in Tysons, and bike sharing was recommended in the Tysons bike plan (p. G-3), but we expected it would be implemented in a second, later phase. What made this happen was Tysons Partnership stepping forward to make a "financial contribution to the initial capital cost, as well as operating costs."
This is great news, especially now that there are over 2 miles of new bike lanes in Tysons. Thanks to Fairfax County for having the foresight to implement bike sharing, and to Tysons Partnership for their sponsorship.
See a FABB blog article about a presentation by Charlie Denney of Alta Planning about bike sharing in Tysons.
From the county news release Fairfax County Approves Bikeshare System for Reston, Tysons.
The Board’s action comes just one month after the approval of 31 pedestrian and bicycle projects in Herndon and Reston, and it also supports the county’s Strategic Plan to Facilitate Economic Success. The economic plan calls for creating dense, mixed use communities with many transportation options, and Capital Bikeshare helps to accomplish this goal. Not only do bikeshares attract the younger “creative class” that fuels an innovation economy, but they also produce tangible economic benefits. A 2014 Capital Bikeshare member survey found that its riders make more trips to restaurants and stores than they normally would without the bikeshare system. In addition, an academic study found that 23 percent of Capital Bikeshare riders spent more money because they used the system.
In 2014, the Metropolitan Washington Council of Governments (MWCOG) awarded Fairfax County a Transportation and Land Use Connection grant to study the feasibility of launching a bikeshare system. The results of the study showed that bikeshare could succeed as a viable transportation option in Reston.
In the summer of 2015, the Virginia Department of Transportation (VDOT), in partnership with Fairfax County, and with the support of the Tysons Partnership, implemented several miles of bike lanes in Tysons. This new infrastructure provoked the idea that a bikeshare system could succeed in Tysons, as well as in Reston. The Tysons Partnership approached FCDOT with a proposal to bring bikeshare to Tysons, and to make a financial contribution to the initial capital cost, as well as operating costs.
For more information on this project, contact FCDOT at 703-877-5600, TTY 711 or visit www.fairfaxcounty.gov/fcdot/bike.
http://www.capitalbikeshare.com/assets/pdf/cabi-2014surveyreport.pdf , p. v, Jan. 5, 2016.
http://ntl.bts.gov/lib/51000/51900/51965/VT-2013-06.pdf , p. 22, Jan. 6, 2016.
Rose talked to FABB about her love of riding at the Wiehle-Reston East Metro station on Bike to Work Day 2015.
Rose: I love the exercise and fresh air, plus it’s a fun activity with my husband.
Rose: C&O Canal Towpath, local trails, country roads.
Rose: More trails and encouraging other cyclists to give "passing on your left" alerts.
Rose noted that she and her husband often travel around the country and use bikes to explore local areas.
Thank you to everyone who donated to FABB this year! We are your local bicycle advocacy group and your donations will go to making Fairfax a better place to bike. A special thank you to Fionnuala Quinn and Skip Bean for their financial support for the Bicycle Master Plan celebration in January. We also want to thank our two corporate donors this year, IMS Health and Squij Kat. This year Squij Kat, a local company that makes bike-specific products, dedicated a percentage of their sales to be donated to FABB. When you Celebrate Cycling with Squij Kat you know that a portion of that money will go to FABB.
We have a long list of goals for 2016 that the new FABB Board is in the process of finalizing. Those goals include getting funding to implement the Bicycle Master Plan by working with the county to develop a list of project and program priorities, holding FABB meetings in more locations around the county to hear about your local cycling needs, continuing to monitor the I-66 project, working toward making Fairfax a Bicycle Friendly Community, building an alliance of Fairfax bicycle advocates, and more. Your support is critical to our success. Thank you!
The Fairfax County Planning Commission will vote on plans for the Reston Town Center Metrorail station on Wednesday, January 13. There will be 20 short term car parking spaces, 28 bike racks (56 bikes), and 6 bike lockers at the station. We think there should be more bike parking.
Compare the capacity of 62 bikes at the Town Center station to the Wiehle-Reston East station. On the north side, the Charlie Strunk bike room has capacity for over 200 bikes and the free, covered bike room can hold approximately 36 bikes. According to a bike count we conducted at the Wiehle station from July 31 to September 3, 2014, an average of 76 bikes/day were parked on the north side.
Bike racks at the Town Center station, which is surrounded by much more residential and commercial density than the Wiehle station, will likely be filled once the station opens. Some cyclists who currently ride to the Wiehle station will want to ride to the Town Center station, and since the station is over a quarter mile from the Town Center and from the major development planned north of there, biking to the station will be an appealing option.
Bike lanes on Sunset Hills Rd, which leads to the station, are included in the Bicycle Master Plan and depicted on the station plan. The bike racks are located near the road so access should be easy, but the details of that access need to be well-thought out, which was not the case at the Wiehle station.
Reston Town Center station design, SE 2015-HM-024, PRC 86-C-121-05, 2232-H15-10.
Given the success of the Wiehle-Reston East Metro station bike room and free, covered bike parking, I think the RTC station should have more and better bike parking. The Wiehle station has bike parking capacity on the north side of nearly 250 bikes. The Town Center station, surrounded by much more residential and commercial density, will have a capacity of 62 bikes which will likely be filled from day one. According to our bike count at the Wiehle station from July 31 to September 3, 2014, an average of 76 bikes were parked on the north side.
If no additional bike parking is provided WMATA should ensure that additional bike parking will be added where the racks are currently located (through a double-decked system), or that racks will be added in other locations.
Bike access to the Wiehle bike room was not well planned. WMATA needs to identify the precise path that bicyclists will follow to the station, provide clear wayfinding signage to bike parking, and reduce conflicts with motorists and pedestrians.
The county has funds to begin to implement bike sharing in Reston. It is only logical that a bike share station be located at the Town Center station. It's a long walk to the Town Center and parts north which will be redeveloped soon. There should be a location dedicated to a bike share station at the RTC station.
Today Congressman Earl Blumenauer (D-OR) and Congressman Vern Buchanan (R-FL) introduced legislation that makes it easier for federal transit and highway programs to fund bikeshare systems and related improvements. Over the last several years bikeshare systems have expanded throughout the United States. By clarifying that bikeshare projects are eligible for federal funds the Bikeshare Transit Act will make it easier for communities to create and improve bikeshare systems.
Making it clear that bikeshare projects and associated equipment are eligible for federal transit funds and FHWA Congestion Mitigation and Air Quality Improvement funds.
Fairfax County is considering paving a section of the Long Branch Trail between Woodland Way and Wakefield Chapel Road. An open forum to discuss the project will be held January 12 from 7:30-8:30 p.m. at the Kings Park Community Room located on the second floor of the Kings Park Shopping Center. Fairfax County Park Authority reps will discuss the project and field questions.
In October the Park Authority held a similar meeting on paving of the nearby Cross County Trail in Wakefield Park. Strong opposition was voiced by some community members and several mountain bikers.
For more info about the meeting see the Friends of Long Branch Stream Valley December newsletter.
On Saturday, January 9, Fairfax County's state legislators will hold a public hearing at the Fairfax County Government Center, 12000 Government Center Parkway, Fairfax, VA, starting at 9 a.m. This is an opportunity to support two bike bills that we've heard will be introduced this year.
Not passing another vehicle at a crosswalk: Del. Kaye Kory plans to submit a bill that will address the situation in which one motorist is stopped for a pedestrian or bicyclist on a multi-lane road. A vehicle approaching in the same direction must not pass that stopped vehicle. Del. Kory introduced a similar bill, HB 320, two years ago.
Many crashes occur when one motorist stops, a cyclist proceeds through the intersection, then a second motorist does not stop and hits the cyclist. In 2012 a cyclist was struck while in the crosswalk on Wiehle Ave at the W&OD Trail. There are four southbound lanes and often motorists in one, two, or three lanes will stop but not in the fourth lane, usually the inside turn lane that has a separate green arrow.
In 2013 a cyclist was severely injured when crossing Sunrise Valley Dr at the W&OD Trail when one motorist stopped and the second motorist continued.
We do not yet have bill numbers for these two proposed bills. Click on the link to the sponsor's name to see their sponsored bills, and consider attending the public meeting on Saturday and supporting the bills, or at a minimum, asking Del. Kory and Sen. Surovell about their bills.
HJ55 - House Joint Resolution 55 was introduced by Del. Lingamfelter (R-Fauquier & Prince William Counties). "Requesting the Department of State Police to study the laws and policies governing bicycling on state highways. Report."
Del. Lingamfelter voted against the Three Foot Passing bill and the Following Too Closely bill.
In conducting its study, the Department of State Police shall consider (i) the policies that govern safe bicycling on state highways and how bicycling on highways may be made safer; (ii) whether certain highways should be restricted to bicycling on the basis of hazardous physical characteristics, traffic flow, speed limits, and visibility of bicyclers by motorists; (iii) any safety-related standards bicyclists should observe that are not currently required; (iv) any additional measures motorists should observe that are not currently required; (v) whether additional education requirements for bicyclists and motorists are needed to enhance bicycling safety; and (vi) effective bicycling safety policies in other states.
SB195 - Senate Bill 195 was introduced by Sen. Kenneth Alexander (D-Chesapeake City & Norfolk City). "A BILL to amend and reenact § 46.2-1015 of the Code of Virginia, relating to rear lights on bicycles, electric personal assistive mobility devices, electric power-assisted bicycles, and mopeds."
The bill would require all bikes ridden on the highway to have a white front light and a red rear light. Currently rear lights are not required at night unless the speed limit of the road is 35 mph or greater. However, the word "highway" was added to the existing statute, which in effect would not require bicyclists to have lights at night when riding on a trail.
We will post updates on these and other bills as we hear about them as the legislative session progresses. Our main source for info during the session is usually the Virginia Bicycling Federation folks.
On August 31 of this year Andrew Gooden and a coworker were riding their bikes home from work as lifeguards in Herndon. Andrew was struck from behind and killed by an Uber driver. The incident occurred on Sunrise Valley Drive just north of Coppermine Road. See previous FABB blog posts about the crash.
The road at the crash location is relatively flat, with two lanes in each direction, and the speed limit is 40 mph. There is also a right turn lane where we think the two cyclists were riding. The crash occurred just after sunset. No charges were filed against the motorist.
We recently obtained a copy of the report of the crash, known as the DMV form FR300P. We are concerned about some of the information contained in the report. According to the report, Andrew was "overtaking on a hill." According to the other cyclist, Andrew was riding beside him and they were talking. There was no "hill" and Andrew was not "overtaking." Because Andrew was talking, the detective working the case determined that Andrew was "Distracted."
More importantly, we are concerned about statements in the crash description, about safety equipment that is not required by law, that imply that Andrew was responsible for the crash. Our comments are in [Bold] within brackets.
"Driver 2 (D1) was driving a 2014 Suburban northbound on Sunrise Valley Drive in the right lane. D1 did not see the two bicyclists until just prior to impact. D1 applied brakes, activate [sic] ABS and turned V1 to the left. He struck D2 but not the second bicycle. After impact the operator, Mr. Gooden, was thrown approximately 81 feet from the bicycle and struck the pavement and curb.
"V2 is a Huffy Descent Rally mountain bike. The bicycle is owned by his employer Community Pool Services."
Hunter Mill Road was recently repaved and it now has wider paved shoulders. During the repaving additional asphalt was added to the outer edges of the roadway and now there is an approximately 3-4' shoulder that appears to be rideable. We plan to check it out in the near future.
The new shoulder extends from Route 123 north to Lawyers Road. In some areas the shoulder becomes very narrow, mostly at intersections where a left turn lane is located. At those points cyclists would likely have to ride in the main travel lane.
FABB has been advocating for paved shoulders on Hunter Mill Road for several years. The road is one of the few north-south connecting roads in this part of the county. The last time the road was paved some additional asphalt was added, but not enough to be safe for cyclists. We're pleased with our first look at the shoulders on Hunter Mill Road and will be interested in hearing cyclists' reactions to the new paving. When combined with the wide paved shoulders on Lawyers Road from Twin Branches Road, it may now be possible to ride from Reston to Oakton mostly on paved shoulders.
Shoulder paving to accommodate bicycle and pedestrian activity shall have a goal of utilizing at least 2% of the district’s pavement budget (pavement asset budget) and shall be accomplished only upon roadway sections that have adequate unpaved shoulder width to accommodate the paving and that shall not require the adjustment of utilities. If less than 2% of the pavement budget is spent, the reasons for the inability of the district to meet the goal must be documented and approved by the District Administrator.
See FABB notes from the March 2013 Statewide Bicycle and Pedestrian Advisory Committee meeting where this topic was discussed. Also see info from the State Bicycle Policy Plan.
On Bike to Work Day 2015, FABB spoke to Don Curran of Burke and asked him about his biking habits. Don said he generally bikes to work and does it because he enjoys it. He especially likes that biking gives him time to think.
As a committed bike commuter who rides in all seasons, Don suggested that keeping the bike paths clear of snow, ice, and debris during the winter would make riding easier and safer.
Cyclists can notify VDOT of issues like these by submitting work requests on the myVDOT page or by calling 1-800-367-7623 (FOR-ROAD). VDOT is good about responding to requests for repairs, especially if there is a safety hazard involved.
The goal was to get home to Julie.
Limping slightly and unable to turn his head, Cutter was the first soldier from his transport to get through the gate in Boston. It had been a long, painful ten years, but if he was lucky, he might be home in an hour. He understood that he might not be lucky. He understood that things had changed around here. He tried turning his head against his neck injury. Both his neck and leg were repaired, but he needed at least another day or two to break them in. Just a little more mobility, a slightly wider field of vision.
Everyone should have been scanned automatically leaving the lot. Cutter’s rental merged with the traffic right before the gate, and the MP waved most everyone through.
When it was Cutter’s turn to stop, the barrier went across and the MP stepped out. He was an older man, who looked like he should be retired. He had a huge railgun slung over his shoulder. He held a scanner and scanned the code on Cutter’s neck.
Cutter had to twist his whole torso to look. Even then, pain bloomed up his entire left side.
“I don’t know where everyone else is going, but I’m going home,” Cutter said.
“Not everyone gets to keep that kind of gear,” the old man said.
It wasn’t a question, so Cutter didn’t feel the need to answer. The old man consulted his computer.
Cutter glanced in the mirror, wondering why no one was honking. Everyone just sat there, like sheep.
“Why don’t you say the magic word, Captain Cutter?” said the old man.
“Please,” Cutter said. He meant it, too.
The gate went up. Cutter’s car took him out of the lot, took him through all the correct exits to head north. The traffic thinned quickly.
Cutter didn’t need the MP telling him. The stories had spread long before the war ended, stories short on details but all agreeing that it was more dangerous some places on Earth than it had been in the war.
He took the time to dig the sidearm out of his pack while the car drove itself. He slid in a fresh fuel cell and snapped in a full clip of darts. Then he secured the heavy sidearm in its long pocket on the flight suit.
He had seen the worst of the war. He had learned to assume the worst.
He had heard from Julie. It was only a text message but it was only a year old. “I’m alive,” her message said. “You still have a home where you left it,” it said. It proved she was alive six years after the backlash. So it didn’t matter. He had to try and get there even if the worst was true.
The first sign that things were different appeared quickly. Traffic thinned to nothing. The car was making excellent time, traveling well over a hundred miles an hour. But grass started appearing on the highway, growing out of the pavement. At first it was just a few strips here and there in seams of the breakdown lane, then more and more of it in the regular traveling lanes.
Before many miles the only clear pavement was in the ruts made by the occasional traffic. Cutter tried to take some comfort. The car had fully functional autopilot. Instead, he found himself taking hold of the controls. He had not survived this long by taking comfort. And he was all alone on this road, now.
He began slowing down just in time. Lucky. He was on a long straight section. The grass made it difficult for the tires to get a purchase, and the anti-lock mechanism vibrated softly. An alarm sounded, warning him that navigation had been lost. Skid marks proved that other drivers had not seen this coming. He passed a wreck that looked at least a month old. Nothing was recent; there was no sign of any living human.
He passed a barricaded exit that he knew should have led to another Interstate. He came over the crest of a hill and approached the bridge that would have killed him.
The right and middle lanes were stripped down to the rusty girders with no markers, no barriers, no beacons, or anything. Another wreck, pink plastic faded and smeared, was wedged between two of the girders, upside down. It could just as easily have gone right through into the river. Cutter assumed some others had.
He stopped to look things over. It looked like abandoned repair work, not structural decay. It looked safe. There was a corpse hanging from the window of the wreck. Another soldier, in uniform. To Cutter’s practiced eye it looked about six months old — very recent compared to the backlash. Seven years since the backlash.
He decided to try the bridge, but not on this side. No holes in the oncoming lanes. There was a guardrail. He pulled his gun and severed it with one explosive dart. He severed it in a second spot two posts down. Then he took out the posts. Before doing anything else, he replaced his spent ammunition, not only putting in a fresh clip but also reloading the partially-used one. He dragged the section of guardrail out of the way, eyes sharp.
As soon as his hands were free, his gun was back out.
Except for small craters where the posts had been, it was flat on the median and the car crossed easily. Cutter kept his gun in hand. He went full-out across the bridge, hoping the speed would help him across weak spots.
The crossing was uneventful. There was still no sign of human life. Several deer grazed in the breakdown lane at the top of the next hill. When he saw that the deer were feeding on one of their own, he kept going. He stopped and took out another section of guardrail. Again carefully reloading. Eyes sharp.
He crossed back into the correct lanes. He crossed out of Massachusetts.
Fifteen miles later he came to the town. His first warning was a signal from the autopilot. It was functional again.
Julie might have come here for safety. He would have traded almost anything to know that she was safe.
He tried to imagine what she might look like after ten years and then had to force himself to concentrate. During the war he had spent so much time looking at her holo that it had lost all meaning. It had nothing to do with what they had done together. The marines allowed exactly one holo, no sound, no animation. But all of this was beside the point. He had to concentrate, or he would never see her again.
The highway was still nothing more than cracked pavement with grass, but the houses looked better. The trend was clear. A moment later, he saw a house with smoke coming from a chimney. Finally, he saw the exit, which was barricaded, but only from the other side.
There had always been an exit here. It was ten years since he had been up this highway, but he remembered. Things were very different now, as they were everywhere else, but there were a couple of landmarks. There was a small town library on a hill and a familiar church steeple. Even the smells were familiar. He was very close to home.
He took the exit and found himself in a temporary-looking village across from a strip mall. There was a main road running past the mall that looked repaired, and there were side streets. He could see people. There was a huge black fence to the north, maybe a mile away, maybe two hundred feet high. It ran east to west as far as he could see. It had to cut across the highway.
He asked the computer if Julie lived here, in this village.
The computer had the town on its records, but it had no record of her. He still needed some information about that fence, so he pulled into the strip mall.
Cars were sparse in the parking lot, and almost all of them looked old. There were several that looked to be internal combustion, and there was one horse-drawn wagon. There was also one modern delivery truck, however, and the few people in sight appeared normal. Cutter had been told to expect abnormalities.
“You came from the Interstate,” said the waitress.
Just like with the man at the airport, this was not a question. Cutter sat himself gently at a booth, sticking his healing leg out into the aisle. He had a clear view of his car from here.
She gestured at his suit, then his sidearm.
“Yes, but now it’s just us doing stuff to ourselves,” Cutter said.
“No aliens to give us an excuse,” the waitress said.
She handed him a menu and went off to help someone else.
This was his first non-military meal in ten years. He surveyed the offerings in some confusion. Each meal was described in so much detail. Before he could decide, a woman approached and pointed to the code on his neck.
Only someone in Intelligence could scan a code without a computer. Just looking and reading it required neuro-enhancement. Her bearing was military, and she wore a graduation ring from West Point.
Cutter found himself wishing people would stop telling him what he was doing.
“I guess I’m following a standard pattern,” he said.
She sat down, and Cutter settled himself back at his place. His neck and leg were a little looser than they had been at the airport, but still not right. He looked at the woman nervously. She said it didn’t matter who she was. That meant she was still on active duty.
“Have I done something wrong?” he asked.
“Why is that?” Cutter asked.
She leaned in at him and spoke quietly, as if this was for his ears only.
“I heard the laws of physics are changed,” Cutter said.
“Always get the special,” she said.
Then she turned and went out of the restaurant.
At first Cutter thought she was shorter than Julie, but he honestly couldn’t tell. He couldn’t remember, and couldn’t afford to try. The woman’s smell lingered.
Cutter ordered the special. He secretly feared that it might be venison, but it was nothing more than corn chowder and a couple of slices of an aromatic, dark rye bread. It brought back memories, eating something that was so much like home cooking. So much texture and taste.
He realized that he had not eaten since coming down from orbit this morning. It was fortunate that he had stopped at this place. He was a little drowsy. He had a cup of tea, and felt the drowsiness leave him.
After eating, he went back to his car. Maybe he was expected to poke around looking for a hole in the big wall, some place where a stream went under, or where he might be able to find some secret tunnel. Instead, he got right back onto the Interstate and went north, as far as he could. The wall stopped him.
It was black reinforced concrete set into a trough that cut through the land as far as he could see. Maybe the people who built it had tried to make it straight, but it bulged and listed. It looked like the builders had struggled to maintain sanity, or that the backlash had pushed at them harder in some spots than in others. The gap between the edge of the trough and the wall was wide and deep, but Cutter was sure he could jump it in his suit, even with his bad leg. He was a little too close with his car for what he had planned, however, so he backed up. When he finally got out, he carried his pack and the one other item he’d brought down from space.
This was one of the alien swords. They were rare, very rare. Cutter had taken this one while storming one of the alien ships. He had been on the boarding party each of the three times an alien ship had survived. The people in Intelligence, glutted with so many other things to study, had ignored the swords, which appeared ornamental, low-tech and therefore of little interest. However, they were made from some unknown composite that was both very sharp and very tough. Cutter had tried this one on many materials. As far as he could tell, it could cut through anything that wasn’t reinforced by ionic armor.
Alien physiology was very different, but Cutter figured out a way to strap the sheath onto his back. It required tying some of the straps instead of using the strange fasteners, but it was sturdy.
Next, he put on his helmet, and checked all the readings for his suit and his sidearm. His computer was still logged onto the satellite. Everything registered as fully functional. He activated the armor.
Finally, he braced himself against his car, put the sidearm on full muzzle velocity and began burying explosive darts into the wall. The noise was tremendous and the wall trembled and threatened to topple. Cutter hadn’t used the weapon on full-auto since basic training and was surprised at how precisely he could still fire the darts. The weapon waited a moment between rounds so that the explosion of one dart didn’t interfere with the penetration of the next. He found it gave him time to collect his aim. The vibration was like a sounding board, making thunder.
Each clip held seven darts. He used up ten clips. Then he waited for the wall to stop vibrating.
The hole was anything but clean. It was jagged, with lengths of severed re-bar sticking inward, huge spikes holding fast onto large chunks of debris. Some of the re-bar still wasn’t severed at all, but blocked the path like strands of iron spiderweb. However, daylight shone all the way through. He consolidated all the loose darts and partially loaded clips. Exactly eighteen shots. He would be without ammo very quickly, if he found himself in a fight.
The car sat a hundred yards from the wall; he got in and backed it up another hundred yards. Then he got out, made a final check of his gear, and not-too-painfully jogged toward the hole. The jump from the edge of the trough to the hole was easy with the help of the armor. He drew his sword and sliced his way through, lopping off re-bar that blocked the way, and cutting chunks of concrete until they fell to the floor in pieces small enough to move.
In ten minutes, he stood at the other end, his neck and leg feeling much looser from the suit-assisted exercise. The highway continued north, disappearing around a curve about a quarter of a mile ahead. Guardrails were more or less intact, and there was even a faded green sign announcing the next exit. That was the last similarity to the familiar world.
In the far distance, a huge, snow-capped crater loomed over a riot of deep green vegetation. Not a standard feature in New England. The crater, possibly ten miles wide at the top, smoked, and the vegetation everywhere steamed. Cutter had seen this crater from orbit, and had watched it cool from bright orange during the months right after the backlash.
In the middle distance, on the broken foothills surrounding the crater, the greenery was blotched with black and gray swaths of bare or scorched earth. There were streaks of other colors through the vegetation, bright oranges, reds and yellows, like fall foliage, but also pinks and blues. He couldn’t tell if the colors were from flowers or leaves, or both, but it was spring, so he guessed flowers.
He scanned for danger. He assumed that his thunder had scared away most of the animals. His final shots must have been spectacular from this side. His last shot had gone clear through the hole, and made a crater in the bedrock embankment of the highway where it had penetrated. This, he hoped, would intimidate the more intelligent enemies.
Still, he kept sharp. From the heads-up display on his helmet, he activated the rental car, and had it accelerate toward him at full speed. He kept watch over the land for danger. There was one final, thunderous crash. Pieces of debris flew over him. He glanced behind and saw the crushed vehicle wedged tight in the hole, locked there by the rebar spikes. There was a lot of dust and a few falling pieces of concrete, but no daylight. Nothing would get through for a while. Nothing to do now but go forward.
He started right out jogging, and he followed the highway. He understood that things had changed, that the roads might be watched by anyone or anything.
He knew that the road at the next exit came around the small rise to his left. He would save distance just by cutting over that rise, especially if the apple orchard still grew on the other side. But he didn’t know what grew there. He would do better acting like a complete stranger. If attacked, he might be able to use the occasional islands of familiarity in surprising ways. On the other hand, if he cut through the country, his advantage would be more than lost.
So he jogged on, toward that first exit.
With the help of the suit, he could run comfortably for several hours. However, the sun was getting low, giving an advantage to the shadows. His need for his instruments would only get worse — at least as long as they worked. He would need all the help he could get if what he saw was correct.
He reached the exit, and after faking some indecision, headed west on the town road. He watched, looking for anything that might tell him whether this was a real threat. His computer was programmed for interspecies combat. It was very clear about human and non-human forms. At the moment, it showed both human and non-human readings. There were close to a hundred of them, surrounding him completely. Interspecies cooperation.
As yet, they kept to the woods, moving quietly. When Cutter turned west, the readings followed. They showed definite pursuit and ambush. Yes, these were definitely real; the instruments were not lying.
He had spent ten years in combat, but had never killed another human. He did not intend to do so. Furthermore, he had neither the weapons nor the advantage against this foe. His only choice was to use his greatest strength. He would try to run.
Running at high speed would deplete his fuel cell in an hour. It would take a significant toll on his body in half that time. The suit could run a mile in less than three minutes. Half an hour would take him over ten miles.
He accelerated gradually, not wanting to force them into the open. He watched, trying to learn as much about them as possible, and what he saw was bad news.
As his speed increased, the non-human signals accelerated to match, and all signals in front of him, human or otherwise, closed in toward the road. Cutter abandoned subtlety and started running flat-out. At that moment, five huge dog-wolves broke into the open and moved to cut him off.
His weapon was still out, but he hesitated. He didn’t know for certain that he was facing a hostile action.
Just then, a man with a shotgun came from behind a ruined house, took aim and fired. Cutter felt his helmet whip to the side from the impact, right against his sore neck. He felt himself falling. He felt another concussion on his chest, looked, and saw at least three men firing at him. He felt close to passing out. He forced himself to roll up and start running again.
He had never really stopped moving. His suit was still fully functional and as he fought off the faintness, he felt the familiar sickness of combat. He had to hope they had no access to armor-piercing ordnance.
Enough were ahead that they could still grab him and bring him down. There were more and more people and dogs pouring out of the woods. Cutter had survived war because he was a very good shot. He opened fire, placing darts into each of the five wolf dogs already blocking his path.
There were small craters in the pavement, and pieces of dog flesh rained everywhere. But one of the men was also down, gripping his stomach. Cutter had not aimed anything at the man. The others were headed back for the woods. Cutter’s sprinting suit blasted through the smoke and falling flesh and he was finally clear, headed west. His terminal showed that he was still being chased.
He ran like this for several miles, then slowed the pace just a little. He would not be able to keep this speed indefinitely, but if he stopped, his injuries, both old and new, would tighten up. If he stopped, any further progress would be impossible.
The sun went behind a distant hill. This made the light more even, and made it a little easier to see into the shadows. The readings showed his enemy losing ground, but that’s when the howling started. There were answering howls from somewhere ahead. He could not tell if the howls came from canine or human throats.
He had his terminal zoom out. The signals of his pursuers became one big signal with the single pixel of him just in front of it. Off to the northwest another signal showed another bunch, already moving quickly to cut him off. The computer ran distance and time. He would beat both groups with room to spare, but he would have to maintain this pace.
He needed to get far enough ahead so that he wasn’t followed home.
As the running settled into routine, aches and pains became more difficult to ignore. His neck spasmed from time to time, and pain shot around his bruised chest with every step, even with the cushioning of the suit. At least his repaired leg held, and at least he thought his ribs weren’t broken.
Finally, he passed the last possible point where he could be cut off. It was an open area, dead. His readouts were momentarily scrambled. He didn’t need the terminal, however. The people and dogs were visible as they surged over a knoll to the north, and one of them shot a rifle in his direction, though the shot was out of range.
Right after that, the terminal cleared up.
Getting past this last threat was an emotional boost. He watched both groups fade behind him. He took comfort from this, and it was a mistake.
After several minutes, the open area gave way to more woods, dark and gloomy in the diminishing light. When these woods proved small, and another open area appeared ahead, he welcomed it. As he got nearer, he took note of the total lack of vegetation, but failed to take account of the meaning. He was fifty yards onto the scorched earth before his instruments scrambled, another hundred yards before they went out completely, and another quarter of a mile before his entire suit seized.
He found himself face down in the dirt, barely able to move, when the suit’s life-support failed along with the armor.
For a moment, he lay there panting while the window of his helmet fogged over. Moving was a struggle, but he had to get out of the suit fast. It was a space suit. Built for vacuum. Built to stay closed.
And he had at most five or ten minutes before his pursuers caught him.
It took him a full minute to get his arm up to the latch on his helmet, and he was only mostly conscious when he got it off. Then came the problem of unfastening the suit, which was tangled in the straps of his pack and the alien sword. He wasted time fumbling with knots and fasteners. He had better luck with the sword itself, got it unsheathed and cut the suit away.
It was easier after that. Cutter unfastened the last electrodes and catheters, and staggered to his feet. He didn’t know where his enemies were, and he was completely naked.
A few steps convinced him that he was too weak to carry the suit, the pack and the sword together. He left the suit — first, taking care to remove the sidearm, the extra clips and the suit’s fuel cell. The fuel cell was designed to work in the sidearm, as well. He used the sword to hack the inactive suit apart. He split the helmet. There was no telling what this enemy could do with a weapon like a modern battle suit. A few steps convinced him that he could not drag the pack with the straps sliced. He was forced to waste another minute or two improvising some knots on the straps, so that he could carry everything on his shoulders. It was not comfortable by any stretch, but at least he got moving again.
He had barely reached another wooded area when he heard the faint sound of howls. Winded and limping, he stopped to consider. There was no way he could outrun them, but he was in familiar territory. He knew there was, or used to be, a creek that ran parallel with the road. It circled behind a small graveyard just ahead. The graveyard was slightly back from the road. It was getting dark. He needed clothes and he needed shelter.
Cutter went flat on the ground and had his sidearm out of the pack before he was even aware. He fanned it back and forth, ready to fire into the shadows. It took a few moments for the logic to kick in. He would be dead already if that was the plan. He put the sidearm back.
A figure appeared in the shadows under a tree, a figure covered everywhere in rags, covered so well that it was impossible to identify the species. The rags blended with the leaves. They didn’t use suits like that in space, but he knew it was called a Ghillie suit. The figure sounded female, human, older. She carried a deer rifle slung over her shoulder.
The naked Cutter followed into the overgrown woods, too aware of the danger to question any of this. They went toward the stream, then went upstream toward the graveyard. The figure walked with a limping shamble, hunched over and swaying, but there was strength in that limp. When they got to the low fence surrounding the graveyard, the figure vaulted it without difficulty and without hesitation. Cutter, exhausted and wounded, took longer.
“Almost there,” the figure said.
They passed burned remains of the caretaker’s house, a jumble of charred beams inside a stone foundation hole. Cutter remembered the house as small, a white Cape Cod with shutters and a neat lawn. They went around a low hill and arrived at a tomb. He remembered the tomb as a place kids sneaked into.
“Are you Mrs. Bixby?” he said.
His high school chemistry teacher: The figure nodded recognition, nothing more, produced a key and opened the tomb. Her husband had taken care of the graveyard.
Cutter held his pack over his crotch, suddenly feeling modest now that they weren’t in full flight.
She pulled off her rag hood. A scar went diagonally across her face. Her gray hair was matted and very thin. She’d retired the same year the war started, already old, but now still alive. Her eyes had hardened and her cheeks had caved in. Cutter had seen that same face on many a combat veteran.
“Maybe you noticed things are different here,” she said.
She pointed to her matted hair.
Cutter thought of the cannibal deer. Mrs. Bixby must have seen something in his reaction.
Mrs. Bixby cocked an ear, sniffed like a wolf.
She held the door only long enough for him to slip in. She closed it gently, quickly but quietly. The darkness went total, except for a faint glimmer through the keyhole. She stayed near the door while Cutter felt in his pack for his lamp, flicked it on.
He killed the lamp but kept it in hand.
“Can we talk?” he said.
Cutter smelled only death in here.
“There still bodies in here?” he said.
Rotting meat and dead bodies had been his company too often, and for too long. He had hoped those were behind him, along with the pain of injury and the fog of exhaustion.
“I need rest,” he said.
There was a rustling from her rags, the sound of a zipper, then a tiny light shining at a coffin.
The lid was closed. The coffin itself rested on the second level of a rack. He went there, opened it, tossed his lamp inside. Then he struggled up and lowered his nakedness inside, leaving his pack on the floor. He did this in the dark. She’d killed her light long ago.
“It’s no good if you keep it open. Close it,” she whispered, almost next to him.
He closed it, then gripped hard on his lamp. He did not turn on the lamp.
He felt something like comfort for the first time since he’d left the car. He felt a gentle current of ventilation but also felt warmth from the silk lining. But there was something else. The faint odor of chemicals gone half-rotten. There had been an embalmed body in here. He found he cared about that, but not enough to keep him awake.
He regained consciousness to the sound of thumping. It lasted only long enough for him to be certain he didn’t dream it.
Perhaps he’d slept only a moment. Absolute darkness still held, but now he felt a strange weight bearing down on his body. The weight moved a little. A tendril went over his mouth and by his ear. It had scales. In the war he had survived by knowing when to move and when to be still. A scaly tendril meant he needed to be still.
The lamp: he fought himself, fought his temptation to use the lamp.
Maybe he managed to doze awhile, maybe he fainted, but a tendril slithered along the inside of one of his legs a little too close to his crotch. He had no way to be certain that it was merely a snake. There would be no sleep now, if only because he might startle it by moving.
It was the beginning of summer, and the nights were short, but this one did not seem so. He managed to keep still until the light of dawn began to show through the air holes in the back of the coffin. As the light got brighter, he was able to move his eyes enough to get a look at the tendril right in front of his face. He saw a rattle on the end, and stopped moving even his eyes. Most of his body was numb from immobility. He didn’t think he could move even if he wanted to. A ray of sun hit the stone wall outside the ventilation holes, and soon the howls returned.
Bright sun lit the other side of the holes by the time his enemies came to the graveyard. Numb and stiff, Cutter simply waited to die. He heard the dogs sniffing, the people talking, and finally saw shadow darken the wall beyond the ventilation holes.
“In here,” a voice said.
“You see him?” said a second.
Cutter was afraid his heartbeat would scare the snake thing on him. He prayed it wouldn’t rattle. There were sounds of movement out there in the tomb, but they came no closer.
The shadow moved over enough to let the sun hit the wall. The brighter light filtered under the tendril on Cutter’s face.
The other voice faded backward.
As the voices faded, Cutter couldn’t stop himself from breathing deep, and when he took a breath, the snake started to move. He was not bitten. Slowly, as slowly as someone tickling a trout, he went to move his arm, to brush the snake away very, very slowly. He had little hope, but he couldn’t stay like this any longer. Another movement, somewhere near his leg. He hadn’t done anything to disturb it. Then, slowly, it started to move off him. Even after the part on his face had moved, he was afraid to look. Through the crack of his partially-closed eyes he saw nothing except the warm light streaming in through the air hole. The snake kept slithering off, going somewhere else. Finally, after a long time of feeling nothing, Cutter pushed open the coffin.
It took him another long time to get the feeling back in all his extremities, took him a long time just to get where he could sit up and look around. He saw a single snake going into a gap in the wall, a snake with three tails, all with withered rattles. It looked more pathetic than dangerous.
His neck and chest were bad. His chest was purple and blue and he could not hold his head straight. The leg was better, almost normal. There were no snakes left. There was no sign that there had ever been snakes.
His pack was between his feet. Maybe Mrs. Bixby had made the thumping sound when she put it in the coffin. The sidearm was gone, but there was a bottle of water, two sticks of dried meat and his sword. Both power packs were gone, not that he had a use for them now anyway.
He arranged the straps of his pack to work like suspenders and sliced two leg holes into the pack’s bottom. He tried to bend. After passing out for a moment he got himself bent enough to slide the improvised shorts over his legs. Finally covered, he rolled himself to the rim and used it to get to his knees. He felt looser now. He held onto a section of the rack overhead and eased himself down.
Before he went out, he drank some water and ate one of the pieces of meat. He kept it down.
Sword in hand, he listened at the door and then stepped out.
From this vantage, it looked like the old Earth on a perfect summer morning. The songs of the birds sounded normal, and the trees and grass held the green of his pre-war memory. He watched and listened first. He went out of the graveyard the same way he had come in. He planned to cut through the woods to get to Julie.
On the other side of the creek he started to feel the pain in his feet, which were already cut and bruised. The feeling was coming back into his body all over. It wasn’t a good feeling. Only his head still felt foggy, and only now did he remember that he should have tape in a side pocket of his improvised shorts.
He felt the pocket. Mrs. Bixby had left him the tape.
Stopping under the roots of a fallen tree, he taped his feet and taped his chest. It was duct tape, but he’d worry about getting it off only if he lived that long. At least his neck felt a little more flexible now.
He went on, wading up the stream.
He had grown up in these woods. The backlash had changed the land, and he had re-entered an area where the trees were too green, at least if they were green in the first place. Some were other colors. Too much was mutated and bent but the hills were still where they were supposed to be. The ruins of houses, the roads and the streams were placed correctly. At least here.
He had around five miles left to go. In less than a mile, he heard the sounds of pursuit.
He began to jog and his feet were merciful enough to go numb. However, deeper pains began to go up his legs and outward from his chest. For a little while it sharpened his mind, but he was experienced enough to know this wouldn’t last. He couldn’t imagine making the five miles at this pace.
He found himself following a ridge, another sign that he wasn’t thinking, and this is when the pursuers caught sight of him. As they swarmed over a rise they let out howls of elation.
He went faster, fighting to stay conscious. He came to a road. A familiar road where nature went back to normal bird songs and normal trees — normal species like pine and maple. Most of the houses looked abandoned, but there were still curtains in one. He had passed by here many times when driving by with Julie, had known these people by sight, if not by name. He had not thought about Julie yet today. He could not afford to think about anything except moving and hiding.
The sounds of pursuit resumed, closer than they should have been. Had some of his enemies been waiting ahead? They were all around him, but they weren’t close enough. Perhaps staying in the stream had confused them.
He got across the road by crawling through a culvert. He pushed his sword ahead of him through a trickle of brown water, which pooled in the improvised shorts. On the other side, when he heard someone nearby, he snuck into an old barn. The crawling had torn open the tape he had wound around his chest. It drew blood from the crust over his bruise. Worse, a metal snag in the culvert had broken one of the straps. He got out the tape and taped the strap to his shoulder.
He heard the pursuit fading. He left the barn. After a time he left the stream, made another mile before he hit the first wall. It was only a chain-link fence, patched, and topped with razor wire, but it was a sign of life.
He looked behind him, and saw pursuers come over a stone wall. They had no firearms, no dogs, but they looked happy to kill. They must have been stalking him the whole time, waiting for this moment. He could see them clearly enough to pick out their leader, who grinned.
Cutter had no time to look for a way through. He pulled the sword from its scabbard, that tissue-thin blade as stiff and heavy as a big hammer. He cut through the fence — so easily — and it made his enemies pause in wonder. He saw the lust for it in the eyes of several, then saw them look at each other. This bunch would fight over that sword. He planted it in front of the gap. He could think of no way to destroy that weapon.
It felt like surrender. He could not fight now. With no weapon he was no longer a warrior, just ready to die.
He ducked through the fence and ran. He was careful, still, more careful than ever. Such a weak fence must be patrolled. For all he knew, he was coming right into the territory of his enemies.
He went straight toward his home. It seemed like mere moments before he heard the enemy find his sword. He was barely out of sight when sounds of their fighting over it began. He ran, pushing himself, running harder than he should, maybe passing out, but never falling down.
He made the last mile, spent, breathless.
He came to where he expected his home to be. Where he expected his house and his yard. He found an entire village. His house was there, all right, but there were lots of others, all over what should have been his fields. There were people there, walking from house to house, doing business. They looked a lot like his pursuers — ragged. At least they had clothes, something more than a pack with leg holes. And nature here looked perfectly normal.
There was another fence, this with armed guards in plain sight. Lots of guards. There was a wide swath along the perimeter, maybe a hundred meters of bare earth packed perfectly smooth.
For too long, he crawled at the edge of this swath, searching for the gate, but he could hear his pursuers right behind him, then in front. He came to a gully. He could go down into it, and maybe be trapped. He could follow it deeper into the woods, and leave the area of his home. Or he could step into the open, and take bullets from the surprised guards.
He rested, caught a little breath.
“Standing still is the wrong choice,” said the voice of Mrs. Bixby.
He turned toward it and glimpsed the familiar rag-suited figure brandishing his sidearm. The figure faded back into the underbrush.
“Go! Now!” she said, as if he didn’t know.
He stepped into the open, slowly. He saw a good dozen rifles snap into aim straight at his face. He realized he must be a suspicious sight, clothed in tape and a backpack, bruised all over, head shaved for his helmet, holding up his hands like a lunatic.
The people in the streets turned. The howls of his pursuers grew to a frenzy as they burst into the open, going for him. He heard the report of his sidearm, firing on full auto. There were screams, and pieces of flesh rained over him. The guns from the village opened up, but not at him. Cries of pain and panic and retreat.
People ran for the houses, and in moments the street was cleared, except for the gunners aiming at his head. The tape gave way so that the pack covering him fell askew, leaving him feeling exposed in too many ways. He felt like less than a man, even less than the human animals behind him. From the distance, his sidearm fired again, but the screams of pain were also farther away.
It was only a matter of time, one way or another. He waited, shivering, exposed, hands in the air. Finally an older woman, much older than he remembered, but still the most beautiful woman on Earth, burst out of a door and came toward him, running.
This entry was posted on Thursday, September 17th, 2009 at 3:00 pm and is filed under Featured Story, Science Fiction, September 2009.
Very suspenseful – with so much to infer about this future world as the story rockets along, every line fascinates. While the story satisfies it could also serve as the first chapter of an amazing novel…!
An apocalyptic tour de force. It is relentless and large in scope for a short story.
BY JAMES T. McCLEARY, M.C.
TO THE MEMBERS OF MY CLASSES IN CIVICS, WHOSE QUESTIONS HAVE AIDED ME IN DETERMINING WHAT SUBJECTS TO TREAT, AND WHOSE EARNESTNESS AND INTELLIGENCE HAVE MADE IT A PLEASURE TO BE THEIR TEACHER, THIS BOOK IS AFFECTIONATELY INSCRIBED.
The thought constantly in mind in the preparation of this book has been to furnish useful material in usable form.
Attention is invited to the scope of the work. The Constitution of the United States, not a mere abstract of it but a careful study of the text, is properly given much space but is not allowed a monopoly of it. Each of our governmental institutions deserves and receives a share of consideration. The order of presentation—beginning with the town, where the student can observe the operations of government, and proceeding gradually to the consideration of government in general—is based upon conclusions reached during eighteen years of experience in teaching this subject.
Matter to be used chiefly for reference is placed in the appendix. Attention is asked to the amount of information which, by means of tabulations and other modes of condensation, is therein contained. Documents easily obtainable, such as the Declaration of Independence, are omitted to make room for typical and other interesting documents not usually accessible.
Is this book intended to be an office-holders' manual? No; but it is intended to help students to get an insight into the way in which public business is carried on.
Is it designed as an elementary treatise on law? No; but the hope is indulged that the young people who study it will catch something of the spirit of law, which to know is to respect.
PART I.—GOVERNMENT WITHIN THE STATE.
Highly competent teachers are the very ones who receive most kindly suggestions meant to be helpful. For such these words are intended.
The local organizations are so related that it is advisable for all classes to consider each of them. Especial attention should, however, be given to the organization (town, village or city) in which the school is. Here considerable time can be profitably spent, and the matter in the book may be much amplified. Here must be laid the basis of future study.
Certain typical instruments deserve careful study. For a student to have made out understandingly an official bond, for instance, is for him to have gained greatly in intelligence.
It will be of great advantage to the class for the teacher to have a complete set of the papers whose forms are given in Appendix A. These may be obtained at almost any newspaper office, at a cost of about 50 cents.
A scrap-book or series of envelopes in which to file newspaper clippings illustrative of the every-day workings of government, may be made very useful. Pupils should be permitted and encouraged to contribute.
One good way to review is for the teacher to give out, say once in two weeks, a set of twenty-five or more questions, each of which may be answered in a few words; have the pupils write their answers; and the correct answers being given by teacher or pupils, each may mark his own paper. Each pupil may thus discover where he is strong and where weak.
The questions given for debate may be discussed by the literary society. Or for morning exercises, one student may on a certain day present one side of the argument, and on the following day the negative may be brought out by another student.
A student should not be required to submit his good name to the chances of answering a certain set of questions, however excellent, at the examination, when from anxiety or other causes he may fall far short of doing himself justice. One good plan is to allow each student to make up 50 percent of his record during the progress of the work, by bringing in, say, five carefully prepared papers. One of these may be a resume of matter pertaining to his local organization; another may be an account of a trial observed, or other governmental work which the student may have seen performed; a third may be a synopsis of the president's message; the fourth, a general tabulation of the constitution; the fifth, a review of some book on government, or a paper on a subject of the student's own choice.
Among reference books, every school should have at least the Revised Statutes of the state and of the United States, the Legislative Manual of the state, a good political almanac for the current year, the Congressional Directory, and Alton's Among the Lawmakers.
A Teachers' Manual, giving answers to the pertinent questions contained herein, and many useful hints as to the details of teaching Civics, is published in connection with this book.
You will notice in chapter one that at the close of nearly every paragraph questions are thrown in. They are inserted to help you cultivate in yourself the very valuable habit of rigid self-examination. We are all liable to assume too soon that we have the thought. Not to mar the look of the page, the questions are thenceforward placed only at the close of the chapters.
You will soon discover that these questions are so framed as to require you to read not only on the lines and between them, but also right down into them. Even then you will not be able to answer all of the questions. The information may not be in the book at all. You may have to look around a long time for the answer.
If you occasionally come to a question which you can neither answer nor dismiss from your mind, be thankful for the question and that you are bright enough to be affected in this way. You have doubtless discovered that some of your best intellectual work, your most fruitful study, has been done on just such questions.
After studying a provision of the constitution of the United States, you should be able to answer these four questions: 1. What does it say? 2. What does it mean? 3. Why was the provision inserted? 4. How is it carried into practical effect? Some of the provisions should be so thoroughly committed to memory that at any time they may be accurately quoted. The ability to quote exactly is an accomplishment well worth acquiring.
After you have got through with a line of investigation it is a good thing to make a synopsis of the conclusions reached. Hints are given at appropriate places as to how this may be done. But the doing of it is left to you, that you may have the pleasure and profit resulting therefrom.
Finally, without fretting yourself unnecessarily, be possessed of a "noble dissatisfaction" with vague half-knowledge. Try to see clearly. Government is so much a matter of common sense, that you can assuredly understand much of it if you determine so to do.
GOVERNMENT: WHAT IT IS AND WHY IT IS.
At the very beginning of our study, two questions naturally present themselves: First. What is government? Second. Why do we have such a thing?
These questions are much easier to ask than to answer. The wisest men of the ages have pondered upon them, and their answers have varied widely. Yet we need not despair. Even boys and girls can work out moderately good answers, if they will approach the questions seriously and with a determination to get as near the root of the matter as possible.
Beginning without attempting an exact definition of government, because we all have a notion of what it is, we notice that only certain animals are government-forming. Among these may be mentioned the ant, the bee, and man. The fox, the bear, and the lion represent the other class. If we should make two lists, including in one all the animals of the first class and in the other all those of the second class, we should make this discovery, that government-forming animals are those which by nature live together in companies, while the other class as a rule live apart. The generalization reached is, that only gregarious animals form governments. We would discover upon further investigation that the greater the interdependence of the individuals, the more complex the government.
Confining our attention now to man, whose government is the most complex, we may put our generalization into this form: Man establishes government because by nature he is a social being. This may be taken as the fundamental reason. Let us now proceed to trace the relation between cause and effect.
In order that people may go from place to place to meet others for pleasure or business, roads are needed. Some of these roads may cross streams too deep for fording, so bridges must be provided. These things are for the good of all; they are public needs, and should be provided by the public. But "what is every body's business is nobody's business." It follows that the public must appoint certain persons to look after such things. By the act of appointing these persons, society becomes to that extent organized. We see, then, that society organizes in order to provide certain public improvements, to carry on certain public works.
For his own preservation, man is endowed with another quality, namely, selfishness. Sometimes this is so strong in a person as to cause him to disregard the rights of others. By experience man has learned that every person is interested in seeing that conflicting claims are settled on a better basis than that of the relative strength of the contestants. In other words, all are interested in the prevalence of peace and the rightful settlement of disputes. That this work may surely be done, it is obvious that society must appoint certain persons to attend to it; that is, society organizes to establish justice.
Communities take their character from that of the individuals composing them; therefore communities, too, are selfish, and one community may be tempted to encroach upon another. A third reason appears, then, for the organization of society, namely, the common defense.
Government is the organization of society to carry on public works, to establish justice, and to provide for the common defense.
The term government is also applied to the body of persons into whose hands is committed the management of public affairs.
To show that government is a necessity to man, let us imagine a company of several hundred men, women, and children, who have left their former home on account of the tyranny of the government. So harshly have they been treated, that they have ascribed all their misery to the thing called government, and they resolve that they will have none in their new home. They discover an island in the ocean, which seems never to have been occupied, and which appears "a goodly land." Here they resolve to settle.
They help each other in building the houses; each takes from the forest the wood that he needs for fuel; they graze the cattle in a common meadow; they till a common field and all share in the harvest. For a time all goes well. But mutterings begin to be heard. It is found that some are unwilling to do their share of the work. It becomes manifest to the thoughtful that community of property must be given up and private ownership be introduced, or else that the common work must be regulated. In the latter case, government is established by the very act of regulation; they are establishing justice. If they resolve to adopt private ownership, industry will diversify, they will begin to spread out over the island, and public improvements will be needed, such as those specified above. The conflict of interests will soon necessitate tribunals for the settlement of disputes. And thus government would, in either case, inevitably be established. A visit from savages inhabiting another island would show the utility of the organization for common defense.
Thus government seems a necessary consequence of man's nature.
In this country we have the general government and state governments, the latter acting chiefly through local organizations. For obvious reasons, the common defense is vested in the general government. For reasons that will appear, most of the work of public improvement and establishing justice is entrusted to the state and local governments.
These we shall now proceed to study, beginning at home.
QUERIES.—Would government be necessary if man were morally perfect? Why is this organization of society called government?
THE TOWN: WHY AND HOW ORGANIZED; OFFICERS; TOWN BUSINESS.
Necessity.—Now instead of a company going to an island to found new homes, let us think of immigrants to a new part of a state.
Like the people on the island, they will need roads, bridges, and schools; and they will desire to preserve the local peace. Hence they, too, will need to organize as a political body.
Size.—Since these people are going to meet at stated periods to agree upon the amounts to be put into public improvements and to select officers to carry out their wishes, the territory covered by the organization should not be very large. It should be of such a size that every one entitled to do so can reach the place of meeting, take part in the work thereof, and return home the same day, even if he has no team.
Basis.—Will anything be found already done to facilitate matters? Yes. Those parts of the state open to settlement will be found surveyed into portions six miles square. These squares are called in the survey "townships," plainly indicating that they were meant by the general government to be convenient bases for the organization of "towns." And they have been so accepted.
Corporate Powers.—A town is in some respects like an individual. It can sue and be sued. It can borrow money. It can buy or rent property needed for public purposes. And it can sell property for which it has no further use. Because a town can do these things as an individual can, it is called a corporation, and such powers are called corporate powers.
When we say that "the town" can do these things, we mean of course that the people of the town as a political body can do them, through the proper officers.
Officers Needed.—The town needs one or more persons to act for it in its corporate capacity and to have general charge of its interests.
Should there be one, or more than one? Why? How many are there?
Every business transaction should be recorded, and the town should have a recording officer or secretary.
What is the recording officer in this town called? What is his name? Which officer would naturally be the custodian of public papers?
It takes money to build bridges and to carry on other public works, and the town needs some one to take charge of the public funds.
What is the officer called? Who occupies that position in this town? How is he prevented from misappropriating the money belonging to the people?
Our plan for raising public money for local purposes is, in general, that each person shall contribute according to the value of his property. Hence the town needs a competent and reliable man to value each person's property.
What is such an officer called? What is the name of the one in this town? Is any property exempt from taxation? Why? Just how is the value of the real estate in the town ascertained for the purpose of taxation? The value of the personal property? Get a list and find out what questions this officer asks. Read the statement at the bottom of the list carefully, and then form an opinion of a person who would answer the questions untruthfully for the purpose of lowering his taxes.
The immediate care of the roads will demand the attention of one or more officers.
How many in this town? What are such officers called? Name them.
Differences about property of small value sometimes arise, and to go far from home to have them settled would involve too much expense of time and money; hence the necessity of local officers of justice. These officers are needed also because petty acts of lawlessness are liable to occur.
How many justices of the peace are there in each town? Why that number? What is the extent of their jurisdiction?
The arrest of criminals, the serving of legal papers, and the carrying out of the decisions of justices of the peace, make it necessary to have one or more other officers.
What are such officers called? How many in each town? Why? Look up the history of this office; it is interesting.
The public schools of the town may be managed either by a town board of trustees, who locate all of the school-houses, engage all of the teachers, and provide necessary material for all of the schools in the town; or the town may be divided into districts, the school in each being managed by its own school board.
Does the township system or the district system prevail in this state? Name some state in which the other system prevails.
How Chosen.—In this country most of the public officers are chosen by the people interested. The great problem of election is how to ascertain the real will of those entitled to express an opinion or have a choice. And all the arrangements for conducting elections have in view one of two things: either to facilitate voting or to prevent fraud. The town serves as a convenient voting precinct.
Find out from the statutes or from the town manual or by inquiry, when the town meeting is held; how notice is given; how it is known who may vote; who are judges of election; how many clerks there are; how voting is done; how the votes are counted and the result made known; what reports of the election are made. Give the reason for each provision. Can a person vote by proxy? Why? What is to prevent a person from voting more than once? If the polls are open seven hours, and it takes one minute to vote, how many persons can vote at one polling place? What may be done in case there are more than that number of voters in the town? How are road overseers elected, and in what part of the day? Why then? What other business is transacted at town meeting? How do the people know how much money will be needed for the coming year's improvements? How do they learn the nature and expense of last year's improvements?
Give four general reasons for our having towns.
PRACTICAL WORK.

I. ORGANIZING A NEW TOWN.
Prepare in due form a petition to the proper authorities asking that a new town be organized. [Footnote: For forms see Appendix. If necessary, all the pupils in the room or school may act as "legal voters." (This "Practical Work" may be omitted until the review, if deemed best.)] Be sure that the order establishing the new town is duly made out, signed, attested and filed. Give reasons for each step.
II. HOLDING ANNUAL TOWN MEETING.
1. Preliminary.—What report does each road overseer make to the supervisors? When is the report due? What do the supervisors require this information for?
Who gives notice of the town meeting? When? How?
When does the town treasurer make his report to the persons appointed to examine his accounts? When does this examination take place? What is its purpose?
What report does the board of supervisors make to the people at the town meeting? When is it prepared? Why is it necessary?
2. At the Town Meeting.—See to it:
(a) That the proper officers are in charge. (b) That the order of business is announced and followed. (c) That the polls are duly declared open. (d) That the voting is done in exact accordance with law. (e) That general business is attended to at the proper time. (f) That reports of officers are duly read and acted upon. (g) That appropriations for the succeeding year are duly made. (h) That the minutes of the meeting are carefully kept. (i) That the polls are closed in due form. (j) That the votes are counted and the result made known according to law. (k) That all reports of the meeting are made on time and in due form.
3. After Town Meeting.—See that all officers elected "qualify" on time and in strict accordance with law. Especial care will be needed in making out the bonds.
Town clerk must certify to proper officer the tax levied at town meeting.
III. LAYING OUT AND MAINTAINING ROADS.
1. Laying out a Road.—Make out a petition for a town road, have it duly signed and posted. In due season present it to the supervisors who were elected at your town meeting.
The supervisors, after examining the petition carefully and being sure that it is in proper form and that it has been duly posted, will appoint a time and place of hearing and give due notice thereof.
When the day of hearing arrives they will examine the proofs of the posting and service of the notices of hearing before proceeding to act upon the petition.
Having heard arguments for and against the laying of the road, the supervisors will render their decision in due form.
In awarding damages, the supervisors will probably find four classes of persons: first, those to whom the road is of as much benefit as damage, and who admit the fact; second, those who should have damages, and are reasonable in their demands; third, those who claim more damages than they are in the judgment of the supervisors entitled to; and fourth, those who from some cause, (absence, perhaps,) do not present any claim. From the first class, the supervisors can readily get a release of damages. With the second, they can easily come to an agreement as to damages. To the third and fourth, they must make an award of damages. Let all of these cases arise and be taken care of.
The supervisors must be careful to issue their road order in proper form, and to see that the order, together with the petition, notices, affidavits and awards of damages, are filed correctly and on time. The town clerk must read the law carefully to ascertain his duty, and then perform it exactly. See that fences are ordered to be removed. Let one of the persons who feels himself aggrieved by the decision of the supervisors, "appeal" to a proper court. Let this be done in due form. As each step is taken, let the reasons for it be made clear.
2. Maintaining Roads.—Road overseers return the list of persons liable to road labor. How are these facts ascertained, and when must the "return" be made?
Supervisors meet and assess road labor, and sign road tax warrants. When and how is this done?
How is the road tax usually paid? How else may it be paid? How does the overseer indicate that a person's tax is paid? If a person liable to road tax does not "commute," and yet neglects or refuses to appear when duly notified by the road overseer, what can the latter do about it? How is delinquent road tax collected? How can a person who has paid his tax prove that he has paid it?
Under which of the three great purposes of government mentioned in the preliminary chapter does the making of roads come?
IV. MANAGING THE SCHOOLS.
Does the town system or the district system prevail in this state? If the latter, tell how a school district is organized. Give an account of the organization of this district.
How many and what officers have charge of the schools? State the duties of each. Name the officers in this district. When are the officers chosen, and how long do they serve? Are all chosen at once? Why? How do they "qualify?" Are women eligible to school offices? To any other?
Did you ever attend the annual meeting? When is it held? Why held then? Who take part? What business is transacted? What are "special" school meetings?
What expenses must be met in having a school? Where does the money come from? How does the treasurer get it into his possession? What is to prevent his misusing it?
By whom is the teacher chosen? Why not elect the teacher at the annual meeting? Get a teacher's contract and find out who the contracting parties are, and what each agrees to do. Why is the contract in writing? How many copies of it are made? Who keep them, and why?
If you had a bill against the district, how would you proceed to get your money? If the district refused or neglected to pay you, what could you do? If some one owed the district and refused to pay, what could it do?
Who owns the school buildings and grounds? How was ownership obtained? If it seemed best to erect a new schoolhouse in some other part of the district, what could be done with the present buildings and grounds? Could the district buy land for other than school purposes? Could it lend money if it had any to spare? If the district had not money enough to erect its buildings, what could it do? What are the corporate powers of a district?
Resolved, That it is unfair to tax a bachelor to support a school.
Resolved, That the town system is better than the district system.
PRIMITIVE MODES OF ADMINISTERING JUSTICE.
Trial by Ordeal.—Boys settle some matters about which they cannot agree by "tossing up a penny," or by "drawing cuts." In a game of ball they determine "first innings" by "tossing the bat." Differences in a game of marbles, they settle by guessing "odd or even," or by "trying it over to prove it." In all these modes of adjustment there is an appeal to chance. Probably behind these practices is the feeling that the boy who ought to win will somehow guess right. This appealing to chance to settle questions of fact is characteristic of society in its primitive state. Modes of establishing justice similar in principle to these boy practices prevail to this day among superstitious peoples. They have prevailed even in Europe, not only among people of low mental power, but also among the cultured Greeks. Among our own Saxon ancestors the following modes of trial are known to have been used: A person accused of crime was required to walk blindfolded and barefoot over a piece of ground on which hot ploughshares lay at unequal distances, or to plunge his arm into hot water. If in either case he escaped unhurt he was declared innocent. This was called Trial by Ordeal. The theory was that Providence would protect the innocent.
Trial by Battle.—Sometimes boys settle their disputes by fighting. This, too, was one of the modes of adjudication prevalent in early times among men. Trial by Battle was introduced into England by the Normans. "It was the last and most solemn resort to try titles to real estate." [Footnote: Dole's Talks about Law, p. 53.] The duel remained until recently, and indeed yet remains in some countries, as a reminder of that time. And disputes between countries are even now, almost without exception, settled by an appeal to arms. Perhaps the thought is that "he is thrice armed that hath his quarrel just." Sometimes when one of the boys is too small to fight for his rights, another boy will take his part and fight in his stead. Similarly, in the Trial by Battle, the parties could fight personally or by "champion." Interesting accounts of this mode of trial are given by Green and Blackstone, and in Scott's "Talisman."
Arbitration.—Two boys who have a difference may "leave it to" some other boy in whom they both have confidence. And men did and do settle disputes in a similar way. They call it settlement by Arbitration.
A boy would hardly refer a matter for decision to his little brother. Why?
Folk-Moot.—Still another common way for two boys to decide a question about which they differ is to "leave it to the boys," some of whom are knowing to the facts and others not. Each of the disputants tells his story, subject to more or less interruption, and calls upon other boys to corroborate his statements. The assembled company then decides the matter, "renders its verdict," and if necessary carries it into execution. In this procedure the boys are re-enacting the scenes of the Folk-moot or town meeting of our Saxon ancestors.
Boy-Courts.—Let us look at this boy-court again to discover its principal elements.
In the first place, we see that every boy in the crowd feels that he has a right to assist in arriving at the decision, that "the boys" collectively are to settle the matter. In other words, that the establishment of justice is a public trust. So our Saxon forefathers used to come together in the Folk-moot and as a body decide differences between man and man. The boys have no special persons to perform special duties; that is, no court officers. Neither, at first, did those old Saxons.
Secondly, in the boy-court the facts in the case are brought out by means of witnesses. So it was in the Folk-moot, and so it is in most civilized countries today. Among those old Saxons the custom grew up of allowing the facts in the case to be determined by twelve men of the neighborhood, who were most intimately acquainted with those facts. When they came over to England these Saxons brought this custom with them, and from it has been developed the Trial by Jury. The colonists of this country, most of whom came from England, brought with them this important element in the establishment of justice, and it is found today in nearly all the states.
Again, when in the boy-court the facts of the case have been established and it becomes necessary to apply the rules of the game to the particular case, the boys frequently, invariably in difficult cases, turn to some boy or boys known to be well versed in the principles of the game, and defer to his or their opinion. And, similarly, in the Folk-moot, much deference was paid in rendering judgment to the old men who for many years had helped to render justice, and who, in consequence, had much knowledge of the customs, unwritten laws, in accordance with which decisions were rendered. In this deference to one or more persons who are recognized as understanding the principles involved in the case, we see the germ of judgeship in our present courts.
And finally, a boy naturally reserves the right, mentally or avowedly, of appealing from the decision of the boys to the teacher or his father, in case he feels that he has been unjustly dealt with.
Thus we see that the principal elements of the courts of today, the establishment of justice as a public trust, the determination of the facts by means of witnesses and a jury, the application of the law by one or more judges, the right of appeal to a higher court, are not artificial, but in the nature of things. We inherited them from our primitive ancestors, and in that sense they may be said to have been imposed upon us. But their naturalness appears in the fact that boys when left to themselves introduce the same elements into their boy-courts.
CHANGES MADE IN COURSE OF TIME.
In the Officers.—As has been said, there were in the old Saxon courts no court officers. But quite early the necessity for such officers became manifest. And several of the offices then established have come down to us. Some of them, however, have been so modified in the progress of time as to be hardly recognizable.
PROCEEDINGS IN A JUSTICE COURT.
I. IN ORDINARY CIVIL ACTIONS.
Definitions.—A Civil Action is one having for its object the protection or enforcement of a private right or the securing of compensation for an infraction thereof. For instance, a suit brought to secure possession of a horse, or to secure damages for a trespass, is a civil action. The person bringing the action is called the plaintiff; the one against whom it is brought, the defendant. The plaintiff and the defendant are called the parties to the action.
Sometimes on bringing an action or during its progress a writ of attachment is obtained. To secure this writ, the creditor must make affidavit to the fact of the debt, and that the debtor is disposing or preparing to dispose of his property with intent to defraud him, or that the debtor himself cannot be reached, because he is in hiding or is a non-resident. In addition, the creditor must give a bond for the costs of the suit, and for any damages sustained by the defendant. The justice then issues the writ, which commands the sheriff or constable to take possession of and hold sufficient goods of the debtor and to summon him as defendant in the suit.
Another writ sometimes used is the writ of replevin. To secure this writ, the plaintiff must make affidavit that the defendant is in wrongful possession of certain (described) personal property belonging to the plaintiff. The plaintiff then gives a bond for the costs of the suit and for the return of the property in case he fails to secure judgment, and for the payment of damages if the return of the property cannot be enforced, and the justice issues the writ. This commands the sheriff or constable to take the property described and turn it over to the plaintiff, and to summon the defendant as before.
Pleadings.—The next step in the process, in any of the cases, is the filing of an Answer by the defendant, in which he states the grounds of his defense. The complaint of the plaintiff and the answer of the defendant constitute what are called the pleadings. [Footnote: For a more extensive discussion of pleadings, see chapter VII.; or Dole, pp. 30-42.] If the answer contains a counter-claim, the plaintiff is entitled to a further pleading called the Reply. The pleadings contain simply a statement of the facts upon which the parties rely in support of their case. No evidence, inference or argument is permitted in them.
Issue.—It is a principle of pleading that "everything not denied is presumed to be admitted." The fact or facts asserted by one party and denied by the other constitute the issue. If the defendant does not make answer on or before the day appointed in the summons and does not appear on that day, judgment may be rendered against him. If the plaintiff fail to appear, he loses the suit and has to pay the costs. For sufficient cause either party may have the suit adjourned or postponed for a short time.
Jury.—On demand of either party a jury must be impaneled. The jury usually consists of twelve persons, but by consent of the parties the number may be less. The jury is impaneled as follows: The justice directs the sheriff or constable to make a list of twenty-four inhabitants of the county qualified to serve as jurors in the district court, or of eighteen if the jury is to consist of six persons. Each party may then strike out six of the names. The justice then issues a venire [Footnote: For forms, see page 280.] to the sheriff or a constable, directing him to summon the persons whose names remain on the list to act as jurors.
Witnesses.—If any of the witnesses should be unwilling to come, the justice issues a subpoena [Footnote: For forms, see page 279.] commanding them to appear. The subpoena may contain any number of names and may be served by any one. It is "served" by reading it to the person named therein, or by delivering a copy of it to him. A witness, however, is not bound to come unless paid in advance his mileage and fees for one day's attendance.
Opening Statement.—The usual procedure is as follows: After the jury has been sworn, the plaintiff's attorney reads the complaint and makes an opening statement of the facts which he expects to prove. The purpose of the opening statement is to present the salient points of the case, so that the importance and bearing of the testimony may be readily seen by the jury.
Evidence.—The evidence [Footnote: The most important Rules of Evidence are given in chapter VII.] for the plaintiff is then introduced. Each witness, after being duly sworn, gives his testimony by answering the questions of counsel. After the direct examination by the plaintiff's attorney, the witness may be cross-examined by the attorney for the defendant. When the evidence for the plaintiff is all in, the defendant's attorney makes his opening statement, and then the witnesses for the defense are examined. The direct examination is now, of course, conducted by the counsel for the defendant, and the cross-examination by opposing counsel. When all the evidence for the defense has been introduced, the plaintiff may offer evidence in "rebuttal," that is, to contradict or disprove new matter adduced by the defense. And the defendant may then introduce evidence to refute matter first brought out by the rebuttal.
Argument.—The case is now ready for "argument." One attorney on each side addresses the jury. Each tries to show that the evidence adduced has proved the facts alleged in his pleadings, and each asks for a decision in favor of his client. Usually the side upon which rests the burden of proof has the closing argument.
Counsel must confine themselves to the law, the admitted facts and the evidence.
Verdict.—The jury then retire in care of an officer to a room set apart for their use. Here they deliberate in secret. If after a reasonable time they cannot agree, they are discharged, and the case stands as if no trial had taken place. But if they agree they return to the court room and render their verdict. This is given by the foreman, and is assented to by the rest.
Judgment.—After the verdict, the justice enters judgment in accordance therewith. Judgment may include certain sums of money allowed to the successful party in part compensation of his expenses. Such allowances and certain court expenses are called "the costs."
Appeal.—If the defeated party feels that he has not been justly dealt with, he may ask for a new trial. If this be refused he may appeal his case to a higher court. He must make affidavit that the appeal is not taken for the purpose of delay, and must give bonds to cover the judgment and the costs of appeal. The higher court affirms or reverses the judgment, in the latter case granting a new trial.
Sometimes the case is tried anew in the higher court, just as if there had been no trial in the justice court.
Execution.—If no appeal is taken the defeated party may "satisfy" the judgment, that is, pay to the justice the sum specified therein. If at the expiration of the time allowed for appeal the judgment remains unsatisfied, the justice may issue an execution [Footnote: For forms, see Appendix, pp. 282-3.] against the property of the debtor.
II. IN CRIMINAL ACTIONS.

Jurisdiction.—Besides trying minor offenses, a justice of the peace has power:
2. To examine persons charged with crimes greater than those specified above, and to dismiss them or hold them for trial in a court having jurisdiction, as the facts seem to warrant.
3. To prevent crimes, by requiring reckless persons to give security to keep the peace.
Complaint.—If a crime has been committed, the sufferer, or any one else, may appear before the justice of the peace and make complaint, under oath, specifying the nature of the crime, the time of its commission, and the name of the person believed to have perpetrated it, and requesting that he be apprehended for trial.
Warrant.—If upon careful examination of the complainant and any witnesses whom he may bring, it appears that the offense has probably been committed, the justice issues a warrant, reciting the substance of the complaint, and commanding an officer to arrest the accused and produce him for trial.
Bail.—The accused is entitled to a speedy trial. But if for good cause it seems best to postpone it, the accused may be released from custody upon giving sufficient bail for his appearance at the time fixed for trial. If he cannot furnish bail, he is committed to jail or left in charge of the officer.
Subpoena.—One good reason for postponing a trial is to enable the parties to secure witnesses. To this end, the justice issues subpoenas. But in this case the witnesses must come without the tender of the fee.
Arraignment.—The first step in the trial proper is to inform the defendant of the nature of the crime with which he is charged. The accusation, as stated in the warrant, is distinctly read to him by the justice, and he is required to plead thereto. If he pleads guilty, conviction and sentence may follow at once. If he pleads not guilty, the trial proceeds.
Trial.—After the joining of issue, and before the court proceeds to the examination of the merits of the case, a jury is impaneled as in a civil action. A jury may be waived by the defendant. Then follow the taking of the testimony, the arguments of counsel, the consideration and verdict by the jury. The defendant is then discharged if not guilty, or sentenced if found guilty. The penalty depends, of course, upon the nature of the offense.
Need of Examination.—Over crimes punishable by fine greater than $100 or imprisonment for more than three months, a justice of the peace usually has no jurisdiction of trial. The action must be tried in the district court, on the indictment of a grand jury. But in the meantime the perpetrator of a crime might escape. To prevent this, the accused may be arrested and examined by a justice of the peace, to ascertain whether or not there are sufficient grounds for holding him for trial.
Proceedings.—The preliminary proceedings are precisely like those in case of a trial. Upon complaint duly made a warrant is issued, and the accused is arrested and brought before the justice. In the presence of the accused, the magistrate examines the complainant and witnesses in support of the prosecution, upon oath, "in relation to any matter connected with such charge which may be deemed pertinent."
Rights of Accused.—The accused has a right to have witnesses in his behalf, and to have the aid of counsel, who may cross-examine the witnesses for the prosecution.
The Result.—If it appears upon examination that the accused is innocent of the crime, he is discharged. If his guilt seems probable, he is held to await the action of the grand jury. In the case of some offenses bail may be accepted. But if no suitable bail is offered, or if the offense is not bailable, the accused is committed to jail. Material witnesses for the prosecution may be required to give bonds for their appearance at the trial, or in default thereof may be committed to jail.
Reports.—The justice makes a report of the proceedings in the examination, and files it with the clerk of the court before which the accused is bound to appear for trial.
Prefatory.—But it is better to prevent crime than to punish it. Indeed, one reason for punishing wrongdoers is that the fear of punishment may deter people from committing crime.
Proceedings.—As a conservator of the public peace, then, a justice may require persons to give bonds for good behavior. The preliminary proceedings are similar to those in the case of a trial—the complaint, warrant and return. But the complainant simply alleges upon oath, that a crime against his person or property has been threatened. The examination is conducted as in case of a criminal offense.
Result.—If upon examination there appears reason to fear that the crime will be committed by the party complained of, he shall be required to enter into recognizance to keep the peace, failing in which he shall be committed to jail for the time to be covered by the surety, said time not to exceed six months.
Are the justices and constables town, county or state officers? How is it known at the county seat who the justices and constables in each town are? Define docket, summons, warrant, pleading, subpoena, crime, felony, misdemeanor, venire, costs, execution, recognizance. Why are there two justices in each town? What is meant by "change of venue"? How is an oath administered in court? What persons may not serve as witnesses? If a criminal should make confession of the crime to his lawyer, could the lawyer be subpoenaed as a witness on the trial? Name some things "exempt from execution" in this state. What is to hinder a bitter enemy of yours, if you have one, from having you committed to prison? Can a civil suit proceed in the absence of the defendant?
Assume that John Smith bought from Reuben White a cow, the price agreed upon being $30; that Smith refuses to pay, and White sues him. Write up all the papers in the case, make proper entries in the docket, assessing costs, etc.
Need of.—Owing to conditions, natural and artificial, favorable to business enterprises, people group together in certain places. Living in a limited area, the amount of land occupied by each family is small, and the territory is surveyed into lots and blocks. To make each homestead accessible, streets are laid out. The distances traveled being short, people go about principally on foot; hence the need of sidewalks. To reduce the danger of going about after dark, street-lamps are needed. The nearness of the houses to each other renders it necessary to take special precautions for the prevention of fires, and for their extinguishment in case they break out.
Again, the circumstances being different, the regulations must be different in this part of the town. For instance, in the country a man may drive as fast as he pleases, while here fast driving endangers life and must be prohibited. In the country sleigh-bells are not needed, while here they must be used to warn people of the approach of teams. In the country, if a man's house takes fire no other person's property is endangered; but here the danger is such that all the people are interested in each man's house, and the community may require that chimneys be properly constructed and ashes safely disposed of.
How Incorporated.—Villages are, with rare exceptions, incorporated under a general law specifying the number of inhabitants, the mode of voting on incorporation, etc.
The method in Minnesota, which may be taken as typical, is as follows: Upon petition of thirty or more voters resident upon the lands to be incorporated, which lands have been divided into lots and blocks, the county commissioners appoint a time, and give due notice thereof, when the voters "actually residing within the territory described," may vote upon the question. If a majority of those voting favor incorporation, the commissioners file with the register of deeds the original petition, a true copy of the notice of election, and the certificate showing the result of the vote. The village thus becomes incorporated, and has the usual corporate powers. It organizes by electing officers.
1. To establish and regulate a fire department; to purchase apparatus for extinguishing fires; to construct water-works; to designate limits within which wooden buildings shall not be erected; to regulate the manner of building and cleaning chimneys, and of disposing of ashes; and generally to enact such necessary measures for the prevention or extinguishment of fires as may be proper.
2. To lay out streets, alleys, parks, and other public grounds; to grade, improve, or discontinue them; to make, repair, improve, or discontinue sidewalks, and to prevent their being encumbered with merchandise, snow or other obstructions; to regulate driving on the streets; to appoint a street commissioner.
3. To erect lamp-posts and lamps, and provide for the care and lighting of the lamps.
4. To appoint a board of health, with due powers; to provide public hospitals; to regulate slaughter-houses; to define, prevent, and abate nuisances.
5. To establish and maintain a public library and reading-room.
6. To prohibit gambling; to prevent, or license and regulate the sale of liquor, the keeping of billiard-tables, and the exhibition of circuses and shows of all kinds; to appoint policemen, and provide a place of confinement for offenders against the ordinances.
7. In general, "to ordain and establish all such ordinances and by-laws for the government and good order of the village, the suppression of vice and immorality, the prevention of crime, the protection of public and private property, the benefit of trade and commerce, and the promotion of health, not inconsistent with the constitution and laws of the United States or of this state, as they shall deem expedient," and to provide penalties for the violation of the ordinances.
All fines and penalties imposed belong to the village.
Appointive Officers.—The council appoints, as provided by law, a village attorney, a poundmaster, one or more keepers of cemeteries, one or more fire-wardens, and regular and special policemen; and it prescribes the duties and fixes the compensation of these officers. The council also elects at its first meeting a village assessor, who shall hold his office one year.
Vacancies and Removals.—Vacancies in any of the village offices are filled by the council, and it has power to remove any officer elected or appointed by it whenever it seems that the public welfare will be promoted thereby.
Like Town Officers.—The assessor, treasurer, justices of the peace, and constable, have the same duties and responsibilities as the corresponding officers in the town. The village has a seal, of which the recorder is the custodian; and he is, as has been said, a member of the council. Otherwise the duties of the recorder are similar to those of the town clerk.
Elections.—A village usually constitutes one election district and one road district. Village elections are conducted as are those in a town.
Enlargements.—Lands adjoining the village may be annexed to it, at the wish and with the consent of the voters of the territory and of the village. The will of the voters aforesaid is expressed at an election called, after due notice, by the county commissioners.
Name the incorporated villages in your county. Any others that you know. Name some villages, so-called, which are not incorporated. Why are the petition and other papers of incorporation recorded?
Can a person living in a village build a sidewalk to suit his own fancy? Why? Suppose that owing to a defective sidewalk you should break your leg, what responsibility would lie on the village?
How would you get your pay if you had a bill against a village?
The village council has power "to establish and regulate markets." Why should the sale of meats be regulated any more than the sale of flour or of clothing? May the sale of bread be regulated?
What is the difference between a policeman and a constable?
Compare the village and the town, telling wherein they are alike and wherein they are different.
Resolved, That for a village of 1000 inhabitants or less it is wise not to become incorporated.
Need Of.—A village being one election district has only one polling place. The community may increase so in numbers as to make it necessary to have several voting places. For the accommodation of the people, these would naturally be located in different parts of the community; and to prevent fraud, voting precincts would have to be carefully defined. The council would naturally be made up of representatives from these divisions.
When, under this arrangement, the voters assemble in different parts of the community, they could not listen to financial reports and vote taxes, as they do in the town and the village. Hence it would be necessary to endow the council with increased powers, including the power to levy taxes without the direct authorization of the people.
The expenses for public improvements, for waterworks, sewers, street-lighting, etc., may take more money than it would be prudent to assess upon the community for immediate payment. In this case it would be desirable for the community to have the power to issue bonds.
Again, with increase in population there is an increase in the number of disputes over private rights, and temptations to crime become more numerous. Hence the need of one or more courts having jurisdiction greater than that possessed by justices of the peace. The conditions necessitate also an increase in the number and the efficiency of the police. And to render the police efficient it is necessary that they be under the direction of one man, the same one who is responsible for the carrying out of the ordinances of the council, namely, the mayor.
A community organized to comply with the foregoing requirements—divided into wards, having a council made up of aldermen from those wards, having a council authorized to levy taxes at its discretion, having a municipal court, having regularly employed police acting under the direction of the mayor—is a city, as the term is generally used in the United States.
Another reason for establishing a city government is frequently potent, although unmentioned. The pride of the community can be thereby indulged, and more citizens can have their ambition to hold public office gratified.
How Organized.—A city may be organized under general law or special charter from the legislature. Large cities, and small ones with great expectations, usually work under a charter. But the custom is growing of organizing cities at first under general law. Then if a city outgrows the general law, grows so that it needs powers and privileges not granted therein, it may properly ask the legislature for a special charter.
"Whenever the legal voters residing within the limits of a territory comprising not less than two thousand inhabitants, and not more than fifteen thousand, and which territory they wish to have incorporated as a city, shall sign and have presented to the judge of probate of the county in which such territory is situated, a petition setting forth the metes and bounds of said city, and of the several wards thereof, and praying that said city shall be incorporated under such name as may therein be designated, the judge of probate shall issue an order declaring such territory duly incorporated as a city, and shall designate the metes, bounds, wards, and name thereof, as in said petition described." And the judge of probate designates the time and places of holding the first election, giving due notice thereof. He also appoints three persons in each ward, of which there shall be not less than two nor more than five, to act as judges of election. The corporation is established upon the presentation of the petition, and the organization is completed by the election of officers.
The usual elective officers of a city are a mayor, a treasurer, a recorder, one justice of the peace for each ward, styled "city justice," all of whom shall be qualified voters of the city, and one or more aldermen for each ward, who shall be "qualified voters therein." All other city officers are appointed.
The term of mayor, city justices and aldermen is in most states two years; that of the other officers, one year.
Any officer of the city may be removed from office by vote of two-thirds of the whole number of aldermen. But an elective officer must be given "an opportunity to be heard in his own defense."
A vacancy in the office of mayor or alderman is filled by a new election. A vacancy in any other office is filled by appointment. The person elected or appointed serves for the unexpired term.
The Mayor is the chief executive officer and head of the police of the city. By and with the consent of the council, he appoints a chief of police and other police officers and watchmen. In case of disturbance he may appoint as many special constables as he may think necessary, and he may discharge them whenever he thinks their services no longer needed.
The City Council consists of the aldermen. [Footnote: In some states the city council consists of two bodies.] It is the judge of the election of its own members. A majority of the members elected constitutes a quorum for the transaction of business.
The council chooses its own president and vice-president. In case the mayor is absent from the city or for any reason is temporarily unable to act, the president of the council acts as mayor, with the title Acting Mayor.
Passing Ordinances.—The mode of passing an ordinance is unlike anything that we have considered up to this time, and deserves special attention on account of its resemblance to the mode of making laws in the state and general governments. It is as follows. If a proposed ordinance is voted for by a majority of the members of the council present at any meeting, it is presented to the mayor. If he approves it, he signs it, and it becomes an ordinance. But if he does not approve it, he returns it, through the recorder, to the council, together with his objections. [Footnote: This is called vetoing it, from a Latin word veto, meaning I forbid.] The council then reconsiders the proposed ordinance in the light of the mayor's objections. If, after such reconsideration, two-thirds of the members elected vote for it, it becomes an ordinance, just as if approved by the mayor. "If an ordinance or resolution shall not be returned by the mayor within five days, Sundays excepted, after it shall have been presented to him," it shall have the same effect as if approved by him.
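The procedure just described follows a fixed sequence of tests, and it can be traced as a small decision routine. The sketch below is illustrative only; the function name, parameters, and vote counts are all hypothetical, and the thresholds are those stated above (a majority of members present, then two-thirds of members elected to override a veto).

```python
def ordinance_fate(first_vote, present, elected, mayor_signs,
                   returned_in_five_days=True, override_vote=0):
    """Trace the fate of a proposed ordinance under the procedure
    described above. All names and figures are illustrative."""
    # Step 1: a majority of the members PRESENT must vote for it.
    if first_vote <= present // 2:
        return "fails in council"
    # Step 2: if the mayor signs it, it becomes an ordinance.
    if mayor_signs:
        return "ordinance (approved)"
    # If the mayor keeps it more than five days (Sundays excepted),
    # it takes effect as if he had approved it.
    if not returned_in_five_days:
        return "ordinance (not returned in five days)"
    # Step 3: vetoed; on reconsideration, two-thirds of the members
    # ELECTED (not merely those present) may pass it over the veto.
    if 3 * override_vote >= 2 * elected:
        return "ordinance (passed over veto)"
    return "fails (veto sustained)"
```

For example, with nine aldermen elected and seven present, five votes pass the ordinance to the mayor, and six votes on reconsideration (two-thirds of nine) would override his veto.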
Publication of Ordinances.—The ordinances and by-laws of the council are published in a newspaper of the city, selected by the council as the official means of publication, and are posted in three conspicuous places in each ward for two weeks, before they become operative.
Council Powers.—The city council has about the same powers as a village council in regard to streets, the prevention and extinguishment of fires, etc.—the same in kind but somewhat more extensive. But it can also levy taxes for public purposes, as has before been said. It usually elects the assessor, the city attorney, the street commissioner, and a city surveyor, and in some states other officers.
The recorder, treasurer, assessor, justices of the peace, and police constables, have duties similar to those of the corresponding officers in a village or a town.
If two persons should claim the same seat in the city council, who would decide the matter?
State three ways in which a proposed ordinance may become an ordinance. Two ways in which it may fail. How can persons living in a city find out what ordinances the council passes? How far are the ordinances of any city operative?
Compare the government of a village with that of a city.
Are school affairs managed by the city council? How is it in a village? In a town?
If a new school-house is needed in a city, and there is not money enough in the treasury to build it, what can be done?
If you live in a city having a special charter, borrow a copy of it from a lawyer or from the city recorder, and find out what powers and privileges are granted to the corporation not specified in the general law; what limitations are imposed; and, if a municipal court is provided for, what its jurisdiction is in civil actions and in criminal prosecutions.
Name the principal officers in your city. The aldermen from your ward.
What are some of the dangers of city government? Consult Macy's Our Government, pp. 51-53, and Nordhoff's Politics for Young Americans.
Resolved, That for a community of 5000 inhabitants or less a village organization is better than a city organization.
1. To establish the lower organizations. As we have seen, the organizations within the county are established by county officers. But, it may properly be asked, why not have them organized by the state directly? There are at least three good reasons: In the first place, it would be too burdensome to the state; that is, the state would act through the legislature, and to organize all the individual school districts, towns, villages, and cities, would take up too much of the time of the legislature. In the second place, the organizing could only be done at certain times, namely during the session of the legislature, and in the meantime communities would have to wait. In the third place, the records of incorporation would be inaccessible in case they were needed for reference.
2. To serve as a medium between the state and the lower organizations. The state uses the town, village, and city to value property for purposes of taxation and as election districts. But it gets its taxes and its election returns through the county. Here again may arise the question, why not send the state taxes directly to the capital and make election returns directly also? At least two good reasons appear: It would increase the work and therefore the number of officials at the capital, and if a mistake should be made it could not be so easily discovered and corrected.
3. To carry on public works beyond the power of the towns individually. A desired local improvement may be beyond the power of a town either because it is outside of the jurisdiction of the town or because of its expense. Thus, a road may be needed between two centers of population, villages or cities, which would run through several towns, while the jurisdiction of the towns individually extends only to their own borders. Or a bridge over a wide stream may be needed, which would be too expensive for the town in which it is located. The road and the bridge would better be provided by the county. [Footnote: Sometimes state aid is secured. Do you think it wise, as a rule, for the state to grant such aid?] And the poor can generally be better cared for by the county than by the individual towns, for the county can erect and maintain a poor-house.
4. To secure certain local officers not needed in every town; for instance, a register of deeds, the coroner, the judge of probate, the superintendent of schools (in most states), and the surveyor.
5. To serve as a territorial basis for the apportionment of members of the legislature. This is, perhaps, merely an incidental gain. But its convenience in defining legislative districts is obvious.
6. To make justice cheap and accessible. It is well in many ways, as we have seen, to have in every town, village, and city, courts of limited jurisdiction. But to establish justice in any generous or satisfying sense there should be within the reach of every citizen a court competent to try any difference between individuals regardless of the amount in controversy, and able to punish any crime against the laws of the state. To bring such a court within the reach of every one was the original reason for the establishment of the county, and remains today the greatest advantage derived from its existence.
Establishment.—Counties are established by the state legislature.
In thinly settled parts of a state the counties are much larger than in the populous parts. A county should be large enough to make its administration economical, and yet small enough to bring its seat of justice within easy reach of every one within its boundaries. In the ideal county a person living in any part thereof can go to the county seat by team, have several hours for business, and return home the same day.
County Board.—The administration of county affairs is in the hands of the county commissioners or supervisors. This board is usually constructed on one of two plans: Either it consists of three or five members, the county being divided into commissioner districts; or else it is made up of the chairman or another member of each of the several town boards. The former plan prevails in Minnesota, Iowa, and other states; the latter in Wisconsin, Michigan, most of Illinois, and in other states.
The commissioners have charge of county roads and bridges, county buildings and other county property, and the care of the county poor. Through the commissioners the county exercises the usual corporate powers.
Recording Officer.—The recording officer of the county is called in some states the county auditor, in others the recorder, and in others the county clerk. As we would expect, he is secretary of the board of commissioners and the custodian of county papers; and all orders upon the treasurer are issued by him. The auditor is also bookkeeper for the county, that is, he keeps an account of the money received and paid out by the county treasurer.
In Minnesota and some other states, he computes all the taxes for the county, [Footnote: In some states, among them Wisconsin, this computation is performed by the several town clerks, and the moneys are collected by the town treasurers.] and makes the tax-lists, showing in books provided for the purpose just how much the tax is on each piece of real estate and on personal property. These books he turns over to the county treasurer to be used in collecting the taxes.
1. The selection of an honest man for the office, so far as possible, is a prime consideration.
2. The treasurer must give a bond for such amount as the county commissioners direct.
3. He shall pay out money only upon the order of proper authority. [Footnote: Moneys belonging to school district, town, village, or city, are paid on the warrant of the county auditor; county money, on the order of the county commissioners, signed by the chairman and attested by the county auditor; state money, on the draft of the state auditor in favor of the state treasurer.] This order signed by the payee is the treasurer's receipt or voucher.
4. He shall keep his books so as to show the amount received and paid on account of separate and distinct funds or appropriations, which he shall exhibit in separate accounts.
5. The books must be balanced at the close of each day.
6. When any money is paid to the county treasurer, excepting that paid on taxes charged on duplicate, the treasurer shall give, to the person paying the same, duplicate receipts therefor, one of which such persons shall forthwith deposit with the county auditor, in order that the county treasurer may be charged with the amount thereof.
7. The county auditor, the chairman of the board of county commissioners, and the clerk of the district court, acting as an auditing board, carefully examine at least three times a year the accounts, books and vouchers of the county treasurer, and count the money in the treasury.
8. The state examiner makes a similar examination at least once a year. No notice is given in either case.
9. As security against robbers, the money in the possession of the county treasurer must be deposited on or before the first of every month in one or more banks. The banks are designated by the auditing board, and must give bonds for twice the amount to be deposited.
Register of Deeds.—Without hope of reward no one would work. To encourage frugality, people must be reasonably secure in the possession of their savings. One of the things for which a person strives is a home. Therefore, great care is taken to render a person who has bought a home, or other landed property, secure in its possession. Among the means employed are these: 1. The purchaser is given a written title to the land. This is called a deed. 2. In order that any person may find out who owns the land, thus preventing a person reputed to own it from selling it, or the owner from selling to several persons, a copy of the deed is made by a competent and responsible public officer in a book which is kept for that purpose and which is open to public inspection. This is called registering the deed, and the officer is called the register of deeds. [Footnote: Incidentally this officer records other instruments, such as official bonds, official oaths, etc.] The register may have assistants, if necessary, he being responsible for their work.
Judge of Probate.—But not only should a person enjoy the fruit of his labors while living, he should also be able to feel that at his death his property shall descend to his family or others whom he loves. Many persons before they die make a written statement, telling how they wish their property disposed of. This written statement is called a will or testament. Some who are possessed of property die without making a will. They are said to die intestate. To see that the provisions of wills, if any be made, are complied with, and, in case no will is made, to make sure that the property comes into possession of those best entitled to it, is the important and wellnigh sacred duty of an officer called the judge of probate. If no one is named in the will to look after the education and property of minor heirs, the judge of probate may appoint a guardian. The appointee must give bonds for the faithful discharge of his duty. [Footnote: see chapter VII.] Incidentally it is made the duty of the judge of probate to appoint guardians for any persons needing them, such as insane persons, spendthrifts, and the like. He seems to be the friend of the weak.
County Surveyor.—To survey all public improvements for the county, such as roads, lands for public buildings, &c., there is an officer called the county surveyor. He is required to preserve his "field notes" in county books furnished for the purpose. Individuals frequently call upon him to settle disputes about boundary lines between their estates.
Superintendent of Schools.—Not every one is competent to teach, and to protect the children as far as possible from having their time worse than wasted by incompetent would-be teachers, is the very responsible duty of the county superintendent of schools. From among those who present themselves as candidates he selects by a careful examination those whom he deems most competent, and gives to each a certificate of qualification. He visits the schools and counsels with the teachers regarding methods of instruction and management. It is his duty also to hold teachers' meetings. He reports annually to the state superintendent of public instruction such facts as the superintendent calls for.
County Attorney.—Like railroads and other corporations, the county keeps a regularly employed attorney to act for it in all suits at law. This officer is called the county attorney. He represents the state in all criminal prosecutions and is for this reason sometimes called the state's attorney.
Sheriff.—An ancient officer of the county is the sheriff. He has three principal lines of duty: 1. To preserve the peace within the county. 2. To attend court. 3. To serve processes. He pursues criminals and commits them to jail. He has charge of the county jail and is responsible for the custody of the prisoners confined in it. He opens and closes each session of the district court, and during the term has charge of the witnesses, the juries, and the prisoners. It is his duty to carry into execution the sentence of the court. He serves writs and processes not only for the district court, but also for justices of the peace and court commissioners.
Coroner.—Another officer of the county, ancient almost as the sheriff, is the coroner. If the dead body of a human being is found under circumstances which warrant the suspicion that the deceased came to his death by violence, it is the coroner's duty to investigate the matter and ascertain if possible the cause of the death. He is aided by a jury summoned by him for the purpose.
At a time in early English history when the only county officers were the sheriff and the coroner, the coroner acted as sheriff when the latter was for any reason incapacitated. And the practice still continues. Thus, if there is a vacancy in the office of sheriff, the coroner acts till a new sheriff is chosen. And in most states the coroner is the only officer who can serve process upon the sheriff or who can arrest him.
Clerk of the Court.—The district court [Footnote: See next chapter.] is a "court of record." That is, it has a seal and a special officer to record its proceedings. He is called the clerk of the court. He of course also files and preserves the papers in each case. He has also certain incidental duties.
Court Commissioner.—Court is not always in session, and there are certain powers possessed by a judge "in chambers," that is, which the judge may exercise out of court. For instance, he may grant a writ of attachment or of habeas corpus. Where a judicial district comprises several counties, as is usually the case, a provision is made in some states for an officer in each county authorized to perform such duties in the absence of the judge. In Minnesota and most other states he is called the court commissioner.
Election and Term.—The county officers are in most sections of the country elected by the people of the county. The term is usually two years.
Removals and Vacancies.—Provision is made for the removal of any county officer for non-feasance or malfeasance in office. The power to remove is generally vested in the governor. The accused must be given an opportunity to be "heard in his own defense." Vacancies are generally filled by the county commissioners. They appoint some one, not one of themselves, to serve until the next election.
Qualifying.—Each officer before assuming the duties of his office takes the official oath. All of the officers except the commissioners and the superintendent of schools are required to give bonds. Copies of these bonds are preserved by the register of deeds, and the originals are forwarded to the secretary of state.
Compensation.—Compensation is by salary or by fees. The matter is usually in the hands of the county commissioners, except so far as concerns their own compensation, which is fixed by law and is commonly a per diem.
Eligibility.—Any voter who has resided in the county a certain time (usually about thirty days) is eligible to any county office, except that of attorney or court commissioner. The former must be a person admitted to practice in all the courts of the state. The latter must be a man "learned in the law."
In some cases a person may hold two offices at the same time; thus, a person may be court commissioner and judge of probate. But no person can hold two offices one of which is meant to be a check upon the other. For instance, no one could be auditor and treasurer at the same time. In some states there is a bar against holding certain offices for two terms in succession.
What is the difference between a town road and a county road? Point out one of each kind. If you wanted a change in a county road, to whom would you apply?
Get a warranty deed and fill it out for a supposed sale. Compare with it a mortgage deed. A quitclaim deed. Compare a mortgage deed with a chattel mortgage. Account for the differences. If A buys a farm from B and does not file his deed, who owns the farm?
If a man possessing some property should get into habits of gambling and debauchery, squandering his money and not providing for his family, what could be done? On what grounds could this interference by a public officer be justified?
Who would be keeper of the jail if the sheriff should be a prisoner? Why not one of the deputy sheriffs?
Study out carefully the derivation of the words auditor, sheriff, coroner, probate, commissioner, supervisor, superintendent.
The county attorney is usually paid a salary while the register of deeds usually gets the fees of his office. What seems to govern in the matter? Name the salaried officers in this county. The officers who are paid fees.
To whom are school taxes paid? Town taxes? County taxes? State taxes? How much of the money paid at this time goes to the United States?
How does the tax collector know how much to take from each person? From whom does he get this book?
The amount of a person's tax depends upon the value of his property and the rate of tax. How is the former fact ascertained? To whom, then, does the assessor report when he has concluded his labors?
The rate of tax depends upon the amount to be raised and the value of the property on which it is to be assessed. Who determines how much money shall be raised in a district for school purposes during any year? When is this determined? Who records the proceedings of the meeting? To whom must he report the amount of tax voted? Who determines how much money is to be raised in the town for bridges, etc.? When? Who records the proceedings of the meeting? To whom must he report the amount of tax voted? Who vote the taxes in a village? When? Who reports to the computing officer? Who vote the taxes in a city? Why not the people? When? How reported to the computing officer? Who determines how much money is to be raised for county purposes? When? Who is secretary of the meeting? To whom does he report? Who determines how much money shall be raised for state purposes? How does the proper officer become acquainted with the facts necessary to the raising of the money?
State the gist of the matter brought out by the questions in the last four paragraphs.
How does the school district treasurer get the school district money?
Trace a dollar from the time it leaves a farmer's hand as taxes till it reaches the teacher as salary.
If you had a bill against the county how would you get your pay? What could you do if pay were refused? Make out in due form a bill against your county.
ESTABLISHING JUSTICE IN THE COUNTY.
Classes of Cases.—There are three general classes of judicial business carried on in the county: probate business, civil actions, and criminal prosecutions.
Jurisdiction.—The principal business and characteristic work of probate courts is the settlement of the estates of deceased persons. Jurisdiction extends in most states over both personal property and real estate. Incidentally probate courts appoint guardians for minors and others subject to guardianship, and control the conduct and settle the accounts of such appointees.
In many states jurisdiction wholly extraneous to the characteristic work of these courts is imposed upon them, or the probate business is associated with other jurisdiction in the same court. Thus, in Minnesota the judge of probate is petitioned in the organization of cities, as we have seen. In Wisconsin, the county court, which has charge of the probate business, has civil jurisdiction also. In Illinois, the county court in addition to the probate business has jurisdiction "in proceedings for the collection of taxes and assessments." And in Kansas, the probate court has jurisdiction in cases of habeas corpus.
2. Citation to persons interested. Acting on the petition, the probate judge publishes in a newspaper a notice to all persons interested in the estate that at a specified time, action will be taken on the petition. To afford all who are interested an opportunity to be present at the "hearing," the notice must be published for a prescribed time, and in some states each of the heirs must, if possible, be personally notified.
3. Hearing the proofs. At the time specified in the notice, unless postponement be granted for cause, the proofs of the validity of the will are presented. It must be shown that the testator is dead, that the instrument was executed by him voluntarily, in the manner prescribed by statute, and while he was of "sound mind and disposing memory." Usually it will be sufficient for the two witnesses to the instrument to appear and testify to the material facts. If any one interested in the distribution of the property thinks that this will should not be accepted as the "last will and testament" of the deceased, he should now enter objections. In case of a contest, the proceedings are about the same as those in a justice or circuit court; but there is no jury in the probate court, nor is there any plea except the petition.
6. Notice to creditors. It is a principle of law that all just debts shall be paid out of one's property before any further disposition thereof can take effect. In order that all persons having claims against the estate of the deceased may have an opportunity to present their accounts, a time for such presentation is designated by the court, and due notice thereof is given, usually by publication in a newspaper.
7. Inventory of the estate. In the meantime, the executor makes an inventory of the property, and appraisers appointed for the purpose "put a value" thereon, the several items of the inventory being valued separately.
8. Auditing claims. At the time appointed in the notice, the court passes upon the claims of creditors. Since unscrupulous persons are at such times tempted to present fraudulent claims, the judge exercises great care in examining the accounts. To facilitate matters it is required that accounts be itemized, and that they be verified by oath.
Debts are paid out of the personal property, if there be enough. If not, the court authorizes the executor to sell real estate to pay the balance.
9. Settlement of estate and division of property. The executor having collected debts due the estate and settled all claims against it, makes his final statement to the court, and the remaining property is distributed among the heirs and legatees. To continue and perfect the chain of title, the division of the real estate is recorded in the office of the register of deeds.
If there are minor heirs, the court appoints guardians for them.
In case the deceased has left no will, an administrator is appointed, the proceedings being substantially as follows:
1. Someone interested in the estate petitions for the appointment of a certain person as administrator.
2. Notice of hearing is given by publication, citing those interested in the estate to appear at a certain day if they desire to enter any objection to the appointment.
3. If at the time specified for the hearing no objection is made, the person petitioned for is appointed administrator, and "letters of administration" are issued to him.
Then beginning with the sixth step the proceedings are substantially the same as in case of a will, except that the basis of distribution in the ninth is the law instead of the will.
What is a will? [Footnote: See Dole's Talks about Law.] Why must it be in writing? Must it be in the handwriting of the testator? Why are the witnesses essential? Is the form of a will essential? Is it necessary that the witnesses know the contents of the will?
What is the difference between an heir and a legatee? May either be witness to the will? Why? If the witnesses die before the testator, how can the will be proved?
What is a codicil? If there be two wills of different dates, which will stand? What difference does it make whether a person having property makes a will or not?
Group the proceedings in case of a will into three groups.
A minor may have two guardians, one of its person and the other of its property. Why? What is to hinder a guardian from abusing his trust?
DISTRICT, CIRCUIT OR SUPERIOR COURTS.
Jurisdiction.—This court has original jurisdiction in all civil and criminal cases within the district which do not come within the jurisdiction of the justice courts. It has appellate jurisdiction from probate and justice courts as provided by law.
Procedure.—The proceedings are substantially the same as in a justice court except that in criminal cases they are based upon an indictment by the grand jury, and after the arguments the judge "charges" the jury, that is, instructs it regarding its duty.
Pleadings.—The pleadings in the district court are somewhat more elaborate than in a justice court, and a few words in regard to them further than what has already been given may not be out of place here.
The defendant in making his plea may raise a question as to the jurisdiction of the court, or he may ask that the case be thrown out of court on account of some irregularity of the writ upon which it is based. Since these pleas, if successful, simply delay the trial, because a new suit may afterwards be brought, they are called dilatory pleas.
But he may deny the plaintiff's ground of action by denying the allegations of the plaintiff and challenging him to trial. This plea is called the general issue. He may admit the plaintiff's allegations but plead other facts "to avoid their effect." This is called the plea of confession and avoidance. These pleas are on the merits of the case, and are called pleas in bar. There are other pleas of this kind.
"Pleas in bar, except the general issue, may give rise to counter pleas" introduced by the parties alternately.
But the issue may be one of law instead of fact, and the defendant may enter a demurrer, claiming that the matters alleged are not sufficient in law to sustain the action.
Evidence.—Some of the fundamental principles or rules which govern the taking of evidence and the weighing of testimony may properly appear here. These rules are designed to exclude all irrelevant matter and to secure the best proof that can be had.
1. Witnesses must be competent. That is, in general, they must be able to understand the nature and solemnity of an oath. This will usually exclude children below a certain age, insane persons and persons drunk at the time of offering testimony.
2. Witnesses must testify of their own knowledge. Usually they are barred from telling what they simply believe to be the fact or what they have learned from hearsay.
3. Evidence must go to prove the material allegations of the pleadings. It must be confined to the question at issue. It is to be observed that the evidence must not only go to prove the matter alleged, but it must be the material not the superfluous matter. What is material and what superfluous will depend upon the case. Thus if it is alleged that a suit of clothes was obtained by the defendant at a certain time, his obtaining the clothes is the material fact and the time may be superfluous or immaterial. But if a note is in controversy its date is material as establishing its identity.
4. "The evidence must be the best of which the case is susceptible." Thus, in case of a written instrument the best evidence is the instrument itself; the next best, a copy of it; the next, oral statement of its contents. And a copy will not be accepted if the original can be produced.
5. The burden of proof lies on the affirmative. In civil cases the party affirming is usually the plaintiff. In criminal cases it is the state. Harmonizing with this principle is the constitutional provision that in criminal cases the accused shall not be required to give evidence against himself.
These are the principal rules of evidence, but they have many applications. Learned volumes have been written elaborating them.
Grand Jury.—A grand jury may be defined as a body of men returned at stated periods from the citizens of the county, before a court of competent jurisdiction, chosen by lot, and sworn to inquire of public offenses committed or triable in the county.
The number of grand jurors was formerly twenty-three. By statute many of the states have fixed upon a smaller number, Oregon having only seven. A common number is fifteen. Some states have no grand jury. In some others the grand jury is summoned only when requested by the court.
The United States constitution and most of the State constitutions declare that no person shall be held to answer for a criminal offense, except a minor one, "unless on the presentment or indictment of a grand jury." This is to save people from the vexation and expense of arrest and trial unless there is reasonable presumption of their guilt. On the other hand, a grand jury should aid in bringing to justice persons who indulge in practices subversive of public peace, but which individuals are disinclined to prosecute, such as gambling. Incidentally the grand jury examines into the condition of the county jail and poor-house.
The mode of selecting grand jurors is in general the same in all the states. The steps are three: first, the careful preparation of a list of persons in the county qualified to serve; second, the selection, by lot, from this list of the number of persons needed; third, the summoning of the persons so chosen. The number of persons in the first list is from two to three times the number of jurors. The preparation of the list is in some states entrusted to the county board; in others, to jury commissioners; in others, to the local boards. The names are reported to the clerk of the court, who in the presence of witnesses, makes the selection by lot. The summoning is done by the sheriff.
The present invention relates to a multi-stage method for producing one or multiple molded bodies, the method comprising the following steps: a. constructing one or multiple molded bodies in layers by repeatedly applying particulate material by the 3D printing method; b. a presolidification step for achieving a presolidification of the molded body; c. an unpacking step, wherein the unsolidified particulate material is separated from the presolidified molded body; d. a final solidification step, in which the molded body receives its final strength due to the action of thermal energy. The invention also relates to a device which may be used for this method.
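Purely as an illustration, the four stages above can be sketched as a processing pipeline. Every function name and strength figure below is a hypothetical placeholder chosen for the sketch, not something specified by the patent:

```python
# Illustrative sketch of the claimed four-stage process. All names and the
# strength numbers are assumptions for the example, not taken from the patent.

def print_layers(n_layers):
    """Stage a: build the body in layers from particulate material."""
    return {"layers": n_layers, "strength_N_per_cm2": 0, "unpacked": False}

def presolidify(body):
    """Stage b: presolidify so the green body survives unpacking."""
    body["strength_N_per_cm2"] = 150  # assumed green strength, above 120 N/cm2
    return body

def unpack(body):
    """Stage c: separate unsolidified particulate material from the body."""
    body["unpacked"] = True
    return body

def final_solidify(body):
    """Stage d: heat treatment brings the body to its final strength."""
    body["strength_N_per_cm2"] = 1000  # assumed final strength
    return body

body = final_solidify(unpack(presolidify(print_layers(200))))
```

The point of the pipeline shape is that each stage consumes the output of the previous one, mirroring the order a.–d. in the claim.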
This application is a national phase filing under 35 USC § 371 from PCT Application serial number PCT/DE2013/000589 filed on Oct. 10, 2013, and claims priority therefrom. This application further claims priority from German Patent Application number DE 10 2012 020 000.5 filed on Oct. 12, 2012. Both PCT/DE2013/000589 and DE 10 2012 020 000.5 are incorporated herein in their entireties by reference.
The present invention relates to a multi-stage 3D printing method as well as a device which may be used for this method.
A wide range of methods are known for producing molds and foundry cores. Automated machine molding methods are an economical approach in the area of large batches. Tool-less mold production using so-called rapid prototyping methods or 3D printing methods are an alternative to machine molding methods for small to medium-sized series.
Laser sintering methods that permit tool-less manufacturing were developed based on the Croning Method (DE832937), which is known by the name of its inventor, Johannes Croning. According to this method, a molded part is built in layers from particulate material that is coated with a binder. The binding of the individual loose particles is achieved, for example, by applying energy with the aid of a laser beam (EP 0 711 213).
In practice, the solidification described in the prior art is scarcely reached by means of the polycondensation reaction, since process difficulties occur. An exposure to light that is sufficient for developing the final strength would thus result in a severe shrinkage of the binder casing and this, in turn, would cause a process-incompatible distortion of the present layer. The strengths (green strength) of the molded parts produced in this manner are therefore extremely low during removal of the molded parts from the loose sand, also referred to as unpacking. This causes problems when unpacking and not infrequently results in damage to the molded parts, rendering them unusable. A method has been described for solving this problem by additionally solidifying the surface with the aid of a soldering lamp during unpacking. However, this procedure not only requires a great deal of experience, it is also extremely labor-intensive and time-consuming.
The lack of green strength is due to binder bridges that are too small or too weak. If distortion-free production is desired, the binder remains too viscous and does not form adequate bridges.
However, a layering method is described in DE 197 23 892 C1, in which Croning sand is printed with a moderating agent, which causes the activation energy of the printed binder-encased Croning sand to be increased or decreased with respect to the unprinted material, and the sand is then exposed to light with the aid of a thermal radiation source. This is intended to cause only the printed or the unprinted areas to be hardened or bound. The finished molded parts are then removed from the unbound sand. However, it has been determined that suitable moderating agents, such as sulfuric acids, are only poorly suited or not suited at all for being printed with the aid of commercial single drop generators. It has also been determined to be disadvantageous that the unsolidified sand is pre-damaged by the exposure to light to such an extent that it may no longer be fully reused in the method. This not only increases the amount of material used but also the costs and is therefore disadvantageous.
A layering method for producing models is described in US 2005/0003189 A1, in which a thermoplastic particulate material is mixed with a powdered binder and printed in layers with an aqueous solvent. The binder should be easily soluble in the aqueous print medium. The models are subsequently removed from the surrounding powder and possibly dried in an oven during a follow-up process for the purpose of increasing the strength.
A layering method for producing investment-cast original models is described in DE 102 27 224 B4, in which a PMMA particulate material, which is coated with a PVP binder, is printed in layers with a mixture of a solvent and an activator for the purpose of dissolving the binder and activating the binder action.
Either the known methods are tool-dependent processes or the known 3D printing processes achieve green strengths that are too low for the efficient and economically advantageous manufacture of molded parts.
Therefore, there was the need to provide a method for the tool-less construction of molded parts in layers, preferably for foundry applications, with the aid of binder-encased particulate material, in which removal strengths or unpacking strengths are achieved which make it possible to reduce or entirely avoid time-consuming and cost-intensive manual work and preferably facilitate machine- or robot-assisted unpacking, or in any case to reduce or entirely avoid the disadvantages of the prior art.
Preferred embodiments are set forth in the subclaims.
This object is achieved by a multi-stage method comprising the following steps: a. constructing one or multiple molded bodies in layers by repeatedly applying particulate material by the 3D printing method; b. a presolidification step for achieving a presolidification of the molded body; c. an unpacking step, wherein the unsolidified particulate material is separated from the presolidified molded body; d. a final solidification step, in which the molded body receives its final strength due to the action of thermal energy.
The molded body is preferably subjected to one or multiple additional processing steps. All other methods or work steps known to those skilled in the art may be used. The one or multiple additional processing steps are selected, for example, from the group comprising polishing or dyeing.
In the method according to the invention, the molded body (also referred to as the component) is solidified in the presolidification step to the extent that an unpacking from the unsolidified particulate material is possible, and the molded body essentially retains its shape defined in the 3D printing method. In particular, shrinkage or the like is essentially avoided. The unpacking operation may take place manually or mechanically or in a robot-assisted manner.
Flexural strengths of more than 120 N/cm2, preferably more than 200 N/cm2, particularly preferably 120 to 400 N/cm2 may be achieved in the presolidified molded body (green body) after the presolidification step.
After unpacking, the molded body may again be surrounded by particulate material, which is preferably inert, to thereby be able to support the molded body in the subsequent heat treatment step and better conduct the heat as well as to achieve a uniform heat conduction. Shaking devices may be used to evenly distribute the particulate material.
The object of the application is also achieved by a device or device arrangement for this method.
Flexural strengths of more than 250 N/cm2, preferably from 250 to 750 N/cm2, preferably more than 750 N/cm2, particularly preferably more than 1,000 N/cm2, even more preferably more than 1,200 N/cm2 may be achieved in the molded body after the final solidification step.
In one preferred embodiment, the method is carried out in such a way that the presolidification step takes place without the application of additional thermal energy.
The presolidification step will preferably take place using a solvent and/or a polymerization reaction.
The final solidification step may preferably take place with the aid of heat treatment. However, other final solidification methods and treatments known to those skilled in the art are also possible.
The component may be supported by inert material during the heat treatment.
Temperatures of preferably 110° C. to 130° C., more preferably 130° C. to 150° C., particularly preferably 150° C. to 200° C. are used in the final solidification step.
The temperature at the component is preferably maintained for 2 to 24 hours; particularly preferably, the temperature is maintained for 2 to 5 hours.
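As a rough illustration, the preferred final-solidification window stated above can be expressed as a simple range check. The helper function itself is hypothetical; only the numeric ranges come from the preceding paragraphs:

```python
def in_preferred_window(temp_c, hold_hours):
    """True if a heat-treatment setting lies within the broad preferred
    ranges given above: 110-200 degrees C, held for 2 to 24 hours.
    (Hypothetical helper; the ranges are quoted from the text.)"""
    return 110 <= temp_c <= 200 and 2 <= hold_hours <= 24

# e.g. 180 C held for 4 h falls within the particularly preferred part
# of the window; 90 C or a 30 h hold would fall outside it.
```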
Natural silica sand, kerphalite, cera beads, zircon sand, chromite sand, olivine sand, chamotte, corundum or glass spheres are used as the particulate material.
The particulate material is characterized by a single-phase coating or casing having one or multiple materials. The coating or the casing may preferably be a binder.
In the method according to the invention, the casing or coating preferably comprises or includes thermoplastic polymers, soluble polymers, waxes, synthetic and natural resins, sugars, salts, inorganic network formers or water glasses.
The solvent preferably comprises or includes water, hydrocarbons, alcohols, esters, ethers, ketones, aldehydes, acetates, succinates, monomers, formaldehyde, phenol and mixtures thereof.
In the method, the binder may contain polymerizable monomers. In one preferred embodiment of the method, the coating or casing contains materials for starting a polymerization with the binder.
The material contained in the casing or coating preferably contributes to the final strength or to the preliminary strength in the presolidification step and to the final strength in the final solidification step.
In the method according to the invention, according to one preferred embodiment, two different materials are contained in the casing or coating, the one material being essentially destined for the presolidification step and the other material essentially being destined for the final solidification step.
The method is thus simplified, may be carried out faster and is thus more economical.
The coating or casing may preferably contain a color indicator which is activated by the binder.
In another aspect, the invention relates to a device or a device arrangement suitable for carrying out the method according to the invention.
The first step of the method according to the invention may, in principle, be carried out as described in the prior art for 3D printing methods. In this regard, EP 0 431 924 B1 and DE102006038858 A1 are cited by way of example. The subsequent unpacking step may be carried out manually but preferably in a mechanically assisted manner. Robot-assisted unpacking is another preferred variant of a mechanical method step according to the invention. In this case, both the unpacking, i.e., the removal of the unsolidified particulate material, and the transfer of the molded part may take place with the aid of computer-controlled gripper arms and extraction units.
The invention is preferably carried out with the aid of a particulate material bed-based 3D printing method. The desired molded body is created during 3D printing by repeated layering. For this purpose, particulate material is applied (leveled) in a thin layer onto a surface. An image according to the section of the desired 3D object is printed using an ink-jet print head. The printed areas solidify and bond to underlying, already printed surfaces. The resulting layer is shifted by the thickness of one layer according to the design of the equipment.
3D printers may be used which lower the layer in the direction of gravity. Machines are preferably used which are designed according to the cycling principle, and the layers in this case are moved in the conveyance direction. Particulate material is now again applied to the building surface. The build process, which involves the steps of coating, printing and lowering, is repeated until the one or more molded bodies are finished.
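The repeated coat/print/lower cycle described above can be sketched as a simple loop. The names and the layer thickness are illustrative assumptions, not the machine's actual control code:

```python
# Minimal sketch of the recoating cycle: coat, print cross-section, lower.
# The layer thickness is an assumed example value, not from the patent.

LAYER_THICKNESS_MM = 0.28

def build(cross_sections):
    """Run one coat/print/lower cycle per cross-section; return build height."""
    height_mm = 0.0
    for section in cross_sections:
        # 1. level a thin layer of binder-encased particulate material
        # 2. ink-jet print the solvent image of this cross-section
        # 3. lower (or, on cycling machines, advance) by one layer thickness
        height_mm += LAYER_THICKNESS_MM
    return height_mm
```

For example, 100 cross-sections at the assumed 0.28 mm thickness give a 28 mm build height.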
The method step of 3D printing and the presolidification step are preferably implemented by selectively printing a solvent onto the binder-encased particulate material. The solvent liquefies the casing. The viscosity is significantly lower than in thermal melting. While the viscosities of polymer melts may be in the range of approximately 10 to 1,000 Pa·s, a polymer solution may reach a viscosity of a few mPa·s, depending on the quantity added and the solvent. A viscosity of 2 to 100 mPa·s is preferred, 2 to 10 mPa·s is more preferred, 2 to 5 mPa·s is even more preferred.
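To put these figures in proportion, a back-of-envelope comparison; the two particular values are illustrative picks from the ranges quoted above:

```python
# Viscosity comparison using example values from the stated ranges:
# a polymer melt at the low end (10 Pa*s) versus a preferred polymer
# solution at 5 mPa*s. The specific picks are illustrative assumptions.

melt_pa_s = 10.0        # lower end of the polymer-melt range, in Pa*s
solution_pa_s = 0.005   # 5 mPa*s, within the 2-10 mPa*s preferred range

ratio = melt_pa_s / solution_pa_s  # the solution is ~2000x less viscous
```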
When drying the solvent, the fluid mixture withdraws into the contact point between two particles and then leaves behind a strong bridge. The effect may be strengthened by adding polymers to the printing fluid. In this case, suitable method conditions are selected or corresponding components that are necessary for a polymerization reaction are worked into either the solvent or into the coating of the particulate material. All resins or synthetic resins known to those skilled in the art and which are suitable for polymerization, polyaddition or polycondensation reactions may be used. Materials of this type are preferably defined by DIN 55958 and are added to the disclosure of this description with reference thereto.
According to the invention a binder-encased foundry molding material may be used as the particulate material. The casing is solid at room temperature. The particulate material is thus pourable and free-flowing. The material encasing the particles is preferably soluble in the printing fluid that is applied by the ink-jet print head. In a similarly preferred design, the printing fluid contains the casing material or its precursors in the form of a dispersion or solution.
The material present in the printing fluid may likewise preferably belong to a different material group. In one embodiment of the invention, the solvent dissipates into surrounding particulate material or into the atmosphere by means of evaporation. Likewise, the solvent may also react and solidify with the casing material.
The material groups for the particulate material and the casing are varied. The base materials may be, for example, natural silica sand, kerphalite, cera beads, zircon sand, chromite sand, olivine sand, chamotte or corundum. However, other particulate base materials are also generally suitable. The casing may be organic or inorganic. It is applied either thermally, in solution or by mechanical striking or rolling.
In addition to phenol resin, examples of suitable binders are furan, urea or amino resins, novolaks or resols, urea formaldehyde resins, furfuryl alcohol urea formaldehyde resins, phenol-modified furan resins, phenol formaldehyde resins or furfuryl alcohol phenol formaldehyde resin, which may each be present in liquid, solid, granulated or powdered form. The use of epoxy resins is also possible.
For example, encased silica sand having an average grain size of approximately 140 μm, such as the RFS-5000 product from Hüttenes-Albertus Chemische Werke, is particularly preferred. It is supplied with a resol resin casing. In one simple design, an ethanol/isopropyl alcohol mixture may be used as the printing fluid. Predissolved resin may also be added to the printing fluid. One example of this is the Corrodur product from Hüttenes-Albertus. According to the invention, a strength of more than 120 N/cm2 results after a time period of 24 hours following the printing process and the addition of 10 wt % liquid binder. Even delicate structures may be quickly unpacked thereby.
A highly concentrated material in the form of predissolved resin of the Corrodur type may furthermore preferably be used as liquid binder for the system. Dioxolane may be used as the solvent additive. Due to the high proportion of resin, molding base materials having a low casing content may be selected. Likewise, untreated sand may be used—with a loss in strength. The design according to the invention in this case may be seen in the complete dissolution of the coating material.
In one particularly preferred embodiment, the materials used in the first method step of 3D printing already include all components required for the final solidification step, preferably binders in the particulate material, which are first bound in the presolidification step using another binding mechanism (physical instead of chemical or vice versa) or other materials (binder in the printing solution) and react/solidify in the subsequent final solidification step in such a way that the advantageous final strength is achieved. It is thus advantageously possible to simplify the different solidification steps in that the particulate material already contains, in the first method step, all materials required for final solidification, and it is possible to achieve the advantageous final strength without introducing additional material in the heat treatment step.
Using the method according to the invention and the device according to the invention, by combining materials and method conditions, the inventors were able to advantageously achieve the fact that an efficient method was provided, which makes it possible to combine work steps, reduce the use of manual steps and thus positively improve the process speed. Using the method according to the invention, it is also possible to achieve flexural strengths in the green body which are sufficient to supply it to a thermal solidification step without damage or other impairments and without the use of tools in the 3D printing method.
Using the method according to the invention and the devices suitable therefor, it is surprisingly possible to include all the materials required for the presolidification step as well as the final heat solidification step in the particulate material. It was astonishing that the combined materials, i.e., the active materials for the presolidification step as well as the final solidification step, did not interact with one another in a way that was detrimental to the method.
By purposefully selecting the materials, the inventors were indeed able to achieve an advantageous effect in preferred embodiments for both the presolidification step and the final solidification step. It has proven to be particularly advantageous that all components required for the method—with the exception of the binder—could be combined into one particulate material, and only one single particulate material may thus be used without the need for additional mixing steps or application steps.
The particularly preferred material combinations according to preferred embodiments are illustrated in the examples. Subcombinations of materials from different examples may also be used together.
FIG. 1 shows particulate material (100), a sand grain (101) being encased with binder (102).
FIG. 2 shows the process of evaporating particulate material (200), to which solvent was added, whereby the particles (200, comprising 201 and 202) are bound and the material is presolidified. The evaporation of the solvent may also be accelerated by the application of heat (203).
FIG. 3 shows the structure of a presolidified molded body (300).
FIG. 4 shows the operation after printing; in this case the solvent begins to penetrate binder coating (402) of particle core (401).
FIGS. 5a through 5d show the evaporation process of the solvent, the mixture concentrating in the contact point (503) between the particles (500) (FIG. 5d ).
As described above, the molded body is formed by binding individual particles (FIG. 3).
The particulate material-based process is based on a particulate material (100) which is encased by a binder (102) (FIG. 1). Casing (102) characteristically has different properties than base material (101). The sand known from the Croning process may be mentioned as an example. In this case, a grain of sand (101) is coated with a novolak resin (102). This resin is melted on and mixed with the sand during the manufacturing process. The sand continues to be mixed until the resin has cooled. The individual grains are separated thereby and a pourable material (100) results.
Base materials having an average grain diameter between 10 and 2,000 μm may be considered as suitable sands for processing in the method according to the invention. Different base materials, such as natural silica sand, kerphalite, cera beads, zircon sand, chromite sand, olivine sand, chamotte, corundum and glass spheres are suitable for subsequent use in casting processes.
Binders may be applied in a wide range of materials. Important representatives are phenol resins (resol resins and novolaks), acrylic resins and polyurethanes. All thermoplastics may furthermore be thermally applied to the grains. Examples of materials that may be used according to the invention are polyethylene, polypropylene, polyoxymethylene, polyamides, acrylonitrile, acrylonitrile styrene butadiene, polystyrene, polymethyl methacrylate, polyethyl methacrylate and polycarbonate.
In addition to, or entirely without, the supply of heat, solvents may be used to coat the grains according to the invention with a bindable material. Other casings may also be implemented by means of solvents. For example, water glass may be dissolved in water and mixed with sand. The material is subsequently dried and broken. Excessively coarse particles are removed by sieving. Since the dissolution process is reversible, the material thus obtained may be used in the process according to the invention by printing it with water as the printing fluid.
In one preferred embodiment of the invention materials may be provided in casing (102) which demonstrate a reaction with the fluid binder during the dissolution process. For example, starters may be provided for a polymerization. In this manner, the evaporation process of the solvent in the particulate material may be accelerated, since less printing solution needs to escape from the particulate material cake by evaporation. As a result, the molded parts may reach their green strength faster and thus be unpacked from the particulate material earlier.
Since the printed parts do not differ much from the surrounding loose particulate material in a solvent process, it may be sensible to dye the molded parts by introducing a pigment into the print medium. In this case, it is possible to use a color reaction based on the combination of two materials. For example, litmus may be used in the solvent. The base material is mixed with the salt of an acid prior to coating with the binder. As a result, not only is a dyeing possible but also a control of the intensity of the dissolution reaction. If the reactive substance, for example, is in direct contact with the grain of the base material, and if it is protected by the casing, the color indicator shows that the casing was completely dissolved.
The process of evaporating the solvent may also be accelerated by supplying heat (FIG. 2). This may take place by means of convection or heat radiators. The combination of an air draft and heating is particularly effective. It should be noted that if the drying process is too fast, the binder may only be partially dissolved. Optimum values with regard to strength development and unpacking time may be ascertained through tests and variations of the solvent.
A printing fluid is applied to the coated grain in the printing process. In its main function, the printing fluid dissolves the binder casing. In the case of Croning sand, approximately 10 wt % of printing fluid is printed for this purpose. Isopropyl alcohol, for example, is suitable as the solvent. After printing, the solvent begins to penetrate the binder casing (FIG. 4). The concentration of the casing material in the solvent increases. When solvent evaporates, the mixture concentrates in the contact point between the particles (FIG. 5). Additional evaporation causes the casing material in the contact point to solidify. Due to the comparatively low viscosities, a favorable process window results, in contrast to melting processes. With the aid of commercial Croning sand of the Hüttenes-Albertus RFS 5000 type, for example, an unpacking flexural strength of more than 100 N/cm2, preferably more than 120 N/cm2, is reached. This is sufficient to unpack even large-format, delicate parts safely and distortion-free.
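As a rough illustration of the quantities involved, the approximately 10 wt % figure can be turned into a per-layer fluid estimate. This is our sketch, not part of the patent: the bulk density value and the assumption that wt % is taken relative to the sand in the printed areas are ours.

```python
def layer_sand_mass_g(printed_area_cm2, thickness_mm=0.2, bulk_density_g_cm3=1.5):
    """Sand mass in the printed region of one layer.

    The 1.5 g/cm3 bulk density is an assumed typical value for dry
    foundry sand; it does not appear in the text.
    """
    return printed_area_cm2 * (thickness_mm / 10.0) * bulk_density_g_cm3

def printing_fluid_mass_g(sand_mass_g, fluid_wt_fraction=0.10):
    """Printing fluid at ~10 wt % relative to the sand printed."""
    return sand_mass_g * fluid_wt_fraction
```

For a 100 cm2 printed cross-section and a 0.2 mm layer, this gives about 3 g of sand and therefore roughly 0.3 g of solvent per layer, under the stated assumptions.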
After the removal method step—also referred to as unpacking—the molded parts are supplied to the final solidification step. The molded parts are subsequently supplied to additional follow-up processes. This method step of the invention is preferably carried out in the form of a heat treatment step. Parts made of Croning sand, which are manufactured according to the process according to the invention, may be used as an example. After unpacking, these parts are preferably re-embedded in another particulate material. However, this material does not have a binder casing and preferably has good thermal conductivity. The parts are subsequently heat-treated above the melting temperature of the binder in an oven. In one of the preferred embodiments, the special phenol resin of the casing is cross-linked, and the strength increases significantly. Melting adhesives are generally preferred for this method step of final solidification. The following may preferably be used as base polymers: PA (polyamides), PE (polyethylenes), APAO (amorphous poly alpha olefins), EVAC (ethylene vinyl acetate copolymers), TPE-E (polyester elastomers), TPE-U (polyurethane elastomers), TPE-A (copolyamide elastomers) and vinylpyrrolidone/vinyl acetate copolymers. Other common additives known to those skilled in the art, such as nucleating agents, may be added.
A Croning sand of the Hüttenes-Albertus RFS 5000 type is used in a layering process. For this purpose, the sand is deposited onto a build plane in a 0.2-mm layer. With the aid of a drop-on-demand print head, the sand is subsequently printed with a solution of isopropyl alcohol according to the cross-sectional surface of the desired object in such a way that approximately 10 wt % is introduced into the printed areas. The build plane is then shifted relative to the layering mechanism by the thickness of the layer, and the operation comprising the layer application and printing starts again. This cycle is repeated until the desired component is printed. The entire operation is carried out under normal conditions. The temperature in the process room should be between 18° C. and 28° C., preferably between 20° C. and 24° C.
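The repeated recoat-print-shift cycle described above can be sketched as follows. This is an illustrative outline only: `recoat` and `jet` are hypothetical stand-ins for the layering mechanism and the drop-on-demand print head, which the patent does not specify as software interfaces.

```python
def build_cycle(cross_sections, layer_thickness_mm=0.2, room_temp_c=22.0,
                recoat=lambda thickness_mm: None, jet=lambda section: None):
    """Sketch of the layer cycle from the text.

    cross_sections: one 2D slice of the desired object per layer.
    recoat/jet are placeholder callables for the real hardware.
    Returns the total build height in mm.
    """
    if not (18.0 <= room_temp_c <= 28.0):
        raise ValueError("process room outside the stated 18-28 C window")
    n_layers = 0
    for section in cross_sections:
        recoat(layer_thickness_mm)   # deposit a fresh 0.2 mm sand layer
        jet(section)                 # print ~10 wt % solvent onto the slice
        n_layers += 1                # build plane shifts down by one layer
    return n_layers * layer_thickness_mm
```

Fifty slices at 0.2 mm, for example, yield a 10 mm build height; the cycle simply repeats until the component is complete.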
Approximately 24 hours must pass before the final layers of sand have developed an adequate strength. The component may then be unpacked, i.e., removed from the surrounding sand and freed of all powder deposits. When printed test bodies are dried in the circulating air oven for 30 minutes at a temperature of 40° C., they demonstrate a flexural strength of 120 N/cm2.
The parts are then prepared for the subsequent heat treatment step. For this purpose, they are introduced, for example, into uncoated sand, which is situated in a temperature-resistant container. To ensure a good contact between the part and the supporting sand, vibrations are applied to the container during placement and filling with sand.
Any deformation may be avoided in this manner during the hardening reaction, i.e., the final solidification step, at high temperatures. The component is thus heated in the oven for 10 hours at a temperature of 150° C. After removal from the oven, approximately 30 minutes must again pass until the component has cooled enough to allow it to be handled and removed from the powder bed. Following this process step, the deposits may be removed by sand blasting. Treated bending test bodies demonstrate a flexural strength of 800 to 1,000 N/cm2 following this final solidification step.
A layering process is carried out in a manner similar to the first example. A Croning sand of the Hüttenes-Albertus CLS-55 type is used in this case. For this purpose, the sand is again deposited onto a build plane in a 0.2-mm layer. A solution of 15% Corrodur from Hüttenes-Albertus, 42.5% ethanol and 42.5% isopropyl alcohol is used as the printing fluid.
Approximately 10 wt % of fluid is printed onto the sand.
The flexural strength after unpacking the molded body and completing this first method step, which is also referred to as the presolidification step, is 140 N/cm2 in this case. The final flexural strength after the second method step, which is also referred to as the final solidification step, is again 800 N/cm2.
The process for this preferred manufacturing method is carried out in a manner similar to the previous examples. In this case, strengths of 800 N/cm2 are achieved using untreated sand as the base. A mixture of 50% Corrodur and 50% dioxolane is used as the binder fluid. 10 wt % is printed. The process takes place at room temperature. The component does not have to be unpacked from the particulate material after printing, since the unencased material cannot be bound by means of thermal energy. Either the entire box or, for example, one printed box may be introduced into the oven to carry out the final solidification step. A sand volume of 8×8×20 cm, which contains a bending test body, is heat-treated in the oven for 24 hours at a temperature of 150° C. The strength upon conclusion of the final solidification step is approximately 800 N/cm2. A determination of the organic proportion by means of ignition loss determination demonstrates 5 wt %. The material in this case corresponds to the RFS-5000 and CLS-55 products from Hüttenes-Albertus. After the oven process, the parts may be cleaned by sand blasting.
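The ignition-loss determination mentioned above reduces to a simple mass-difference calculation: the sample is weighed, burned out, and weighed again, and the loss is expressed as a percentage of the initial mass. A minimal sketch (our formulation, not the patent's):

```python
def ignition_loss_wt_percent(mass_before_g, mass_after_g):
    """Loss on ignition as a percentage of the initial sample mass,
    used here to estimate the organic (binder) proportion of the
    particulate material."""
    if mass_before_g <= 0:
        raise ValueError("initial mass must be positive")
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g
```

The 5 wt % organic proportion reported in the example corresponds to a 5 g loss per 100 g sample.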
wherein the action of thermal energy includes a chemical mechanism.
2. The method according to claim 1, wherein the final solidification step includes heating to a temperature from 110° C. to 200° C.
3. The method according to claim 2, wherein the casing or coating includes a binder.
4. The method according to claim 2, wherein the casing or coating comprises or includes thermoplastic polymers, soluble polymers, waxes, synthetic and natural resins, sugars, salts, inorganic network formers or water glasses.
5. The method according to claim 1, wherein the pre-solidification step takes place using a solvent comprising water, hydrocarbons, alcohols, esters, ethers, ketones, aldehydes, acetates, succinates, monomers, formaldehyde, phenol and mixtures thereof.
6. The method according to claim 3, wherein the binder contains polymerizable monomers.
wherein the material contained in the casing or coating contributes to the final strength or to the preliminary strength in the pre-solidification step and to the final strength in the final solidification step.
the body is heat-treated with the assistance of an inert material.
9. The method of claim 8, wherein the presolidified molded body (green body) has a flexural strength of 120 to 400 N/cm2 following the presolidification step; and the molded body has a flexural strength of more than 750 N/cm2 after the final solidification step.
10. The method of claim 1, wherein a temperature in the final solidification step is from 130° C. to 200° C.
11. The method of claim 2, wherein the temperature at the component is maintained within a time range of 2 to 24 hours.
12. The method of claim 11, wherein natural silica sand, kerphalite, cera beads, zircon sand, chromite sand, olivine sand, chamotte, corundum or glass spheres are used as the particulate material.
13. The method of claim 6, wherein the coating or casing contains materials for starting a polymerization with the binder.
two different materials are contained in the casing or coating, the one material being essentially destined for the pre-solidification step and the other material essentially being destined for the final solidification step.
15. The method of claim 1, wherein only the fluid including the solvent is printed on the particulate material.
the coating or casing contains a color indicator which is activated by a binder.
17. The method of claim 16, wherein the molded body is subjected to polishing, dyeing, or both.
wherein the particulate material includes a base material and a coating or casing having one or multiple materials for the pre-solidification step and for the final solidification step.
19. The method of claim 18, wherein two different materials are contained in the casing or coating, the one material being essentially destined for the pre-solidification step and the other material essentially being destined for the final solidification step.
20. The method according to claim 19, wherein the final solidification step includes a chemical reaction.
International Preliminary Report on Patentability and Written Opinion of the International Search Authority, Application No. PCT/DE2013/000589, dated Feb. 25, 2014.
International Search Report, Application No. PCT/DE2013/000589, dated Feb. 25, 2014.
Marcus, et al., Solid Freeform Fabrication Proceedings, Sep. 1995, p. 130-133.
Voxeljet's VXconcept-Continuous 3D printing for sand casting, You-Tube, Nov. 16, 2011, XP002713379, retrieved from the Internet URL: http://www.youtube.com/watch?v=hgIrNXZjIxU retrieved on Sep. 23, 2013.
ES2331476T3 (en) 2010-01-05 Methods for making layered three-dimensional shapes.
ES2376237T3 (en) 2012-03-12 Thermoplastic powder material system for appearance models from 3D printing systems.
After-School All-Stars (ASAS) is a past grantee that provides students with educational opportunities outside of school. In 2018, the ESA Foundation is supporting the ASAS video game design curriculum, Minecraft: Education Edition and 9 Dots, which serves more than 500 students in nine cities. Thanks to the ESA Foundation’s support, the curriculum this year will include a science, technology, engineering, arts, and math (STEAM) career exploration event in partnership with Verizon.
Becker College recently launched its ForEach Academy STEAM Community Outreach Program, which introduces underprivileged seventh- and eighth-grade girls to game design and programming and provides hands-on work with augmented reality (AR), virtual reality (VR), design, modeling, and electronics. Support from the ESA Foundation will help in the expansion of this program to ensure more middle-school girls have access to science, technology, engineering, arts, and math (STEAM) education, ultimately building the pipeline of future video game makers.
Extra Life is a 24-hour video game marathon and fundraiser that has raised over $40 million for medical research and treatment at Children’s Miracle Network Hospitals (CMNH) across North America since its inception in 2008. As a returning grantee, Extra Life will use its 2018 funding from the ESA Foundation to develop its mobile and social fundraising apps, which will allow participants to fundraise “on-the-go” and will give CMNH the ability to provide suggested messaging for social platforms.
Girlstart is a returning grant recipient that promotes young women’s early engagement and academic success in science, technology, engineering and math (STEM), and ultimately resolves the gender gap that currently exists in today’s STEM workforce by serving almost 30,000 girls nationwide. In 2018, Girlstart will use its ESA Foundation grant to offer more free summer camps that encourage girls to participate in STEM activities, with a particular focus on computer science and video game design.
Global Game Jam supports the next generation of game developers by hosting the world’s largest annual game jam. The ESA Foundation will support Global Game Jam’s new youth program GGJNext, a comprehensive week-long curriculum that ends in a youth game jam, where students can showcase the games they spent all-week learning about and creating. Funding from the ESA Foundation will ensure that the week-long program remains completely free and open to all students who wish to learn more about the development of video games.
Global Kids, Inc. programs help youth tap into their curiosities to develop games that have an impact on communities, by giving them access to engaging environments and open-source tech tools. In 2018, the ESA Foundation funding will support the expansion of the nonprofit’s game-design program, Haunts. With this funding, Global Kids will be able to expand this STEM-based learning program to Houston, New York City, and Washington, DC, giving more students the opportunity to create an educational, geo-locative alternative reality game.
Scholastic’s Alliance for Young Artists & Writers Inc. empowers creative teenagers in continuously changing artistic fields, including video game design. Thanks to support from the ESA Foundation, the alliance will help develop and expand the reach of its video game workshops to teens in seventh through twelfth-grade. These workshops will introduce students to video game design platforms and teach them successful game structures and storytelling.
The Association of Hole in the Wall Camps' mission is to provide children with serious medical conditions and life-threatening illnesses a chance to attend summer camps with children who have similar conditions. Through intentional programming and therapeutic recreation, traditional camp programs are designed to foster self-confidence, enhance coping and resilience and help campers reach beyond the limits of their medical conditions. The first camp was founded by Paul Newman in Connecticut in 1988; today camps and programs span the globe, reaching children and their families in 39 countries and all 50 US states.
The 2005 ESA Foundation grant enabled camps in New York and Florida to create and maintain web sites that allow campers to interact with friends made at camp, volunteers, counselors and staff throughout the year. This interaction fosters friendships and offers relief from the isolation children may feel upon returning home and support as their illness progresses.
The Association on American Indian Affairs (AAIA) promotes the welfare of American Indians and Alaska Natives by supporting efforts to: sustain and perpetuate their cultures and languages; protect their sovereignty, constitutional, legal and human rights, and natural resources; and improve their health, education, and economic and community development. AAIA is a national Indian organization, governed by an all-Native American Board of Directors, whose programs fall into four main categories: youth/education, health, cultural preservation and sovereignty.
ESA Foundation awarded a grant to AAIA in 2010 to create interactive web-based learning materials for Native American children to use in learning their Native languages. This software strengthens tribal communities and enhances the overall well-being and academic achievements of American Indian students through preservation of cultural wisdom contained in language.
Ball State University Department of History provides training and materials, and also develops media projects for elementary teachers and students. It creates high-quality media products to enrich the curriculum and instruction of elementary social studies, which are distributed free of charge to schools and public libraries.
ESA Foundation awarded a grant to Ball State University Department of History in 2012 to support the creation of a digital gaming simulation of the Underground Railroad. Through playing the game, upper elementary students learned about the social and geographic aspects of the journey on the Underground Railroad.
The Massachusetts Digital Games Institute (MassDiGI) was established in 2011 as the center for economic development and academic cooperation across the Commonwealth. Its mission is to facilitate cooperation among industry, government, and academia; to strategically foster job growth and economic development in Massachusetts; and to grow, attract, and retain digital and video game companies. In 2014, the ESA Foundation supported MassDiGI’s annual Game Challenge, which helped aspiring game designers create games. The ESA Foundation also supported the launch of the MassDiGI 101 program, which included numerous workshops to provide students and teachers with information about how to bring game design and computer programming into the classroom.
The Boys & Girls Clubs of America (BGCA) received funds to develop a STEM Training Track for Club professionals, which would ensure high quality STEM program experiences in clubs across the country. The ESA Foundation’s support for the STEM program, which encourages kids and young adults to apply STEM concepts to real-life experiences to improve their local communities, is a continuation of a 2016 grant that helped fund the research and piloting of the STEM initiative.
Case Western Reserve University (CWRU) is a nationally ranked research university located in Cleveland, Ohio. CWRU improves people's lives through preeminent research, education and creative endeavor. They realize this goal through scholarship that capitalizes on the power of collaboration; learning that is active, creative and continuous; and promotion of an inclusive culture of global citizenship.
ESA Foundation awarded a grant in 2011 to CWRU to support the Great Lakes Game Project Challenge through a partnership between Electrical Engineering and Computer Science Department and the Great Lakes Energy Institute at Case. The Challenge will be directed at high school students in the four states lying adjacent to Lake Erie: Michigan, Ohio, Pennsylvania and New York. Students will compete to create a video game focused on wind energy and sustainable energy generation.
Children's Health Education Center (CHEC) is a member of the Children's Hospital of Wisconsin. CHEC and its BlueKids.org e-learning programs offer resources and programs for children, teachers, parents and caregivers to help keep kids healthy and safe. These game-based, interactive health education programs are delivered online and align with national health education standards.
The 2011 ESA Foundation grant to CHEC will fund the delivery of "Act Now!", an e-learning bullying prevention program, to over 600 middle school classrooms in Wisconsin, Ohio and Pennsylvania. The multiple 45-minute e-learning lessons for 6th–8th graders will address physical, verbal, emotional and cyber bullying and be accompanied by classroom activities and discussions led by a teacher, counselor or trained professional. An online staff development and training program for schools and the community, which provides a step-by-step process for creating a bully-free school, is also available.
As the world’s largest children’s museum, the Children’s Museum of Indianapolis’ mission is to create extraordinary learning experiences across the arts, sciences and humanities. With the ESA Foundation’s support, the museum will create interactive video games that will allow children and their families to imagine they are part of a team of astronauts aboard the International Space Station. The games will be available online and on display in the Beyond Spaceship Earth exhibit, which is funded by NASA.
The Colonial Williamsburg Foundation’s mission is to engage, inform, and inspire on- and off-site visitors to the colonial capital where they encounter historic events and the diverse people who helped shape a new nation; and to preserve and restore 18th-century Williamsburg so that the future may continue to learn from the past.
In 2012, ESA Foundation supported the Kids Zone, a child-friendly website that introduces young people to Colonial Williamsburg. ESA Foundation’s support enhanced the Kids Zone site and developed new games and activities for youth to enjoy on mobile devices.
Computers for Youth (CFY) is a national non-profit organization that seeks to enable low-income children to succeed in school by improving their at-home learning environment. CFY's programs are designed to enhance the educational resources available in children's homes, improve parent-child interaction around learning at home and help teachers connect classroom learning with the home.
The 2008 ESA Foundation grant supported the California expansion of CFY's Take IT Home program. The Take IT Home program provides participating sixth grade families with a free computer designed as a home learning center, educational software titles, Internet access at reduced cost, bilingual web content, Family Learning Workshops that teach parents and children how to best utilize their computer systems, technical support from CFY's bilingual help desk and additional training.
DonorsChoose.org is dedicated to addressing the scarcity and inequitable distribution of learning materials and experiences in public schools. DonorsChoose.org improves public education by engaging citizens in an online marketplace where teachers describe specific educational projects for their classrooms and individuals can choose which projects to fund. Their vision is of a nation where students in every community have the resources they need to learn. Since launching in 2000, DonorsChoose.org has directed more than $37 million in resources and experiences for public school students and has empowered more than 150,000 teachers and citizen philanthropists to become change makers.
The 2010 ESA Foundation grant encouraged student and teacher innovation by funding 89 classroom projects which utilized video games or technology in traditional subject areas, reaching nearly 8,000 students in 27 states. The ESA Foundation grant was matched by 316 citizen philanthropists helping to bring $50,000 worth of resources to classrooms in high-need public schools across the country.
Drexel University, a leader in collegiate game design programs, will use its ESA Foundation grant to offer game development workshops for girls in the Philadelphia region in partnership with TechGirlz, a nonprofit dedicated to reducing the gender gap in technology occupations. The workshops will give girls a hands-on experience with different technologies and encourage them to pursue degrees and occupations in technology-related industries.
Edheads creates unique, educational web experiences designed to make hard-to-teach concepts understandable using the power and interactivity of the Internet. They deliver in-depth content in a fresh, exciting style allowing the user to learn intuitively in an online environment. Edheads strives to promote STEM careers to K-12 students nationwide by tying math and science curriculum to real world situations and advancing critical thinking skills.
The 2011 ESA Foundation grant will fund the development of an online interactive engineering design experience centering on nanoparticles. Edheads will work with the Ohio State University Nanoscience and Engineering Center to create the game, which will blend engineering, human health and medicine, and critical thinking skills to appeal to girls ages 15-18 who are considering medical careers.
EverFi was founded in 2008 to leverage technology to teach K-12 students critical life skills. In partnership with the nonprofit Southeast Community Development Corporation, EverFi launched the Ignition™ – Digital Literacy and Responsibility initiative in Los Angeles, CA and Austin, TX. Ignition™ is a highly interactive, web-based learning platform that educates students about digital citizenship, including digital footprints, securing online identities, cyberbullying, good texting practices, conducting online research, digital time management, and creating multimedia products.
With support from the ESA Foundation, EverFi was able to continue to build relationships with the Los Angeles Unified School District (LAUSD), Austin Independent School District (AISD), and community organizations in both cities to offer Ignition™.
Established within one week of the attacks on September 11, 2001, the Families of Freedom Scholarship Fund's purpose is to provide education assistance for post-secondary study to financially needy dependents of those people killed or permanently disabled as a result of the terrorist attacks and during the rescue activities relating to those attacks. The Fund has already provided millions of dollars in scholarship support, and will continue to provide education assistance through the year 2030. ESA Foundation made a generous donation to the Families of Freedom Scholarship Fund in 2002 to support the 9/11 relief efforts.
The Federation of American Scientists (FAS) is a science policy organization that addresses a broad spectrum of policy issues in carrying out its mission to promote humanitarian uses of science and technology. Its Learning Technologies Program includes research and development to harness the potential of emerging information technologies. FAS created Immune Attack, an educational video game that introduces basic concepts of human immunology to middle school, high school and entry-level college students.
In 2012, ESA Foundation made a three-year commitment to FAS to help distribute and evaluate the pedagogic use of its Immune Defense game for teaching biology concepts.
Games for Change (G4C) is the leading global advocate for making and supporting digital games for social impact, and harnessing their power to engage the public in the most pressing issues of our day. It acts as a catalyst for the creation of high quality, high impact educational and social change games.
ESA Foundation awarded G4C a grant in 2012 to expand the impact of its Games for Change Festival and Games for Change Awards. The Festival and Awards bring together leaders from government, corporations, civil society, media, academia, and the gaming industry to explore the impact of digital games as an agent for social change. As part of a national program, award-winning games were presented in cultural institutions.
George Mason University’s Center for Digital Media Innovation and Diversity serves as a resource for research, design, and dissemination of digital media for diverse populations. Its goal is to leverage the expertise of scholars and industry professionals from across the country to conduct research, design digital media products, and provide access to quality educational media products for diverse audiences.
In 2013, ESA Foundation awarded a grant to George Mason University’s Center for Digital Media Innovation and Diversity to fund its Saturday game design workshops for middle and high school students from traditionally underserved communities in Northern Virginia, Southern Maryland, and the District of Columbia. Workshop participants will learn game design techniques using the GameMaker, Scratch, and UNITY development tools, and build technical skills that they can apply on projects that have real value in the world.
For 100 years, the Girl Scouts of America has empowered a diverse range of girls – many of them from low-income backgrounds – to be effective, self-assured leaders in their schools, families, and communities. In partnership with Women in Games International (WIGI) and E-Line Media, Girl Scouts of Greater Los Angeles (GSGLA) plans to develop a video game patch program to interest scouts in game design-related topics and STEM fields.
The ESA Foundation’s grant enabled GSGLA and WIGI to conduct seven hands-on game design workshops for 1,000 girls in the Los Angeles area, with a goal to reach an additional 60,000 girls, parents, and youth mentors through its outreach program to build awareness for future workshops.
The Smithsonian's Hirshhorn Museum and Sculpture Garden is a leading voice for contemporary art and culture and provides a national platform for the art and artists of our time. It enhances public understanding and appreciation of contemporary art through acquisition, exhibitions, education and public programs, conservation, and research.
The 2011 ESA Foundation grant will help fund the creation of a digital youth center to advance next generation learning. The unique environment will inspire creativity and experimentation, from individual projects like website and game design to group projects such as large-scale video productions.
HopeLab harnesses the power and appeal of technology to motivate measurable positive health behaviors in young people. Since 2006, the ESA Foundation has supported HopeLab’s creation and release of Re-Mission and Re-Mission 2, online and mobile games that promote successful, long-term treatment outcomes for adolescents and young adults with cancer. This year, ESA Foundation will help increase the awareness of Re-Mission 2 and commemorate the 10-year anniversary of the original Re-Mission game.
The White House created Hispanic Heritage Foundation (HHF) in 1987 to commemorate the establishment of Hispanic Heritage Month in America. HHF has been recognized by the White House, Congress, and Fortune 100 companies for its mission to identify, inspire, prepare, and position Latino leaders in the classroom, community, and workforce to meet America’s priorities with a focus on innovation. HHF also promotes Latino cultural pride, accomplishment, and role models.
In 2014, the ESA Foundation grant supported HHF’s Leaders on the Fast Track (LOFT) Video Game Innovation Fellowship. This new program furthered HHF’s commitment to advancing Latinos, African Americans, and females in STEM careers by awarding 20 youths, ages 16-24, with grants to create video games that solve critical problems in their communities.
Inspire USA’s mission is to help millions of young people lead happier lives. The foundation of its work is the design and delivery of innovative technology-based services that promote mental health and prevent suicide.
In 2012, the ESA Foundation grant allowed Inspire USA to create a Facebook application to raise awareness of the mental health impact of cyberbullying and bring attention to ReachOut.com, a cyberbullying resource for teens. The application was designed and built through a national competition for young programmers.
The Institute of Play is a design lab and learning center that seeks to activate a next generation of engaged citizens and lifelong learners by leveraging the power of games, game design, and systems thinking.
In 2012, ESA Foundation granted support to the Institute of Play to design, develop, and launch an online platform, Playforce. Playforce fostered a multi-generational online community of gamers, parents, and teachers. It catalogued educational games that can be used in school settings, allowed kids to gain skills and earn recognition in a way that can be leveraged, and created a community with a common language and framework to assess games as learning objects and advocate for their relevance.
Just Think teaches young people to lead healthy, responsible, independent lives in a culture highly impacted by media. They develop and deliver cutting-edge curricula and innovative programs that build skills in critical thinking and creative media production. Just Think teaches young people media literacy skills for the 21st century. They have been successfully creating and delivering in school, after school and online media arts and technology education locally, nationally and internationally since 1995.
In 2002 ESA Foundation supported "September 11th: Reflecting, Responding, Helping and Healing" in New York City and Washington, D.C., to help communities with youth affected most by the events of September 11th. ESA Foundation supported additional education training in Boston and Los Angeles in 2003.
The Juvenile Diabetes Research Foundation (JDRF) is the world leader in research toward a cure for type 1 diabetes. It sets the global agenda for diabetes research, and is the largest charitable funder and advocate of diabetes science worldwide. JDRF was founded in 1970 by the parents of children with juvenile diabetes – a disease which strikes children suddenly, makes them insulin dependent for life and carries the constant threat of devastating complications. JDRF's mission is constant: to find a cure for diabetes and its complications through the support of research. ESA Foundation donated to JDRF's Fund-A-Cure campaign in 2001.
The Lewis and Clark Foundation’s Lewis and Clark Interpretive Center’s mission is to impart upon the public a personal sense of President Thomas Jefferson’s vision of expanding America to the west. Specifically, the organization works to inspire awe and awaken curiosity about the challenges faced by Lewis and Clark’s famous expedition as they portaged the great falls of the Missouri River and explored the “unknown.” The Foundation brings to life the daily experiences of the expedition and celebrates the indomitable spirit of human discovery we all share.
In 2013, ESA Foundation supported the Center in developing Meriweather, a historically accurate computer role-playing game based on the Lewis and Clark Expedition to bring the engaging strengths of interactive educational games to youth ages 13 through 20.
Mothers Against Violence in America (MAVIA) was a nonprofit organization whose goal was to reduce youth violence through grassroots advocacy and student-driven educational programs. MAVIA established SAVE (Students Against Violence Everywhere) with 100 chapters in schools of all levels. They collaborated with elected officials and industry representatives to enforce ratings, gun safety and education. MAVIA received a grant in 2001 to support the expansion of the SAVE program in middle and high schools in six states and to provide training and leadership development for the SAVE program at the University of Michigan.
Since 1984, the National Center for Missing & Exploited Children (NCMEC) has served as the nation’s clearinghouse on issues related to missing and sexually exploited children. Now with better public awareness, training, laws and technology, the recovery rate of missing children has jumped from 62 percent in 1990 to more than 97 percent today. The ESA Foundation will support NCMEC’s development of NetSmartz Kids Club UYN, a monthly online feature that promotes Internet safety with animated media, interactive activities and more.
Since 1996, the National Institute on Media and the Family (NIMF) has worked to help educate parents and communities about their children's media exposure. NIMF is an independent, nonpartisan, nonsectarian and nonprofit organization that is based on research, education and advocacy. Its MediaWise Network helps parents, teachers and community leaders monitor and influence the media world by providing free resource guides, the latest research, blogs and more.
In 2009 ESA Foundation supported SWITCH, a childhood health and wellness program designed to change three key behaviors - physical activity (Do), television viewing/screen time (View) and fruit/vegetable consumption (Chew). The SWITCH program provides participants, typically 3rd graders, and their families with easy-to-use tools and resources to make healthy choices.
The National Museum of the American Indian (NMAI), a component of the Smithsonian Institution, is dedicated to preserving one of the world’s most expansive collections of Native American artifacts. With support from the ESA Foundation, NMAI will develop The Great Inka Road: Engineering an Empire, an in-gallery game play experience that will explore the importance of the Inka Road and how the indigenous peoples of the Western Hemisphere changed the course of world history.
For over 20 years, the Museum of the Moving Image has used moving image media to advance the understanding, enjoyment, and appreciation of the art, history, technique, and technology of film, television, and digital media. The Museum presents exhibitions, education programs, significant moving-image works, and interpretative programs, and collects and preserves moving image-related artifacts. It was the first museum to collect and exhibit video games, beginning in 1989 with Hot Circuits: A Video Arcade, and continues to regularly showcase video games in its exhibitions and programs.
In 2013, ESA Foundation funded the Museum of the Moving Image’s education programs related to its landmark exhibition, Spacewar!: Video Game Blast Off, which explored the legacy of Spacewar!, one of the earliest forms of video games.
One Economy Corporation is a global nonprofit organization that uses innovative approaches to deliver the power of technology and information to low-income people, giving them valuable tools for building better lives. They help bring broadband into the homes of low-income people, employ youth to train their community members to use technology effectively, and provide public-purpose media properties that offer a wealth of information on education, jobs, health care and other vital issues. Their mission is to maximize the potential of technology to help low-income people improve their lives and enter the economic mainstream.
ESA Foundation first supported the expansion of the Digital Connectors program to Chicago, New York City, Oakland, San Francisco and San Jose in 2009. One Economy grew these programs with the 2010 ESA Foundation grant. The Digital Connectors program is a best-practice youth development movement that engages low-income teens and young adults, ages 14 to 21, in leadership development, digital education, life skills management and community service. By making a difference in their respective communities, taking field trips to high tech companies, hearing from emerging business leaders and connecting to each other through the Connectors Club web site, youth are able to hone technical competencies and grasp lifelong principles that inspire educational advancement and workforce preparation. The Digital Connectors program was designed to unleash the power of technology for youth and disconnected families.
Parents’ Choice Foundation is the nation’s oldest nonprofit evaluator of children’s media and toys. Its mission is to provide parents with a trusted, independent resource for recommending toys, games, and media for children and families of all achievements, abilities, and backgrounds.
In 2013, Parents’ Choice received funding from ESA Foundation to develop the methodology and inter-rater reliability for the Ability Index program for digital games, a new nationwide initiative expanding the scope of the 34-year history of the organization’s work to include the products’ therapeutic benefits for children and youth with special needs.
PAX is a nonpolitical nonprofit organization working with all Americans to help end gun violence against children and families. PAX's two innovative programs -- SPEAK UP and ASK (Asking Saves Kids) -- offer practical solutions for protecting children from gun violence. SPEAK UP is a proven national youth violence-prevention initiative that empowers students with critical knowledge and resources to prevent weapon-related violence in their schools and communities. The SPEAK UP Campaign consists of a national hotline (1-866-SPEAK-UP) for students to anonymously report weapon threats, a mass awareness campaign and a youth education initiative.
ESA Foundation supported the SPEAK UP program in 2006 and 2007 and is pleased to again collaborate with PAX in 2010 and 2011. This two-year grant supports measurable implementation of the SPEAK UP program in Cumberland County, North Carolina including community outreach and coalition building, 1-866-SPEAK-UP customization, media outreach, education kits and materials as well as assessments.
The Pulitzer Center has a history of award-winning, innovative multimedia approaches to education and journalism. It works to address issues affecting journalism by supporting journalists’ work and raising awareness about what goes into good journalism.
In 2014, the ESA Foundation supported the Pulitzer Center’s collaboration with Decode Global to create educational games that increase media literacy and global issue awareness among high school students from low-income communities. Specifically, the Pulitzer Center designed Timbuktu: Mali’s Ancient Manuscripts, an immersive role-playing game that allows students to experience being an international journalist. The game aims to teach students about good journalism, help foster their creativity, improve their critical thinking abilities, and enhance their storytelling skills.
The mission of the Purdue Center for Serious Games and Learning in Virtual Environments is to provide support for implementing, designing and developing serious games and virtual environments for learning; to encourage collaboration across Purdue and with K-12 schools; and, to establish a foundation for securing funding and conducting research at Purdue on the use of serious games and virtual learning environments in education.
ESA Foundation awarded a grant in 2010 to the "Serious Games Center" at Purdue University to develop National Pastime, a citizenship education video game designed to teach middle and high school students about the internment of Japanese-Americans in the United States during WWII. By immersing players in the internment experience, the game seeks to engage students with their roles as citizens in a democracy, highlight challenges a democracy can face and illuminate the responsibilities of citizens to protect their freedoms. The grant will also fund the implementation and evaluation of the game in an alternative high school.
Rensselaer Polytechnic Institute (RPI), one of the country’s oldest technological universities, was founded to educate students to apply science to common purposes of life. Its Games & Simulation Arts and Science program prepares students to enter the digital game industry, and its Center for Cognition, Communication and Culture investigates new types of education and learning behaviors.
In 2012, ESA Foundation supported a collaboration between these two RPI programs to develop A Virtual Space for Children to Meet and Practice Chinese, an immersive environment for children to learn and practice the Chinese language. The project supplemented classroom instruction with virtual teachers and provided opportunities for students to practice Mandarin Chinese through conversation and interaction.
Save the Children's mission is to create lasting, positive change in the lives of children in need. They seek to ensure that children in need grow up safe, educated and healthy, and better able to attain their rights. They provide a wide range of programs including training new mothers with prenatal care, supplying life-saving immunizations for young children, building schools in developing countries and improving literacy and nutrition for children living in rural poverty in the U.S.
ESA Foundation awarded a special Hurricane Katrina relief grant to Save the Children in 2006 to assist the hundreds of thousands of displaced children. Save the Children set up schools, camps, childcare and counseling centers throughout the Gulf Coast.
ESA Foundation sponsored the creation of Coping with Chemo through the Starbright Foundation, which was incorporated into the Starlight Children's Foundation in 2005. Coping with Chemo later became part of Starbright World, the premier online social network for teens with chronic and life-threatening medical conditions and their siblings. Teens are able to connect with other teens that are at home or in the hospital. Users post pictures, chat, post blogs and bulletins and find new friends in similar situations.
Coping with Chemo is a series of webisodes written by teens with cancer to help other young people find positive ways to deal with the cancer experience. Each webisode addresses a different topic -- getting diagnosed with cancer, side effects of chemotherapy and other treatments, telling your friends and celebrating your last treatment. Coping with Chemo continues to be an important part of Starbright World today.
The Starlight Children's Foundation is dedicated to improving the quality of life for children with chronic and life-threatening illnesses and life-altering injuries by providing entertainment, education and family activities that help them cope with the pain, fear and isolation of prolonged illness. Starlight offers a comprehensive menu of outpatient, hospital-based and web offerings that provide ongoing support for children and families from diagnosis through the entire course of medical treatment.
In 2004, ESA Foundation provided a grant to Starlight in support of the Kids Activity Network (KAN). The KAN program was an outpatient program designed to meet the emotional needs of seriously ill children and their families with a variety of events and outings each month throughout the United States. In 2007, the ESA Foundation awarded a joint grant to support distribution of HopeLab's Re-Mission game through the Starlight Starbright Foundation's PC Pals computer network and to cancer camps in the United States.
Street Law is a nonprofit organization dedicated to providing practical, participatory education about law, democracy and human rights. Through its philosophy and programs, people are empowered to transform democratic ideals into citizen action.
YouthVision was a program that challenged young people to design creative ways to resolve problems by addressing conflict, prejudice or violence in their school or community. It was a collaborative effort of five organizations: the Conflict Resolution Education Network, the Center for Youth as Resources, the National Crime Prevention Council, the Society of Professionals in Dispute Resolution and Street Law. A grant by ESA Foundation sponsored the participation of youth Advisory Committee members and alumni in the annual "Take the Challenge" YouthVision Leadership Training in Washington, D.C., and the enhancement of the YouthVision initiative's web strategies in 2001.
The National Association of Students Against Violence Everywhere (SAVE), Inc. is a nonprofit organization striving to decrease the potential for violence in our schools and communities by promoting meaningful student involvement, education and service opportunities in efforts to provide safer environments for learning. SAVE has a triad approach in addressing violence in schools and communities that includes: 1) conflict management, 2) crime prevention and 3) service to the community. SAVE has 1,800 chapters in 47 states in elementary, middle, high schools, colleges and youth-serving community organizations across the country.
Through ESA Foundation funding, SAVE administered 40 grants to community chapters in 2005 and 2006 to implement violence prevention strategies so that all students will be able to attend schools that are safe and secure, free of fear and conducive to learning. SAVE has received several Inspiration in Prevention Awards from Youth Crime Watch of America and the National Crime Prevention Council. Individual members of the Youth Advisory Board (YAB), whose national violence prevention activities were supported in part by the ESA Foundation, have received numerous Presidential Student Service Awards.
The ESA Foundation was the title sponsor in 2005 for Summer Lovin', a high profile fundraising reception to benefit the Leukemia and Lymphoma Society and Back on Track, a tutoring program for low-income children.
ThanksUSA is an effort to mobilize Americans to “thank” the men and women of the U.S. armed forces. It provides college, technical and vocational school scholarships to the children and spouses of military personnel. It also offers Treasure Hunt, a digital American history game that reminds players of the freedom and values sustained by members of the armed services. The ESA Foundation has supported Treasure Hunt since 2009, and will continue to do so in 2015.
The Animation Project (TAP) offers a compelling and revolutionary form of animation therapy that, by taking full advantage of adolescents' interest in video games, propels adolescent development in the emotional, social and cognitive areas. TAP's technology-based group therapy builds self-esteem, pre-planning and collaborative skills as well as technical abilities. The combination of improved mental health with newly acquired hi-tech aptitudes prepares the adolescents for success in the modern workplace.
The 2009 ESA Foundation grant supported expansion of TAP's 3D Computer Animation therapy to at-risk adolescents in New York and New Jersey. Working in support groups, youth make their own video game scenarios and animations that are used over the course of the program as a therapy vehicle. A licensed art therapist and a professional computer animator lead the groups.
The Cooper Institute (CI) is a research and education organization dedicated globally to preventive medicine. The Institute's founder, Kenneth H. Cooper, M.D., M.P.H., the "Father of Aerobics," was an Air Force physician who became interested in the role of exercise in preserving health. When he published his first best seller, Aerobics, in 1968, he introduced a new word and was the spark for millions to become active.
In January 2010, the Texas Department of Agriculture awarded CI with a grant to develop NutriGram, a data-driven and interactive, web-based educational application for schools, teachers, and parents to improve healthy eating knowledge, attitudes and behaviors for students in grades 3-5. The web site will host the first 3D nutrition game called The Quest to Lava Mountain, specifically designed for elementary-age students for use in classrooms or at home, empowering students to eat well and move more, while having fun. ESA Foundation awarded CI a grant in 2011 to develop enhancements and additional interactive web-based games for the NutriGram program.
The Survivors' Fund was established to support the long-term recovery of individuals and families affected by the September 11, 2001 terrorist attack at the Pentagon. The goal of the Survivors' Fund was to help survivors and their families receive the assistance and services they needed to rebuild their lives. The Fund partnered with Northern Virginia Family Service to provide case management services and financial support to 1,051 individuals in 517 families. The Survivors' Fund ceased operations in 2008. ESA Foundation made a donation to the Survivors' Fund in 2002 in support of the 9/11 relief efforts.
The mission of the Tiger Woods Learning Center (TWLC) is to deliver unique experiences and innovative educational opportunities for youth worldwide. Since its inception in 2006, TWLC has benefited more than 50,000 students through programs emphasizing STEM learning, college preparation, career exploration, the arts, sports, and community service.
In 2014, the ESA Foundation grant supported TWLC’s computer and engineering programs delivered at campus locations across the country. Through these programs, hundreds of disadvantaged students learned about video game design and object-oriented programming using Multimedia Fusion Developer 2, while exploring careers related to the video game industry.
The Trust for Representative Democracy, a program of The National Conference of State Legislatures, involves legislators in democracy education outreach by bringing civics to life for students across the country.
In 2013, ESA Foundation awarded funding to the program to develop The American Democracy Game, a new interactive game that will place students in the shoes of a lawmaker.
The University of Texas at Austin is a comprehensive research university with a broad mission of undergraduate and graduate education, research, and public service. For the second straight year, its College of Education (COE) ranked #1 in the nation among public universities according to U.S. News & World Report’s 2013 edition of America’s Best Graduate Schools.
In 2013, ESA Foundation funded the university’s COE Alien Rescue program, an award-winning, immersive, media learning program that aims to help middle school students develop problem-solving, collaboration, decision-making, and other critical 21st century thinking skills, and also motivate them to learn science.
VisionQuest (VQ20/20)’s mission is to protect children and families from devastating academic, psychosocial, and lifelong economic consequences of undetected vision disorders and preventable blindness. VQ20/20 is a returning grantee and received 2017 funds to bring gamified vision screening to schools in Hawaii and Arizona using the entertaining and medically validated EyeSpy 20/20 video game technology. Through this project, VQ20/20 expects to provide 25,000 vision screenings thanks to ESA Foundation funding. The project will also propel the Hawaii partners to achieve statewide implementation within 3-5 years.
Web Wise Kids (WWK) teaches kids, parents, and the community how to make safe and wise choices in a technologically-evolving world by creating and distributing interactive content through the media to influence youths’ lives. Its games include MISSING, Air Dogs, and Mirror Image, which teach students to stay safe online, and IT’S YOUR CALL, a game about cell phone and texting privacy.
In 2012, ESA Foundation supported the development of a game concept based on current Internet safety issues, including cyberbullying, social media, and keeping reputations safe. ESA Foundation also supported a national contest that engaged students ages 14-18 in the development of the game.
Work, Achievement, Values and Education's (WAVE) mission is to motivate at-risk youth to complete school, lead productive lives and make a valuable contribution to their communities. For 38 years, WAVE has been an innovator in the youth development field with its dropout prevention, recovery programs and experience working with community organizations and local youth development professionals. Since the organization's inception, WAVE's specially designed curricula and training programs have reached over half a million youth in 550 programs across 40 states.
In 2004, the ESA Foundation established the ESA Foundation-WAVE Incentive Grants Program to bring WAVE's expertise to needy communities with the goal of helping more school dropouts and potential dropouts change the course of their lives. From 2006 through 2008 ESA Foundation helped WAVE to serve youth and train teachers and youth development professionals working in New York, Tennessee, Texas, Virginia, DC and Florida. In 2007, ESA Foundation also funded an independent evaluation, which clearly showed that WAVE positively influences the developmental trajectories of youth, and as a result, they are able to defy the negative predictions that have been made about their futures.
World Wide Workshop (WWW) works to harness the potential of computers and the Internet to enhance technological fluency for creative learning, leadership, innovation and livelihood skills among underserved children and youth worldwide. It developed the Globaloria learning network to leverage game design to empower youth in disadvantaged communities.
In 2014, the ESA Foundation grant allowed for the expansion and continued support of WWW’s original pilot program of Globaloria.
WGBH is a public service media producer for New England – on TV, radio, the Web, and in the community. It is the single largest producer of PBS prime time and online programming, and a major source of programs heard on public radio from coast to coast. WGBH is a pioneer in educational multimedia and in media access technologies for people with hearing or vision loss. WGBH created THE GREENS, a web site geared towards kids ages 9-13 offering flash-animated episodes, interactive games and quizzes, engaging dialogues, a blog and other activities that illustrate environmental concepts and suggest ways to make a difference.
ESA Foundation awarded WGBH a grant in 2009 to help develop online animations and games that teach tweens how to live sustainable lifestyles, which are the centerpiece of THE GREENS web site (pbskids.org/greens). The games and animations guide kids in a critical exploration of green choices, prompt real-world action and underscore a shared relationship with others worldwide in facing environmental challenges. Project advisors include the Earthwatch Institute, the Institute for Sustainable Energy and the North American Association for Environmental Education.
It is a rather funny thing that such words as “Calvinists,” “Calvinism,” and the like exist. I don’t think Calvin himself would find it either funny or flattering. He would be most troubled that his attempts to mine the truths of the Bible would be something that resulted in attaching his name to a movement, which is really a number of movements. But the terms related to Calvin’s name are useful as identifiers when used correctly.
What is too easily overlooked is how Calvin the man was so different from those of us who have appropriated the name Calvinists. Calvin was often more a devotional writer than a scholarly theologian. He seems to have had one and only one audience: God’s sheep, the congregation. His preaching schedule was murderous, and his method was expository teaching through the Bible book by book.
Some years ago, Banner of Truth (which is a favorite publisher) reprinted several facsimile editions of Calvin’s sermons. These were English translations from the 1500s and maybe the 1600s. These were beautiful books–big, well bound, and printed with quality in mind. But for reading purposes, they were less appealing. The size of the books, the older versions of English print, and the other features expected in a facsimile edition render these books hard to read. When I preached through 1 Timothy a few years ago, I don’t think I even looked at the facsimile that I have.
Now here is the good news: Calvin still speaks to us today. His message is still relevant. And, translations are pouring off the printing presses that are much more manageable, readable, and attainable. While Banner of Truth is not the only publisher to be mining the riches of Calvin’s sermons and books, the books they have made available are outstanding.
Currently, I am reading from Letters of John Calvin. Banner has a more complete multi-volume edition of Calvin’s letters and other writings that is quite attractive. It is called Tracts and Letters of John Calvin. Many years ago, I picked up a four volume set of Calvin’s letters that has been valued, but under-used in my library. It was published by some scholarly publisher, and I suspect Calvin’s correspondence was rare until the recent Banner set.
But most people are not going to casually or devotionally read multiple volumes of Calvin’s mail. This book is just the right size. It is a relatively small book of some 70 letters and less than 300 pages. The letters are preceded by a biographical sketch of Calvin’s life. Despite having read books and articles by the scores on the life of Calvin, I always enjoy revisiting his story once again.
His correspondence provides an autobiographical look into the man’s personality and character. It is also a testimony to the front line issues of the Reformation and key figures in it. Because Calvin’s intent and life was God-centered, this book is devotional reading and theological study as well.
Robert White is, as far as I know, the best Calvin translator around today. Several years ago, I received and read from his translation of Calvin’s Institutes. It is a beautiful rendering of Calvin’s words. Most recently, I have acquired Sermons on First Timothy. It rests on the stack of books I read from in the mornings, and for now, it is part of my Sunday morning reading. In other words, I am inching my way through this book of sermons.
I would think that the better method would be to read a sermon every day, but time constraints prevent that right now. But Calvin can be enjoyed in just short and even infrequent doses. Cotton Mather said that he loved to sweeten his breath with the taste of Calvin before going to bed. Me, on the other hand–I prefer a dose of Calvin along with strong morning coffee.
Whether read in conjunction with Calvin’s commentary on 1 Timothy or read as a resource, this book would be most useful to the pastor or teacher working through the letter. Also, as a book just for spiritual edification (as though that were a minor component of life), this volume is first rate.
Take note that Banner now has volumes of sermons on 2 Timothy, Titus, Genesis, Job, Jeremiah and Lamentations, Daniel, and perhaps others that I have overlooked. Needless to say, there are far more good books around than I can wrap my mind, time, or pocketbook around. Nevertheless, we do what we can. Inch along the way and get Calvin’s books in the new, faithful translations.
Banner Books on Calvin: HERE.
This year I have been teaching a history course on the twentieth century. Of the many historical periods that I have studied, read about, and taught on, the twentieth century is possibly the one I have studied most often. My class and I spent an inordinately long time studying the Great War (World War I) which, like all historical turning points, extends both backward and forward in time in its causes and effects. We are currently wrapping up a study of the Russian Revolutions. Next I will be devoting attention to the period between the World Wars, leading up to a month or more of looking at World War II.
The chessboard of twentieth century history includes many key players. The United States, Great Britain, Russia, Germany, and France are vital to the whole period. But one cannot overlook Italy, Japan, China, and then some major minor players like Belgium and Serbia in World War I and Poland and Spain (particularly the Spanish Civil War) in World War II. The post-war period brings in a whole new cast including Greece, Israel, Korea, Yugoslavia, Vietnam, and other countries.
One could make analogies to various chess pieces and the leading countries. Then there are the pawns whose movements may or may not be significant to the causes of events. Any chess player (and I am not one) can affirm that pawns can make or break a game of chess. They can be minor pieces, but their impact can direct the course of events.
This brings me to the topic of the Netherlands and the Dutch people in the twentieth century. I am not sure when or if the fine textbook I am using refers to events in the Netherlands after the age of Napoleon. The Netherlands was neutral during World War I (a wise move on their part) and was a quick knock-out in World War II. The Dutch underground in the Second War gets some attention. The failed Allied offensive (recounted in the book and movie A Bridge Too Far) took place in the Netherlands, but that story is one of the British, American, and German armies.
After World War II, the Netherlands was a NATO member, but it has remained on the periphery of historical movements. One recurring story is of decadence and immorality in that country, which seems to be ahead of the rest of the West in moral degeneracy.
The history books and the news accounts often miss or don’t know the whole story or even the greater story. The late 19th and 20th century history of the Netherlands is rich in certain respects. Unlike my hopeful title, the Dutch have not saved civilization, but they have pointed to and promoted what would be civilization saving in many respects.
There are a number of Dutch Christians who lived from the middle to late 1800’s up through the mid-1900’s who grasped issues even more important than the immediate challenges of ending World War I, defeating Nazism in World War II, or holding on to the Free World against the Communist Bloc in the Cold War.
The names are familiar to those who have waded into the deep currents of Reformed theology and philosophical thought. Guillaume Groen van Prinsterer, Abraham Kuyper, Herman Bavinck, Herman Dooyeweerd, Geerhardus Vos, Klaas Schilder, Hendrik van Riessen, H. R. Rookmaaker, and Cornelius Van Til are among the key leaders in the intellectual revolution of the past 100 plus years.
I could devote quite a few paragraphs and pages to talking about the various men named above. I actually have talked and written about most of them. In fact, I have literally talked from coast to coast about them. (I spoke at two conferences years ago–one in Virginia and one in Alaska.) For now, I will focus on two of the many books that are now available highlighting key ideas from the Dutch Calvinist Worldview Thinkers, as I like to call them.
Lectures on Calvinism by Abraham Kuyper is a Christian classic. It has been reprinted and edited many times since it first emerged from the Stone Lectures that Abraham Kuyper gave at Princeton Theological Seminary in 1898. One such reprinting and repackaging changed the name to something other than either Lectures on Calvinism or The Stone Lectures. The goal of all such publications is to get the message of these lectures out.
This book calls for a big dose of humility from all Christians. Reformed Christians need to realize how limited our vision is when we think of Calvinism as a system of 5 Points or we think that our efforts to promote Christianity are full-orbed. Non-Calvinists need to realize how, despite whatever struggles they may be having in regard to soteriological (salvation related) issues, the claims of God are over all areas of life.
Many books, movements, schools, colleges, ideas, study centers, and terms have grown out of this book. Many Christians speak today of having a Christian worldview without knowing that this idea springs from Kuyper. Kuyper, however, spoke of a World and Life System rather than using the more compact term Worldview. Every concern that comes up about the Christian role or lack thereof in politics needs to be referenced back to Kuyper’s chapter on politics.
He also spoke about science, art, and the future, which can be studied for how Kuyper may or may not have foreseen events.
American Vision has reprinted and edited the edition of the book pictured above. Some of Kuyper’s sentences were a bit long and heavy and many of his references are obscure to most of us. This book has modified some of the language and punctuation without rewriting or condensing the content. Also, footnotes explain many of the terms or references that Kuyper and his audience would have been familiar with.
I would include this book for essential reading not just in my top 100 or 50 or 25 reads, but in my top 10 reads. Furthermore, it is not a read-once-and-shelve book. This is a book to reread often. Get it and read it.
One of Abraham Kuyper’s mentors and contemporaries was Guillaume Groen van Prinsterer. Usually and conveniently, he is referred to as Groen, pronounced to rhyme with prune and equivalent to our word green. Groen was a brilliant Christian historian and political leader in the Netherlands. At some point in his career, he gave a series of lectures at his house on the key determining issue of his age. That issue was the French Revolution. It was not the details of the storming of the Bastille or execution of Louis XVI and Marie Antoinette that concerned Groen.
Behind the Revolution and proceeding from it was a worldview or philosophy. As has been often, but not often enough, pointed out, the so-called American Revolution and the French Revolution were not twin events. Their differences are comparable to the knife use of a surgeon and that of a street criminal. Lest someone think this is some odd Christian-weirdo interpretation, just look at such books as James Billington’s Fire in the Minds of Men.
Before Billington and before all the forces for secularism, humanism, and whatever other objectionable isms of the twentieth century, Groen was discussing the essential beliefs and unbeliefs that propelled Europe into the modern age with revolutions continuing for over a century.
For years this book has been hard to find. It was translated into English and published by a small Canadian publisher back in the 1980s and 90s. I doubt that it is on the reading lists of any or certainly not many college courses on the French Revolution, modern thought, revolution in general, or political philosophy. Groen would not have been shocked or surprised by that omission.
Unbelief and Revolution has been reprinted by Lexham Press. Along with a number of great books, including Geerhardus Vos’s Reformed Dogmatics and many volumes by Abraham Kuyper, Lexham Press is turning into a modern center of Reformed Christian thought and theology. Harry Van Dyke, a great scholar and acquaintance of mine, translated this book. Jake Mailhot, who is what I want to be like when I grow up, is a key figure in the distribution of Lexham Press publications.
Read the Dutch Christian authors. Start with Kuyper and Groen.
As an incurable reader, I often find myself stumped over what kind of book I need to read next. My tastes range from theology to literature to history to politics to poetry to philosophy to biography and more. I could almost paraphrase Will Rogers and say, “I never met a book I didn’t like.” I have met a few that were not to my liking, but I am prone to find something of use in even the worst of readings.
My morning reading time is when I focus on Biblical and theological books. If a book is devotional, without being fluffy, and enlightening, it makes for a good start for the morning stack of books. I have about an hour to read and usually read a chapter or about 10 pages from each of 3 or 4 books. (This method works well for me.) After the book aimed at the heart, I am more ready for the book aimed at the mind. So, a book applying Bible teachings might be read from first and then followed by a bit more weighty theological reading. The preferred third book is usually more focused on Christian worldview thinking. It might be on history, education, current issues, philosophy, or some other area. It might or might not be a specifically Christian book.
This brings us to the topic of The Essential Jonathan Edwards: An Introduction to the Life and Teaching of America’s Greatest Theologian by Owen Strachan and Douglas A. Sweeney. This book is published by Moody Publishers.
In light of the different types of books I like to read in the morning session, The Essential Jonathan Edwards can fit into any of the categories. The breadth of the approach of the book itself is similar to the breadth of the subject. Jonathan Edwards is acclaimed as one of the great preachers of all time. He is also one of the great theologians. He was also a prolific writer. He is recognized for his contributions to the field of philosophy. He is studied for his views on any number of topics, both those pertinent to his times and to ours.
As the subject of biography, Edwards’ life is also rich. He lived in colonial America during a period that was just past the heyday of Puritan thought and just before the period leading up to the American Revolution and War for Independence. I will assume for the moment that the term “American Revolution” refers to the change in thinking and outlook that developed prior to any shots being fired at Lexington and Concord, and I am borrowing this definition from John Adams. Back to Edwards: He was a major figure in the Great Awakening. Although his labors were limited geographically to a small part of New England, his role through his preaching and writing explained, furthered, and cautioned against aspects of the revival. He was the spokesman for this side of the Atlantic.
His marriage and family are models both for understanding American culture and for spiritual edification. His tumultuous relationship with his Northampton congregation offers insight into the workings of colonial communities and covers all too familiar territory for many pastors and their churches. Edwards was briefly connected to the still new Princeton University and had been educated at Yale. His life shows the richness of potential opportunities in the colonial period, even accounting for the particular genius and gifts of the man.
The most scholarly and library-bound academic wanting to grapple with theological conundrums (like free will and Original Sin) can study Edwards alongside the more profound student of philosophy, especially the one interested in American contributions. But the pastor can also find Edwards a helpful mentor giving encouragement to his soul as he prepares sermons and lessons for his congregation. Again, the study of Edwards is a hall filled with treasures.
So where do you begin? Or how can you access the wealth of Edwards’ life, faith, and thought?
The Essential Jonathan Edwards is an excellent place to begin. The book contains an account of Edwards’ life, but it is only partially a biography. Much of the focus is on the teachings of Edwards. The book is heavy laden with quotes, and lengthy ones at that. It doesn’t take many lines of reading Edwards to realize that this guy cannot be skim read. He is not impossible or overly technical, but his language is rich and detailed. While the entire book reveals biographical details, the first section is the one most focused on his life.
The fourth section deals with a troublesome issue in Edwards’ ministry and in our times. Statistics show certain numbers of people who are Christian by profession. Church rolls show smaller groups of the same. Yet nominalism, that is, being Christian in name only, is a huge problem. Protestants like to think it is merely a Roman Catholic problem. Within Protestant groups, one group will wag their heads at another for this plague, but the truth is that it hits every section of Christianity and every church. If you don’t know where to locate the dangers of nominal Christianity, begin by looking in a mirror. I am not saying that you and I are believers in name only. But I do know it is a real threat to me. Those of us in Christian works (and I teach in a Christian school) can easily confuse occupation with salvation. The problem beset Edwards both in the times of his grandfather’s Half-Way Covenant approach and in his own dealings with a congregation that fired him.
The final section deals with heaven and hell. Edwards is once again a needed instructor to our times. Because Christianity offers so much in this world, we can easily undervalue what it teaches about the world to come. And the doctrine of Hell is just uncomfortable.
I recently posted a blog review highlighting a number of books on, by, or about Edwards. For the reader wanting to meet the great theologian, this is the book to start with. For the reader who has already read a lot by and about Edwards, this book is also a great read.
Gordon Clark’s brand of theology was that formulated by Protestants, particularly those who followed in Calvin’s stead, and more particularly, the theology inscribed in the Westminster Standards. His philosophy was rooted in the same sources. For such reasons, he has been called “The Presbyterian Philosopher.” That, of course, is the title of Douglas Douma’s definitive biography of Clark. In terms of his life, Clark spent his days at his desk writing, in classrooms lecturing, and in Christian circles discussing, defending, and debating theological and philosophical issues. A family man, he enjoyed time with his wife and two daughters, playing chess, and painting (a pastime he worked at but never became proficient in).
Most Christians today are unfamiliar with Clark. Most Christian bookstores do not carry his books. Many pastors are unfamiliar with his work. Yet, when R. C. Sproul was asked what theologian of our time would be read in 500 years, he answered, “Gordon Clark.” The fact that his books are not the most popular, most read, and most often quoted says more about problems of our times than it does about the man himself.
In our theology class at Veritas Academy, we recently read Clark’s short book In Defense of Theology together. I know that this was a challenge to the students because, even after having read it a few years prior to this year, it was a challenge to me. Clark was a precise and clear thinker, but not one who could write on a popular level. His writings take some work, but with some persistence and a few helps, reading Gordon Clark can be quite rewarding.
Clark’s main contention in the book is that theology is something all Christians should both respect and learn from. While he devotes portions of the book to answering atheists and Neo-Orthodox people, he returns frequently to his primary audience. He wants regular Christians to read the Bible with some theological skills. While he was an intellectual and an academic, his goal was edification of the believer and not simply the accumulation of knowledge. Clark affirms, “Scripture tells us about God; therefore, we should study it.” This will entail learning verses relating to a particular topic. While memorization is good, it is also necessary to mentally bring the verses together to make a whole or more complete thought on the topic. Here again, the message is that theology is essential.
Much of the Bible is given in historical narratives. Other parts consist of laws or poetry, which is sometimes called wisdom literature. Theology takes place when the reader “connects simple historical events with their theological significance.” One might think of the progression of rulers and prophets in such books as First and Second Kings. On the one hand, such stories are full of interesting figures and ups and downs of Judah and Israel, both politically and morally. Theologically, however, the events pertain to God’s covenant with Israel, the promise of a Messiah to come, the keeping of God’s law, and the providence of God. So, a person can read the Bible while professing to steer clear of theology, but to understand the Bible, to see it as a whole, theology is necessary.
While Clark is concerned about the average man in the pew who may not think theology is necessary, he also sees the need to address those opposed to theology on other grounds. In our day, there have been several public figures who have attained status as our public atheists. This is a far cry from the day when the late Madalyn Murray O’Hair was the key spokesperson (loud mouth) for the cause. Clark died before the current figures were prominent, but the arguments against atheism are pretty much the same old recipes. Of more concern to Clark were the Neo-Orthodox and, in particular, Karl Barth. Barth, writing in German, had a major impact on lots of evangelicals in the United States. The Barthian tidal wave was slow in coming due to the necessity of translating his work from German to English. (One might think further translation was still needed from English to comprehensible English.) Along with Cornelius Van Til, Clark was incensed by the pattern of Neo-Orthodox theologians undermining the language and meaning of theology and undercutting the authority of the Bible. (For more on this, see Douglas Douma’s essay found HERE.)
A key chapter in In Defense of Theology concerns logic. Clark not only wrote a book on logic, but he was driven by the topic. In short, he believed that God is logical, God’s Word upholds logic, and that logic can be used to unlock or defend Scriptural doctrines. In a discussion of logic, Clark shows how logic undergirds key passages from the Bible. He is convincing in making the case that Christians should be able to use skills honed in logic to work through the Scriptures.
Gordon Clark was not a popularizer or an easy author to read. He lived and battled in a time when Reformed theology was relegated to the sidelines or consigned to the dustbin of history. I doubt that his books ever sold in large numbers, but he plugged away, writing, teaching, and adhering dogmatically to the truths that sparked the Reformation and that are founded on the Bible. Trends come and go. They did before, during, and after the time of Gordon Clark. He didn’t budge. He was steadfast and unyielding. Perhaps he could have softened a few edges, and that might be advisable in our times. But God raised him up in a different era. Nevertheless, the books remain ready and waiting for us to read and use in our times, in our circumstances.
The grass is too high, getting tough, growing slowly, and a bit brown from the heat and lack of rain. School has started. Nights a bit longer now, and there is a promise of cooler weather not so long from now. Personally, I wish time were moved six to eight weeks back and I were stuck, isolated, abandoned with only my family and lots of books at some beach house overlooking the incoming waves or some cabin in the mountains with a valley to see from the back porch. But that didn’t happen in June or July or August, so I accept the inevitable–September. But as De la Casa noted in the quote above, there can be riches found in the month of September.
Of the making of study Bibles in our times, there is no end. That is not given as a complaint, but as a thanksgiving. I have often read the concern about study Bibles which says that people will be prone to read the notes in the Bible and accept them as being on a par with Scripture. My problem is not anywhere near that. I am prone not to read the notes at all. In fact, my preferred daily reading Bible has no notes or added materials, except maybe a paragraph introduction before each book.
The Worldview Study Bible is published by B & H Publishing, an outstanding source for Christian books and Bibles. For high school or college students, this would be a great resource. The translation is the CSB, which is produced by B & H (or Holman as the Bible arm of that company is called). Others more qualified can weigh the merits or problems with the CSB translation. I can lament that we have the NKJV, ESV, NIV, and now the CSB, along with many others, that are making a common Bible among Christian folk nearly impossible. I can give a somewhat approving nod to those who prefer the King James Version (while separating myself from those who contend that the devil is the source of all other translations). Hey, we live in a time of many sound, conservative, evangelical Bible translations. That is not exactly the stuff of Foxe’s Book of Martyrs caliber problems.
The key feature of this book is the inclusion of a large number of essays dealing with Christian worldview issues. I know there is some debate back and forth over the concept or limitations of teaching about having a Christian worldview. I know that sometimes we have used the term as a way of importing a somewhat Americanized and politically conservative way of thinking into our Bible studies. I know that the term Christian worldview can be trivialized and separated from other aspects of the full orbed Christian life. But I still like the term. I still buy, read, and borrow from books promoting a Christian worldview for interpreting every area of life and thought. I am a Kuyperian, a devoted fan of David Naugle’s book Worldview: The History of a Concept, and a promoter of Christian education that teaches worldview thinking.
The topics in this study Bible range from theological issues like inspiration and inerrancy to social issues like recreation, careers, LGBT concerns, and more. Science issues relating to creation/evolution debates and gender debates are included. Essays on philosophy, politics, economics, music, and other such ideas are also here. The essays are authored by some leading Christian teachers, pastors, and writers, and they are placed throughout the Bible in places that tie in with the themes of each book.
I have just begun to harvest the fruit of this fine study Bible. Those looking to understand what is meant by having a Christian worldview or those who are teaching others would enjoy this work.
Every Moment Holy is published by that delightful and creative group known as The Rabbit Room.
This is a beautiful book both in outward appearance and in content. Buy it for someone for Christmas, but get at least 2 copies because you will want to keep one. It consists of prayers for every moment, time, and circumstance.
My morning history study is Justifying Revolution: Law, Virtue, and Violence in the American War for Independence, edited by Glenn A. Moots (author of Politics Reformed) and Philip Hamilton. While this is a slow and studious read, it is a great look behind the battles and leaders of the American War for Independence that considers the books, ideas, philosophies, and ethical concerns relating to that war.
Martin Luther’s Commentary on Saint Paul’s Epistle to the Galatians (1535) is published by another new publishing group, 1517 Publishing.
Translated by Haroldo Camacho and with a foreword by Michael Horton, this big book is even bigger than it is. (Yes, I know that is awkward phrasing.) At 557 pages, this book is the Protestant Reformation, the 5 Solas, the confession of what we believe. A historical document–yes–but also a great study into a pivotal teaching of the Bible. Praise God for this new translation.
Two days now into reading Eternity is Now in Session by John Ortberg, published by Tyndale Press.
We are not just waiting to get to heaven to experience eternal life. Eternal life is here and now; eternal matters are not confined to some idealistic heavenly realm but are for the present. Powerful and instructive.
After a few unexplained delays, I am now reading Atheism on Trial by Dr. Louis Markos. If he writes it, I want to read it. He is both a gifted writer and an engaging (irrepressible) speaker. This book is no fluff work on current atheist evangelists, but is a serious look at atheism and its related philosophical and scientific ideas as found in the ancients, in philosophers of past centuries, and in the current discussions. Published by Harvest House Publishers.
I am always glad to see another edition or promotion or quote from Abraham Kuyper’s lectures at Princeton in 1898 that have sometimes been called the Stone Lectures or more commonly Lectures on Calvinism. Going back to the topic of Christian worldview thinking–this book is the foundation of all the modern applications. Brilliant.
Thanks to American Vision for publishing this new edition of a Christian classic. An added feature or benefit is that this edition contains some slight alterations in punctuation so as to make the text flow. Kuyper is not an easy read, and so having a few modernizations to style issues is a help. In my opinion, Lectures on Calvinism is one of the most important books ever.
I might have titled the book under review a bit differently. I would suggest it be called Eleven Christian Leaders of the Eighteenth Century along with One from the Nineteenth Century. That title is too long, but it reflects the fact that the reader gets lots of insights into the Christian mind and character of the author, Bishop J. C. Ryle.
Let me begin my review with a different approach. These are Augustus Toplady’s four main points about preaching. Most of us know Toplady primarily for his hymn “Rock of Ages.” The following is from page 352 of the book.
1. Preach Christ crucified, and dwell chiefly on the blessings resulting from his righteousness, atonement, and intercession.
2. Avoid all needless controversies in the pulpit; except it be when your subject necessarily requires it, or when the truths of God are likely to suffer by your silence.
3. When you ascend the pulpit, leave your learning behind you; endeavour to preach more to the hearts of the people than to their heads.
4. Do not affect much oratory. Seek rather to profit than to be admired.
These four points were not only at the heart of Toplady’s preaching, but were central to all the leaders who were the subjects of this biographical study. The past was no more totally Christian than the present is totally non-Christian. The Eighteenth Century, the 1700’s, began as a time of Christian and evangelical drought and deprivation. Scarcely a century after the waves of Puritan revivals, the faith had largely stagnated into Deism, formalities, and works righteousness for the churched folks. For the unchurched, many of whom were poor, ignorant laborers, gin and immorality were dominant.
God sent revival to the British Isles. The book begins with two chapters describing the cultural and religious conditions before and when the revival movement came. The first two leaders in this book are also the best known: George Whitefield and John Wesley. Each man’s story is a fascinating portrait of how God works to save sinners and raise up preachers. How odd that Whitefield was first led by the Wesleys while at college, but then became the first of those three (John and Charles Wesley and himself) to be awakened to real, saving knowledge of Jesus Christ and of the new birth. Whitefield was the first to do something utterly shocking: Preach outside the walls of a church building. He ruffled lots of clerical feathers in the process.
Of course, he and his dear friends, the Wesleys, parted and sparred over Calvinism. I first became aware of this while in my freshman and sophomore years of college. Actually, what I became aware of was that there was a man named Whitefield (too often ignored in Methodist circles where I grew up) and Calvinism. I read Wesley’s sermon on Free Will and then Whitefield’s answer. Whitefield drove Wesley off the field in that battle. Wesley’s greater skills at organization and administration enabled the Wesleyan branch of Methodism to trump the Whitefield branch.
To a large degree, Whitefield was buried in church history until Arnold Dallimore penned the first volume of The Life and Times of George Whitefield. During those same decades, books like Ryle’s Christian Leaders were not handily available either.
Lest my proclivities make this a Whitefield versus Wesley/Calvinist versus Arminian post, Ryle always stated his commitment to a Calvinistic interpretation of Scripture while commending men who differed. His chapters on Wesley and John Fletcher, another Arminian, were included to highlight great preaching and the godly lives of these men. In regard to Augustus Toplady, who could often be the John Robbins of his day, Ryle notes that he was sometimes a bit too caustic in his attacks on Arminianism.
Whitefield, Wesley, and Toplady were the only three of the eleven that I was familiar with. The other men covered are William Grimshaw, William Romaine, Daniel Rowland (I think I had heard of these three a few times), John Berridge, Henry Venn, Samuel Walker, James Hervey (not the scientist), and John Fletcher. They were all great preachers and/or solid writers in their day. They were all Church of England men, as was Ryle. Most were highly educated.
Perhaps the most fascinating common trait was that most entered the ministry without a clear grasp of essential, evangelical, and Biblical doctrines. In their early sermons and ministries, we can say that, at best, they were muddled in their thinking. Did they even know God at those times? Were they saved, to use more contemporary language? Were their doctrines sound and orthodox? These questions involve some heart issues we cannot determine. But in their early days, their beliefs were incomplete and defective. It is utterly astounding how God reached each of these men (and no doubt, many others) in divers times and places and awakened their minds to the beauty and power of the saving grace of God.
For certain, these were not men to hide their lights under bushels. Quite the contrary, in each of the eleven stories these men pastored congregations, parishes, or whole lands (as in the case of Whitefield and Wesley) where they preached and preached and preached. In several cases, if they did not literally die in the pulpit, they preached themselves and worked themselves into early graves. Zeal for God’s House, Name, and Saving Grace consumed them.
Sometimes, I found myself wishing that Ryle’s subjects could say or write something that didn’t sound overly stilted or spiritual. It seems as though none of these men said things like, “It has been a rough day and I have a headache.” Instead, they said things like, “I have been buffeted hither and yon by the storms of life, yea even by Satan himself, and I feel pain in my head reminding me of my own unworthiness and weakness in the flesh.” I like the heart of that last statement and subscribe to the need to think more Christianly, but sometimes feel that these men are too marble-like and ideal to have been flesh and blood folks like me, my pastor, my friends and brethren.
Not, of course, that there is any magic about the past. People were no cleverer then than they are now; they made as many mistakes as we. But not the same mistakes. They will not flatter us in the errors we are already committing; and their own errors, being now open and palpable, will not endanger us. Two heads are better than one, not because either is infallible, but because they are unlikely to go wrong in the same direction.
Reading this book–Christian Leaders in the Eighteenth Century–did not leave me thinking, “This is how it (Christianity) should be done.” It did leave me with lots of convictions, reminders, a few laughs, and a desire to see ongoing revival and reformation in our day. When and as God sends revival into 21st century North America, Brazil, England, Africa, etc., it will not look like the world Ryle described. But there is so very much found in this book that will be found whenever and wherever God pours out waves of revival and raises up Christian leaders.
I still have not recovered from something my college student daughter TaraJane told me a few months back. She said that theology majors in college that she knew were the most cynical students around. By that, she went on to explain that they were critical, negative, and generally contrary in their views on church, the Bible, God, and life in general.
I think that theology students should be giddy most of the time. That is not because the study of theology is light and breezy, but because it exposes the heart and mind to an incredible array of richness concerning God, the Bible, the Kingdom of God, salvation, truth, mankind, and all of life.
I am, at best, only one who dabbles in theology. My training, college education, and main work experiences have not been in theology per se. I am a history teacher, a literature teacher, an administrator in a Christian school, and a book reviewer. But I have also been a pastor/teacher/elder for several decades. I have no formal theological training, but have played on the scrub teams for years by reading lots of books, listening to sermons and lectures, and acquiring a working familiarity with theology.
One of the too-often-used descriptions of theology is that the study is dry and dusty. Granted, that is possible. All fields involve some archived information that is a labor to wade through. Some writers are technical and analytical in ways that prevent them from being readily readable. Often theologians, like scholars in all fields, write for an audience of peers and develop a language that the insiders are familiar with but that stumps the novice or newcomer to the studies. When you are reading and thinking, “I don’t know what this book is talking about,” you may simply not know enough of the background. Be slow to condemn, but don’t be ashamed to be baffled.
But is theology just an academic, scholarly, intellectual pursuit? Of course, it can be. The same can be said for the Battle of Gettysburg. At its best and in terms of its primary purpose, theology is designed not to equip brainy intellectuals who are Christians with an outlet for their mental synapses. Rather, theology is to minister to, teach and instruct, comfort and confirm the Christian sitting in the pew on Sunday and working at the factory, elementary classroom, hospital, or home on the weekdays.
I want to call attention to two books I read over the past several months that are made up of incredibly rich theological discourses. The chapters in these books were given as sermons and talks. I find myself feeling dizzy at the thought of giving a talk on the high level of the contents of these books. Yes, there are some things hard to understand. No, these are not the works for a new and unread Christian to pick up. But every time I think back on either of these works, I remember just how they were packed full of soul-nurturing, mind-pleasing, convincing and convicting truths.
These two books are pictured below. The first one is High King of Heaven: Theological and Practical Perspectives on the Person and Work of Jesus. It is edited by John MacArthur. Contributors include Albert Mohler, Mark Dever, Ligon Duncan, Michael Reeves, Mark Jones, Stephen Lawson, and many others. These men are some of the foremost preachers and teachers in the various branches of Reformed and Evangelical Christianity. Published by Moody Publishers and The Master’s Seminary, the book is hardbound and affordably priced.
The second book is Oh Death, Where Is Thy Sting? Collected Sermons by John Murray. It is published by Westminster Seminary Press.
Let me begin the examination of these two books by discussing John Murray himself. Murray was a Scotsman and was primarily a teacher of systematic theology. He fits my favorite image of a true Scots Calvinist: He was stern, serious, somber, and searching. Add scholarly to that list. His major works are Redemption: Accomplished and Applied and Principles of Conduct: Aspects of Biblical Ethics. He also wrote a powerful commentary on Romans, and his various writings were collected together by Banner of Truth into a four volume set of the Collected Writings of John Murray, which is temporarily (we hope) out of stock.
On different occasions, Murray did do some pulpit supply. In the case of these sermons, which were preached to small congregations in Canada and the Scottish Highlands, Murray was filling pulpits during his summer breaks from seminary. Most of these sermons were transcribed from audio messages. As is noted in the editor’s comments, a sermon read does not contain the emphases and style and passion of the spoken message. Nevertheless, the theological ground covered in this collection is vast.
The first seven sermons are from Romans. They might serve as a helpful overview of Murray’s commentary, and one would wish we had a collection of sermons covering all of Romans. Since Romans deals so intricately with salvation including justification, sanctification, election, and so on, Murray beautifully explains these doctrines. The next eight sermons are from various texts, mostly from the New Testament. The final selection is Murray’s charge to Edmund Clowney, who was taking on a theological chair at Westminster.
This book has been very beautifully crafted. See the pictures connected to the link above. It can be read in part or in whole, beginning with any one of the sermons. It is a book I thoroughly enjoyed and hope to read from again and again.
In this section, Michael Reeves begins with “The Eternal Word: God the Son in Eternity Past.” Paul Twiss follows with “Son of God and Son of Man.” Mark Jones, author of Knowing Christ, speaks on Isaiah 50 and the topic of “The Son’s Relation with the Father.” Subsequent chapters cover the Virgin Birth, Christ as the Bread of Life, the Good Shepherd, “The Way, the Truth, and the Life,” and Christ as head of the Church.
Messages in this section are on the different phases of Christ’s incarnation, life, death, resurrection, ascension, and second coming.
This portion includes a discourse called “No Other Gospel: The True Gospel of Christ” and another on the completion of the New Testament Canon. Two subsequent portions cover the relation of Christ to the Old Testament.
Twenty-three messages, all focusing on different Bible passages, with little or no repetition or diversion: this book is a gem for theological study, devotion, or sermon preparation.
In my opinion, Italian prog does not come better than this group. Although short-lived, but only in time, this style of music is the most common in Italian prog, yet few bands manage to move my neurons like they do. The music on here is simply stinging me with curiosity, something that many of the other bands fail to manage.
The first release from QUELLA VECCHIA LOCANDA and in my opinion one of the all time Italian prog greats. This excellent debut album has a strong PFM-like attitude with loads of violin and classical themes. Songs are delicate and exceptionally well performed with warm precision. Imagine great 70's sounding keyboard work layered with flute, violin and great guitar work and you have got QVL. As you listen to this album your toes will be tapping and your hands will be moving as this music captivates your motor reflexes. QVL draw on some pretty heavy classical interludes to build their music on. Along the way we are treated to many thematic mood swings and tempo changes. This album has many standout tracks which combine the classical underground 70's Italian sound with a solid blend of tranquility and beauty. Vocals are very expressive and are full of harmonic textures.
This debut album and their second one are almost equal in terms of compositional skill, even though the present debut is closer to the style of PFM, in its use of the violin and its classical arrangements, rather than resembling only the lightest acoustic moments of Mussida & company, unlike "Il Tempo della Gioia". However, such arrangements are often inspiring despite being quite similar to those of PFM. Moreover, "Q.V.L." are not a derivative ensemble: in fact this opinion is confirmed by the array of personal arrangements, their true imprinting, which made this album a classic in the seventies!!
Excellent, sophisticated, melodious Italian progressive rock music. Using flute and electric violin liberally, this album reminds me a little of PFM and JETHRO TULL. All the songs are very good but I particularly like 'Realtà' which is mellow, and 'Dialogo' which is heavier with some good synth, bass and drums followed by vocals making a great song. 'Immagini Sfocate' rocks, as does 'Il Cieco' which also has some great flute work. 'Verso La Locanda' starts with some slightly jazzy violin and piano and then ups tempo; the track reminds me a little of TRAFFIC in places. The last track 'Sogno, Risveglio E...' starts with some good piano and electric violin, and turns into quite a classical-sounding piece with a good tune.
I should also mention that the electric guitar, bass and drums are by no means neglected on this album. I suspect that this album would appeal to the entire spectrum of progressive rock fans: it doesn't have the very sentimental feel of LOCANDA DELLE FATE which is a turn-off for some (not me, though), it has some very melodious parts, and there are also some good heavier parts. Only if you're not keen on violin might you not like this album so much, as the electric violin does play quite a role; it does fit in perfectly though.
A timeless classic of Italian progressive rock. Highly recommended.
Quella Vecchia Locanda's debut album is quite an energetic offer, full of electric fire, which comes from both the rock and the blues area, and yet, their style can't be labelled as hard prog (more suitable for Metamorfosi, Biglietto, Museo Rosenbach). Sure these guys can rock, since there's plenty of highlight space for the guitar riffs, and there's also furious drumming and mean bass lines along the way; but regarding the repertoire as a whole, you can tell that the presence of recurrent baroque motifs, as well as some passages full of Mediterranean romanticism (tracks 3 and 8, by the way, a delicious closure), keep the sonic balance in favour of the achievement of an overall symphonic prog sound according to regular standards. At this point, the general reminiscences may be accurately referred to PFM, though by no means is QVL to be considered as a clone. The vocal performances are somewhat relevant in QVL's repertoire, though the lyrics are not precisely too abundant: yet, the two lead singers (the flutist and the guitarist) alternate their different ranges to good effect, and there's also a bunch of enthusiastic choral parts. The violin is the most prominent lead instrument, since it not only serves as a basis for all those classically inspired intros and interludes, but also appears as a complementary companion to the electric guitar parts in most of the heavier parts: straight examples of this are incarnated on the first two numbers, though QVL shines at its most explosive levels on tracks 4-7. These ones really show you that Donald Lax, despite being the last to enter the band, became the main musical focus of the sextet. The flute passages add some excellent colours to the varied musical palette created by QVL (lovely lines on track 3), while the keyboardist makes tasteful use of chords on the piano, the pristine harpsichord, and the aggressive organ, as well as mesmerising ambiences on mellotron and some occasional solos on synth.
Tracks 6-7 are my favs when it comes to appreciating the most inspired level of interplay among all six musicians. 'Quella Vecchia Locanda' is a real Italian prog gem that deserved a better sound production; sure, some of the material could have benefited from a little more consistency in the arrangements department, but all in all, it is a stunning piece of prog music.
The debut album belongs to the top of the acclaimed Italian progrock from the Seventies; it contains eight beautiful and original compositions. They are a bit short, but it's such a splendid blend of folk, classical and symphonic. The music is built around the magnificent (inter)play of the sparkling piano, compelling violin and cheerful flute, but some guitar play (acoustic and electric) adds a pleasant dimension to the very warm atmosphere of this album. SIMPLY WONDERFUL!!
"I live in this dark wood where there will be no other life than the mine. I love the world but the world hates me. Its refusal means death for me, death for me..."
Quella Vecchia Locanda is an important Italian band of the seventies with that peculiar sound, an evocative reference to classical and baroque music. Rich instrumentation: flute, piccolo, piano, organ, mellotron, moog, electronic zither, spinet, acoustic and electric violins (their trademark, wonderfully played by Donald Lax), bass, acoustic and electric guitars, 12-string guitar, drums, percussion and frequency generator.
Their self-titled debut album is a wonderful product of the year 1972. At the time they seemed to enjoy wide success, mainly due to their great live shows, for example at the Villa Pamphili Festival. It's a strange thing that such a band only managed to release two albums and then disappeared.
I do not think their first album is their best effort. It is excellent, indeed, but sometimes lacks maturity, in my opinion. In particular you can hear many references to Jethro Tull that would be gone definitively in the second work, which seems really more original in both sound and musical structure.
"Prologo" is a delightful opener and immidiately the listener is pleased by the fantastic violin. "Immagini Sfocate" is the most explosive and hardest track they ever recorded. All the lyrics's meaning is not what most people could imagine listening to their music: anguish, illusion, fear, tremor for existence.
From the first sound of Donald Lax's marvelous violin which opens "Prologo" you know that the QVL sound is unique. They are one of the most distinct and important bands from the classic Italian scene. Mixing rock with a classical or jazzy sound and incorporating flute, violin, guitar, and keys with a tightly wound "Fragile" style rhythm sound. Add to that very good Italian vocals and lots of mood changes and you have the right ingredients for a great debut. Some think there is a Tull comparison here but it is only fleeting to me-QVL sounds like no one else. These songs have the punchy quality that PFM sometimes has and maintains the sound quality level.
Lax now lives in Hawaii and is still performing. While recalling his time in QVL very fondly, in a 2004 interview he sadly reports that the band never made a cent from the albums, were treated poorly, and were not even informed of the reissues. He says he had to go on the Internet and pay for his own music just to get a copy!
"Un Villaggio, Un Illusione" does sound a bit Tullish when the flute kicks in albeit with mad violin the comparison is only partially worthwhile. It is Lax's marvelous playing that steals the show here, without it this track is basically a grooving rocker.
"Realta" begins softly with acoustic guitar before the warm vocals usher in a nice melody. This track sounds very PFM with piano, flute and percussion all very good. This has to be one of the most perfect examples of the beautiful Italian sound.
"Immagini Sfocate" sounds quite experimental at first but devolves into a guitar rocker with some great drumming at the end and a nice guitar solo. The lead guitars on this album have a unique sounding distortion to them, quite dry.
"Il Cieco" and "Dialogo" both have some nice moments but with less of the magic of the other tracks. "Verso la Locanda" is better than the previous two but again I sense some lack of direction in the overall song.
"Sogno, Risveglio" may be the highlight of the album and I think it hints at the potential that this band would realize on their masterpiece two years later. Gorgeous pastoral moments mingle with occasionally edgy violins and an unsettled piano that keeps trying to rock the boat. But they come together at the end for a lovely closing.
Both QVL albums are a must for anyone interested in putting together even a modest Italian collection. This debut is more accessible at first and more instantly likable but their follow-up is the real thing, even if it takes longer to appreciate.
The Japanese mini-lp sleeve is another gatefold that shows off the beautiful cover art that I never get bored with. The remastered sound is excellent for the time period. 3 ½ stars.
I have to admit that I like the Italian prog style quite a lot, although I have only reviewed a few albums so far (a lot more to come). But I'm afraid that I won't be in line with most of the reviews for this one.
I am missing the emotional style of several other bands from this genre, contemporaries of "Quella" or not. This mood is present, but too scarcely (the second part of "Prologo", for instance). Quella mixes in the classical genre too much, IMO. When I listen to "Un Villaggio, Un'Illusione", I can't find any link between the several parts of the song (even if it is a rather short one).
My second-favourite track of the album is "Realta". When you listen to the intro of "Immagini Sfuocate", you'll know where "La Maschera Di Cera" got part of their inspiration. But I much prefer the complex style of "La Maschera" to that of "Quella". Matter of taste, I guess.
The jazzy and chaotic start of "Il Cieco" contrasts with the classical break which follows. Nice fluting (but this is valid for any number featuring this instrument). Vocals are weak here, while the finale is very pleasant.
There are several very good moments on this album, but they are short and too few. "Dialogo" is one example. Lots of energy developed, but it sounds a bit too jazzy for my taste. Again, though, a very gentle and tranquil part brings a breath of fresh air.
At times, I fully fall in love with this band. It is the case when I listen to "Verso la Locanda". This is what I expect from an Italian band, I guess. Fully symphonic, beautiful harmonies and passion (even if some jazzy flavour is present). The best song, IMO.
Another RPI legend and among the founders of the Italian prog wave, QUELLA VECCHIA LOCANDA were formed in Rome in 1970. They started as a pop/rock group, and there is actually an early live recording from 1971 with famous covers and three original arrangements hinting at what was to come a year later. So, in 1972 the band released their eponymous debut through the Help label. With a total sound lifting, QUELLA VECCHIA LOCANDA delivered superbly executed symph-oriented progressive rock with deep vocals and rich compositions. Their style starts from delicate acoustic guitars, tasteful flutes and soft musicianship and ends in furious driving violins, up-tempo rocking structures and classical interludes and interplays. A magnificent album, QUELLA VECCHIA LOCANDA's debut is among the greatest albums of traditional Italian progressive rock! Nothing more or less than extremely highly recommended!
QVL's debut album is a good choice if one is interested in jumping into the ISP environment. Here you'll find a lot of the issues that made the sub-genre fame: bombastic instrumentation, dramatic vocalization, all-weather symphonic atmosphere blended with touches of jazz and folk, weird sound effects and neat rock tunes. The influences are all displayed too: Jethro Tull, Pink Floyd, late Beatles, other contemporary Italian pop & rock & prog acts, opera compositions, some classical movements, etc.
In many cases, this mayonnaise may taste sour and indigestible, but that's not the case for QVL's "QVL", where all the condiments are arranged accordingly, turning this release into a nice appetizer, truly an introduction for beginners, considering that notorious ISP fans know this album by heart since it's really foundational. Also noticeable is that, unlike other Italian prog conceptual albums of the same period, QVL show a bit of irony together with the usual tragic and/or pastoral widespread themes.
'Prologo', the opening track, is a fine introduction to the album's content, a real summary of everything to be heard en suite: flute, guitars and vocals are gripping. 'Un villaggio, una illusione' keeps the climate warm with its colorful violin intro, later replaced by a mix of instruments playing in a frenzy. 'Realta' is soft and catchy while 'Immagini sfuocate' jams intensely. 'Il cieco' closes the album's first half, not on the same level as the previous tracks.
The final part is a memorable one, thanks to the stunning section provided by the last three tracks: a gorgeous and admirable musical adventure, supplied with beautiful flute and violin chords, backed by splendid piano and keyboard tunes. The farewell song, 'Sogno, risveglio e...', deserves the honor of inclusion in the pantheon of the greatest prog songs ever recorded.
No need to add more, except to recommend this magnificent album as an excellent addition to any music collection (be it prog or not).
Rough around the edges and filled with spontaneous, infectious bursts of energy, Quella Vecchia Locanda's first album easily finds a place in your heart if that's what does it for you, musically speaking. At first it almost felt too unpolished, as if some of the songs were put together mostly on a whim; lacking in smooth transitions and natural mood-shifts. Thankfully, after a dozen or more listens, that criticism still remains, but the fantastic music has been given a proper chance to grow.
Unfortunately, this is one of those rare albums that just doesn't reach me on an emotional level, even though it has all the characteristics for it. The only reasonable explanation I can think of is the fact that much of it feels very slightly familiar. Almost every spin of it has revealed new passages, breaks or moments where I can swear I've heard almost the exact thing before. Most of these moments actually originate outside of RPI, from all over the prog spectrum. While initially mostly funny, it now somehow alienates me from the record. Paranoia? Perhaps. But it sure consumes some of the originality for me.
It is an interesting and varied blend of all the characteristics of RPI, I cannot deny that. Stating that this is one of the most representative albums of them all isn't much of an exaggeration. There are the abundant classical influences, folky touches, inspired vocals and warm keys. On top of that also an interesting rhythm section with a skilfully played, more melodic bass and great flute and emotional violin work. Just as many other early albums playing this kind of music, much of the heavier influences are still there. Definitely more hard-rocking than what the poster names PFM, Banco and Le Orme come across on their biggest albums. It adds spice to the mix, a quality that makes this album interesting for people with a broader taste of music, who not necessarily think RPI is a sub-genre for them. Some songs almost come across as some kind of contemporary pop-rock, reaching its peak in the often eerily Babe I'm Gonna Leave You-like Realtà.
The more I listen to this album, the more I feel that it can be broken down into four distinct pieces: the hard 'n' heavy guitar- and rhythm-carried parts, the Tull-ish folk-jazz blend with flute, the pastoral and delicate classical arrangements, and finally the best part of it all, where it all forms a consistent whole. I'd want more of that fusion to be really happy with this album. Too often these individually excellent parts are just that: alone. One follows after another without interfering with each other, making many of the songs feel scattered. This might come across as a little stingy, but this is what keeps me from enjoying the album fully, even with the many pros considered. When these different parts diffuse into each other in an original way, those moments aren't far from the masterpiece zone.
Recommended for discovering many of the characteristics of the genre. 3 stars.
4.5 stars. There sure isn't much about this album that I don't like. Lots of variety on this one, with excellent vocals. I like the prominent piano, violin and flute. I really like the edge that this record has; their second album would be much softer.
"Prologo" opens with the violin, piano and guitar trading off. Drums come pounding in as the tempo picks up. Great sound as piano and violin dominate. Vocals 1 1/2 minutes in. Synths and a beautiful soundscape follows. A calm 3 minutes in as it becomes dreamy. It kicks back in before 4 1/2 minutes to end it. "Un Villaggio, Un'Illusione" opens with some terrific violin melodies as drums pound. Guitar and flute come in ripping it up. Vocals a minute in. Amazing section. A calm with flute after 1 1/2 minutes as it builds. Violin is back after 3 minutes. "Realta" is a mellow track with birds chirping as acoustic guitar and flute are gently played. Reserved vocals join in. An outburst a minute in and then it calms back down as piano takes over. Themes are repeated as vocals, acoustic guitar and flute return. Some violin 2 1/2 minutes in.
"Immagini Sfocate" is experimental to open. A violin / drum melody arrives a minute in. Mellotron 1 1/2 minutes in followed by guitar tearing it up. Vocals follow. This is fantastic ! "Il Cieco" opens with a catchy beat as synths then vocals join in. Piano comes in and i'm finding it impossible not to groove to the music. A calm as violin and flute take over. The uptempo melody returns 3 minutes in. Violin and piano end it. "Dialogo" opens with drums, bass and synths. A calm 2 1/2 minutes in as vocals and piano lead the way. Synths 3 minutes in and then drums. "Verso La Locanda" opens with piano then violin and drums come in. It picks up 2 minutes in with flute. A calm, then reserved vocals come in after 2 1/2 minutes. It kicks back in after 4 1/2 minutes. "Sogno,Risveglio E..." opens with piano as mellotron comes and goes. Violin after 1 1/2 minutes and then flute. Vocals 3 1/2 minutes in. Themes are repeated.
There is so much to enjoy on this album. If you want to check out the Italian scene then this is a must.
01. Prologo: Classical, all classical! Interesting. The violin solos a lot, and I felt an influence of Vivaldi in it, which makes everything even more interesting. The voice next to the synthesizers also provides a different touch. An incessant melody, broken until... there it is, a melodious guitar and keyboard making a bridge to the voice, even more beautiful. When the first theme returns, the flute appears to enhance it further. Jethro Tull? Total influence.
02. Un Villaggio, Un'Illusione: What I said about Vivaldi is further confirmed here, only this time mixed with rock (and well before heavy!). The voice could not fail to be Italian; in these bands the songs are very well sung, and the language itself gives a special flavor to everything. The second part reminded me of something I can't quite explain, but it is very good, and once again there is the influence of Ian Anderson on flute, which is fine (Jethro Tull).
03. Realtà: The renaissance classical guitar is there when the voice takes up the theme (along with the flute). The violin with wah-wah was interesting, as was the piano; in general the sound of the Quella Vecchia Locanda guys is very classical.
05. Il Cieco: The varied jazzy vocals and piano set the tone that follows in style. In the second part, a duel of violin and flute: do you need more?? Various traditional keyboards and flutes. This is extremely beautiful. And once again the second half is more rock, and again psychedelic.
06. Dialogo: Excellent synthesizers. The instrumental is even better. Doubled vocals near the piano, strange moogs. Wild stuff!
07. Verso La Locanda: Piano and violin hit with weight, attached to the classical; one of the first bands to do that. The guitar timbre, in songs that are heavy for the progressive patterns of the time, gives it a whole charm, along with serious and well-marked bass: Romualdo Coletta has beautiful lines in this song.
08. Sogno, Risveglio E...: This one is total classical piano; Massimo Roselli is worthy of the concert hall. Violin and flute are still added to the mixture; this is classical music from beginning to end! And does anyone care? Of course not, for soon the smooth voice of Giorgio Giorgi (owner of the flutes, too) enters!
This disc is almost classical music: wonderful musicians and music to match. When I heard it I doubted it was recorded in 1972; it sounds very 'fresh' to me even now! Excellent!
Indeed, The Old Inn is a band to remember carefully, because after 37 years the record still has a heavy impact on the serious listener: you just can't get enough of it.
The inevitable Jethro Tull comparison is not as strong as I read elsewhere; they have their own sound and range of originality. Proof? Name another band with tragic Italian/gypsy violins melted into classical grand piano with tempestuous flutes and mellotron all together... just a magical atmosphere for you to discover.
It's a blessing not having to search too hard for this record compared to other Italian CDs, so take advantage and give it a shot!
A complete winner, worth every penny.
The first of only two studio albums from Quella Vecchia Locanda, this eponymous debut released in 1972 is rightly regarded as a highly important album in the RPI genre. Their second album, Il Tempo Della Gioia would take them into mellower territory, expanding on the classical influences already on display here. Their debut however is to a large extent a more bombastic affair with a rawer sound, yet still displaying moments of finesse and beauty.
Alongside the classical influences is a band playing powerful early seventies prog on the rocky side, sometimes reminiscent of early Jethro Tull; I'd be very surprised to hear if they weren't an influence. To reinforce the Tull vibe is the use of flute to great effect. Equally important to their sound is the exciting violin playing, ranging from the sublime like on Sogno, Risveglio E... to wilder moments perfectly captured in album opener Prologo. A solid, though not complex rhythm section lays the foundations for this alongside some impressive keyboard work to please mellotron, moog and piano lovers. Electric guitar, again has a Tull vibe at times as well as some tasteful acoustic playing and Giorgio Giorgi is a very good vocalist displaying a powerful and melodic voice capable of subtle restraint like on the folk inflected Realta.
It's a fairly short album at only 34 minutes, the eight compositions all between the three to five minute mark, but surprisingly with such short pieces they still find the space for dynamic instrumental workouts which do tend to take precedence over the vocals. The melodic nature of the material here makes it a very accessible album, an ideal early entry into discovering the vast world of Italian Progressive rock.
Many years ago, a friend introduced me to QUELLA VECCHIA LOCANDA (That Old Inn) with an original cassette of their self-titled debut. My first thought was that I would probably be listening to another good band from the fertile prog scene of the early 70's, with beautiful melodies and a pastoral air, but I was wrong.
QUELLA VECCHIA LOCANDA is a different species. Of course they have that unique Italian melodic sound, but their approach is much more aggressive than most bands of the region; the massive use of violin sets them apart from most of their contemporaries, plus the fact that they still have a post-psychedelic sound reminiscent of the 60's, closer to FOCUS than to any other band.
"Prologo" introduces us to their unique style, the first section is a contrapuntal between violin, piano, violin drums and then a an explosive section as intricate as the early works of KING CRIMSON, this guys rock much more than their peers and are not afraid to take a violent approach instead of the mellow and soft that we could expect from other band of Italy, even the choirs are absolutely radical with polyphonic and almost chaotic arrangements, in other words one in a kind.
"Un Villaggio, Un 'Illusione" starts again with the violin as lead instrument with a strong and perfectly syncopated drum as support, when the flute enters, I feel all the style of Thijs Van Leer, and the vocals introduce us to Hard Rock with a sound of the late 60's. The bass work by Romualdo Coletta is brilliant, and maintains the band connected with reality while the rest of the instruments are allowed to wander through uncharted territory. Special mention to the breathtaking violin solo, is simply outstanding.
The first notes of "Realta" announce that this is the first track in which they sound more like a classical Italian band; the sweet melody is impressive, but still the multi-layered vocals indicate that the band wants to be unique. Even when they never abandon the soft, mellow sound, they manage to experiment with contradictory styles and complex arrangements.
"Il Cieco" begins with a drum and bass duet that leads to some sort of Italian Rock with touches of Prog, but it's not until when the instrumental breaks begin that they dare to explore most than almost any Italian band of the era, the JETHRO TULL aroma is there, but much more rough and aggressive.
"Dialogo" is another complex track that reminds of KING CRIMSON structures, with a lot of dissonances and complex sections that morph into different ones in a matter of seconds, the song jumps from elaborate and weird to melodic and soft and weird again, in other words pure Progressive Rock.
"Verso Locanda" sounds like an Emersonian keyboard nightmare with Jazzy atmosphere and ultra radical changes, a good prelude for the melodic and extremely beautiful"Sogno, Risveglio E..." that closes the album with a fantastic piano performance by Massimo Roselli who almost makes me break in tears, specially when the nostalgic violin of Donald Lax and the flute of Giorgo Giordi join to create a Classical oriented song that closes the album with a special taste.
To be honest there are many Italian Symphonic albums that have impressed me more, but this album deserves no less than 4 stars, despite my personal taste.
One of the aspects of RPI that appeals to me most is that the classical influences are transparent. That is, while Genesis and ELP have some obvious roots in classical music, the Italians seem to have it written in their marrow. The amount of "rock" in their prog is variable, but the classical influence is always crystal clear. Among the bands with the heaviest classical flavor, by far my favorite is Quella Vecchia Locanda (QVL). Their second, classic album was the second RPI album I owned and I still hold it as one of the masterpieces in its genre even after acquiring a much bigger collection. I later acquired their self titled debut and found another great piece of work. Perhaps not as evocative as the sophomore, it is still a grand album well worth owning.
Interestingly, one of the stars of this particular album is an American, classically trained violinist Donald Lax. His violin opens the "Prologo" theme, which is echoed by various other instruments, and then has a torrid solo which would have been a better example of the devil's champion than that offered by Charlie Daniels. Indeed, despite the minor keys and classical arrangements, his energy and tonality are almost fiddle-like, stretching an already diverse sound.
The most rocking part of this band is the rhythm section. Conventional trapset and electric bass form the backbone, and they can range from quiet accompaniment to nice bombast. Unlike the second album, guitar plays a slightly larger role, with the track "Immagini Sfucate" having both a distorted riff and rock leads. Flute plays a large part in the mix, adding a significant 60's psychedelic flavor. The voices are pleasant and less operatic than some RPI, though they still fall within genre. Finally, keys range from clean piano to distorted organ and possibly mellotron.
Some of the highlights are the Genesis-like "El Cieco" which features frenetic rock interspersed with ethereal flute interludes, the aforementioned creepy "Immagini," and the Zeppelinish "Realta." The latter combines with the Italian harmony vocals beautifully, before a wah-electric guitar brings up the intensity. The whole album is more rocking than the second, more intense, but also lacks a little of the intricacy and longer, more ambitious compositions. Still, it is an excellent piece of work and among my favorite RPI albums. 4 stars.
Quella Vecchia Locanda's first of only two studio albums (a shame, really), offers a tornado that ripped up symphonic and heavy progressive rock styles and churned out something delicious yet hard to digest. Most of the pieces are admittedly disjointed- the compositions do not all flow very well, but the individual ideas themselves are quite a compensation. All in all, this is an important album for any Rock Progressivo Italiano collection.
"Prologo" With call-and-response violin and piano, other instruments join in, soon creating a frantic yet easy-to-follow rhythmic backing- it is the violin that is the most crazed, sawing through several lower notes before abruptly shooting up into a long high one. Backed by that same rhythm, the singing comes through loud and clear, with a floating monophonic synthesizer in the backdrop. It all falls away to bring in gentle acoustic guitar and more keyboards. The flute solo at the end is a non sequitur, but a welcome one.
"Un Villaggio,Un'illsione" Sweet violin begins this track, offering an almost classical introduction (not unlike the Electric Light Orchestra). Soon it becomes a bit harder rocking than the previous track; the lighter flute passage is akin to early Jethro Tull, but that violin sets it apart.
"Realta" A delightfully familiar finger-picked acoustic guitar passage provides this song's gentle, melancholic foundation. The vocal harmonies are excellent, and this time the flute outshines the violin. For those familiar with The Steve Miller Band, "Winter Time" sounds very much like this song.
"Immagini Sfuocate" Emerging with a far more experimental sound initially, this piece eventually takes on a more coherent form. From then on, it's all heavy progressive rock finished off by a quick drum solo.
"Il Cieco" The drums fade back in, inviting a cool bass groove to tag along. I really like this primal rhythm and the harmonic synthesizers that creep in. After this, however, is one of the most breathtaking passages I've heard in the genre- violin and flute, like two graceful fairies of different worlds dancing over a lush bed of organ. The tribal business makes a brief return before gorgeous violin and piano finish it off.
"Dialogo" A tumbling bit of guitar and synthesizer kick this off, and soon there's a funky bass line in 6/4 time along with a nasally synthesizer lead. The vocals arrive over piano, and the late verse has a slight Supertramp feel.
"Verso La Locanda" A strange bit of piano opens this track- it sounds like a nervous child practicing at home under the watchful eye of an instructor. A refreshing violin and some rock music rescues the lad. This is, however, the most disjointed of the material on this album, with several abrupt changes and an apparent lack of direction. The verse is one of the quietest points. The flute plays over a calm electric guitar, but everything settles into a nice groove with yet another interesting bass line, and as it picks up, a wild synthesizer solo concludes this difficult music.
"Sogno, Risveglio E..." This has the same feel as the previous track initially, like that of a person practicing the piano at home, although the player here is clearly more advanced- in fact, the piano is brilliant, and the ghostly violin adhering to it, followed by a reluctant flute, is one of the highlights of the album, despite not being a rock song at all. Vocals follow, with discordant fills on the piano. It is strange to me that the band would choose this sleepy work as the conclusion to such an otherwise dynamic and disorderly affair, but perhaps that is my American sensibilities peeking through!
The opener "Prologo" (Prologue) depicts in music and words a desperate and gloomy landscape of solitude and alienation, an old and decrepit house in a dark wood where even the worms refuse to live, where love for life gets hate in return. The very first notes are picked up from Johannes Brahms' piano trio, op. 8, then the music develops in a shifting way before calming down... "These walls are heatless and dark / You can find here nothing but pain / Light I'm looking for you / Life, I'm running after you".
"Realtà" (Reality) is a beautiful dreamy ballad where the dread illusion melts and hope rises... "I can breath / I can see the light... Pain, hate and love do not make any sense to me by now / I look at the sky that resembles to me / A storm starts raging / Shouts, lightning, wind... They do not notice the people who is suffering and envy their honey... I'm dreaming of something that could give you a real peace... Who can understand this reality?".
"Verso la locanda" (Towards the inn) is another excellent track, bittersweet and dreamy, where the violin leads the way to the house of joy... "Thousand shadows are running in my direction / Come, my friend / The way is long but I will lead you / That's the inn...".
One of the reasons RPI attracts me so much is the endless diversity between bands. Most bands only managed one or two albums, but with each new discovery you get introduced into an entirely new sound world, often taking influences from the UK bands but molding everything into something personal and unique.
Quella Vecchia Locanda is no exception. Their debut is a prime example of the creative bliss that struck Italy in the early 70's, mixing clashing elements such as heavy prog, classical piano and violin, Jethro Tull folk flutes, Italian pop and much more into one heartfelt and passionate album. The vocals are outstanding, very emotive, both tender and full. This nation can sing! The music reminds me a bit of the Paese Dei Balocchi album, another example of that rare successful marriage between rock and classical orchestration.
With no dip anywhere in the entire album, this is yet another highly original and captivating RPI album. Looking forward to the next one!
The debut album by Quella Vecchia Locanda is a concept album about a person experiencing an allegorical dream, in which they try to seek out a legendary inn ("Quella Vecchia Locanda" meaning "That Old Inn"...) in which the joy they used to feel in life can still be found. Though the typical RPI influences of Trespass-era Genesis and Jethro Tull are to the fore on this album, at the same time it also displays an impressive ability to synthesise these sounds into something new, yielding a gentle, acoustic style of prog much along the same lines as Premiata Forneria Marconi's work.
This is the debut by an excellent RPI group that released only 2 (now-classic) albums before collapsing due to the lack of greater success. As with PFM, the sound of QVL included flute and violin, but these bands have otherwise quite different styles. Instead of PFM's soft, pastoral and romantic side, QVL has a more dynamic rock attitude and more complex time signatures. (I prefer PFM.) These musicians share the passion for classical - especially Baroque - music, and many of them had classical training. This shows mostly in the virtuosic use of instruments, especially the violin and keyboards (including harpsichord). The compositions are complex but not "symphonic" in the YES sense. The longest track lasts no more than 5:15.
One can make comparisons to e.g. KING CRIMSON and GENTLE GIANT, but QVL don't quite have their wide palette of moods, forms or arrangements (as PFM does); instead they play with high intensity - and complexity WITHIN the tracks - most of the time. One of the highlights is the soft instrumental first half of the ending track (title meaning "sleep, waking up and...") with the piano lead, where the flute part is also gentle. Then suddenly the sharp violin - soon followed by vocals, guitars and drums - brings more edge to the track. The flute is often very JETHRO TULL influenced. Acoustic guitar is not much heard, even if there is some folk flavour here and there. The violin is often a bit too strong for my taste. Since I have several negative notions about this undisputed classic, I give it three stars only. Probably not on my Top Ten of Rock Progressivo Italiano albums, but definitely worth recommending if you enjoy classical influences served with rock power.
A true gem of not only the Italian scene, but prog as a whole. Quella Vecchia Locanda's debut self-titled album is a real treat, playing out like a storybook, painting vivid musical pictures as it takes its listener on a spell-binding journey.
Many people have noted stylistic similarities between the band and Genesis or Jethro Tull. While these comparisons are valid, they're a little bit contrived. Sure, there are lyrical, pastoral moods, charismatic vocals, classical interplay and heavy blues rocking throughout the album but those are just characteristic of the Italian sound. No cloning to be heard here.
So what makes "That Old Inn" stand out so much in the vast RPI scene? It really all comes down to composition. This is one of the most wonderfully, thoughtfully, intelligently composed albums that I've heard. The amount of energy that the album conveys is incredible, and it's evident from the opening bars of "Prologo", yet it's not a hack-and-slash headbanging album at all. There's such a great diversity of mood in this album, from soft, nocturnal pastoral sections to spirited classical melodies to hard rock riffing. And what makes it so powerful is that it's all delivered so succinctly. "Quella Vecchia Locanda" offers build-ups, climaxes, lyrical storytelling, breathing room, all in the short span of under 35 minutes! Though it's a shorter album, the length makes it easily digestible and always leaves me in awe, wondering how a band can take me on such a great journey in such a short time frame.
An absolute must-have for anyone looking for a quality album that's short and sweet. 5 well-deserved stars for an underrated classic.
This is a nice blues rock album with a lot of input from classically trained musicians and composers. The presence of flutes, violins, and clarinet makes it a little more interesting. The drum and bass playing makes it sound rather dated.
An album that does a fairly competent job of melding classical instruments, compositional styles and themes with rock instruments and formats. It would have been better if the rock writing went a little beyond fairly simple, straightforward blues rock formats.
A near-masterpiece of classically-infused blues ("progressive") rock.
QUELLA VECCHIA LOCANDA has so much more of a beautiful ring to the ears than the rather plain-sounding English translation 'That Old Inn.' This band from Rome carried on the Italian progressive rock tradition of taking on a cutesy band name, in the same style as Premiata Forneria Marconi (Award-winning Marconi Bakery) and Banco del Mutuo Soccorso (Bank of Mutual Relief). The quintet formed in 1970 and enjoyed a rather vigorous live reputation that helped them become one of the more remembered Italian prog rock bands of the early-70s heyday. Their eponymously titled debut emerged in 1972 after a rather pop-oriented beginning which, while almost completely faded into history, left traces lingering only on a various-artists compilation titled 'Progressive Voyage' (the track is titled 'Io ti amo,' or in English 'I love you'). While they would hone their prog rock chops in no time and be ready for the big time, there's no doubt that the pop aspects of this band carried over to their proggier side and allowed them to dish out some of the more melodic compositions in the Italian prog rock scene.
QUELLA VECCHIA LOCANDA only released two albums in their short career, with this one released on the Help label before they were finally picked up by RCA for their second album 'Il Tempo Della Gioia.' While they only released two albums, both are quite distinct in their style, despite both being firmly placed in the category of classically infused rock with folk and jazzy touches. This debut album lacks the production prowess of the second album, but to my ears it is the more interesting of the two, as it unleashes a powerful youthful exuberance and enthusiasm that 'Il Tempo Della Gioia' lacks as the band began to slip into a comfort zone, though a very beautiful one, I must add. The band's main leaders were lead singer and flautist Giorgio Giorgi, guitarist and clarinetist Raimondo Maria Cocco, keyboardist Massimo Roselli and percussionist Patrick Traina, who all played together in the earlier pop rock phases of the band; for their more adventurous prog years they added Donald Lax to dazzle with his violin skills, which added a unique gypsy swing and Paganini element to the band's overall sound that set them apart from many of the purely symphonic rock contemporaries of the day.
'Prologo' bursts onto the scene with a scorching duel between violin and piano, with the guitar bursting in and finally the drums; as the intro cedes into the more symphonic-leaning rock segments, the instruments all go crazy on each other. Lax plays both acoustic and electric violins and sometimes delivers frenetic assaults reminiscent of the Mahavishnu Orchestra, at times recalling the folkier side of the prog rock scene in bands like Comus or Spirogyra. While not unique to QUELLA VECCHIA LOCANDA, the band mastered the dynamic shifting between soft, sensual, classical-piano-oriented pastoral segments and the heavy guitar-laden rock sections that allowed Roselli to unleash his best Keith Emerson-inspired keyboard wizardry. Certain tracks like 'Un Villaggio, Un'Illusione' display Lax playing around with Bach, Brahms, Corelli and other classical masters, weaving them into a more Paganini-like performance reworked into rock fusion compositions that start out with classical intros and slowly morph into the heavier guitar, bass and drum action, accompanied by the passionate vocal style of Giorgi, who had the perfect voice for this type of musical magic.
It may only last slightly over 34 minutes, but the debut album by QUELLA VECCHIA LOCANDA is one of the best offerings the early RPI scene had to offer. These eight tracks are chock full of passionate, classically infused rock sophistication, very much at the level of the other greats like PFM, Banco, Il Balletto di Bronzo, Le Orme and the rest. The music is as perfectly constructed as the stunningly beautiful album cover and covers so much ground in such a small amount of time that I can easily put this one on rotation and listen to it repeatedly without getting bored for one second. This band mastered the melodies, the Tull-inspired folk feel, the ELP keyboard prowess, the medieval chamber aspects, the freak gypsy folk and the symphonic heavy rock. Chock full of brilliant dynamic shifts and progressive time-signature workouts without sacrificing some of the most intricately designed melodic developments, QUELLA VECCHIA LOCANDA is one of the Italian greats of the era. For my money this debut release is one of the absolute best examples of this era of Italian progressive rock and rightfully deserves all the high praise it has received ever since.
I’m going to show you exactly how you can double or even triple your profits and charge MORE for your services as quickly as TODAY, with my step-by-step guide!
Read on and I GUARANTEE your profits will soar. Adopt these principles to build an unstoppable Business and have the right type of clients queuing up to do business with you.
Seriously, this course outlines the EXACT model I used to grow 3 separate 7 figure PROFIT businesses. It’s the same system I apply to every business I run, start, or consult to.
Disclaimer: This is for service-based businesses. If you are a product-based business, this article (CLICK HERE) is going to be more suitable for you!
The Myth Of Time For Money Pricing.
In this section you’ll learn the importance of how you look at yourself.
Let’s face it… you’re never going to increase your profits by charging market rates.
I’ll show you how to change the way you look at yourself and the value that you bring to the table.
I want you to think about the first time you ever had to attach a price to yourself as a business professional.
For most of us, it was probably when you got your first job. You were sitting in front of your (potentially new) boss, and after talking about yourself, your CV, and the position, the conversation undoubtedly shifted towards money.
Now, you probably had enough sense to go into this discussion knowing what the position you were going for typically earned. If you asked for too high a salary, you might not get the job; if the salary was too low, then you'd be stuck making less than what you could have made.
And then, when you went out on your own and started your own business, you probably did something similar. You researched what the value of a similar business like yours charged. Or, if you were freelancing, you just reverse engineered your salary to come up with an hourly rate.
This is what most people end up doing. After all, whether we realize it or not, we want to be able to justify the price we're charging. But the most successful professionals don't give a damn about market rates or what others charge; I know I didn't. They know that ultimately this doesn't matter. They realize there's no such thing as the right price for the services they offer.
When I first started out on my own, I charged $50 an hour because I wanted to earn a cool $105k a year. And all the best resources out there for Management Consultants were suggesting this-or-that calculator, or telling me to just divide my salary by 2,000 (the average number of working hours in a year)… so I settled on $50 an hour.
And boy, things have changed for me since then. Just last week, we won a project at $30,000 for 7 days work — or about $600 an hour.
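The arithmetic behind those numbers is simple enough to sketch. Here is a minimal, purely illustrative Python snippet; the function names and the assumption of about 7 billable hours per day are mine, not from the course:

```python
# Illustrative sketch only: the two rate calculations mentioned above.
# The billable-hours figures are assumptions, not stated in the course.

def hourly_from_salary(target_salary: float, billable_hours: float = 2000) -> float:
    """Reverse-engineer an hourly rate from a target annual salary."""
    return target_salary / billable_hours

def effective_hourly_rate(project_fee: float, days: float, hours_per_day: float) -> float:
    """Effective hourly rate implied by a fixed-fee project."""
    return project_fee / (days * hours_per_day)

# Dividing a $100k salary by ~2,000 working hours gives the classic
# "market rate" trap of $50/hour:
print(hourly_from_salary(100_000))  # 50.0

# A $30,000 fee over 7 days comes out to roughly $600/hour,
# assuming about 7 billable hours per day:
print(round(effective_hourly_rate(30_000, 7, 7)))
```

The point of the comparison is not the exact figures but the order-of-magnitude gap between salary-derived pricing and value-based project fees.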
Truthfully, the skills I’m using (mostly: advising business owners on marketing and growth, mergers and acquisitions and, on the side, we work with a firm of accountants who offer compliance work) are the same skills I was using when I started my consultancy firm that went from Zero to $5.5 million in just 4 ½ years, making over $2 million profit. They are the same skills I use on every business I invest in, set up and consult to.
Someone once said to me, "the sign of insanity is to keep doing what you've always done and expect different results!!!"
Over the next few lessons, I’m going to give you tactical and conceptual advice that will help you escape market rates.
Market rates are for losers.
Market rates imply that you’re offering a commodity service that just about anyone can offer — even those charging $4 an hour in India.
Market rates are out of your control. The market dictates and determines what you’re worth. The market says that you’re defined by the technicals of what you do and not the results you deliver to your clients.
This first lesson is intentionally light on strategy and tactics, but it's meant to make you think about what you've been doing up to now.
Until you change the way you look at yourself and the value you bring to the table, you’re not going to be able to apply the concepts of this course. The first change you need to make to your business is the way you think.
The Real Reasons Consultants Are Underpaid.
In this section, you’ll learn the importance of how you define yourself.
What you 'label' yourself as will determine whether or not you can charge a PREMIUM rate… so let's delve into this in more detail.
Now, we’ll talk a bit about why most business owners tend to be underpaid (it has to do with presentation), and afterward we’ll analyze the way you’ve presented yourself to clients in the past.
Did you know that the worst thing you could say when meeting with a prospective client is that you’re an accountant!!!
And that the second worst thing you could say would be to qualify yourself as an accountant by telling them what it is you do (e.g., “I’m a partner”)?
It’s important to look at what those words mean from the perspective of a client. It’s fine to describe yourself to your peers as being an accountant, and you probably want to tell others you’re an auditor if you happen to be at a conference for auditors.
But giving yourself the job title of Accountant can cause prospective clients to immediately have the wrong expectations of you.
What’s wrong with the word “Accountant”?
An accountant is somebody who does work for somebody else on a temporary basis, billed by the hour in the client's head (and in 6-minute increments in yours).
As you can see, the relationship resembles an employer/employee one: there's the "greater" party (the client) and the "lesser" party (the accountant).
By using the label “accountant,” you’re stating that you’re similar to an employee. You expect to be brought their papers and you’ll do the work. The client is the BOSS.
My goal with this course is to help you become a high-value Entrepreneur. And while we’re not quite yet ready to talk about how to do just that (it’s going to take a few more lessons), you need to realize that it’s critical to discard the identity of “your label”… at least when talking with a potential client.
Because once you shed that identity and become seen by your client as a partner and ally, instead of just "the accountant we hired", you'll be able to charge a premium rate AND you and your clients will be happier. They'll get better work from you; you'll get more interesting work than just COMPLIANCE!
...But you gotta ditch the word " your label " first.
And what’s wrong with describing yourself this way?
I’ve hosted a lot of events and conferences, and the attendees were accountants, equity directors, financial advisors, brokers or lawyers.
And this is fine because this is exactly what I was asking for — again, this was a peer (me, a fellow professional) asking the audience what they did.
Next, we’ll be covering commoditization and how to free yourself from being seen as a commodity provider, but let me try to explain why defining yourself by what you do is a bad idea.
Let’s then pretend that we get to the point where we’re talking about taking me on as a client, and you quote me a higher-than-normal rate for the work you’re planning on doing.
The underlying issue is that from the beginning, you defined yourself as a provider of services — and the implication, even though it’s probably far from the truth, is that accounts is accounts is accounts. It’s a commodity. A product. And you’re trying to sell that product for significantly more than some of your competitors.
It would be like driving down the highway, having your petrol light turn on, and getting off at the nearest exit. You see two petrol stations. One is selling petrol at $5 a gallon, and the other is selling petrol at $50 a gallon.
Unless you’re mad, you’ll buy the $5 a gallon petrol. After all, it’s just petrol. And while we might vehemently defend that not all accountants are equal, by defining ourselves as just an “accountant”, we’re implying that they are.
So chew on these ideas for the rest of the day. Think about how it is you present yourself to your clients. Are you pitching yourself as an employee-without-benefits who provides commodity development services? And if you are, how are you going to ever be able to break free of how the market prices your commodity service?
This is why most professionals are underpaid. They’re caught in a race-to-the-bottom and they’re selling and pitching the same stuff as everyone else.
How to avoid being seen as a commodity?
So, hopefully you understand the concept now that ditching that ‘label’ is the key to getting more lucrative clients…YES?
This next section is all about positioning yourself as an investment and NOT an expense.
Here’s how to avoid your clients thinking of you as a commodity…..
Now, we’ll look at how you can avoid positioning yourself as a commodity.
So as a natural follow-up, let’s start to talk about how you can distance yourself from coming off as a commodity, and instead present yourself as a premium professional to your clients.
What makes a management consultant different?
A lot of us tend to avoid the term "management consultant". It conjures up images of public school boys sitting in "strategy meetings" with FTSE 100 companies. My distaste for the term came from an idea that management consultants were all talk and no action: they advised their clients on what to do while often having little to no real-world experience practicing what they preach. Everything came from a textbook!!!!
…Right back to the course.
It's critical to realize that this is ultimately why people hire you. Your clients have reasoned that paying you for management accounts, for example, can help them with their problem and in turn get them more customers, which makes them more money. If you are an accountant, it's not about the management accounts themselves; it's about what they think the management accounts will do for them. Or they want a tax review done because they think there's a chance it'll save them a lot in taxes.
Management Consultants capitalize on these two universal truths and sell solutions to their clients that are meant to directly affect the business of their clients – And this is what you need to do as well.
The issue is that most people focus solely on what it is we do, instead of why we do it.
And this naturally sets us up as a commodity, because the what without context is just an expense; a cost that the business incurs. And when businesses buy stuff, whether it be reams of paper, computer equipment, or accounts, they want to minimize their costs.
But what happens when we start focusing on the why?
When we stop selling one of our services and start instead selling the why — what management accounts, say, will do for a client's business — we can position ourselves as an investment instead of an expense. An expense is management accounts; an investment is managing cashflow so the client knows what they can afford to spend on advertising and marketing to acquire customers whose total value outweighs what it cost to get them.
You may be familiar with the concept of “cost centers” and “profit centers”, terms popularized by Peter Drucker more than 50 years ago.
The closer you get to the actual businesses and understand their goals, the better it is for you and your business.
For many, this can be very uncomfortable at first.
Accountants, for example, often feel they’re not all that amazing at business. You’ve probably never had to manage a department or run a business other than your own. Getting involved in your clients’ businesses can be intimidating — after all, it’s easier to sell what we’re confident in.
The rest of this course is dedicated to helping you overcome whatever roadblocks — either internal or due to a lack of experience — are keeping you from becoming more vested in the businesses of your clients.
Simply changing what you call yourself or just doubling your rates isn’t going to set you up to be a high-value management consultant who’s in perpetual high demand. It’s going to require you to radically rework the way that you work.
…Guess what? You’ve already started that change.
In the previous lesson, when I had you consider the implications of the ‘label’ and describe yourself by what you do for others, you set in motion a shift in your outlook.
Let’s keep that momentum up!
In this section we’ll discover how to solve your client’s problems so that they will say YES to working with you.
If you want to stay ahead of your competition, and achieve a 100% success rate, then you need to stop focusing on your accolades and focus on your client.
Here’s what you can do to win them over.
I’ve hired plenty of professionals over the years. When running a consultancy, I had to recruit people, and nowadays I work with a pool of accountants, marketing professionals, business consultants and financial experts who help me do what I do. This has put me in the position of both working with clients and being the client/boss — oftentimes simultaneously.
I know what it’s like to put my money on the line and go through the hoops necessary to engage with people.
Here are a few things to keep in mind whenever dealing with clients, especially if they haven’t engaged you just yet.
Your clients’ chief concern is their own interests. This is critical to understand, but I’ve seen a lot of people who don’t realize this.
You’ve probably seen (or are guilty of) proposals that are centered on the professional and their accolades, websites that quip about the team and the services they offer, or pitches that obsess over test-driven growth strategies, with nothing anywhere that says: here’s what YOU will get by working with me.
This is often how we try to sell ourselves, because we’re proud of our skills and qualifications, having probably spent years acquiring them. But to the client, you and your skills are simply a means to an end — the end being the squashing of some particular business problem they face. So whenever you’re talking with a potential or actual client, always question whether the focus is on you or on them.
When networking, before rambling on about yourself and what you do, ask the person you’re speaking with to open up to you. What’s their business like? When did they start? What do they do day-to-day at work? What’s the chief problem they’re struggling with right now?
When qualifying a new lead, realize that somebody who might feel intimidated by working with you is cautiously optimistic that you could help them. Sympathize with the fact that they may be on the hook — to their customers, maybe their boss, or to their stakeholders — to solve a particular problem, and that this is a big deal to them.
When describing a particular task or service, don’t describe it technically; describe it in a way that aligns with the interests of your clients. Instead of selling services, let them know how much they stand to gain, both personally and as a company.
I like to say that we all have a particular risk profile associated with us in the minds of our clients.
…but if they’re worried about your capacity, either technical (can you actually do the work?) or professional (are you reliable, do you provide a good service, do you know how to work within timelines?), or if you’re just an entrepreneur offering the same services any other entrepreneur can offer, then they won’t be as willing to dig deep into their pockets for you.
The key thing we focus on in this course is helping you to lower the risk you present to your clients.
That’s why we’ve started talking about how to figure out the why behind a service, and we’ll soon be looking at how to quantify its worth, identify the right solution, and so on — because all of these factors contribute to lowering your risk.
Beginning from your very first contact with a prospective client and all the way through sending a proposal, your #1 job is to eliminate the risk you present.
When you finally ask for the sale, they’re going to be racking their brain and trying to come up with any objections (“the price is too high,” “she’s not experienced enough,” “I doubt whether my problem is solvable, and he’s not convincing me he can solve it”) that they can use to say no.
Because people have a fear of buying, especially things that aren’t impulse buys and are untested (like you applying a chunk of your time to chipping away at their problems).
And this is what sales is all about.
You need to overcome the objections, either real or imaginary, that your clients have about working with you, and show that you are capable of solving their problem. That’s it. It’s not about experience, case studies, testimonials, a beautiful website for your business, referrals, or whatever else. Those are all just factors that, done right, can increase your credibility with a client.
So with this, our fourth lesson, out of the way, you should now have a clear understanding of the mindset shifts that need to accompany the tactics that the rest of this course is dedicated to.
How to use Stamford Questioning to figure out what your clients need?
In this section, I want to introduce you to my own technique, which, if used correctly, will GUARANTEE a near-100% close rate when winning over a new client…
It’s been tried and tested and I know it will help you win that new client.
If a new lead reached out to you today with a request, what would your process be for moving that prospective client through your sales pipeline?
If you’re like most people, you probably don’t really have one. You’ll probably arrange a phone call or in-person meeting, and you’ll likely do a bit of research beforehand.
The most important thing you can do early on when talking with new leads is to develop a process around how you learn about who they are and what they want. Don’t leave your conversations open-ended; come armed with an agenda.
Becoming a high-value management consultant starts with understanding the why behind each business client you work with. We’re going to look at the exact process I use to reveal the pains behind each client.
When I first started working as a management consultant, I worked on Fleet Street, and during our lunch break we weren’t allowed back in the building. This was fine on hot summer days, but we don’t have many of those in the UK! I found myself sheltering from the cold listening to court cases.
What really stuck out was that the barristers who won always had an agenda; there was always some truth or idea that they wanted the person they were questioning to admit.
Here, we want to know what event occurred to spark this change. Often, this can be a series of built-up events that is ultimately set off by something specific.
Remember: No prospect exists without a backing problem. Your competitors likely won’t care enough to investigate what that problem is. If we know the problem, we can better tailor the proposal and what we end up doing toward that end.
Most people won’t be willing to open up to you and share the inner workings of their company. At this point, I’ll often present my Master Services Agreement along with a non-disclosure agreement. (Producing an NDA shows the customer that you respect their business and that you realise what they are about to tell you is confidential and shouldn’t be public knowledge.)
You want to understand what it means long-term for this problem to still be a problem. When we eventually go in for the sale, we’re going to anchor the need for this client against the cost of this problem not going away. We want the prospective client to admit the cost to us. Note that not everyone will have these numbers available — but that’s OK. You’re mostly looking for ballparks or ranges.
(This is often referred to as future pacing.) The question above was pretty painful… we’re talking about how uncomfortable this problem is. Now we want to switch gears and move to happier pastures.
Let’s let the client dream about tomorrow. What does it mean if this problem goes away or the business succeeds? Try to quantify in hard numbers what sort of impact you might have on the client’s business.
What’s really important to understand is that you’re not just looking to collect data. You want your clients to externally vocalize the necessity of working with you. You want them to see that something’s different about working with you. You’re not just reacting to a request — you’re guiding them through understanding what this means for them and their business.
And this is going to help you tremendously when it comes time to send off a proposal. My closing rate is near 100%, and I’m almost always the most expensive quote my clients get.
How to determine the value of a project?
Learn how to win your clients over by seeing what tomorrow could bring them.
In the previous lesson, I covered how Stamford Questioning can be used both to figure out the pain behind a project and to get a glimpse of what tomorrow should look like for your client (e.g. after the problem goes away). The next step is to quantify just how valuable the successful completion of your services is.
The first thing you want to do is to figure out how you can influence the financials of your clients’ businesses. (Note that not every client is looking for you to help them make money. Non-profits, startups, and other types of organizations might have other aims).
I have a student offering business plans as an added-value service. He wanted to avoid the commoditized deathtrap that is business planning, and was aiming for a minimum $2,000+ budget for most of the business plans he was putting together. So when a rehabilitation clinic came his way looking for a business plan, he used Stamford Questioning to determine that what they were really looking for was more patients. He knew that he needed to sell them on something of value that he could control.
How could a business plan help them get more patients?
He knew he needed to start with this question, as it was the closest he could get “to the money” of this prospective client’s business.
He knew he couldn’t influence this number, so he had to dig deeper.
So now he knew that for every ten people who contact the clinic, they get one patient. In sales terms, they have a 10% conversion rate from lead to customer.
But he knew that he couldn’t influence how effective the clinic is at “selling” new patients, though some basic back-of-the-napkin math now showed that a lead is valued at around $2,000.
Ah, but he CAN influence how many leads this clinic gets.
He’s just leveraging what his prospective client has already told him to draw some assumptions about what would happen if their business generated just one new lead a month.
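The back-of-the-napkin math here is worth making explicit. Here’s a rough sketch in Python, purely for illustration; the $20,000 figure per new patient is my assumption, not stated in the text, but implied by the $2,000 lead value and the 10% conversion rate above:

```python
# Back-of-the-napkin lead valuation for the clinic example.
# NOTE: the $20,000 value per new patient is an assumption, inferred
# from the figures in the text ($2,000 per lead at a 10% close rate).

patient_value = 20_000        # assumed revenue per new patient (hypothetical)
conversion_rate = 0.10        # 1 patient per 10 leads, from the text

lead_value = patient_value * conversion_rate   # what a single lead is worth
extra_leads_per_month = 1                      # "just one new lead a month"
annual_upside = lead_value * extra_leads_per_month * 12

print(f"Each lead is worth ${lead_value:,.0f}")          # $2,000
print(f"One extra lead a month adds ${annual_upside:,.0f}/year")  # $24,000
```

That annual figure is the upside he can anchor his own quote against.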
And this also helped him better understand what they really wanted: more leads.
He wouldn’t be proposing a business plan. Rather, he’d be proposing more leads for their business — which will help them get the new patients they desperately need.
Through Stamford Questioning and then quantifying what tomorrow could look like, he was able to figure out that this business really didn’t need a business plan.
They needed more leads. The client had already realized this (they either consciously or subconsciously realized that a business plan might equal more patients), but now they’re talking to someone who also understands this. They won’t just be getting a business plan, they’ll be getting something produced solely for generating leads.
When he ultimately quoted them $5,000, how do you think they responded?
Who wouldn’t pay $5,000 to make $24,000+? (Well, I know some business owners who wouldn’t, but that’s why they struggle.)
The clinic wasn’t able to then compare his costs to what other business plan writers were charging — the product he was selling was totally different. It might be fulfilled through the creation of a business plan, but that’s not what he was proposing to do.
Needless to say, he ended up winning their business.
In this section, you’ll learn how to get one step closer to closing that deal. Discover how to stop being a cost and charge more money. Let me show you what I mean.
My clients will often ask me why I’m asking them these questions, and why I’m so keen on understanding the value of their project. What surprises them is that I’m not using the Financial Upside to help me figure out what I’m going to charge.
Basing your costs on value is what’s known as value-based pricing, and the idea is that if you can make your client, say, $100,000 — you can charge them anywhere between $1 and $99,999 and they’ll still come out “on top”.
I’m not a huge fan of value-based pricing unless it’s used in the right circumstances (say, you’ve just saved someone $100,000 in tax).
I charge a weekly rate for my time, and right now I bill upwards of $25,000 a week. If you practice value-based pricing, you better be awesome at predicting the future. I think there’s just too much guessing that’s happening when you’re quantifying the value of a client, and so much of it is out of your control.
Just because a new upstart pizzeria down the street has bought a state-of-the-art pizza oven doesn’t mean they’re going to be successful — the ingredients, location, neighborhood demand for pizza, marketing effectiveness, and more all contribute to the ultimate success of that venture.
Instead, I anchor my costs against the upside of the business. I’m not guaranteeing a particular outcome, but I show my clients that I’m aiming toward it.
I tell my clients that I only want to work with clients where I'm an investment.
I don’t want to ever be an expense.
I want all of my clients to net a return-on-investment, or ROI, from working with me.
When a prospective client reads one of my proposals, the first number they stumble upon isn’t my cost.
And most businesses lose out when their price is the first (and often only) number found in their proposals.
A few months back, I quoted a client $40,000 for 3 days of work. My rate is significantly above what others charge for the kind of technical work that I do (a mixture of company restructuring, estate and legacy planning). But before the client read anything in my proposal about what my costs would be, I referenced the six-figure PLUS tax saving we uncovered together during earlier discussions.
So by the time they saw my five-figure price tag, they already had it in their head that I’d be helping them save six figures. I anchored my cost against the saving that we’d previously mutually agreed upon.
You can do the same with the increased revenue they will gain from working with you.
Not only am I able to then win higher price clients, but it’s also easier for me to win them. Instead of seeing my price floating in a vacuum, my client sees it contextualized against the value of the solution I’m proposing. And rather than having them need to consider whether my costs are too high, too low, or just right, my clients see my price as the cost of admission to the land of tomorrow (Future pacing).
We meet with a client, talk about their requirements, and then disappear for a while to write a proposal.
We write down what we’ll do, what it will cost, and how long we think it will take, and send it off. And then we hope and pray that the price is what they’re expecting.
The longer we take getting back to a client, the less likely they are to do business with us. Rather than sending off your proposal and hoping for the best, you need to see the client in person, go through your proposal together, and have them sign on the dotted line THERE AND THEN.
How to tie it all together with a killer proposal?
In this section you’ll learn how to produce a proposal which guarantees your client will say YES.
Get a step-by-step proposal structure to get more clients and customers to sign up.
Proposals are meant to be persuasive, and a bunch of line items with a price tag doesn’t persuade anyone of anything. People buy when they know a seller has something made specifically for them — and your proposal should do exactly that.
Remember back in Lesson 5 when we talked about Stamford Questioning? In that lesson, I showed you why selling is more than just responding to a client request — we looked at why it’s critical to understand the root problem or problems behind each client.
You’re going to want to kick off your proposals by both reminding your client about why they’re here, and by letting them know (once again) that you understand their needs. You want to recite, ideally word for word, exactly what they told you in these meetings.
How is it affecting their business?
Monetarily, how are they being impacted as a result? It might sound counter-intuitive to reiterate the problem, especially if you don’t have a background in sales.
Just like anchoring your costs against the financial upside helps justify those costs, anchoring the solutions you’re proposing against the problems being solved is highly effective. Don’t be afraid to remind your clients about just how important this service is for their business, even if you assume they’re fully aware of it.
The solution is the removal of the problem — it’s NOT what you plan on doing (e.g. the actual service). After describing the pain behind the service, you now want to forecast what “tomorrow” could look like for their business (Lesson 5) along with the Financial Upside that comes with it (Lesson 6).
This isn’t a fairy tale; you’re not inventing anything here. You’re simply including the takeaways from the conversations you had with the client about their business and what parts you’re able to influence and quantifying what that would mean for them.
This is where you actually talk about what you’ll be doing and how much it’ll cost. And while you do want to include some technical detail about the process, you want to always bundle each feature with its related benefit (known as a feature-benefit statement).
That’s how you determine what needs to be done. And there’s not just one path, or offer, that connects every problem to its solution.
Think about the problem of being cold and wet in a rainstorm.
The solution is to stop being cold and wet. The offer can be any number of things: an umbrella, a poncho, a piece of cardboard, or a toasty lodge with a roaring fireplace. These are all ways to stop being cold and wet, each with its own degree of intensity and completion.
And that is why you want to understand the pain behind each client. If you don’t know what that is, and you simply respond to whatever superficial need is brought to you, the offer you end up proposing is risky.
You might do great work, but that’s no guarantee you’ll solve their problem. And mastering the art of understanding how to bridge problems with solutions will help you uncover multiple offers, each of which can be pitched to your clients at different price points.
This makes it so you can compete against yourself — Offer A vs. Offer B — instead of competing against the world. My proposals end up becoming long-form sales letters for an audience of one: the client whose business I want.
And my clients aren’t just receiving a list of instructions with a price tag that can be later shopped around to the lowest bidder. They’re getting a letter from someone who has a deep understanding of their business and their needs and has a plan for making things better.
How to double your rate without scaring off your client?
Discover how to double your fees without your clients wincing!
You’re gonna love this… double your profits without taking on extra clients!
That shift in mindset — from expense to investment, business owner to management consultant, cheap commodity to provider of solutions — isn’t something that happens overnight. Like anything, it takes time. You can shorten that time by being around the right type of people (hint hint… Entrepreneurs Gateway).
But if you work at it and are OK with experimenting, you will be able to charge significantly more without scaring off your clients.
When I started my first company back in 1994, we charged $30 an hour. We thought this made for a fine living, and we also thought this was the right price. After all, it’s what other accountants were charging.
We became more successful. Our company grew from the initial founders (12 of us) to 300 full-time employees and over $6 million a year in profits.
Our clients were more successful. The work we were doing was the *right* work. It was grounded by the business problem that had to be solved. It became easier to close deals, and we were starting to get a lot more referrals and repeat clients.
In early 2014 — after about a two-year break (I had exited my practice to start a software and marketing company) — I started consulting again, this time not as a management consultant but as a business development consultant. Still a consultant, though.
I got bored of not working.
My ex-MD embezzled a considerable amount of money, so I didn’t get my final MBO lump sum.
I made some poor investment decisions.
I believe consulting is the best way to make money quickly. And when I returned to consulting, I used the same strategies and tactics I learned running my practice — much of which I covered over the previous lessons.
In the first 12 months, I netted $620K, or $1,490 per hour. I’m not 50x better than I was back when I was charging $30 an hour. I’m not 50x more networked. I’m not 50x more “famous”.
But I do know how to gauge how valuable the problems my clients bring me are. And this helps me win high-value clients at the rate I want, a rate my clients are more than happy to pay.
I’m a fairly introverted guy (unless you get me on a subject I love). I’d have a hard time looking someone in the eyes and telling them my five-figure a week rate if I wasn’t confident that I was worth that.
I hope you got a lot out of this free course.
My goal was to help you get a good grasp of the strategy of why and how to price higher. As I mentioned above, mastering this has helped me tremendously both personally and professionally, and it’s not really something that a lot of tax professionals are all that comfortable with.
Could I get your review? I put a lot into creating courses like this, so if you have a few mins I’d love for you to reply to this post with what you thought of the course and what you plan on doing next.
Share this on Facebook and Twitter? You probably know other accountants, so would you mind sharing this free course with your friends and followers? I’d love for as many accountants as possible to learn about this stuff – because, let’s face it, a rising tide lifts all boats.
The more people who practice these strategies, the more likely we all are to work with better, more knowledgeable clients in the future.
All they have to do is subscribe and they are in!
This course, though brief, is a lot to digest. It’s going to take some practice to really implement much of what we’ve covered in your business.
Incredible article Shane! I really like how you put things into perspective: most clients don’t really care about credentials so much as what you can do to solve their problem, with the aim of increasing their cashflow or saving huge amounts in expenses. The feature-benefit aspect of the proposal, I think, keeps it simple and makes it clear what the client gets from accepting your service and, most importantly, justifies the fee. This one’s an eye-opener. Thank you for sharing your knowledge.
Explore what our Clients are saying about their Laurel & Wolf experiences!
I have previously used Havenly for my other rooms and I was happy with them. I wasn't sure that I was going to give Laurel & Wolf a shot. But my experience here with the overall design process has been much better than Havenly. The designer took her time to come up with a final design package without rushing through to meet deadlines. And the deliverable included a floor layout which I had not seen with Havenly. I am glad I tried L&W.
This process has made our lives so much easier! Not to mention, our style acumen is lacking, we needed some guidance before we started purchasing furniture. This service is everything I anticipated and more.
Obsessed, as usual. I am very happy. It is incredible how my designer stays on budget, suggests unique finds and stays focused on customer service and satisfaction.
This was a really fun process. Laurel and Wolf really has the tools to do everything online that one might think you could only do in person. I had an amazing experience and the designer Kamila that I worked with nailed it. I will definitely come back for more!
This was a great experience. I kind of knew what I wanted but was overwhelmed by all the shopping I would need to do so this was a great solution to help me just get my redecorating done.
Had the best experience with Laurel & Wolf - I am an extremely happy customer. Sam was amazing and every single person I interacted with was responsive, positive and professional. I will definitely be back again and highly recommend to all my friends and family!
I loved the experience! I am normally pretty good decorating by myself but my designer took it to another level! I have been wanting to do this for a while and so glad I finally pulled the trigger.
I was so skeptical at first and wasn't sure what to expect. I was blown away by the result and quickly signed up for another room to design! The process was so easy and exciting and my designer Claudia was amazing!
I was hesitant about the online design process when I first signed up. However, my fears were quickly put to rest starting with the information requested from me during the initial project description process. I loved the flexibility of being able to select from three concept boards. Lastly, giving feedback to the designer was a very easy process. I would highly recommend Laurel & Wolf to my friends.
If I had known about this service 5 years ago I would have saved over $10,000 in unexpected designer billable hours. I recommend this as a fun way to tie dreams to specific plans. With the buying service you don't have to worry about where the materials can be found. They even track the purchases for you. THE BEST NEW ONLINE experience for me in 2016-2017.
Carolynne is so talented. I'm almost a year out of my remodel and to this day I get compliments on the design and pieces.
I had a lot of fun with the whole process! I thought it was a great value for the money. I would definitely work with Laurel & Wolf again and specifically with my designer, Brooke!
Laurel and Wolf was fantastic! I loved browsing the website and the user friendly site made the entire process very easy and fun! The staff was friendly and prompt with responses to questions. I was ecstatic to find this opportunity and believe it opens the door for so many people to have a fresh, new look for their home without spending a crazy amount of money. Thank you very much!
This was such a fun process. Made my move to a new apartment so much more fun and exciting!
This is such a fun process and it helped my husband and I take the angst about decorating on our own away. We feel confident that our living room will make sense now vs. being an odd mix of random pieces we hoped would look well together if picked out on our own. We can't wait to use Laurel & Wolf for the rest of our house!
Jennifer was super easy to work with! Her communication was top notch! She was constantly sending me new ideas and did a great job working with our existing constraints!
I love my sunroom design! I wanted something a little bit contemporary that would tie in with my mostly traditional style home and decor. It's exactly what I wanted. And Annie Sue stayed within my limited budget by choosing items from online retailers and local home stores which are really beautiful and look much more expensive than they are. I'm ready to spruce up my living room next! Thank you.
Really top-notch, you guys. I am absolutely shocked by the value of the service that you provide. Our designer really did some excellent work and I am well beyond satisfied. Not only have I already sent half a dozen friends and family your way, but I'm looking forward to using the service again myself. Thanks again for making design affordable for the rest of us!!
Once I got comfortable with the website, it was very easy to correspond with the designer. I love that the process was easy and that there is a deadline by which to finish, so the project gets done.
Love L&W! I didn't want to spend $2000+ on a local interior design company but wanted a professionally decorated room. L&W was the perfect solution. The designers have been absolutely great to work with.
Very easy platform to use for designing our space. So much more user friendly than other online design companies that we have used. Excellent experience. We will be back for more!
Ridiculously prompt response times, very flexible and welcoming of feedback and collaboration, and scarily intuitive of what I wanted even though I didn't always have the words to describe it. My designer was able to nail the perfect room in minimal revisions with tons of time to spare.
Overall pleased. There are some inevitable hiccups when designing online (color accuracy, some piece sizing issues etc.) but I appreciate the straightforward, organized experience overall.
I have already recommended Laurel and Wolf to my sister and another friend who are looking to update a kitchen and decorate a new office and I myself have already purchased another project for a future kitchen remodel! I love the ability to choose between 3 different designers and then work collaboratively with them to get something that is perfect for me. I live in the Bay Area and have been told designers here cost $250 AN HOUR so it was amazing to complete this entire process for around the same amount, be able to work on things in my own time vs. setting up appts. and working with online vendors to make it easier and more flexible for ordering. I had first read about Laurel and Wolf on a Southwest Airlines magazine and knew I had to try it, and I am beyond thrilled with how everything turned out!
Great communication and beautiful end result! I can't wait to see it completed!!
Alexia has been great to work with, from start to finish. I loved her original style board, and found her really easy to work with to make adjustments and modifications. She has a strong understanding of design, finishes, scale, and more...everything I was looking for in a designer! If I changed my mind on something or had extra questions, she responded with patience and enthusiasm. In addition, she stayed on top of the project and really put a lot of thought into my individual space. I recommend her services and look forward to working with her on my next project!
I was not sure what to expect, but the project process was well thought out and completed in a timely manner. I felt I had enough time with my designer and allowed for back and forth communication that resulted in a design I love.
LOVED using L&W. Made my bathroom project fun instead of overwhelming.
Stacy was great to work with! She was patient with my many questions and offered very valuable advice. She listened to my concerns about price and gave me a variety of options for individual pieces throughout the design process. This was my first experience with Laurel and Wolf and it was a great one! Thank you, Stacy!
Wow. Our experience with Sonia was even better than we could have imagined. We worked with Sonia on our first room in our first home--our bedroom. As the design is coming to life before our eyes, we're feeling grateful to be building a home base that feels cozy, stylish, and very "us"--and we could have never done this on our own! Sonia must be a mind reader--somehow able to discern what our style(s), preferences, and back-of-our-mind hopes were even when we didn't have words to express them. She's also a confident, great teacher, showing us the principles behind the concept. This extended into her thoughtfulness about our budget, as she helped us understand where there's value in purchasing a higher-priced item and where there's not. Sonia was positive, fun, flexible, responsive--an encouraging communicator and great with follow-up. She also never made us feel like we were bugging her; instead, we always felt like we were her only clients. We hope to work with Sonia again on other rooms before long. What a great experience!
Great design for a multi-use bedroom space.
Ashley put together two beautiful rooms for us and really understood our style from only a few theme pictures and discussions. We are so excited to see our spaces come together with everything she has designed.
This is such a great idea for people that would love a designer but don't know where to start or don't have the money to hire one. I loved the experience! I will definitely use Laurel & Wolf again.
Website design, site instructions and feel was on point. Good selection of designers. Very responsive to questions and requests.
Loved the experience and the design, was just my style and I loved how it evolved from conceptual to final!
Very much enjoyed my first experience.... I will be back soon for another new room look!
Very fun experience and well worth the money.
I thought it was great! I would definitely use it again as I get ready to do other rooms in the house.
This process was so easy and quick. I haven't been able to achieve the same result with a local designer.
I have never used this type of format before, so I'm not sure I utilized it to its full potential with our project. That being said, I am happy with my design and enjoyed working with Heather.
The experience was great. Maria is so responsive and you quickly understand that she's been able to translate what you're trying to communicate regarding style and function. She moves quickly and comes back with lots of options when you need them. It was a seamless experience!
I didn't know what to expect with this business model, but I was delighted from start to finish. Great customer service, an intuitive interface for viewing ideas and communicating with my designer, the single shopping cart - all of this really helped reduce decision fatigue and resulted in a fantastic room that was well within the budget that we established. I'll definitely recommend Laurel & Wolf to my friends!
Very positive. Great interaction with our designer and the site is well set up to work through the process. We got access to a great designer that we probably wouldn't have spent the money on otherwise and feel like we got a great bargain.
I was apprehensive to try an online designer, but my husband finally talked me into it and I loved it. My designer, Jamie, was wonderful and so easy to communicate with, and we got to decisions quickly. I will definitely use L&W again!
Lacie was great throughout the entire process. She was super responsive and open to my ideas as well. Really happy with my first experience at Laurel & Wolf.
This is an incredible platform - it's made decorating our new home so fun and easy!
I have already recommended L&W to friends and family and we plan on using L&W again and again as we continue to update our house! It was a fantastic experience! THANK YOU!
Lori did amazing job, with very little information and not a lot of feedback from me. I was super busy and she kept delivering ideas. My space is really coming together.
Great site, creative, fun and helpful!
Everything was seamless, and my designer was terrific! I would highly recommend this, as Laurel & Wolf gives you access to designers at a small fraction of the cost. I had a wonderful experience. Thanks again Andrea!
Laurel and Wolf is so far superior to my experience with an online decorating competitor. Better service, better communication and MUCH better results. The Buy for Me program is definitely the way to go, could not be easier.
What a great idea and very affordable. I worked with designers before and always thought I couldn't afford to hire someone to do my entire house.. until now :) Loved coming home from work to find new ideas, boards and designs.. all within my budget. Works like a dream!
Love the site - saves a lot of time for a busy working mom!
This is by far the best money I have spent. I loved the experience, so easy to use. I will definitely be considering it in the future when I move.
When my daughter told me that she was planning to use an online decorating service for her new home, I was skeptical. Then I saw the results and was so impressed I decided to try Laurel and Wolf as well.
Really helped me with my analysis paralysis ... I've been living in my house for a year and have made no moves towards decorating. I think I was too nervous to make a first move! Now I have a road map to getting everything and finally having a fully functional living room space.
I would absolutely do this again. I do most of my shopping online and for me, personally, it's much easier to do my decorating online, too.
This was fun! I learned about L&W through my daughter who has used this design company and her results are beautiful! I'm excited to go shopping!
I've recommended L&W to a number of friends at the moment. I'm encouraging them whether they have a vision in mind or have no idea what to do... L&W proved to be a great way to get my general ideas of theme and color crystalized into an actionable plan. Thumbs up!
Overall loved it. The team was responsive. Loved the end result. Great concept. Website was great overall. I did have some challenges posting pictures a few times. Great in general.
Outstanding! Amazing what this service can do online without actually walking into the physical space. I am a very satisfied customer with the ease, quick turnaround, interaction and especially the designs for my rooms! Thank you Laurel & Wolf, and my designer Pamela!!!
In terms of the process of the site, the whole thing is so well thought through and timed that this just did exactly what we needed it to while adding some fun. We have already recommended your service to 3 other people!
I wasn't sure how well interior design would work online, but I am 100% convinced now. The process was easy and fun, and I'm so happy with the result. Thank you!
This was really fun, affordable, and easy! The interface was super easy to use and intuitive, when on a PC. It was a little trickier on a tablet, but still doable. You guys should make an app for tablets and iPhones! Thanks!
I'm realllllly impressed with the detail that goes into the final package.. this has been a great experience! I appreciate the help with design most of all! You have very experienced designers helping with paint colors and furniture.. wow!
I can't speak highly enough about Laurel & Wolf, I recommend the service to everyone I talk to. Amazing customer service and super high quality talent!
From start to finish my experience has been great. Having so many first looks made it easy to select a designer that shared my odd style. My designer was attentive and listened to my input and made changes or offered a better idea. Mary was great. The ordering process was very fast and efficient. My overall experience has been excellent.
The experience from start to finish was enjoyable. I appreciated the professionalism of both L&W and my designer.
Cait is incredible. She listened to my feedback and always had very thoughtful and creative suggestions. We ended up with many different pieces than her original "first look", but she kept the spirit and intention of the design in all her suggestions to us. LOVED working with her.
My 2nd L&W experience. Loved every minute of it!!!! Can't wait to start another project!
Absolutely LOVED having Julian as our designer. He made the experience wonderful. We love the space he created. It was fantastic being able to use the Laurel & Wolf platform to share ideas and refine the layout. I especially enjoyed the order for you feature. We were able to get everything we wanted and meet our deadline. Would definitely use again!
This was the greatest experience ever! I love the concept and it worked out perfect! We were furnishing our lake home and had waited years because we couldn't commit. After reading reviews we were convinced this was the way to go. It was exciting to get all of our designs. Our designer was perfect for us. Our space looks amazing and we feel like we are on vacation every time we step through the door.
So glad to work with Brittany O. She listened to me, worked on all the details and designed a modern bathroom that is up-to-date, functional and beautiful all this within a small budget. The planning process helped me refine what I wanted and make thoughtful choices. Many of the items I can find at local sources. Thanks Brittany!
What an incredible experience! It was incredibly helpful and incredibly fun. It so simplified the shopping process for me. My designer was so good at picking things I loved and picking things that were affordable for my price range. Am definitely recommending to everyone I know!
This process was fantastic! It took more work than I expected (so many decisions!) but once we were done it was perfect.
This was a very easy process with a great result. Sonia tapped into my style immediately, and had great ideas for my space. She gently pushed at my comfort level but also had no problem going along with my ideas and thoughts. She designed a room that I could never have done on my own and I am excited to see the results!
This was my first time working with Laurel & Wolf and for that matter with an online design service and I am extremely happy with the services. My designer Pamela Leist was the best. She was professional, loved her designs and suggestions, she knows her art well and I enjoyed working with her. Must say a very talented lady. A shout out to the entire team of Laurel & Wolf who made this journey so easy for me. Thank you Pamela, Laurel & Wolf, Meg, Lauren and Brittney. I will definitely recommend Laurel & Wolf to others.
I can't believe it took us this long to submit our comments. We absolutely loved working with Chelsea and are thrilled with the result. She totally "got us" and was patient as we found the perfect pieces. We love that we got to buy at our own pace and now have a family room that we can enjoy every day.
Heather was fantastic, helping us choose items over and over again until it was exactly what we wanted. She was extremely responsive and easy to work with, we loved all of her suggestions and insight. Thanks so much for an incredibly easy and happy first experience with an interior designer!
Amanda was amazing. She was so responsive and listened to every idea I had. The final design package is even better than I envisioned it would be.
Boldin Design was absolutely fantastic! We look forward to starting other projects with Boldin in the future!
She was thorough and knowledgeable and just fantastic! This was truly a wonderful experience and I'm so glad I got to work with Amanda!!!
LW & Gracie have helped me so much getting my mind around new build! I can't wait to see how it turns out, and am so happy to have a wonderful vision!
Linda was not only an amazing designer, but a lovely person to work with. She picked up on my style and sensibility right away and was incredibly responsive. It was so nice having someone to rein in my scattered ideas and put them into one cohesive room, without even meeting in person. This is a wonderful service. Can't thank you enough.
Amy was fantastic to work with. She was quick to respond, honest with my suggestions, clear in her offerings, and worked with my budget super well. When I was confused about what I wanted or concerned about layout, she was super helpful in calming my fears and doubts. The overall experience is something I will praise forever, from the easy access online to the communication and to the professionalism behind Amy and the company. Absolutely thrilled with my decision to purchase this design package!!
Lynne was fantastic. Her choices were amazing. Some of the items she chose, I never would have thought to pick...but I absolutely love them!! She was responsive to all of my questions and requests and very patient with my indecisiveness even on small items. I started from scratch so Lynne designed everything from lighting, rugs and window treatments to furniture. This entire process was even better than I could have hoped! I am recommending Laurel & Wolf and more specifically Lynne to all my friends.
Laurel & Wolf is such a cool concept! I'd been aware of you guys for a while, but I finally decided to bite. I wish I'd given this a try sooner! I was amazed by the first looks I got back. Each one was absolutely amazing! I ultimately chose Yekii's design because she perfectly nailed my style. I've been gushing about my experience to my friends and coworkers!
We had a huge space to design and we come from a family of antique rug and furniture collectors. So already there was a big task ahead. Elizabeth did a great job being patient with us and offering many options (and at times, starting fresh with the bigger items). It's clear she knows what she is doing and we are very happy with the plan that we now need to implement. We thank L&W for extending our time numerous times.... that was definitely key as it seems that it was a busy couple weeks for Elizabeth and us. Thanks again. I hope to be back.
Fantastic, flexible, timely, and spot on design!
Love my final design. The whole process was great. Two of the three initial designs were right on target and the third was close. While I think I would have been happy with any of the three designers, Mimi, who we picked was amazing. Really responsive and did a great job of incorporating an existing mirror that we wanted to keep and a few pieces I had already found and was interested in buying for the room. Truly, I could not have asked for someone who was more responsive and detail focused. When I asked questions such as "Is this fabric kid friendly?" she contacted the manufacturer to find out. She also sent me additional ideas for pillows and even kept looking after I said I liked one of the options. In the end she found something even better! Great experience.
This was my first time working with any interior designer. I loved the experience AND my designer was incredibly awesome!! Thanks, Christina Di Vito!!
I have raved about Laurel and Wolf over and over again! I have tried other sites like this and I was not pleased. I got my living room designed with an existing 1980's style, peach, leather couch and my designer still made my living room look sleek and modern. My family has spent so much more time in the living room since our upgrade and we still are not finished implementing all the aspects the designer suggested. I've already started a new project for my bed room through Laurel and Wolf and know I will love it just as much as I loved my living room!
Great experience and highly recommend. Have already referred others. Only negatives for improvement are in the navigation - sometimes it's a little hard to find out where your comments are (general versus attached to item on a style board). Not sure how to fix. Overall, experience with first looks was great except that one designer didn't look at my input at all (including that we had to work around two pieces) and even addressed her email to me by the wrong name! I almost wrote you to ask for another first look because of that but was happy with the other two so decided against it. Leslie, the designer that we chose, was fantastic. We chose her because she did listen to our input and was very collaborative, yet also was not afraid to say or suggest if one of our ideas was not appropriate.
The whole experience was so much fun! I really enjoyed working with Elena, she took all my ideas into account, came up with a practical and aesthetically pleasing design that reflected both my dreams and the reality of having a budget and young kids. She was willing to incorporate my suggestions and expand my design horizons. I can't wait to put it all together!
I wish I had words to say how incredibly fabulous this experience was for me. If I didn't have a job I loved, I would ask to do marketing for Laurel and Wolf. Arianne was so incredibly responsive and helpful, especially for someone like me that didn't know exactly what I wanted. I have told everyone that will listen about my experience, posting on my mom listserv of 2500 moms in Arlington, VA, posting on FB. I haven't been so excited about anything related to a house, EVER! Thank you Laurel and Wolf. I can't wait to see how everything looks when it is all put together!
Allison was great! She understood the idea and sent me great options. The process was very simple and allowed me easy access to a great designer.
April worked tirelessly on my living room and bedroom designs with such patience. Her creative input was on point and she helped me gather all the right pieces for my new apartment. I am so happy I found Laurel & Wolf! I would definitely hire your service again as well as recommend Laurel & Wolf to all my friends and family!! Thanks again!!!
Loved doing this. For a nominal fee it is wonderful to have help pulling together a room. I would do this again! Just have to finish up with this one. Thank you!
This entire experience was truly fantastic! I was recommended to your site by a well known designer since I don't have the type of budget his clientele has, but I can certainly say this was truly impressive. From beginning to end, the experience was perfect, with very attentive designers, and when I chose the designer to work on my project (Luca Pepitone), he was great. He listened to all of my ideas and questions and tailored everything around what I wanted. I love going home to my apartment and have received numerous compliments. I will certainly be using this service again, and I will be recommending it to anyone who wants a fantastic design experience.
Very pleased with my L&W experience. I was dreading the idea of an office makeover, but Alison actually made it painless and fun! I am thrilled with the final result and will use them again! This is the way to go if you want seamless, low maintenance, high quality design service for any room. Strongly recommend their services.
I have been so incredibly impressed with this process and with Luca. His attention to detail and flexibility with the design far exceeded my expectations.
Almost done with my room!! Love it!
I thoroughly enjoyed my experience with L & W. I had worked with a designer in my home in the past, but she came for 4 hours and we made decisions way too quickly in my opinion. I enjoyed cuddling up to my computer every night to leave comments on aspects of my L&W plan, and slowly with Krystyna getting to the right choice on each aspect. Her style was great, and I really appreciated getting a few options for everything to help guide me to understand what I want - really I don't know at first and it doesn't come naturally to me! I would highly recommend your system, and will be back to tackle the next project.
This was a great experience! So positive that I recommended to my sister (she started a project last week) and I started a second as well. The process was efficient and Silvia is a talented, creative, experienced designer who designed a room I can't wait to work in! Thank you!!
I loved it!!! It was easy and really fun to do. This has been a great experience and I love my new room. Kelly Gibbson was the designer I worked with and she has a great eye and is really flexible to any suggestion. She's also really fun to work with.
I love this concept and I love it even more having tried it! My designer was incredibly accessible and was able to answer my questions, hear my opinions, and render an amazing final design that I can not wait to implement! I will definitely be a repeat customer!
I have already recommended this site and Paige to several friends. We had a great experience and as I decorate the rest of my house I will use Laurel & Wolf to help.
Working with Gracie was so easy. She is so responsive and creative. I enjoyed working with her and L&W. Thank you.
This was great, Emily was so patient and really got our taste, and the experience through the website was really convenient and fun.
Great experience, Sarah was extremely responsive and really provided some out of the box ideas which turned out to be great! Definitely recommend to my friends when they renovate.
Samantha Culbreath was an outstanding designer to work with. She was responsive, attentive and highly skilled. I could not be happier with my experience. The Laurel and Wolf system has worked exceptionally well for my needs. I've already recommended it to several other friends. I would definitely use Samantha's services again.
It was a great experience, Nikkie was awesome to work with. Thanks!
I really enjoyed using Laurel & Wolf. My designer was quick to respond to all of my questions. A great experience overall. I cannot wait to start another project.
Annie Sue was amazing! She took so much time on my design and was VERY responsive. She researched brands I requested and made changes along the way to ensure my satisfaction. Each time I received a "new look" or an "update" it was like Christmas morning. All of my girls would gather around and say "Let's see what Annie Sue did today!" I love your service and she gave me the most amazing vision for my living room. Can't wait to pull it all together! Thank you Annie Sue! Will think of you every day we enjoy our new space!
Working with our designer was a great experience! When you're busy it's hard to spend time sifting through your Pinterest boards and measuring everything to crystallize one single vision. Ashley did that for us and did a great job. She did an excellent job of capturing our style.
My designer was so much more thoughtful and thorough than I thought possible for such an affordable price. Her designs are intricate, clever, and completely tailored to my needs. Thank you so much!
I love this site. What a great tool!!!
Thank you, Mila, for all of your hard work! You're brilliant! Loved working with you and Laurel & Wolf. Highly recommending this to friends and family!
Sarah was a delight to work with. I had no idea what I was really wanting and she helped guide me in the right direction. I'm excited to put it all together :) Thank you!
Loved the ease of the process, the answers to all my "pre-signing on" questions as well as the many I had during the process. All designers submitting designs gave great initial ideas from which I could choose "my" designer. Love my living room - it is exactly what I wanted. And I could not have done it without the incredible help of Laurel and Wolf. I'll be using them again for my next room!
This is an incredible service and I am very very happy with how everything turned out. Will definitely recommend!
It was very easy. I liked being able to choose from three options. I had so much trouble coming up with something for this room on my own. This was perfect.
I loved my experience. I am super happy with my room. The only thing is that on my part I did a ton of research to pick out exactly what look I am going for and what I like. I totally recommend it to people who are willing to do that. The other thing is that a few colors looked different in the room than online. Besides that I loved the efficiency and tangible results.
We have used Laurel & Wolf's online interior design services for our living room and our dining room. We recently moved into a new house, so we were excited but overwhelmed by the thought of trying to figure out what kind of furniture and decorations to get. L&W came to the rescue! The process was surprisingly easy, and the designers we've worked with have been very responsive and helpful. It's a multi-step, back-and-forth process of communicating between you and the designer, but the overall result is a look that is YOUR desired style -- unique but just right for you. If you're not happy with it, you can keep working on it until you get it right. L&W has a satisfaction guarantee policy! Now we're working on our 3rd L&W project -- the master bedroom. I'm eagerly awaiting the final design! Hands down, I wholeheartedly recommend Laurel & Wolf!
What a fun way to design a room! I was happily surprised by how easy the whole process was and am thrilled with my final design. Jennifer Hardy was a gem to work with - great taste and very responsive. I would definitely recommend to a friend.
I loved working with Erica and she came up with a gorgeous design for us. Thank you!
Our designer did a lot of extra research to get better prices upon request, and was very responsive through the website. We opted to purchase most of the items and have now received most everything. Some items have arrived damaged or taken longer to ship than was originally advertised - specifically a broken side table on arrival from West Elm and a damaged and delayed sofa from World Market (refused it on delivery). Just passing this along for L&W though it is not their fault of course, just part of the challenge of buying furniture online. Overall we would do this again and recommend it to a friend. I think what we would do differently is be aware that many furniture retailers are working with 12 week timelines. Thanks Lisa for being helpful and flexible. We are very happy with our new space.
I had an incredible experience with my designer on Laurel and Wolf. This exceeded my wildest expectations. The value for cost is outstanding. I have to believe this is the future of home improvement and design. I honestly could not be more pleased. Thank you for offering this service.
This is a fantastic service. Plenty of great designers who work to your taste and budget. Brittany was a huge help. Can't wait to use L&W again.
Laurel & Wolf has been worth every penny! I was feeling overwhelmed by making all of the selections for our complete bathroom remodel, and L&W made me feel confident in the decisions. The designer has been collaborative and offered her true opinions. Thank you!
Sharon was spot on in identifying my style and what my room needed to give it an update and refresh. I can take her design and work with it over time to give my room a much needed style upgrade. Would definitely look to Laurel & Wolf again when I am ready to tackle another room.
My boyfriend and I had such a great experience redesigning our living room! As huge fans of HGTV and the DIY Network, we've been wanting to redecorate for quite some time, but were unsure how and where to start the process. We had so many ideas in our heads and were thrilled to see that our designer, Heather Porlier, was able to incorporate everything. Again thank you for the awesome experience and your above-and-beyond customer service. We look forward to working with you soon and will absolutely recommend your services to all of our friends!
I absolutely love the site! Everyone from customer service to personal shopper to designer was super responsive and great to work with. I would definitively recommend Laurel & Wolf to anyone and will certainly be back for another project.
I had such a great experience and already told a few of my friends about L&W!! I haven't moved into my condo yet (next month). But I am excited to move in and put the design together! Thank you for bringing such a wonderful service to the public at such a reasonable price!!
Had a great experience with Laurel & Wolf. My designer's name was Kristin. Within a fraction of the proposed budget, she came up with a floor plan to rearrange the furniture, add new accents and colors, and change up some of the artwork to completely transform the room. One of the best $500 I've ever spent, or at least the most fun! L&W has been super responsive to any questions, following up long after my design was done to make sure that I could find the products (even sourcing new products when one was discontinued), etc. They have been terrific! I love their shopping service, too, where you give them a credit card and they order it, track it, ship it to you, etc. Great service and super nice staff!
This is a great service and my wife and I have recommended it to quite a few people already. The diversity of the designers and ability to have one that matches your style was terrific. This made what can be a very intimidating and expensive process fun and cost effective - all tailored to exactly what we needed.
I used Laurel & Wolf to redesign my living room. We got a great variety of boards to choose from and the designer we ultimately went with was great -- quick to respond, listened to our feedback and delivered a great product that was well within budget. My husband even enjoyed the process. I will definitely use this for the other rooms in our house.
What a wonderful service! I was so pleased with the variety of design boards I had to choose from. The designer I selected was professional, timely, and had some great ideas and options for me. Final design board looks amazing and I can't wait to put it together! The online format was so easy and extremely convenient. Thank you!
Amazing designer that patiently worked with me on my needs. It was a fun and efficient service and I look forward having Laurel & Wolf help me with other spaces in my house.
I had an amazing experience with my designer and people love our newly designed space. It's what I call a Victory!
Working with Laurel & Wolf exceeded my expectations. The designer was professional and completely understood my desired aesthetic and goals for each room. She gladly made changes (which weren't many!). I was concerned that I wouldn't get a personal experience working online, but I am so pleased with the entire process. It was easy, conversational and collaborative. I've just ordered my new furniture and accessories, and I cannot wait to put these rooms together. I wish I had another room in my house that needed the expert advice of a Laurel & Wolf designer!
Laurel & Wolf was a great find when redecorating our condo. I had some ideas, but they are especially useful to take your concepts and make them reality. Even my personal friends that were interior designers were out of my budget and this was a godsend. Because it's all online, it helped a lot that I could do this on my time (usually after 11pm) and still made progress. Love it - will use it on the next project.
We used Laurel & Wolf to redesign our high end children's boutique, Pumpkinheads. We couldn't be happier with the innovative design that was created. HIGHLY recommend this fabulous design tool to anyone that wants top notch innovative interior design at an unbelievable price! This product is GENIUS and we will be using for our home...it is exactly what people need to just get some expert help to pull things together and create a look!
I am so happy I stumbled upon Laurel & Wolf (while reading Business Insider’s “25 hot Los Angeles startups you need to watch” article). I’d been wanting to redesign my bedroom for over a year, but had no clue where to start. When I learned about L&W, I thought, “This is it!! Just what I need!” I quickly signed up and enjoyed the process of filling out my design brief, sharing my vision and mapping out what I wanted (the best I could). I was shocked when I saw the design board from Black Cat Interiors, who I ultimately selected. She totally got me!! It’s like she went into my head, figured out who I was and exactly what I wanted and was able to bring it to life. It was perfect!! I may have even squealed out loud with excitement! Yekii, the designer, was quick to respond, easy to work with and eager to ensure I was happy. I ended up buying everything she suggested and am thrilled with my new room!! I’ve already referred a handful of people to L&W, one of whom selected Yekii as well and is overjoyed with her new room too! Definitely recommend using L&W!
Style boards are great and very fun to get the process started. Lucinda is incredible and beautifully designed 5 bedrooms in my home!
Excellent communication with the designers! They were quick to return with solutions and feedback. The process of going through the designers and picking one was smooth. We made a pick and ended with a really great design.
The experience was easy and fun! Although my designer and I differed on the final style board, it gave me great insight into my space.
Had an awesome experience with Elizabeth Thannum. She put together a really thoughtful plan - paid attention to the big items and layout as well as the details. She was responsive, fast and stayed within budget too! She made it a lot of fun and was really patient with my numerous questions. Thanks Elizabeth! | 2019-04-25T01:04:42Z | https://www.laurelandwolf.com/reviews?ref-id=Reviews&ref-location=%2Fpricing |
P.Mean >> Category >> Sample size justification (created 2007-08-09).
These pages provide formulas and advice for justifying the sample size in a research study. Some of these pages describe the pragmatic and ethical concerns about sample size. Also see Category: Hypothesis testing, Category: Post hoc power, Category: Small sample size issues. I also have a blog, and you might want to look at my blog entries with the sample size tag.
51. P.Mean: How sample size calculations are reported in the literature (created 2012-02-23). I am preparing a webinar on sample size calculations and wanted to examine some examples in the published literature. There were lots of interesting examples in an open-access journal called Trials. I only included a few examples in my webinar, but I wanted to save the examples I found here in case I want to expand the talk.
50. P.Mean: Is sample size justification really different for animal studies compared to human studies? (created 2012-01-06). Dear Professor Mean, I've spent my entire career (so far) in developing statistical analysis plans for human subjects research. Recently, a neuroscientist who performs experiments on rats asked me to assist in a power analysis. My conversation with him reminded me of that YouTube video (Biostatistics vs Lab Research): "I think I only need 3 subjects..." In his case, he seemed fixated on needing only 6 rats per group---which is what he had always done in the past. Are the rules for sample size justification different for animal studies than for human studies?
48. The Monthly Mean: I want to calculate power, but I don't have a standard deviation for the formula (March/April 2011). Someone was asking for assistance on calculating power. A research agency was willing to lend some of its data for a secondary data analysis on a large data set (1,314 observations), but it asked anyone requesting this data to demonstrate that their hypothesis had adequate power before it would share the data. There were publications based on this data, but using different endpoints, so the person could not get the standard deviation needed for the power formula.
45. P.Mean: Fighting the claim that any size difference is clinically important (created 2010-08-05). When working with people to select an appropriate sample size, it is important to establish the minimum clinically important difference (MCID). This is a difference such that any value smaller would be clinically trivial, but any value larger would be clinically important. I get told quite often that any difference that might be detected is important. I could be flippant here and then tell them that their sample size is now infinite and my consulting rate is proportional to the sample size, but I don't make flippant comments (out loud, at least). Here's how I might challenge such a claim.
44. P.Mean: The futility of small sample sizes for evaluating a binary outcome (created 2010-06-16). I'm helping out with a project that involves a non-randomized comparison of two groups of patients. One group gets a particular anesthetic drug and the other group does not. The researcher wants to compare rates of hypotension, respiratory depression, apnea, and hypoxia. I suggested using continuous outcomes like O2 saturation levels rather than discrete events like hypoxia, but for a variety of reasons, they cannot use continuous outcomes. Their original goal was to collect data on about 20 patients in each group.
43. P.Mean: An interesting alternative to power calculations (created 2010-06-09). Someone on the MedStats Internet discussion group mentioned an alternative to power calculations called accuracy in parameter estimation (AIPE). It looks interesting. Here are some relevant references.
42. P.Mean: Minimum sample size needed for a time series prediction (created 2010-06-08). Someone asked what the minimum sample size was that is needed in a time series analysis model to forecast future observations. Strictly speaking, you can forecast with two observations. Draw a straight line connecting the two points and then extend that line as far as you want into the future. But you wouldn't want to do that. So a better question might be what is the minimum number of data points that you would need in order to provide a good forecast of the future.
41. P.Mean: Power calculations for comparison of Poisson counts across two groups (created 2010-01-11). Suppose you want to compare Poisson count variables across two groups. How much data would you need to collect? It's a tricky question and there are several approaches that you can consider.
39. P.Mean: Accounting for clusters in an individually randomized clinical trial (created 2009-10-13). I have a clinical trial with clusters (the clusters are medical practices), but unlike a cluster randomized trial, I am able to randomize within each cluster. From what I've read about this, I can provide an estimate for the Intraclass Correlation Coefficient (ICC) that will decrease my sample size. But I'm uncomfortable doing this. Can you help?
38. The Monthly Mean: Power for a three arm study (November 2009) and P.Mean: Power for a three arm experiment (created 2009-09-14). "I want to compute power for a three arm experiment. The outcome variable is binary (yes/no). I know how to compute power for a two-arm experiment already, but have no idea how to handle the third arm."
37. P.Mean: The first three steps in selecting an appropriate sample size (created 2009-07-20). I got an email last week from a client wanting to start a new research project looking at relationships between parenting beliefs and childhood behaviors. The description of the sorts of things to examine was quite elaborate, and it ended with the question "how many families would we need to have any significant differences if they exist?" Unfortunately, all the elaborate information provided did not include the information I would need to answer this question. Justifying a sample size usually involves three steps.
36. P.Mean: Example of power calculation for a repeated measures design (created 2008-10-19). I was asked how to calculate power for an interaction term in a repeated measures design. There were two groups (treatment and control), and subjects in each group were measured at four time points. The interaction involving the third time point was considered most critical.
35. P.Mean: Power calculations for repeated measures designs (created 2008-09-25). I've been struggling with a design/analysis question related to repeated measures design and power analysis. Can you help?
34. P.Mean: Source for sample size formula (created 2008-08-20). Hello, I am looking at your page on sample size calculation, and I'm curious as to where you got the equation shown there. I can't seem to find that exact form in Cohen's book, nor does it appear anywhere else that I've looked. Would you happen to know its original source?
33. P.Mean: Where did that standard deviation come from? (created 2008-07-09). Someone wanted some help with a power calculation. I gave the standard spiel that you need three things: a research hypothesis, an estimate of the standard deviation of your outcome measure, and the minimum clinically important difference. This was for a study looking at 10 exposed patients (recent spider bites) and 30 control patients. I got an article back in email very quickly, and while it was interesting to read, it wasn't quite what I needed.
Peter Bacchetti, Leslie E. Wolf, Mark R. Segal, Charles E. McCulloch. Bacchetti et al. Respond to "Ethics and Sample Size--Another View". Am. J. Epidemiol. 2005;161(2):113. Excerpt: "We thank Dr. Prentice (1) for taking the time to respond to our article (2). We explain here why we do not believe that he has provided a meaningful challenge to our argument. We see possible objections related to unappealing implications, use of power to measure value, implications for series of trials, how value per participant is calculated, and participants' altruistic satisfaction." [Accessed July 7, 2010]. Available at: http://aje.oxfordjournals.org.
John S. Uebersax. Bayesian Unconditional Power Analysis. Description: When you perform a traditional power calculation, you need to specify the size of the difference that you want to detect. Sometimes this represents the minimum difference that is clinically relevant and sometimes it is a difference that is observed in a previous research study. If the latter is chosen, you need to account for sampling error in the previously observed difference. Otherwise the estimated power is biased, often biased downward. This website was last verified on 2009-11-15. URL: http://www.john-uebersax.com/stat/bpower.htm.
David A. Schoenfeld. Considerations for a parallel trial where the outcome is a time to failure. Description: This web page calculates power for a survival analysis. You need to specify the accrual interval, the follow-up interval, and the median time to failure in the group with the smallest time to failure. Then specify two of the following three items: power, total number of patients, and the minimal detectable hazard ratio. In an exponential model the last term is equivalent to the ratio of median survival times. [Accessed June 16, 2010]. Available at: http://hedwig.mgh.harvard.edu/sample_size/time_to_event/para_time.html.
Peter Bacchetti. Current sample size conventions: Flaws, harms, and alternatives. BMC Medicine. 2010;8(1):17. Abstract: "BACKGROUND: The belief remains widespread that medical research studies must have statistical power of at least 80% in order to be scientifically sound, and peer reviewers often question whether power is high enough. DISCUSSION: This requirement and the methods for meeting it have severe flaws. Notably, the true nature of how sample size influences a study's projected scientific or practical value precludes any meaningful blanket designation of <80% power as "inadequate". In addition, standard calculations are inherently unreliable, and focusing only on power neglects a completed study's most important results: estimates and confidence intervals. Current conventions harm the research process in many ways: promoting misinterpretation of completed studies, eroding scientific integrity, giving reviewers arbitrary power, inhibiting innovation, perverting ethical standards, wasting effort, and wasting money. Medical research would benefit from alternative approaches, including established value of information methods, simple choices based on cost or feasibility that have recently been justified, sensitivity analyses that examine a meaningful array of possible findings, and following previous analogous studies. To promote more rational approaches, research training should cover the issues presented here, peer reviewers should be extremely careful before raising issues of "inadequate" sample size, and reports of completed studies should not discuss power. SUMMARY: Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives." [Accessed July 7, 2010]. Available at: http://www.biomedcentral.com/1741-7015/8/17.
Scott Aberegg, D Roxanne Richards, James O'Brien. Delta inflation: a bias in the design of randomized controlled trials in critical care medicine. Critical Care. 2010;14(2):R77. Abstract: "INTRODUCTION: Mortality is the most widely accepted outcome measure in randomized controlled trials of therapies for critically ill adults, but most of these trials fail to show a statistically significant mortality benefit. The reasons for this are unknown. METHODS: We searched five high impact journals (Annals of Internal Medicine, British Medical Journal, JAMA, The Lancet, New England Journal of Medicine) for randomized controlled trials comparing mortality of therapies for critically ill adults over a ten year period. We abstracted data on the statistical design and results of these trials to compare the predicted delta (delta; the effect size of the therapy compared to control expressed as an absolute mortality reduction) to the observed delta to determine if there is a systematic overestimation of predicted delta that might explain the high prevalence of negative results in these trials. RESULTS: We found 38 trials meeting our inclusion criteria. Only 5/38 (13.2%) of the trials provided justification for the predicted delta. The mean predicted delta among the 38 trials was 10.1% and the mean observed delta was 1.4% (P<0.0001), resulting in a delta-gap of 8.7%. In only 2/38 (5.3%) of the trials did the observed delta exceed the predicted delta and only 7/38 (18.4%) of the trials demonstrated statistically significant results in the hypothesized direction; these trials had smaller delta-gaps than the remainder of the trials (delta-gap 0.9% versus 10.5%; P<0.0001). For trials showing non-significant trends toward benefit greater than 3%, large increases in sample size (380% - 1100%) would be required if repeat trials use the observed delta from the index trial as the predicted delta for a follow-up study. 
CONCLUSIONS: Investigators of therapies for critical illness systematically overestimate treatment effect size (delta) during the design of randomized controlled trials. This bias, which we refer to as "delta inflation", is a potential reason that these trials have a high rate of negative results." [Accessed June 9, 2010]. Available at: http://ccforum.com/content/14/2/R77.
Peter Bacchetti, Leslie E. Wolf, Mark R. Segal, Charles E. McCulloch. Ethics and Sample Size. Am. J. Epidemiol. 2005;161(2):105-110. Abstract: "The belief is widespread that studies are unethical if their sample size is not large enough to ensure adequate power. The authors examine how sample size influences the balance that determines the ethical acceptability of a study: the balance between the burdens that participants accept and the clinical or scientific value that a study can be expected to produce. The average projected burden per participant remains constant as the sample size increases, but the projected study value does not increase as rapidly as the sample size if it is assumed to be proportional to power or inversely proportional to confidence interval width. This implies that the value per participant declines as the sample size increases and that smaller studies therefore have more favorable ratios of projected value to participant burden. The ethical treatment of study participants therefore does not require consideration of whether study power is less than the conventional goal of 80% or 90%. Lower power does not make a study unethical. The analysis addresses only ethical acceptability, not optimality; large studies may be desirable for other than ethical reasons." [Accessed July 7, 2010]. Available at: http://aje.oxfordjournals.org/cgi/content/abstract/161/2/105.
Johnston M, Hays R, Hui K. Evidence-based effect size estimation: An illustration using the case of acupuncture for cancer-related fatigue. BMC Complementary and Alternative Medicine. 2009;9(1):1. Available at: http://www.biomedcentral.com/1472-6882/9/1 [Accessed February 24, 2009].
Ross Prentice. Invited Commentary: Ethics and Sample Size--Another View. Am. J. Epidemiol. 2005;161(2):111-112. Excerpt: "In their article entitled, "Ethics and Sample Size," Bacchetti et al. (1) provide a spirited justification, based on ethical considerations, for the conduct of clinical trials that may have little potential to provide powerful tests of therapeutic or public health hypotheses. This perspective is somewhat surprising given the longstanding encouragement by clinical trialists and bioethicists in favor of large trials (2-4). Heretofore, the defenders of smaller trials have essentially argued only that small, underpowered trials need not be unethical if well conducted given their contribution to intervention effect estimation and their potential contribution to meta-analyses (5, 6). However, Bacchetti et al. evidently go further on the basis of certain risk-benefit considerations, and they conclude: "In general, ethics committees and others concerned with the protection of research subjects need not consider whether a study is too small.... Indeed, a more legitimate ethical issue regarding sample size is whether it is too large" (1, p. 108)." [Accessed July 7, 2010]. Available at: http://aje.oxfordjournals.org.
Wei-Jiun Lin, Huey-Miin Hsueh, James J. Chen. Power and sample size estimation in microarray studies. BMC Bioinformatics. 2010;11(1):48. Abstract: "BACKGROUND: Before conducting a microarray experiment, one important issue that needs to be determined is the number of arrays required in order to have adequate power to identify differentially expressed genes. This paper discusses some crucial issues in the problem formulation, parameter specifications, and approaches that are commonly proposed for sample size estimation in microarray experiments. Common methods for sample size estimation are formulated as the minimum sample size necessary to achieve a specified sensitivity (proportion of detected truly differentially expressed genes) on average at a specified false discovery rate (FDR) level and specified expected proportion (pi1) of the true differentially expression genes in the array. Unfortunately, the probability of detecting the specified sensitivity in such a formulation can be low. We formulate the sample size problem as the number of arrays needed to achieve a specified sensitivity with 95% probability at the specified significance level. A permutation method using a small pilot dataset to estimate sample size is proposed. This method accounts for correlation and effect size heterogeneity among genes. RESULTS: A sample size estimate based on the common formulation, to achieve the desired sensitivity on average, can be calculated using a univariate method without taking the correlation among genes into consideration. This formulation of sample size problem is inadequate because the probability of detecting the specified sensitivity can be lower than 50%. On the other hand, the needed sample size calculated by the proposed permutation method will ensure detecting at least the desired sensitivity with 95% probability. The method is shown to perform well for a real example dataset using a small pilot dataset with 4-6 samples per group. 
CONCLUSIONS: We recommend that the sample size problem should be formulated to detect a specified proportion of differentially expressed genes with 95% probability. This formulation ensures finding the desired proportion of true positives with high probability. The proposed permutation method takes the correlation structure and effect size heterogeneity into consideration and works well using only a small pilot dataset." [Accessed February 1, 2010]. Available at: http://www.biomedcentral.com/1471-2105/11/48.
K Akazawa, T Nakamura, Y Palesch. Power of logrank test and Cox regression model in clinical trials with heterogeneous samples. Stat Med. 1997;16(5):583-597. Abstract: "This paper evaluates the loss of power of the simple and stratified logrank tests due to heterogeneity of patients in clinical trials and proposes a flexible and efficient method of estimating treatment effects adjusting for prognostic factors. The results of the paper are based on the analyses of survival data from a large clinical trial which includes more than 6000 cancer patients. Major findings from the simulation study on power are: (i) for a heterogeneous sample, such as advanced cancer patients, a simple logrank test can yield misleading results and should not be used; (ii) the stratified logrank test may suffer some power loss when many prognostic factors need to be considered and the number of patients within stratum is small. To address the problems due to heterogeneity, the Cox regression method with a special hazard model is recommended. We illustrate the method using data from a gastric cancer clinical trial." [Accessed June 16, 2010]. Available at: http://www3.interscience.wiley.com/journal/9725/abstract.
Demidenko E. Power/Sample Size Calculation for Logistic Regression with Binary Covariate(s). Available at: http://www.dartmouth.edu/~eugened/power-samplesize.php [Accessed April 9, 2010].
Frank E Harrell Jr, Kerry L Lee, Robert M Califf, David B Pryor, Robert A Rosati. Regression modelling strategies for improved prognostic prediction. Statistics in Medicine 1984: 3; 143-152. Description: This article uses a simulation study of stepwise logistic regression to demonstrate that it performs poorly when the ratio of events to candidate independent variables is less than 10 to 1.
S. J. Walters. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36. Health Qual Life Outcomes 2004: 2; 26. [Medline] [Abstract] [Full text] [PDF]. Description: This article proposes three formulas for estimating sample size as well as a bootstrap method and then compares their performance using a quality of life outcome, SF-36.
Peter Bacchetti, Jacqueline Leung. Sample Size Calculations in Clinical Research : Anesthesiology. Anesthesiology. 2002;97(4):1028-1029. Excerpt: "We write to make the case that the practice of providing a priori sample size calculations, recently endorsed in an Anesthesiology editorial, is in fact undesirable. Presentation of confidence intervals serves the same purpose, but is superior because it more accurately reflects the actual data, is simpler to present, addresses uncertainty more directly, and encourages more careful interpretation of results." [Accessed July 7, 2010]. Available at: http://journals.lww.com/anesthesiology/Fulltext/2002/10000/Sample_Size_Calculations_in_Clinical_Research.50.aspx.
Kevin L. Delucchi. Sample Size Estimation in Research With Dependent Measures and Dichotomous Outcomes. Am J Public Health. 2004;94(3):372-377. Abstract: "I reviewed sample estimation methods for research designs involving nonindependent data and a dichotomous response variable to examine the importance of proper sample size estimation and the need to align methods of sample size estimation with planned methods of statistical analysis. Examples and references to published literature are provided in this article. When the method of sample size estimation is not in concert with the method of planned analysis, poor estimates may result. The effects of multiple measures over time also need to be considered. Proper sample size estimation is often overlooked. Alignment of the sample size estimation method with the planned analysis method, especially in studies involving nonindependent data, will produce appropriate estimates." Available at: http://ajph.aphapublications.org/cgi/content/full/94/3/372.
Lenth RV. Some Practical Guidelines for Effective Sample Size Determination. The American Statistician 2001 (August), 55(3); 187-193. [Abstract] [PDF] Description: This article offers some practical suggestions on how to elicit an effect size and find the right standard deviation. It explains what to do if budget limitations restrict your sample size and criticizes the use of standardized effect sizes and post hoc power.
Steve Shiboski. Table of Calculators for Survival Outcomes. Description: This webpage highlights several different programs for power calculations for survival analysis. It includes a Java applet by Marc Bacsafra and SAS macros by Joanna Shih. [Accessed June 16, 2010]. Available at: http://cct.jhsph.edu/javamarc/index.htm.
32. Stats: Too much power and precision? (January 9, 2008). There was a discussion on EDSTAT-L about studies with too much power and precision. You can indeed have too much power/precision, and here is a pragmatic example.
31. Stats: Justifying the sample size for a microarray study (August 9, 2007). I'm helping out with a grant proposal that is using microarrays for part of the analysis. A microarray is a system for quantitative measurement of circulating mRNA in human, animal, or plant tissue. A microarray will typically measure thousands or tens of thousands of different mRNA sequences. An important issue for this particular grant (and many grants involving microarray data) is how to justify the sample size. Here are a few references that I will use to develop such a justification.
30. Stats: What is an adequate sample size for establishing validity and reliability? (April 9, 2007). Someone from Mumbai, India wrote in asking whether a sample of 163 was sufficiently large for a study of reliability and validity. This was for a project that was already done, and this person was worried that someone would complain that 163 is too small.
29. Stats: IRB review of a pilot study (March 26, 2007). Dear Professor Mean: I am the new chair of the IRB at a county hospital. Many of the studies we review are pilot studies with small samples. I have been trying to locate criteria for the scientific review of pilot studies, but have not found a consensus in the literature that I have seen. Is a pilot study merely a "dry run" of the procedures that will be used in a later, larger-scale study? Or, is it reasonable for the IRB to demand that the investigator provide specific criteria for determining whether the pilot has been a success? And, should the IRB furthermore demand that specific hypotheses be formulated? My impression is that many investigators declare their studies to be pilots in order to avoid more rigorous scrutiny of their proposals.
28. Stats: Do your own power and sample size calculations (January 30, 2007). Someone asked me for some power calculations and the problem was stated very tersely and completely: "Alpha .05, Power 0.8. What is sample size to detect an outcome difference of 20% vs 30% for an adverse event. Thank you." Usually people have difficulty in elaborating the conditions of the power or sample size calculation, and I am always glad to help with that process. But if you already know the conditions, you can find very nice web sites that will do power calculations for you.
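The question in that entry is fully specified, so it makes a good worked example. The sketch below uses the standard two-sided normal-approximation formula for comparing two independent proportions (the entry doesn't say which formula the web calculators use, so treat this as one common convention, not the only answer):

```python
from math import ceil, sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for detecting p1 vs p2 with a two-sided
    z-test, using the pooled/unpooled normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2                       # pooled proportion under H0
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Alpha .05, power 0.8, adverse event rates of 20% vs 30%:
print(n_per_group_two_proportions(0.20, 0.30))  # prints 294
```

Slightly different answers (roughly 290-300 per group) come from continuity-corrected or arcsine variants, which is why different online calculators rarely agree to the last patient.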
27. Stats: Variable cluster sizes and their impact on sample size calculations (January 3, 2007). A recently published article in the International Journal of Epidemiology discusses sample size requirements for cluster randomized trials when the size of the cluster itself varies. The authors develop an approximation that uses the coefficient of variation (CV) of the distribution of cluster sizes.
26. Stats: Be sure to account for dropouts in your sample size calculation (December 29, 2006). I helped out a colleague with an NIH grant, and when the critique came back, it mentioned two issues that I should have been aware of. First, they pointed out the need for an intention-to-treat analysis strategy. Second, they noted the long duration of the study, with a full year of evaluations on any particular patient, and seemed bothered that we presumed that 100% of the patients would complete the full study.
25. Stats: Is a 10% shortfall in sample size critical? (October 23, 2006). Dear Professor Mean, I'm reviewing a paper where they did a power calculation based on 60 patients per group, but in the research study, they ended up only getting 55/58 per group. Since their sample size was much less than what they originally planned for, does this mean that the study had inadequate power?
24. Stats: R libraries for sample size justification (July 28, 2006). There are a lot of good commercial and free sources for sample size justification. Note that most people use the term power calculation, but there is more than one way to justify a sample size, so I try to avoid the term "power calculation" as being too restrictive. Anyway, I just noted an email on the MedStats list that suggests two R libraries.
23. Stats: How many charts should I pull? (March 30, 2006). I got a question from someone doing a quality review. She needs to pull a certain number of medical records out of 892 and see whether the doctors followed the clinical guidelines properly. The question is how to determine the proper number of charts to pull.
22. Stats: Sample size for a binomial confidence interval (October 3, 2005). Someone asked me for some help with a homework question. I hesitate to offer too much advice in these situations because I don't want to disrupt the teacher's efforts to get the students to think on their own.
21. Stats: Sample size for a binary endpoint (August 12, 2005). Someone sent me an email asking for the sample size needed to detect a 10% shift in the probability of recurrence of an event after one of two different surgical procedures is done.
20. Stats: Confidence interval for a correlation coefficient (July 11, 2005). In many exploratory research studies, the goal is to examine associations among multiple demographic variables and some outcome variables. How can you justify the sample size for such an exploratory study? There are several approaches, but one simple way that I often use is to show that any correlation coefficients estimated by this research study will have reasonable precision. It may not be the most rigorous way to select a sample size, but it is convenient and easy to understand.
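The precision argument in that entry is easy to make concrete with the Fisher z-transform, the usual basis for a correlation confidence interval (the function name and example values here are illustrative, not taken from the original page):

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def correlation_ci(r, n, conf=0.95):
    """Confidence interval for a correlation coefficient via the
    Fisher z-transform: z = atanh(r), with SE = 1/sqrt(n - 3)."""
    z_crit = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half_width = z_crit / sqrt(n - 3)
    z = atanh(r)
    return tanh(z - half_width), tanh(z + half_width)

# With n = 100 and an observed r near zero, the 95% CI is about +/- 0.20,
# which you can cite in a proposal as the worst-case precision.
lo, hi = correlation_ci(0.0, 100)
```

Running the planned n through this function for a few plausible values of r gives exactly the kind of precision statement the entry describes.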
19. Stats: Sample size calculation for a nonparametric test (March 8, 2005). I got an email inquiry about how to calculate power for a Wilcoxon signed ranks test. I don't have a perfect reference for this, but I do have a brief discussion on sample size calculations for the Mann Whitney U test as part of my pages on selecting an appropriate sample size. The same considerations would apply for the Wilcoxon test.
18. Stats: Unequal sample sizes (November 24, 2004). For some reason, it seems to unnerve people when the sample sizes in the treatment and control group are not the same. They worry about whether the tests that they would run on the data would be valid or not. As a general rule, there is no reason that you cannot analyze data with unequal sample sizes.
17. Stats: Ratio of observations to independent variables (November 17, 2004). A widely quoted rule is that you need 10 or 15 observations per independent variable in a regression model. The original source of this rule of thumb is difficult to find. I briefly commented on this in an earlier weblog entry, but here is a more complete elaboration.
16. Stats: Sample size for an ordinal outcome (September 22, 2004). Someone asked me for some help with calculating an appropriate sample size for a study comparing two treatments, where the outcome measure is ordinal (degree of skin toxicity: none, slight, moderate, severe). It turns out that an excellent discussion of the various approaches appears in a recent journal article with full free text on the web.
15. Stats: Sample size calculations in studies with a baseline (July 23, 2004). Many research studies evaluate all patients at baseline and then randomly assign the patients to groups, conduct the interventions, and then re-evaluate them at the end of the study. The sample size calculations for this type of study are a bit tricky.
14. Stats: Sample size for a diagnostic test (July 5, 2004). Someone asked me how to determine the sample size for a study involving a diagnostic test. It seems like a tricky thing, because most studies of diagnostic tests don't have a formal hypothesis. What you need to do instead is to specify a particular statistic that you are interested in estimating and then select a sample size so that the confidence interval for this estimate is reasonably precise.
13. Stats: Sample size for cluster randomized trials (May 27, 2004). One of my favorite people to work with, Vidya Sharma, was asking me how to compute the sample size in a cluster randomized trial. I had started to write a web page about this, but never found the time to finish it. A cluster randomized trial selects several large groups of patients and then randomly assigns a treatment to all of the patients within a group. A cluster randomized trial requires a larger sample size than a simple randomized trial.
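The inflation described in the entry above is conventionally quantified by the design effect, 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intraclass correlation; the entry does not state this formula, so this is a standard-textbook sketch rather than the author's own derivation:

```python
def design_effect(m, icc):
    """Standard cluster-trial inflation factor: 1 + (m - 1) * icc.

    Multiply the simple-randomization sample size by this factor
    to get the cluster-randomized requirement.
    """
    return 1 + (m - 1) * icc

# Even a modest ICC of 0.05 with 20 patients per cluster roughly
# doubles the required sample size:
print(design_effect(20, 0.05))  # roughly 1.95
```

The factor grows with both cluster size and intraclass correlation, which is why cluster trials with large, homogeneous clusters are so expensive.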
12. Stats: Sample size calculation example (May 20, 2004). I received a question from Hong Kong about how to double check a power calculation in a paper by Tugwell et al. in the 1995 NEJM. In the paper, they state that "With the tender-joint count used as the primary outcome, a sample of 75 patients per group was needed in order to have a 5 percent probability of a Type I error and a power of 80 percent to detect a difference of 5 tender joints between groups, with a standard deviation of 9.5, and to allow for a 25 percent dropout rate."
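As a sanity check on the quoted figures, the standard normal-approximation formula for a two-sample comparison nearly reproduces the stated 75 per group; this is not necessarily the authors' exact method (they may have used t-quantiles or different rounding), just the textbook approximation applied to their numbers:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Normal-approximation sample size per group for a two-sample
    comparison of means, inflated for anticipated dropout:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 / (1 - dropout)."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return ceil(n / (1 - dropout))

# Tugwell et al. figures: difference of 5 tender joints, sd 9.5,
# alpha 0.05, power 0.80, 25% dropout
print(n_per_group(5, 9.5))               # 57 before dropout
print(n_per_group(5, 9.5, dropout=0.25)) # 76, close to the stated 75
```

The one-patient gap from the published 75 is the kind of discrepancy that rounding conventions or a t-based calculation would easily explain.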
11. Stats: Sample size for a survival data model (May 13, 2004). I got an email from Japan recently with an interesting question. The question was about an analysis of mortality for children under 5 years of age. There were 1,992 patients and 272 of them died. I was asked if this was an adequate sample size and whether there was a problem because the median survival time was unavailable for some of the subgroups.
10. Stats: Cluster randomization (May 9, 2003). This appears to be a duplicate of the May 27, 2004 weblog entry.
These records were created by Mrs. Wendell B. Folsom of the New Hampshire Chapter, D.F.P.A. (Daughters of Founders and Patriots of America) in 1938 and donated to the New Hampshire Historical Society. This cemetery is still in use so burials after 1938 are not included here. None of these records have been cross-checked at the actual cemetery, so we cannot vouch for their accuracy. In addition, typographical errors may have been made by us in converting them to the web.
This cemetery is along Post Road.
Rev. Henry Allen of Falmouth, Nova Scotia died in this town Feb 2 1784 in the 35th year of his age. Stone erected by his nephew Joseph Allen.
Charles C. Barton, Jan 9 1814--Sept 30 1905.
Dorcas his wife, Dec 24 1821--March 24 1898.
John Martin his son, 1840--1841.
Mary D. wife of Sylvester Jackson, 1812--1884.
Anna T. wife of Joseph Roberts, 1819--1896.
Mrs. Jane Batchelder d. Aug 8 1824 ae 29.
Martha M. his wife, 1844-1924.
Thomas H. Berry, Feb 8 1841--Sept 25 1913.
Rose E. his wife, 1875--1899.
Emma E. Locke his wife, 1867----- .
William J. Breed, May 17 1846-May 20 1911. G. A. R.
Lydia A. Breed, Sept 22 1848--Feb 25 1902.
Harriet Briggs dau. of Jeremiah and Polly Jenness d. Nov 16 1881 ae 79 yrs.
Capt. Simon Brown d. July 20 1831 ae 87 yrs.
Mrs. Mary wife of Capt Simon Brown d. Sept 23 1837 ae 90 yrs 8 ms.
Simon Brown d. Dec 22 1890 ae 86 yrs 3 ms.
Emily wife of Simon Brown d. Sept 22 1880 ae 74 yrs.
Jeremiah Brown d. Feb 12 1875 ae 64 yrs.
Betsey wife of Jeremiah Brown d. Feb 16 1887 ae 81 yrs 5 ms.
Oliver Brown d. Feb 20 1872 ae 61 yrs 1 mo.
Elizabeth M. wife of Oliver Brown d. Jan 27 1898 ae 81 yrs 8 ms 20 ds.
Sarah E. d. Oct 11 1840 ae 19 ms.
Freeman O. d. Oct 18 1840 ae 3 yrs 1 mo.
Helen F. d. May 23 1847 ae 11 ms.
Children of Oliver and Elizabeth Brown.
Alice Emma wife of Horace O. Brown d. Nov. 13 1885 ae 25 yrs 11 ms.
Nathan Brown b. March 20 1814 d. May 1 1898.
Elizabeth A. wife of Nathan Brown d. Jan 30 1892 ae 68 yrs 10 ms.
Clarissa wife of Nathan Brown Jr. d. May 23 1858 ae 45 yrs 6 ms.
Elizabeth H. wife of Capt. Jonah Brown d. July 8 1818 ae 26 yrs.
Harriet E. Brown d. Feb 12 1925 ae 79 yrs 4 ms 15 ds.
Jenness Brown d. Sept 17 1876 ae 68 yrs.
Lydia Ward wife of Jenness Brown d. Feb 2 1876 ae 58 yrs.
Emma Susan dau. of Jenness and Lydia W. Brown d. in Newburyport, Mass. Aug 23 1856 ae 11 ms 4 ds.
Caroline Ward dau. of Jenness and Lydia W. Brown d. Newburyport, Mass. June 20 1851 ae 7 wks 1 day.
John Brown d. Aug 23 1825 ae 50 yrs.
Polly wife of John Brown d. May 9 1859 ae 81 yrs.
Mary Brown d. March 28 1810 ae 40 yrs.
James A. Bunker, March 25 1853--March 6 1911.
William H. Burleigh, May 4 1839--Jan 14 1907.
George W. Burton d. May 1 1871 ae 30 yrs 8 ms.
Lucy E. wife of Timothy Caswell d. Dec 19 1908 ae 88 yrs 3 ms 17 ds.
Dea. Samuel Chapman d. March 29 1876 ae 84 yrs.
Martha W. wife of Dea. Samuel Chapman d. Nov 17 1848 ae 53 yrs.
Mary W. wife of Dea. Samuel Chapman d. Oct 18 1879 ae 73 yrs 6 ms.
Elizabeth J. wife of Leonard W. Chapman and dau. of Dea. S. and M. W. Chapman d. Nov 14 1848 ae 24 yrs 7 ms.
Leander son of Leonard W. and Elizabeth J. Chapman d. Jan 13 1849 ae 3 ms.
David Chapman d. Feb 9 1826 ae 30 yrs.
Abigail wife of David Chapman d. Dec. 13 1878 ae 83 yrs 10 ms.
John Chapman, May 25 1834--April 15 1900.
Sarah E., Jan 15 1832--Nov 24 1922.
Lucretia A., Aug 13 1839--Jan 11 1899.
Rosamond M., Dec 13 1842----- .
Thomas H. Chapman d. Aug 20 1866 ae 41 yrs 4 ms.
Martha Ann dau. of John and Leocady D. Chapman d. Sept 8 1839 ae -- .
Rosamond B. dau. of John and Leocada D. Chapman d. April 6 1841 ae 4 yrs 10 ms.
Leocady D. wife of John Chapman d. Sept 27 1858 ae 54 yrs 4 ms.
John Chapman d. Feb 9 1885 ae 82 yrs 3 ms.
Mrs. Sarah L. widow [wife?] of John Chapman d. Oct 6 1829 ae 26 yrs.
Samuel Chapman d. Sept 6 1840 ae 83 yrs 9 ms.
Mercy wife of Samuel Chapman d. May 11 1845 ae 76 yrs 3 ms.
Polly dau. of Samuel and Mercy Chapman d. April 7 1816 ae 22 yrs.
Hannah Chapman wife of Benjamin Chapman d. Aug 26 1848 ae 44 yrs.
Four infants: Leander, Mary, James S., Elizabeth.
Leander d. Jan 20 1842 ae 3 yrs.
Benjamin James d. Jan 19 1843 ae 8 yrs.
Children of Benjamin and Hannah Chapman.
Samuel Chapman d. May 24 1868 ae 41.
Edward Nathan son of Samuel and S. E. Chapman d. Aug 27 1859 ae 17 ms.
Sarah wife of Benjamin Crimbell d. March 5 1855 ae 56 yrs.
Abigail wife of Benjamin Crimbell d. May 25 1862 ae 53 yrs.
Rufus Dalton d. Feb 29 1888 ae 80 yrs.
Mehitable L. wife of Rufus Dalton d. Sept 7 1873 ae 60 yrs.
Maria wife of Rufus Dalton d. March 13 1887 ae 73 yrs.
Joseph Franklin d. Aug 5 1845 ae 14 ms.
Charles Edwin d. June 27 1846 ae 5 yrs.
Children of Rufus and Mehitable Dalton.
George R. d. Sept 14 1848 ae 11 wks.
Georgianna E. d. Sept. 15 1849 ae 14 ms.
Children of Rufus and M. L. Dalton.
Albert d. April 4 1851 ae 4 days.
Rufus J. d. Sept 2 1861 ae 9 yrs 9 ms.
Sons of Rufus and M. L. Dalton.
Eben H. Dalton, Aug 31 1842--Nov 5 1913.
Celia A. his wife, Feb 22 1846--Dec 31 1906.
Martha Dalton d. Dec 8 1820 ae 19 yrs.
In memory of Ebenezer M. Dalton who d. Nov 12 1846 ae 78 yrs.
Love wife of Ebenezer M. Dalton d. Aug 7 1848 ae 76 yrs.
Elijah Davis d. Jan 8 1856 ae 68 yrs.
Lydia wife of Elijah Davis d. April 26 1869 ae 92 yrs 5 ms.
Sarah Frances dau. of Orin L. and Sarah Davis d. March 30 1839 ae 6 yrs 3 ms.
Here lyes buried ye body of Mrs. Abigail Dearborn wife of Dea. John Dearborn who d. ye 14 of Novr 1736 in ye 69 yr of her age.
Horatio Dearborn d. April 23rd 1802 ae 24.
Here lyes ye body of John Dearborn son of Simon and Sarah Dearborn who dec'd 26th April 1736 in ye 4th year of his age.
Leocade Dearborn d. Nov 28th 1804 ae 21.
Joseph Dearborn son of Mr Joseph and Mrs. Anna Dearborn d. Feb ye 13th 1735 in the -- year of his age.
Sarah Dearborn dau. of Joseph and Anna Dearborn d. Feby 13 1735 in the 4th year of her age.
In memory of Doctor Levi Dearborn who after a life of extensive usefulness in his calling departed this life March 8 1702 ae 62 yrs.
Here lies the body of Mrs. Anne Dearborn the virtuous consort of Mr. Simon Dearborn who departed this life Oct 22 1763 in the 52nd year of her age.
Samuel Dearborn d. Nov 11 1838 ae 84 yrs.
Hannah widow of Samuel Dearborn d. Feb 20 1841 ae 88 yrs.
Dea. Nathaniel Dearborn d. Nov 30 1854 ae 77 yrs 3 ms 3 ds.
Dolly wife of Dea. Nathaniel Dearborn d. April 29 1809 ae 28 yrs 6 ms.
Lucy wife of Dea. Nathaniel Dearborn d. June 8 1867 ae 86 yrs 26 ds.
Lucy P. Dearborn d. Nov 18 1848 ae 27 yrs 9 ms.
Dolly G. Dearborn b. July 7 1812 d. Dec 26 1906.
Hannah wife of Jeremiah Dearborn d. March 20 1884 ae 85 yrs.
Jeremiah Dearborn d. Oct 5 1856 ae 61 yrs.
Mary E. dau. of Jeremiah and Hannah Dearborn d. Nov 4 1862 ae 20 yrs 8 ms.
Simon Dearborn d. Nov 3 1843 ae 77 yrs.
Mary widow of Simon Dearborn d. April 28 1859 ae 90 yrs 6 ms.
Alvah Dearborn d. June 8 1869 ae 72 yrs 9 ms.
Sarah Dearborn d. Nov 4 1861 ae 61 yrs 10 ms.
Sarah A. his wife, 1837----- .
Mary E. his wife, 1836--1906.
John Dearborn d. Oct 20 1880 ae 69 yrs.
Mary wife of John Dearborn d. April 29 1901 ae 82 yrs 6 ms 4 ds.
Althea C. wife of Rev. John Dinsmore and dau. of the late Rev. Nathan Cobb b. in Portland, Me. Feb 14 1830 d. July 11 1859 ae 29 yrs 5 ms.
Rachel Dow b. 1801 d. May 29 1889.
Daniel Dow d. April 6 1869 ae 79 yrs.
Lucinda wife of Daniel Dow d. Oct 1 1831 ae 29 yrs.
Samuel A. Dow, 1847----- .
Emily A. his wife, 1846--1919.
Lottie F. Dow b. May 24 1878 d. Jan 3 1898.
Fred L. Dow, 1873----- .
Gertrude E. his wife, 1871--1907.
Martha G. wife of David Dow d. Sept 4 1844 ae 21 yrs.
David M. Dow d. April 12 1870 ae 49 yrs.
Abbie L. wife of David M. Dow d. Oct 19 1885 ae 64 yrs.
Frank P. son of David M. and Abbie L. Dow d. Aug 27 1876 ae 24 yrs 5 ms.
George O. Dow d. Sept 27 1856 ae 38 yrs 7 ms.
Mary E. wife of George O. Dow d. May 4 1877 ae 51 yrs 7 ms.
George E. Dow d. April 5 1883 ae 28 yrs 8 ms.
Abram B. Dow d. Nov 24 1840 ae 73 yrs.
Love widow of A. B. Dow d. Feb 10 1855 ae 81 yrs.
Simon Dow d. Aug 8 1840 ae 33 yrs.
Abraham Drake, Dec 4 1715--Aug 1 1781. He was a member of the Provincial Congress, served in the French and Indian War, Lieut. Colonel in the Revolutionary War, and after the Battle of Lexington he was stationed at Winter Hill. Marched with his regiment to Saratoga. Was present at the surrender of Gen. Burgoyne. Later served on Gen. Washington's staff. [Stone erected by G. S. Drake, 1907].
Wife [of Abraham Drake] Abigail Dearborn b. Oct 19 1720 d. July 1 1811.
In memory of Cor. Abraham Drake who d. May 11 1819 ae 74.
In memory of Mrs. Mary Drake wife of Cornet A. Drake who d. Feby 8 1813 ae 66 yrs.
In memory of Mr Nathaniel Drake who d. Nov 5 1828 ae 69 yrs.
In memory of Mrs Elizabeth Drake widow of the late Nathaniel Drake who d. Jan 8 1853 ae 91 yrs 8 ms 11 ds.
Nathaniel Drake, Jr. d. April 24 1823 ae 25 yrs.
Abraham Drake d. Feb 16 1865 ae 81 yrs.
Sarah Drake d. Feb 25 1850 ae 75 yrs.
Dea. Abraham Drake d. April 4 1874 ae 57 yrs 6 ms.
Adaline B. wife of Dea. Abraham Drake d. June 28 1892 ae 75 yrs 11 ms.
Samuel Drake d. Nov 3 1835 ae 42 yrs 6 ms.
Mehitable wife of Samuel Drake d. June 8 1850 ae 51 yrs 5 ms.
Fabyan Drake d. Nov 11 1841 ae 22 yrs 10 ms.
Joshua P. Drake b. Jan 22 1823 d. Nov 19 1901.
Sarah L. wife of Joshua P. Drake b. March 16 1825 d. Dec 15 1885.
Hermon E. son of Joshua P. and Sarah L. Drake d. Dec 24 1852 ae 7 ms 16 ds.
Martha A. dau. of Joshua P. and Sarah L. Drake b. Oct 8 1853 d. July 26 1875.
Nathaniel Drake, Aug 21 1825--May 22 1895.
Annie T. his wife, Oct 14 1824--Dec 23 1900.
Ruth Elizabeth, June 11 1912--Oct 24 1917.
Robert Weare, Dec 25 1919--May 14 1923.
Francis Drake, Nov 5 1843--May 18 1902.
Climena Hodsdon his wife April 7 1854--Jan 10 1918.
Walter L. Drake d. Oct 25 1901 ae 55 yrs 5 ms.
Samuel J. Drake d. July 18 1891 ae 70 yrs 6 ms.
Mary L. wife of Samuel J. Drake d. Oct 15 1882 ae 61 yrs 5 ms.
Sarah E. Rollins wife of Samuel J. Drake, Aug 24 1826--June 12 1915.
Lemira L. his wife, 1849--1898.
George H. Dunham, June 8 1838--April 11 1916. Naval Service 1863--1865, U. S. N. Member of Phil Sheridan Post No. 34, G. A. R. Schuyler, Nebraska.
Augusta A. wife of William H. Dunham, Oct 10 1840--May 6 1910.
Marcia P. wife of Arthur H. Durgin b. Oct 18 1858 d. May 13 1919.
John Fogg, M. D. d. March 5 1816 ae 52 yrs.
Sarah wife of John Fogg, M. D. d. April 16 1858 ae 85 yrs.
John D. Fogg, M. D. was drowned in Spruce Creek, Kittery, Me. Nov 28 1830 ae 32 yrs.
Dearborn Fogg d. Dec 13 1841 ae 84 yrs.
Dorothy widow of Dearborn Fogg d. May 15 1847 ae 76 yrs.
Cyrus Fogg, June 21 1845--Sept 26 1912.
Abbie M. Fogg, Aug 23 1879--May 8 1917.
Mary A. Hicks his wife, 1841--1900.
Gertrude M. Strout his wife, 1876--1901.
Cassie J. Smith his wife, 1867--1930.
Ellen S. Locke his wife, 1865----- .
Rev. Jonathan French, D. D. b. at Andover, Mass. Aug 16 1778. Grad. at Harvard College, 1798. Ordained at North Hampton, N. H. Nov 18 1801. Died Dec 13 1856 ae 78 yrs 3 ms 27 ds.
Rebecca widow of Rev. Jonathan French, D. D. born at Lincoln, Mass. Dec 21 1785. Died Feb 3 1869 ae 83 yrs 1 mo 13 ds.
M. Abba Garland, March 5 1834--April 4 1918.
Isadora wife of George L. Garland, 1855--1913.
Erected to the memory of Hannah wife of Samuel Garland who d. in Boston Jan 3 1848 ae 36 yrs 8 ms.
Sarah T. wife of Samuel Garland, 1817--1888.
Mary Gerstacher, native of Bavaria, Germany, d. Aug 10 1893 ae 65 yrs 6 ms.
Mrs. Sarah E. Gilpatrick, May 29 1836--Sept 12 1892.
Walter W. Goss, 1875----- .
Fannie B. Knowles his wife, 1877--1911.
Richard T. Goss, 1901----- .
Louise S. Varney his wife, 1904--1928.
Ebenezer Gove, Feb 17 1814--May 28 1897.
Abbie P. his wife, Sept 27 1824--Feb 9 1901.
Laura E. wife of Eugene E. Gowell, Aug 6 1865--Sept 30 1914.
Fred H. Grant, Nov 27 1859--Aug 6 1917.
Sarah A. wife of Fred H. Grant, Aug 10 1857--March 21 1890.
Elsie E. wife of Harry R. Green d. Nov 11 1885 ae 28 yrs 6 ms.
Rev. Nathaniel Gookin b. Feb 18 1713. Ordained first pastor of the Congregational Church of this town Oct 31 1739. Died Oct 22 1766.
Widow Love Gookin d. Aug 11 1809 ae 89.
Abigail Haines d. Oct 28 1877 ae 76 yrs 7 ms.
Elizabeth Haines d. April 6 1871 ae 75 yrs.
Orinda A. Jewell his wife, 1828-1905.
Clinton C. Hendry b. Oct 19 1851 d. June 28 1922.
Emily C. wife of Clinton C. Hendry d. May 1 1886 ae 23 yrs 7 ms.
Cora A. wife of Clinton C. Hendry d. Sept 7 1883 ae 27 yrs 11 ms.
Maurice Hobbs b. in England, 1611 d. 1702 ae 91 yrs.
Sarah Hobbs d. 1698 ae 89 yrs.
Maurice Hobbs d. April 6 1740 ae 88 yrs.
Sarah Hobbs d. Sept 13 1731 ae -- .
Maurice Hobbs b. 1714 d. May 1 1756.
Mary Hobbs d. April 7 1806 ae 79 yrs.
Dea. Benjamin Hobbs d. April 23 1804 ae 76 yrs.
Elizabeth widow of Dea. Benjamin Hobbs d. Jan 12 1812 ae 75 yrs.
Capt. David Hobbs d. May 4 1849 ae 89 yrs.
Joseph Hobbs d. March 24 1847 ae 74 yrs.
Abigail Hobbs d. March 22 1829 ae 51 yrs.
Elizabeth Hobbs d. Jan 29 1844 ae 56 yrs.
Olive Hobbs d. March 30 1874 ae 84 yrs.
Benjamin Hobbs d. Aug 24 1865 ae 70 yrs 5 ms 9 ds.
Mr. Morris Hobbs d. May 24 1830 ae 84 yrs.
Mrs. Deborah wife of Mr. Morris Hobbs d. Sept 28 1822 ae 73 yrs.
Jonathan Hobbs d. Sept 21 1844 ae 75 yrs.
Theodate wife of Jonathan Hobbs d. 1823 ae -- .
Mercy wife of Jonathan Hobbs d. May 24 1825 ae 46.
Dea. Morris Hobbs d. July 9 1830 ae 54 yrs.
Mrs. Abigail widow of Dea. Morris Hobbs d. Sept 14 1842 ae 68.
John L. Hobbs d. Oct 10 1877 ae 68 yrs 4 ms.
Morris Hobbs d. Sept 26 1841 ae 27.
Dr. Moses L. Hobbs d. May 23 1885 ae 84 yrs 11 ms.
Fanny M. Hobbs d. June 30 1851 ae 51 yrs 11 ms 9 ds.
Caroline D. Hobbs d. Dec 17 1890 ae 78 yrs.
Dr. Victory Hobbs d. July 2 1873 ae 33 yrs 10 ds.
Moses L. Hobbs d. June 1 1915 ae 77 yrs 7 ms.
David L. Hobbs d. Oct 3 1854 ae 23 yrs 2 ms 14 ds.
Joseph B. Hobbs d. March 19 1870 ae 36 yrs 8 ms 17 ds.
Lizzie D. Hobbs d. Oct 22 1890 ae 22 yrs 10 ms.
J. W. F. Hobbs, Jan 3 1815--April 27 1890.
Elizabeth J. wife of J. W. F. Hobbs and dau. of Dea. Francis Drake d. in Boston, Mass. Sept 11 1856 ae 38.
Orville C. d. in Boston Mass. Jan 11 1849 ae 5 ms.
Melissa C. d. in Boston, Mass. Jan 28 1850 ae 3 yrs 5 ms.
Children of J. W. F. and Elizabeth J. Hobbs.
Azalia A. dau. of J. W. F. and E. J. Hobbs d. in Boston Mass. Aug 1 1858 ae 14 yrs.
Edson C. Hobbs, June 1 1850--Aug 20 1915.
Mary F. wife of J. W. F. Hobbs d. Oct 14 1865 ae 42.
Lizzie Mae dau. of J. W. F. and M. F. Hobbs d. Oct 7 1865 ae 2 yrs 5 ms.
John F. son of J. W. F. and M. F. Hobbs b. Feb 4 1859 d. Aug 27 1881.
W. J. C. Hobbs, Jan 3 1815--Jan 2 1891.
Nancy F. wife of W. J. C. Hobbs d. Feb 3 1888 ae 73 yrs.
John W. only child of W. J. C. and N. F. Hobbs d. Oct 15 1864 ae 26 yrs.
Morris Hobbs son of Mr. Morris and Mrs. Abigail Hobbs d. Jan 11 1815 ae 16.
Data Hobbs d. Feb 13 1807 ae -- .
Polly Hobbs d. 1806 ae -- .
In memory of Morris Hobbs Jr. who d. Jan 10 1776 ae -- .
Jonathan Hobbs d. May 2 1872 ae 67 yrs 2 ms.
Mary H. wife of Jonathan Hobbs d. May 5 1889 ae 76 yrs 5 ms.
Lizzie A. dau. of J. and M. H. Hobbs d. April 4 1865 ae 25 yrs 5 ms.
Emily dau. of J. and M. H. Hobbs d. Sept 5 1862 ae 20 yrs 8 ms.
Mary F. Hobbs wife of William G. F. Wright b. July 1 1847 d. Nov 23 1914.
Annie French wife of Joseph O. Hobbs b. Sept 11 1856 d. May 3 1900.
Annie Hoyt wife of Joseph O. Hobbs b. March 29 1883 d. June 23 1912.
J. Harold Hobbs, ----- .
Grace Lougks his wife, Aug 17 1888--Jan 21 1917.
Thomas Hobbs d. Sept 1 1822 ae 73 yrs.
Sarah wife of Thomas Hobbs d. Oct 28 1826 ae 74 yrs.
Jonathan Hobbs d. Nov. 23 1852 ae 78 yrs.
Fanny wife of Jonathan Hobbs d. Oct 5 1826 ae 50 yrs.
Harriet Newell dau. of Jonathan and Fanny Hobbs d. July 29 1822 ae 2 yrs 4 ms.
Oliver Hobbs b. Sept 19 1802 d. Jan 13 1848.
Sarah wife of Oliver Hobbs b. July 18 1806 d. Feb 14 1874.
Mary E. wife of Thomas Philbrook and dau. of Oliver and Sarah Hobbs b. May 30 1841 d. May 7 [----?].
Hattie dau. of Oliver and Sarah Hobbs b. May 2 1846 d. Dec 13 1876.
Henry son of John and Lucinda Hobbs d. July 22 1822 ae 2 yrs.
Lucinda dau. of John and Lucinda Hobbs d. April 28 1816 ae 1 yr 9 ms.
Thomas Hobbs d. March 3 1862 ae 60 yrs 4 ms.
Jonathan F. Hobbs d. June 19 1867 ae 35.
Fannie F. Hobbs d. May 21 1873 ae 15 yrs.
James F. Hobbs, 1843--1923. G. A. R.
Elizabeth H. wife of James F. Hobbs, 1849--1921.
Samuel Farrar son of J. and M. H. Hobbs d. March 28 1852 ae 2 yrs 5 ds.
Dau. of Josiah and Mabel Hobbs d. March 11 1868 ae 24 yrs 10 ms.
Mr. Benjamin Hobbs d. July 15 -----? ae 86 yrs 5 ms.
Mrs. Mary Hobbs d. Dec. 8 1832 ae 99 yrs.
Horatio D. Hobbs d. April 6 1888 ae 76 yrs.
Emeline wife of Horatio D. Hobbs d. Dec 28 1897 ae 82 yrs.
Woodbury son of Horatio and Emeline Hobbs d. Jan 31 1857 ae 20 years.
Mina A. Seavey wife of John W. Hobbs, 1847--1916.
Charles W. Hoyt, Oct 2 1851--July 17 1894.
Agnes M. Beckwith his wife, May 11 1852----- .
Also his second wife, Abigail, ----- .
Betsey B. his wife, 1790--1859.
In memory of Samuel Jenness who d. Dec 29 1806 ae 54.
Mary wife of Samuel Jenness d. Dec 11 1833 ae 78 yrs.
Jeremiah Jenness d. Feb 8 1849 ae 73 [or 74] yrs.
In memory of Molly Jenness wife of Jeremiah Jenness who d. Nov 25 1806 ae 28.
Martha E. dau. of Edwin and Mary Jenness d. Dec 19 1848 ae 3 ms.
Roswell French d. May 18 1862 ae 11 yrs.
George Edwin d. May 19 1862 ae 3 yrs 10 ms.
Children of Edwin and Mary Jenness.
Edwin Jenness, Sept 8 1818--Dec 16 1902.
Mary C. his wife, July 24 1820--Dec 21 1901.
Benjamin Jenness d. Aug 4 1875 ae 84 yrs.
Dorothy wife of Benjamin Jenness d. Dec 7 1896 ae 89 yrs 10 ms.
George W. son of Benjamin and Dolly Jenness d. July 30 1846 ae 7 ms.
Joseph B. Jenness d. Jan 9 1923 ae 86 yrs 7 ms.
Olive A. wife of Joseph Jenness d. May 30 1891 ae 54 yrs.
Charles W. Jenness, Nov 7 1846--April 6 1903.
Elizabeth A. Davis his wife, July 5 1849----- .
Martha Brown wife of S. Alonzo Jenness b. Jan 5 1853 d. Nov 6 1922.
Nathan B. Jenness b. March 11 1832 d. Jan 2 1896.
Julia A. Merrill wife of Nathan B. Jenness b. March 1 1834 d. March 15 1897.
Josiah E. son of N. B. and J. A. Jenness d. July 1 1862 ae 2 yrs 8 ms.
Charles E. son of Nathan B. and Julia A. Jenness d. Feb 24 1878 ae 9 ms 7 ds.
Marion D. Jenness dau. of Ivan D. and B. H. Jenness, May 7 1909--July 19 1911.
Abbie M. Locke his wife, 1829--1911.
E. Bloomer Jewell, 1851----- .
Abbie M. Dow his wife, 1849--1927.
William T. Keouse d. March 30 1825 ae 50 yrs.
William B. son of Dr. William and ? J. Kingsford d. Oct 12 1842 ae 4 ms.
Samuel Knowles d. Aug 23 1846 ae 83.
Mrs. Anna wife of Samuel Knowles d. March 19 1826 ae 52 yrs.
Nancy dau. of Samuel and Anna Knowles d. Oct 4 1821 ae 13 yrs 8 ms.
Reuben Knowles d. Oct 30 1857 ae 58 yrs.
Data wife of Reuben Knowles d. May 11 1895 ae 85 yrs 3 ms.
Nathan Knowles d. Sept 21 1839 ae 68 yrs 6 ms.
Mr. Oliver Knowles d. July 6 1837 ae 35 yrs.
Frances D. wife of Oliver Knowles d. March 25 1882 ae 78 yrs.
John F. Knowles d. Dec 7 1845 ae 21.
Levi Woodbury Knowles, Sept 14 1850--July 26 1922.
Emma E. Courson his wife, Jan 21 1861----- .
Charlie, Aug 10 1877--July 15 1883.
Willie J., Oct 24 1879--July 23 1883.
Minnie P., July 23 1884--July 30 1884.
Children of Levi W. and Emma E. Knowles.
Robert E. son of L. M. and M. D. Knowles, May 12 1919--May 29 1923.
Reuben C. Knowles b. Oct 26 1849 d. Jan 11 1896.
Sarah his wife, Oct 7 1840--Feb 6 1923.
Edward E. Knowles, Nov 23 1855--Sept 13 1892.
Cora E. his wife, Aug 12 1858--Nov 3 1926.
S. Jewell Knowles b. in North Hampton March 19 1845 d. in North Hampton July 2 1918.
Sarah A. his wife b. in Newington June 25 1850 d. in North Hampton Sept. 21 1929.
Thomas L. son of David and Sarah A. Knowles d. Oct 26 1898 ae 49 yrs. Member of Co. I, 18th N. H. Inf.
Sarah A. Knowles wife of Craig O. Lindsay, July 2 1852--Feb 7 1912.
David Knowles d. Oct 29 1889 ae 74 yrs.
Sarah A. Knowles wife of David Knowles d. Nov 5 1881 ae 58 yrs.
Bell G. dau. of David and Sarah A. Knowles d. Dec 1 1867 ae 20 yrs 6 ms.
Samuel Knowles d. July 9 1895 ae 82 yrs 6 ms.
Elizabeth M. wife of Samuel Knowles d. Sept 15 1894 ae 74 yrs 8 ms.
Mary E. dau. of Samuel and Elizabeth Knowles d. July 23 1883 ae 21 yrs 10 ms.
Edward Lane d. April 10 1866 ae 66 yrs.
Deborah wife of Edward Lane d. April 10 1871 ae 71 yrs.
John D. Lane b. March 26 1808 d. May 28 1892.
Margaret Dow b. Oct 5 1811 d. Jan 22 1889.
Anna L. R. dau. of J. D. and Margaret Lane d. July 13 1852 ae 2 yrs 5 ms.
David S. Lane, June 26 1817--July 16 1890.
Eleanor J. his wife, Sept 10 1817--June 16 1899.
Thomas J. Lane, July 4, 1840----- .
Lamira P. Lane, April 8 1842----- .
Christopher V. Lane, Jan 5 1844----- .
Luthera J. his wife, Feb 27 1843--Sept 13 1907.
Linnie J. Lane, Jan 29 1878--Oct 6 1878.
Apphia wife of Joseph S. Lane d. Dec 20 1893 ae 80 yrs 8 ms.
In memory of Col. Thomas Leavitt who d. March 20 1830 ae 85.
Mary widow of Col. Thomas Leavitt d. May 7 1840 ae 91.
Simon Leavitt d. Aug. 20 1842 ae 89 yrs.
Abigail widow of Simon Leavitt and formerly widow of Thomas Cotton d. May 16 1844 ae 91.
In memory of Mrs. Sarah Leavitt the wife of Simon Leavitt who d. June 23 1802 ae 46.
Benjamin Leavitt, Jr. d. March 20 1836 ae 32 yrs.
Luther Leavitt d. April 30 1837 ae 38 yrs.
Capt. Benjamin Leavitt d. Nov 8 1835 ae 69 yrs.
Abigail widow of Benjamin Leavitt d. Sept 18 1857 ae 92 yrs 8 ms 17 ds.
In memory of Joseph S. Leavitt who d. Sept 26 1803 ae 32.
In memory of Thomas Leavitt, Jr. who d. Nov 21 1800 ae 27.
In memory of John Leavitt, Esq. who d. May 1781 ae 79.
In memory of Abigail Leavitt wife of John Leavitt, Esq. who d. Jan 1782 ae 64.
In memory of Samuel F. Leavitt who d. April 8 1829 ae 61 yrs 7 ms.
In memory of John Leavitt son of Col. Thomas and Mary Leavitt who d. at Cambridge Sept 4 1820 ae 34.
Sophia W. dau. of Thomas and Polly Leavitt d. Jan 18 1831 ae 7 yrs 10 ms.
In memory of Mrs. Anna Leavitt relict of Capt. Samuel F. Leavitt who d. June 10 1816 ae 47.
In memory of Alfred J. Leavitt son of Capt. Samuel F. and Mrs. Anna Leavitt who d. Oct 31 1814 ae 12.
Dearborn Leavitt d. Jan 23 1832 ae 62.
In memory of Mr. John Leavitt who d. Oct 25 1820 ae 38 yrs.
In memory of Mrs. Betsey Leavitt relict of Mr. John Leavitt who d. Dec 15 1828 ae 44 yrs.
Sarah E. wife of Toppan Leavitt d. June 27 1856 ae 32 yrs.
Abraham Leavitt d. Aug 17 1862 ae 83 yrs.
Sarah wife of Abraham Leavitt d. Aug 7 1858 ae 72 yrs.
Louisa Leavitt dau. of Abraham Leavitt d. Sept 10 1840 ae 33 yrs.
Lavina Leavitt dau. of Abraham Leavitt d. May 17 1851 ae 35 yrs.
Hannah wife of John D. Folsom d. May 31 1858 ae 46 yrs.
Rufus, son of Abraham Leavitt d. April 21 1863 ae 55 yrs.
Sarah dau. of Abraham Leavitt d. Feb 8 1882 ae 71 yrs.
Rebecca F. Leavitt, Jan 27 1834--Feb 1 1899.
Horace Leavitt, Nov 19 1837--June 19 1917. Sergt. Co. M, 1st N. H. Cav. '61 to '64. His wife Mary E. Dow, Aug 29 1847--June 19 1918.
Mary B. Chase his wife, 1878--1913.
Sally Jewell his wife, 1775--1851.
Eliza J. Perkins his wife, 1807-1845.
Eliza J. F. Lane his wife, 1815-1898.
His son Joseph Benning Leavitt, 1836-1854.
Addie Philbrick his wife, 1849----- .
Simon Leavitt d. Sept 19 1873 ae 83 yrs 6 ms.
Abigail wife of Simon Leavitt d. Oct 21 1822 ae 28 yrs 10 ms.
Juliana 2nd wife of Simon Leavitt d. May 16 1885 ae 80 yrs 2 ms.
Rebecca F. dau. of Simon and Juliana Leavitt d. March 10 1833 ae 6 ms.
Dorothy Adeline dau. of Simon and Juliana Leavitt d. Sept 12 1846 ae 5 yrs.
Laura Jane dau. of Simon and Juliana Leavitt d. March 28 1851 ae 14 yrs 2 ms.
Martha Ann dau. of Simon and Juliana Leavitt and wife of Simon L. Hobbs d. Feb 26 1848 ae 26 yrs 6 ms.
Abraham Leavitt son of Abraham who was the son of Simon who was the son of John Leavitt d. Dec 22 1857 ae 32 yrs.
John Leavitt, July 15 1823--April 14 1891.
Elbridge A. Leavitt, March 24 1813--Sept 27 1887.
Mary C. Marston his wife, Dec 9 1814--June 3 1897.
Philip Leavitt d. Sept 1 1829 ae 38 yrs.
Dorothy wife of Ira James and formerly wife of Philip Leavitt d. Dec 16 1879 ae 83 yrs 4 ms.
Amanda M. dau. of E. A. and M. C. Leavitt d. March 19 1860 ae 22 yrs 1 mo.
John G. son of E. A. and M. C. Leavitt d. July 22 1859 ae 20 yrs 5 ms.
E. Warren son of E. A. and M. C. Leavitt d. June 19 1871 ae 26 yrs 4 ms.
James A. Leavitt, Oct 3 1838--June 8 1901.
Mary E. his wife, Feb 10 1838--Jan 27 1909.
Dea. James R. Leavitt, Sept 22 1815--Dec 24 1897.
Elizabeth M. his wife, Feb 3 1812--Sept 30 1898.
Dorothy Abby dau. of James R. and Elizabeth M. Leavitt d. Nov 25 1849 ae 6 yrs 8 ms 19 ds.
Mary A. dau. of J. R. and E. M. Leavitt d. Jan 21 1855 ae 2 ds.
Dorothy Frances d. Sept 30 1856 ae 3 yrs 11 ms.
Martha Ann d. Oct 1 1856 ae 1 yr 9 ms.
Children of James R. and Elizabeth M. Leavitt.
Thomas C. son of James R. and Elizabeth M. Leavitt d. July 14 1857 ae 20 days.
Orin B. Leavitt, Oct 8 1848--Feb 16 1929.
Mary O. Drake wife of Orin B. Leavitt, Oct 9 1859--May 16 1924.
Simon Leavitt d. Jan 11 1841 ae 51 yrs.
Dorothy Leavitt d. May 8 1879 ae 94 yrs 3 ms.
Simon O. d. Feb 5 1831 ae 1 yr 3 ms.
Laura A. d. Feb 19 1831 ae 3 yrs 2 ms.
Children of Simon and Dorothy Leavitt.
Simon Leavitt d. April 29 1874 ae 62 yrs 5 ms.
Elizabeth S. wife of Simon Leavitt d. Sept 3 1904 ae 85 yrs 4 ms 21 ds.
Lizzie W. Leavitt d. Oct 29 1911 ae 62 yrs.
John T. Leavitt d. Sept 16 1914 ae 61 yrs.
Simon H. Leavitt, July 8 1846--July 1 1916.
Margaret E. wife of Simon H. Leavitt, Oct 12 1848--Aug 24 1921.
Thomas G. Leavitt d. May 19 1868 ae 61.
Mary H. his wife d. July 29 1894 ae 84.
Elizabeth A. Leavitt, Sept 4 1834--May 31 1905.
Payson son of Thomas G. and Mary Leavitt d. in Philadelphia, Penn. Sept 8 1863 ae 22 yrs 5 ms.
Elizabeth widow of Moses Leavitt [above] d. April 4 1822 ae 77.
Rev. William L. Linaberry, ----- .
Laura his wife, ----- .
Elizabeth M. his wife, ----- .
Samuel Locke d. Aug 16 1843 ae 45 yrs.
Mary wife of Samuel Locke d. Aug 8 1872 ae 66 yrs 7 ms.
Almira dau. of Samuel and Mary Locke d. Nov 23 1841 ae 3 yrs 4 ms.
Mary E. his wife, 1840--1916.
Albert E. Locke, 1860----- .
Susie Berry his wife, 1864----- .
Mehitable wife of Ebenezer Lovering d. June 2 1840 ae 68 yrs 6 ms.
Col. Thomas Lovering a patriot of the Revolution who at the age of fifteen years commenced serving his country, and continued his services thirty two months on the tented field; in consequence of being thrown from a carriage died Nov 24 1834 ae 74 yrs.
Mary A. Sands wife of Robert E. Luck, Sept 29 1871--Aug 21 1899.
Their son Clements Frank d. Feb 12 1899 ae 7 yrs.
James Marden d. June 15 1885 ae 72 yrs.
Almira Marston wife of James Marden d. Jan 20 1900 ae 82 yrs 9 ms.
Lurana Rowe his wife, 1866--1912.
Hepzibah widow of Samuel Marston d. Feb 17 1841 ae 96 yrs 11 ms.
Sarah Marston d. March 29 1851 ae 88 yrs.
Hannah Marston d. Jan 8 1855 ae 88 yrs.
Daniel Marston d. Dec 18 1864 ae 92 yrs.
Jonathan Marston d. April 27 1845 ae 76.
Mary wife of Jonathan Marston d. April 5 1864 ae 87.
David son of Jonathan and Mary Marston d. Aug 7 1809 ae 15.
Mary L. dau. of Jonathan and Mary Marston d. Oct 2 1810 ae 17.
Henry Marston d. June 6 1842 ae 62 yrs.
David Marston, Co. K, 2nd D. C. Inf.
Miss Abigail Marston d. Dec 4 1833 ae 82 yrs.
Deborah Marston d. Nov 14 1852 ae 98 yrs 6 ms.
Thomas Marston d. May 17 1847 ae 90 yrs. 6 ms.
Hannah his wife d. Feb 28 1820 ae 59 yrs.
Mary A. Marston d. Dec 8 1878 ae 72 yrs 8 ms.
Sherburne Marston d. April 9 1884 ae 72 yrs 2 ms.
Ann wife of Sherburne Marston d. March 15 1864 ae 54 yrs.
Olive L. wife of Sherburne Marston d. April 15 1893 ae 70 yrs.
Simon Marston d. Dec 29 1867 ae 79 yrs.
Charlotte wife of Simon Marston d. Dec 14 1867 ae 76 yrs.
James Marston d. Nov. 20 189- ae 71 yrs 10 ms.
Mary Marston d. June 20 1871 ae 81 yrs.
Thomas Marston d. Dec. 17 1870 ae 78 yrs 8 ms.
Mary L. Marston d. May 18 1874 ae 79 yrs.
Sally Marston d. May 15 1882 ae 82 yrs 5 ms.
Fanny Marston d. May 15 1882 ae 79 yrs 6 ms.
Almira Marston d. Sept 3 1870 ae 66 yrs.
Sarah H. wife of Frank A. Marston b. Feb 18 1840 d. Oct 8 1900.
Mary A. Marston, March 14 1842--May 29 1896.
George A. Marston, Jan 25 1851--April 9 1931.
Lucy A. wife of James Marston d. Dec 14 1894 ae 81 yrs 10 ms.
Thomas L. Marston d. Dec 31 1888 ae 91 yrs 8 ms.
Mary B. wife of T. L. Marston d. Feb 18 1869 ae 74.
Charles Augustus son of Thomas L. and Mary B. Marston d. Sept 1 1834 ae 3 yrs 3 ms 27 ds.
George F. son of T. L. and M. B. Marston b. Dec 9 1833 d. March 26 1914.
Fanny S. wife of George F. Marston b. Aug 22 1840 d. Sept 7 1918.
Henry S. Marston, March 21 1828--Feb 18 1916.
Rachel C. Lane his wife, May 4 1835--Feb 26 1892.
Payson H. Marston, Aug 31 1861--April 25 1932.
Katie W. Marston, Jan 28 1865----- .
Benjamin Marston d. Aug 5 1862 ae 55 yrs.
Margie A. wife of Thomas E. Marston, April 17 1848--Sept 23 1897.
E. Everett Marston, Jan 23 1875--Dec 6 1908.
Thomas F. Marston, Aug 10 1825--May 20 1920.
Elizabeth A. wife of Thomas F. Marston, Dec 3 1827--Jan 12 1911.
Hermon Leroy son of Thomas F. and Elizabeth A. Marston d. Aug 1 1864 ae 7 yrs 2 ms 10 ds.
Emily Fogg his wife, 1840--1912.
J. Frank Marston, July 6 1868--Feb 13 1932.
Herman L. Marston, March 21 1866--March 17 1922.
Son of Herman L. and Fannie A. Marston ----- [no dates].
Harvey B. Marston, March 22 1880--May 7 1894.
Martha A. Dow wife of H. B. Marston, March 14 1846--June 14 1914.
Andrew S. Marston, ----- .
Nora Mansfield his wife, ----- .
Charles F. McLaughlin d. Nov 5 1874 ae 31 yrs.
Clark McLaughlin d. Dec 22 1891 ae 35 yrs 8 ms.
Emma F. McLaughlin d. May 9 1917 ae 64 yrs 10 ms.
N. B. Mead b. Aug 7 1825 d. Nov 13 1825.
Mary C. Adams wife of Rev. M. F. Mevis, 1868--1909.
Mary Olive Dow his wife, 1855--1917.
Abby C. Jenness wife of Christopher T. Moore b. Sept 10 1821 d. Nov 8 1910.
Christopher T. Moore d. June 20 1892 ae 69 yrs 3 ms.
Melvin J. son of Christopher T. and Sarah F. Moore d. April 20 1855 ae 10 yrs 26 ds.
Sarah F. Moore d. Sept 25 1888 ae 64 yrs 8 ms.
In memory of Thomas B. son of Peter and Abiah Moore who d. April 1 1833 ae 20 yrs.
Peter Moore d. March 17 1858 ae 80 yrs 6 ms 12 ds.
Abiah wife of Peter Moore d. Feb 5 1870 ae 91 yrs 5 ms 18 ds.
George A. Moore, Nov 30 1892----- .
His wife Sarah A. Bennett, April 7 1897--Oct 12 1918.
Daniel P. Moulton d. March 17 1869 ae 69.
Morris Hobbs Moulton b. Nov 26 1829 d. May 16 1914.
Harriet Fogg his wife b. April 14 1842 d. June 4 1915.
Orice J. Moulton, Oct 19 1861----- .
Jessie M. his wife, June 1 1860--Dec 27 1925.
John B. Moulton d. June 17 1927 ae 58 yrs 1 mo.
Elizabeth A. wife of John B. Moulton d. Feb 21 1902 ae 33 yrs 10 ms.
Nellie F. dau. of Eliza A. Moulton d. Jan 26 1885 ae 28 yrs 9 ms.
John Moulton d. April 2 1879 ae 53 yrs 3 ms.
Eliza A. Fogg wife of John Moulton d. Jan 23 1916 ae 85 yrs 3 ms.
James E. Nary b. July 26 1816 d. Oct 30 1906.
Charlotte wife of James E. Nary d. April 30 1890 ae 71 yrs 7 ms.
Elsie dau. of F. E. and M. F. Nary, June 19 1896--May 11 1897.
Murle son of F. E. and M. F. Nary, Oct 12 1892--Sept 27 1899.
In memory of Mary Neal wife of Mr. John Neal Esq. who d. Oct 18 1805 ae 61.
Mae E. wife of Nelson J. Norton, May 11 1871--June 27 1893.
Olive J. wife of Thomas M. Norton d. Oct 9 1853 ae 30 yrs.
Joshua J. Norton d. Jan 2 1884 ae 57 yrs 2 ms.
Annie P. wife of Joshua J. Norton d. Nov 8 1891 ae 57 yrs 11 ms.
Effie A. dau. of Joshua J. and Phebe A. Norton d. June 27 1859 ae 6 ms 3 ds.
Clarence P. son of Joshua J. and Phebe A. Norton d. Sept 3 1870 ae 1 yr 3 ms.
Elizabeth E. W. his wife, 1843--1891.
Jonathan Page d. 1770 ae 70 yrs.
Mary Towle his wife d. Nov 14 1783 ae 82 yrs 8 ms.
Jonathan Page 2nd d. Dec 11 1811 ae 84 yrs 7 ms.
Mary Smith his wife d. Dec 2 1793 ae 61 yrs.
Oliver O. Page d. Sept 27 1895 ae 75 yrs 9 ms 25 ds.
Lydia S. Page d. Sept 20 1905 ae 74 yrs 11 ms 10 ds.
Daisy A. Page wife of James G. G. Downing b. March 29 1869 d. Jan 7 1900 ae 30 yrs 9 ms 9 ds.
George W. Page, Aug 12 1843--April 28 1912.
Ellen M. Trefethen wife of George W. Page, Oct 16 1845--Oct 25 1909.
Charles S. Page, June 11 1867--April 15 1921.
Georgie M. wife of Charles S. Page, April 11 1868--July 6 1922.
Jonathan Perkins d. April 10 1880 ae 87 yrs.
Phebe wife of Jonathan Perkins d. April 4 1872 ae 74 yrs 7 ms.
Jonathan Philbrick d. Aug 28 1881 ae 66 yrs 7 ms.
Clara A. wife of Jonathan Philbrick d. March 15 1894 ae 73 yrs 7 ms.
Mary Abby dau. of Jonathan and Clara A. Philbrick d. Jan 11 1846 ae 5 weeks.
John L. son of Jonathan and Clara A. Philbrick d. April 21 1850 ae 1 yr.
Martha A. dau. of Jonathan and Clara A. Philbrick d. July 10 1864 ae 17 yrs 7 ms.
Annah S. dau. of Jonathan and Clara A. Philbrick d. May 20 1865 ae 14 yrs 3 ms.
John W. Philbrick, 1854----- .
Jennie S. his wife, 1862--1928.
Clara N. Philbrick, May 10 1860--March 19 1914.
S. Page Philbrook d. Sept 29 1855 ae 92 yrs 11 ms.
Jonathan Philbrook d. Jan 16 1870 ae 77 yrs.
Elizabeth wife of Jonathan Philbrook d. Jan 21 1870 ae 73 yrs.
Thomas Philbrook d. April 30 1912 ae 79 yrs 9 ms 18 ds.
Margaret D. wife of Thomas Philbrook d. Nov 26 1879 ae 32 yrs 6 ms.
Henry M. Philbrook, Aug 5 1830--Feb 5 1865.
Grace W. Dunham wife of Willard H. Philbrook, Oct 17 1870--May 9 1906.
Joseph L. Philbrook d. Aug 27 1904 ae 77 yrs 7 ms 1 day.
Julia M. wife of Joseph L. Philbrook d. Sept 2 1927 ae 97 yrs 6 ms.
Allen Howard son of Joseph L. and Julia M. Philbrook d. Sept 18 1867 ae 3 yrs 2 ms.
Joshua Pickering Esq. d. Jan 25 1852 ae 83 yrs 10 ms 17 ds.
Elizabeth Fabyan his wife d. Nov 18 1833 ae 61 yrs.
Rosamond their dau. d. Nov 21 1805 ae 23 ms.
John their son d. March 23 1866 ae 71 yrs 1 mo 12 ds.
Frederick A. Batchelder d. Oct 3 1882 ae 23 [?] yrs 9 ms.
Mary Olive wife of Frederick A. Batchelder and dau. of Joshua Pickering, Esq. d. Oct 3 1865 ae 54 yrs 9 ms 20 ds.
Samuel F. French d. Feb 10 1899 ae 90 yrs 20 ds.
Ann R. wife of Samuel F. French and dau. of Joshua Pickering, Esq. d. Aug 9 1883 ae 70 yrs.
Charles H. French d. Sept 16 1869 ae 28 yrs 3 ms 21 ds. Only son of Samuel F. and Ann R. French.
Joshua Pickering son of Joshua Pickering Esq. d. Dec 4 1870 ae 74 yrs.
Nancy C. wife of Joshua Pickering and dau. of Dea. Moses Hobbs d. Jan 9 1851 ae 46 yrs 2 ms.
Lydia P. wife of Lyford Thyng of Sioux City, Iowa and dau. of Joshua Pickering, Esq. d. Sept 7 1871 ae 70 yrs.
Mary Abby dau. of Cyrus and Mary Jane Powers b. at Newmarket June 21 1835 d. July 9 1853 ae 18 yrs.
Henry Riley, July 17 1852--April 4 1909.
Abbie S. Tucker his wife, Dec 24 1858--Dec 21 1925.
Margaret O., Feb 2 1887--Jan 14 1902.
Henry A. Ring, Jan 10 1837--June 26 1897.
Clara A. Batchelder his wife, Dec 6 1844--July 20 1927.
Jonathan P. Robinson, May 20 1807--Oct 29 1897.
Oliver Robertson d. March 20 1826 ae 7 yrs 2 ms.
Jonathan Robinson d. Jan 5 1815 ae 29 yrs 7 ms.
Jonathan Rollins, April 30 1834--Sept 18 1912.
Frances H. his wife, Oct 27 1839--Sept 22 1881.
Frances D. his wife, 1873--1912.
Here lies the body of Deacon Daniel Sanborn who d. March 30 1787 ae -- .
Betsey E. wife of Joseph Sayboll, Jan 22 1829--Oct 28 1910.
Garrett D. Sears, 1871----- .
Laleah Fowler his wife, 1865--1929.
Frank H. Seavey, Dec 16 1843--April 28 1926.
Abbie E. Seavey wife of Frank H. Seavey, Oct 8 1850--Dec 14 1903.
Carra Dora dau. of Frank H. and Abbie E. Seavey d. Oct 15 1868 ae -- .
Charles E. Seavey, June 10 1834--Dec 28 1895.
Della E. his wife, April 26 1835--Sept 16 1864.
Hattie S. his wife, Feb 4 1840--June 7 1920.
Infant son of C. E. and H. S. Seavey, July 22 1877--July 28 1877.
George L. Seavey, April 11 1875----- .
Anna Bartlett his wife, Oct 17 1882--Feb 7 1919.
Moses Shaw d. Jan 15 1876 ae 51.
Elmary wife of Moses Shaw d. July 11 1886 ae 63.
Laura A. his wife, 1850--1919.
Maggie J. wife of Willie F. Simpson d. April 6 1900 ae 33 yrs 8 ms 11 ds.
Theophilus W. Sleeper d. Nov 16 1856 ae 47 yrs.
Joshua P. Smith d. July 29 1894 ae 59 yrs.
Linda B. wife of Joshua P. Smith d. Dec 9 1919 ae 83 yrs 7 ms.
John E. Smith, Sept 4 1887--March 24 1911.
Christopher Smith d. Sept 30 1881 ae 76 yrs.
Eliza wife of Christopher Smith d. Oct 12 1884 ae 82 yrs 10 ms.
D. Curtis Smith d. July 25 1869 ae 30 yrs.
John L. Smith, June 15 1835--Nov 7 1924.
Rebecca P. wife of John L. Smith, July 5 1840--Aug 22 1905.
Georgie E. son of John L. and Rebecca P. Smith d. July 5 1865 ae 1 yr 4 ms.
D. Curtis Smith only son of J. L. and R. P. Smith, July 22 1872--Sept 7 1908.
Morris Hobbs Smith, July 1 1833--Jan 28 1904.
Isabella Leavitt his wife, Feb 17 1839--May 6 1905.
Martha dau. of Morris H. and Isabella L. Smith d. July 28 1864 ae 2 yrs. 10 ms.
John son of Morris H. and Isabella L. Smith d. Sept 17 1864 ae 3 ms.
Maurice Leavitt son of Edward M. and Estelle L. Smith, Aug 31 1904--Feb 1 1905.
George M. Smith, Aug 15 1864--June 4 1915.
Mary H. widow of Rev. E. P. Sperry and sister of the late Rev. J. French, D. D. b. in Andover, Mass. Aug 6 1781 d. Jan 6 1858 ae 76.
Joseph Taylor d. March 27 1824 ae 74.
Polly Taylor d. Oct 27 1864 ae 79 yrs.
Edward J. Taylor, Sept 10 1856--March 17 1929.
Eva M. his wife, Dec 19 1865--March 18 1914.
George E. Taylor, Feb 10 1859--April 5 1928.
Sophia E. Marston his wife, Feb 25 1860--Feb 24 1924.
Ira J. Taylor, Aug 7 1830--Sept 11 1904.
Martha S. Locke wife of Ira J. Taylor, July 27 1835--June 5 1928.
Warren E. son of Ira J. and Martha S. Taylor d. June 30 1867 ae 5 yrs 1 mo.
Mary E. Taylor d. Nov 19 1876 ae 54 yrs.
John Taylor, 4th, d. Aug 16 1846 ae 25 yrs.
John S. Taylor son of Mr. Thomas and Mrs. Elizabeth Taylor d. Oct 14 1817 ae 12.
John Taylor d. May 7 1852 ae 63 yrs.
Mary wife of John Taylor d. Aug 16 1882 ae 85 yrs.
Col. John Taylor d. Feb 18 1855 ae 67 yrs.
Betsey wife of Col. John Taylor d. Oct 3 1848 ae 59 yrs.
Clementine G. dau. of Col. John and Elizabeth Taylor d. June 11 1842 ae 17 yrs.
Richard Taylor, April 5 1828--May 11 1899.
Sarah his wife, 1822--Nov 9 1858.
Mary J. his wife, 1833--1918.
Mary Lydia dau. of Richard and Sarah Taylor, Jan 22 1855--May 15 1864.
Fred L. Taylor, July 9 1862--Feb 10 1904.
John F. Taylor, Jan 28 1853--May 25 1908.
Abbie E. Chase his wife, Feb 24 1857--Oct 12 1912.
Walter E., April 28 1883--Jan 13 1919.
Hiram Tobey, Nov 30 1838--March 25 1924.
Esther Sayward, March 25 1834--March 27 1906.
Justin B. Drake, July 8 1855----- .
Ethlyn Tobey his wife, June 4 1866----- .
William Toole, Co. K, 2nd N. H. Inf.
George A. Tourtillott, Dec 9 1854--July 11 1923.
Emma A. Ingalls wife of George A. Tourtillott, March 18 1858--July 13 1903.
John M. son of George A. and Emma A. Tourtillott, April 25 1898--Jan 25 1909.
John Towle d. Oct 11 1849 ae 55 yrs.
Amos Towle d. Feb 15 1855 ae 91 yrs.
Mary M. dau. of Amos Towle d. Sept 7 1855 ae 63 yrs.
Edward H. Treadway d. Nov 26 1883 ae 37.
Sacred to the memory of Mrs. Sarah Thurston consort of Revd Benjamin Thurston who departed this life May the 22d 1789 in the 34th year of her age.
Leverett H. Ward, Sept 11 1855----- .
Alice Birchall his wife, June 4 1866----- .
S. Content their dau., Sept 22 1891--May 3 1915.
Samuel S. Warner, Jan 28 1807--April 8 1882.
Abigail Warner, June 18 1814--March 13 1842.
Ann E. Warner, Sept 7 1818--March 11 1908.
Alveretta M. Warner, March 30 1848--Aug 29 1855.
Samuel G. H. Warner, July 18 1852--April 3 1854.
Charles A. Warner, July 12 1836--May 27 1837.
Matilda A. Warner, May 15 1840--Sept 27 1841.
Ethel G. his wife, 1845--1919.
William S. Warner d. April 14 1864 ae 78 yrs.
Mary wife of William S. Warner d. April 10 1864 ae 73 yrs.
Emily D. dau. of William S. and Elizabeth Warner d. Nov 19 1839 ae 16 yrs 10 ms.
Andrew S. Warner d. April 2 1876 ae 62 yrs.
Olivia R. wife of Andrew S. Warner d. April 24 1885 ae 66 yrs 9 ms.
Malissa Ellen dau. of Andrew S. and Olivia Warner d. July 4 1851 ae 6 ms 15 ds.
Emily Malissa dau. of Andrew S. and Olivia Warner d. Sept 28 1812 ae 1 yr 8 ms.
Mary Ann d. Aug 1822 ae 4 yrs 4 ms.
Nancy K. d. Nov 4 1822 ae 2 yrs.
Children of William S. and Elizabeth Warner.
William Whenal d. Oct 9 1893 ae 62 yrs 4 ms.
Jane Brown wife of William Whenal, June 15 1839--July 18 1922.
Louis C. son of William and Jane Whenal d. Jan 14 1896 ae 18 yrs. 8 ms 5 ds.
John Winthrop son of John and Carrie Whenal, Jan 16 1907--June 19 1909.
Josephine W. dau. of T. B. and I. J. Whenal, 1904--1906.
Enoch F. Wiggin d. May 25 1878 ae 52 yrs 1 mo.
Martha O. wife of William C. Garland and widow of Enoch F. Wiggin d. June 6 1889 ae 53 yrs.
Ella C. dau. of E. F. and M. O. Wiggin d. Oct 25 1862 ae 2 yrs 11 ms.
Ave M. dau. of E. F. and M. O. Wiggin d. June 11 1864 ae 2 yrs 3 ms.
Bertie F. son of E. F. and M. O. Wiggin d. Oct 29 1875 ae 11 yrs 4 ms.
Edwin G. son of E. F. and M. O. Wiggin d. April 13 1879 ae 26 yrs 3 ms.
John Wingate d. Sept 4 1812 ae 88.
Oliver S. son of Henry P. and Sarah A. Wingate, Aug 25 1870--Sept 8 1906.
Dea. John A. Wright d. Nov 14 1895 ae 72 yrs 1 mo 7 ds.
David P. Wright b. Feb 24 1844 d. ----- .
Willey J. Wright d. March 27 1894 ae 18 yrs 9 ms.
John Yuran d. Aug 31 1884 ae 80 yrs 9 ms.
Loranda wife of John Yuran d. June 23 1860 ae 50 yrs. | 2019-04-21T02:07:26Z | http://www.hampton.lib.nh.us/hampton/graves/northhampton/center.htm |
No pitching prospect had a more decorated 2007 than Buchholz. He ranked as the No. 1 prospect in the Double-A Eastern League, where he outpitched Roger Clemens in a May matchup. From there he went to the Futures Game and then on to Triple-A Pawtucket, making five starts before getting summoned to Boston. Buchholz went six innings to beat the Angels in his big league debut, but the best was yet to come. Called back up in September, he became the 21st rookie in modern baseball history to throw a no-hitter, dominating the Orioles in just his second start. He might have made Boston's playoff roster had he not come down with a tired arm, which led the club to shut him down as a precaution. Buchholz led all minor league starters by averaging 12.3 strikeouts per nine innings and won the organization's minor league pitcher of the year award for the second straight season. His accomplishments are all the more impressive considering that he was a backup infielder at McNeese State in 2004 and didn't become a full-time pitcher until 2005. Buchholz emerged as a prospect that spring at Angelina (Texas) JC, though some clubs backed off him because he had been arrested in April 2004 and charged with stealing laptop computers from a middle school. Red Sox general manager Theo Epstein and scouting director Jason McLeod grilled him about the incident during a Fenway Park workout and decided it was a one-time lapse in judgment. Boston drafted him 42nd overall and signed him for $800,000. Buchholz has gone 22-11, 2.39 with 378 strikeouts in 308 innings since. Buchholz has a low-90s fastball that tops out at 95 mph, and it's just his third-best pitch. His 12-to-6 curveball and his changeup both rate as 70s on the 20-80 scouting scale and are better than anyone's on Boston's big league staff. With terrific athleticism and hand speed, he uses an overhand delivery to launch curves that drop off the table. His changeup can make hitters look even sillier.
He'll also mix in a handful of sliders during a game, and that's a plus pitch for him at times. Buchholz improved his mechanics in 2007 and now operates more under control. He showed during his no-hitter that he won't be fazed by pressure. His secondary pitches are so outstanding that Buchholz doesn't use his fastball enough. He needs to throw more fastball strikes early in counts and improve his command of the pitch. Clearly gassed after throwing a career-high 149 innings last season, he needs to get stronger. Working toward that goal, he trained at the Athlete's Performance Institute in Florida during the offseason. Buchholz is Boston's best pitching prospect since Clemens and has everything he needs to become a No. 1 starter. He'll join Josh Beckett, Daisuke Matsuzaka and Jon Lester in the big league rotation in 2008, giving the Red Sox four quality starters aged 27 and younger. Buchholz is the baby of the group at 23.
Ellsbury electrified Red Sox fans by scoring from second base on a wild pitch in his third big league game in July, and there was more to come. After setting a Pawtucket record with a 25-game hitting streak, he batted .361 while subbing for an injured Manny Ramirez in September and hit .438 in the World Series. Ellsbury puts his plus-plus speed to good use on the bases and in center field. At the plate, he focuses on getting on base with an easy line-drive swing and outstanding bat control. He's a prolific and efficient basestealer, swiping 50 bases in 57 tries in 2007, including a perfect 9-for-9 in the majors. He may not be as spectacular in center field as Coco Crisp, but he's a Gold Glover waiting to happen. Ellsbury has just 10 homers in 1,017 minor league at-bats, but Boston believes he has the deceptive strength to hit 10-15 homers per season. He can launch balls in batting practice and did go deep three times in September. Like Clay Buchholz, he spent time at API during the offseason to add strength. Ellsbury's arm is below average, but he compensates by getting to balls and unloading them quickly. The Red Sox have tried to downplay the expectations and the Johnny Damon comparisons for Ellsbury since drafting him in 2005's first round, but that's impossible now. He's clearly their center fielder of the future, and the future is soon.
Anderson led California high schoolers with 15 homers in 2006, but his inexperienced agent didn't understand baseball's slotting system and scared teams off with a $1 million price tag. The Red Sox took an 18th-round flier on him and landed him in August for $825,000. He went to low Class A Greenville at age 19 for his pro debut, where he showed that he has the best bat and best power in the system. He's extremely disciplined, recognizes pitches well and lets balls travel deep before drilling them to the opposite field. He generates tremendous raw power with just an easy flick of the wrists. His glove was better than expected, as he worked hard and managers rated him the best defensive first baseman in the South Atlantic League. Boston loves Anderson's approach but wants him to get more aggressive with two strikes. He takes too many borderline pitches in those situations. His power will explode once he starts to pull more pitches. All but one of his 11 homers last year went to left or center field. Once he fills out, he'll be a below-average runner. The next step is the launching pad at high Class A Lancaster, where Anderson could put up some crazy numbers in 2008. Corner infielders Kevin Youkilis and Mike Lowell are under Red Sox control through 2010, but Anderson may be ready before then.
After beginning his high school career as a catcher, Masterson first blossomed as a prospect in the Cape Cod League in the summer of 2005. He transferred from Bethel (Ind.) to San Diego State, went in the second round of the 2006 draft and reached Double-A in his first full pro season. Using a low three-quarters arm slot, Masterson unleashes a special sinker. With its combination of low-90s velocity and heavy movement, batters feel like they're trying to hit a bowling ball. His No. 2 pitch is a slider that improved last season. He showed his toughness by not giving in when he went 2-3, 6.31 in his first nine starts at hitter-friendly Lancaster, making adjustments so he could survive the wind tunnel there. Because he throws from a lower arm angle, Masterson doesn't always stay on top of his slider. His changeup is getting better but also is inconsistent and he doesn't use it enough. He worked a career-high 154 innings and tired down the stretch, so he'll need to get stronger. The Red Sox will send Masterson to Triple-A as a starter but envision him becoming a big league reliever. He has the power sinker and the mentality to close games, though in Boston he'd be a setup man for Jonathan Papelbon.
Following a strong pro debut in 2005, when he was a supplemental first-round pick, Lowrie slumped to .262 with three homers in his high Class A encore. He hit just .170 last April and seemed destined for another down year, but he improved dramatically afterward and wound up being Boston's minor league offensive player of the year. Lowrie is a switch-hitter with a patient approach and pop from both sides of the plate. He started to make adjustments at the end of 2006 and they helped him recover from his season-opening slump last year. He made even greater strides on defense, becoming an average shortstop. Lowrie improved his fielding percentage there to .965 from .938 the year before and demonstrated enough speed and range to stay there. His hands and arm weren't in question. While Lowrie can play shortstop and his offensive production makes his glove more tolerable, a contender probably would want a better defender at the position. As with most of their best prospects, the Red Sox would like to see him get stronger. Luckily for Lowrie, his bat will play at second or third base, but there are no infield openings in Boston. That's why his name repeatedly surfaced in offseason trade talks. If he's still with the organization in 2008, he'll go to Triple-A to get regular playing time and be on call to fill any infield need that arises.
It may be apocryphal, but legend has it that Kalish didn't swing and miss at a single pitch as a high school senior. Because he was strongly committed to Virginia, he dropped to the ninth round, where the Red Sox signed him for $600,000. He was hitting .368 at short-season Lowell when an errant pitch broke the hamate bone in his right wrist in mid-July, ending his year and necessitating surgery in September. Kalish's approach and plate discipline are quite advanced for his age, which, combined with his sweet lefty swing, means he should have little trouble hitting for average. He already pulls his share of pitches and could develop into a 15-20 homer threat, perhaps more if he adds some loft to his swing. He's a plus runner with good instincts in center field. He has a strong work ethic and constant energy. Kalish is still growing and if he loses a step, he wouldn't profile as a leadoff hitter or center fielder. He'll need to improve his arm strength if he shifts to right field. Because he signed late in 2006 and got hurt last year, he has accumulated just 142 pro at-bats in parts of two seasons. Kalish began hitting again after Thanksgiving and should be 100 percent for spring training, where an assignment to low Class A awaits. He's most often compared to J.D. Drew, whom he eventually could succeed as Boston's right fielder.
While most of his fellow pitchers were shellshocked by Lancaster last year, Bowden's fine command allowed him to overcome the dreadful pitching environment. He wasn't as spectacular following a promotion to Portland in mid-May, but he acquitted himself well for a 20-year-old in Double-A. The Rangers could have taken him in the Eric Gagne trade last July, but chose Kason Gabbard instead. Bowden has uncanny feel for pitching, pounding both sides of the plate and commanding the bottom of the strike zone with his low-90s fastball. His curveball has big 12-to-6 break and he throws his changeup with deceptive arm speed. He uses a high arm slot to throw all of his pitches on a steep downhill plane. He's durable and a tough competitor. Bowden needs to get more consistent with his secondary pitches. His offerings all move down in the strike zone, so he may try to add a slider to give him something with lateral break. Scouts have quibbled with his delivery, which is long in back, short in front and reminiscent of former all-star Ken Hill's mechanics. But Bowden repeats it well and never has had any injury problems. Bowden is a workhorse with the ceiling of a No. 3 starter. He'll probably open 2008 in Double-A and move up to Triple-A by the end of the year. The Red Sox don't have any rotation openings, so they may use him as trade bait.
Hagadone had a nondescript fastball and unremarkable performance in his first two years at Washington before suddenly blossoming in 2007, becoming Boston's top draft pick (55th overall) and signing for $571,500. He allowed five runs in his first pro game, then slammed the door and threw 23 straight shutout innings afterward, allowing just eight hits. A big-bodied lefthander, Hagadone has two plus pitches in a 92-94 mph fastball and a hard slider that ranks as the best in the system. He uses a high three-quarters arm slot to stay on top of his pitches and drive them down in the strike zone. The Red Sox love his makeup and believe he can handle any role they throw at him. Hagadone's changeup isn't as good as his other two pitches, though it has potential and he showed some feel for it at Lowell and in instructional league. His mechanics aren't picture-perfect and when they get out of whack, his stuff flattens out. The short-term plan is to send Hagadone to low Class A as a starter, allowing him to have success and build up some innings. Long term, Boston isn't sure whether it wants to deploy Hagadone as a possible No. 3 starter or as a power lefty reliever. If he moves to the bullpen, he could rocket to the majors quickly.
When Tejeda signed for $525,000 out of the Dominican Republic in 2006, a rival international scouting director described him as Alfonso Soriano with better hands. That hyperbole elicited chuckles from the Red Sox, but they didn't hesitate to challenge him as a 17-year-old last season. He ranked among the Top 10 Prospects in both the Rookie-level Gulf Coast and the short-season New York-Penn leagues, and he was the latter circuit's youngest player. With a projectable frame and a fluid swing that imparts backspin, Tejeda could develop considerable power once he matures physically and as a hitter. He has quick hands and plenty of bat speed. His arm strength attracted scouts when he was 14, and he makes accurate throws as well. His speed and range are solid. A leader on the field, he made tremendous strides learning English in 2007. Tejeda is aggressive at the plate, and while he makes enough contact now, it's going to take him a while to incorporate Boston's emphasis on plate discipline. He's thin and needs to get stronger, and it's possible he'll outgrow shortstop. Like many young shortstops, Tejeda will have to become more reliable with his glove. He made 22 errors in 63 games at short last year. The Red Sox have an abundance of gifted middle infielders at the lower levels of their system. They're trying to figure out where everyone will play in 2008, but the one sure thing is that Tejeda will be the regular shortstop in low Class A.
When the Red Sox selected Reddick in the 17th round in 2006, they intended to make him a draft-and-follow. But when they watched him homer off Team USA's Ross Detwiler (who became the No. 6 overall pick in 2007), they moved to sign Reddick immediately for $140,000. Boston didn't have an opening for him at the start of last season, so he punished pitchers in extended spring training and then did the same when he got to low Class A. Reddick will consistently hit for average because he has a smooth lefty stroke, strong wrists and great feel for putting the bat on the ball. He doesn't chase pitches and drives them with little effort. He's a solid right fielder with good arm strength and pinpoint accuracy, which enabled him to lead the South Atlantic League with 19 outfield assists. He's a smart baserunner. Reddick is so aggressive at the plate and makes so much contact that he rarely walks. Boston doesn't want to tone him down too much, but he needs to learn that he's better off letting pitches on the black go by and waiting for something more hittable. He's still filling out his frame, and his speed is already fringy. Lancaster features perhaps the best hitting environment in the minors, so Reddick could have a monster year in 2008. The Red Sox have no need to rush him but may not be able to hold his bat back for long.
At the all-star break last year, Moss was hitting .303 with 31 doubles and 13 homers in Triple-A, numbers that usually would merit big league playing time in the second half. But with the Red Sox, he got just 25 at-bats. That's the dilemma facing Moss, who has nothing left to prove at Pawtucket but is blocked in Boston. Moss broke out as a prospect by winning the MVP award and batting title (.339) in the South Atlantic League in 2004, then struggled to find offensive consistency the next two years in Double-A. His swing got long when he tried to take advantage of the short right-field porch in Portland, but he made adjustments in the second half of 2006, which he capped by winning Eastern League playoff MVP honors. In 2007, Moss demonstrated more opposite-field power than ever before and led his league in doubles for the second straight season. He has strong hands, a quick bat, leverage in his swing and a greater understanding that he should just let his power come naturally. He imparts nice backspin on his drives, and though he'll swing and miss, he does a good job of covering both sides of the plate. Though Moss isn't as streaky as he used to be, he still can get inconsistent with his approach and gives too many at-bats away. He projects as a .270 hitter with 20 homers a year. Despite slightly below-average speed, he's a solid right fielder with a good arm. Moss will be a reserve outfielder this season for the Red Sox, unless they use him as trade bait.
As soon as Diaz made his U.S. debut in 2006, Boston realized he's the best defensive shortstop its system has seen in years. His actions, instincts and first step are so good that he has above-average range to both sides despite owning slightly below-average speed. His hands are reliable, his exchange is quick and his arm is strong. He can wow scouts just by making routine plays. The Red Sox compare his defensive skills to those of Alex Gonzalez, who played a slick shortstop for them in 2006. Diaz isn't as strong physically as Gonzalez, but he made some encouraging progress with the bat in 2007. He hit a career-high .279 during the regular season, then challenged for the Hawaii Winter Baseball batting title before slumping late and finishing at .358. It's still unclear what Diaz will bring to the table offensively. He doesn't offer much power and speed, so he needs to focus on making contact and getting on base. His approach is fairly sound for his age, though he can get too aggressive at times. He took such a huge cut at a pitch in late April that he hurt his shoulder and missed most of May. He speaks English very well, which makes it easier for him to receive instruction. Boston believes Diaz will develop into a Gold Glover who hits for a high average. Though he hasn't progressed past low Class A, he was eligible for the Rule 5 draft this offseason, so the Red Sox didn't hesitate to add him to their 40-man roster. He'll advance to high Class A this year.
Johnson got a huge wakeup call in his first full pro season with an assignment to the wind tunnel that is Lancaster. Like most of the JetHawks pitchers, he struggled to adapt, going 2-3, 8.76 in his first nine starts. Then he realized that he had to challenge hitters because nibbling and falling behind in the count had been disastrous. He turned his season around, going 7-4, 4.32 the rest of the way. Johnson wouldn't have lasted 40 picks in the 2006 draft if he hadn't been at less than full strength after having Tommy John surgery the year before. He's a tall lefthander who uses his size to throw lively low-90s fastballs down in the strike zone. Though he's still skinny, he generates his velocity with an easy delivery and has no trouble throwing 91-92 mph in the seventh inning. His changeup is a solid second pitch, but he has yet to regain the plus curveball with power and depth that he showed before getting hurt at Wichita State. A breaking ball and command are often the last two things to return after Tommy John surgery, so the Red Sox are hoping his curve will improve in 2008, when the operation will be three years behind him. Command isn't an issue, as Johnson can pitch to both sides of the plate and most of his walks last year came when he was afraid to go after hitters. He also did a better job of maintaining his delivery last year than he did in 2006. Though he made a nice comeback at Lancaster, Boston wants Johnson to show mental toughness from the outset in 2008. He could be ready for a breakout year in Double-A.
Middlebrooks received $925,000, the highest bonus of any 2007 Red Sox draftee, despite lasting until the fifth round. He only dropped that far because he priced himself above his consensus draft slot, but Boston was thrilled to grab him with its last pick on the first day of the draft. He's a tremendous athlete who drew college football interest as a quarterback and punter, and he might have had an NFL future at the latter position. Nagged by shoulder tendinitis, he didn't play in a minor league game after signing at the Aug. 15 deadline and didn't take balls at shortstop until the final week of instructional league. His bat will need some polish, but he has the size and leverage to hit for power. Most clubs projected Middlebrooks as a third baseman because of his size, but the Red Sox will give him every opportunity to remain at shortstop despite their burgeoning depth at the position. He has average speed and range, plus the actions and body control to pull it off. His arm isn't a question, as he was a legitimate prospect as a pitcher with a low-90s fastball and an occasional plus curveball. He'll still have plenty of value if he does wind up at the hot corner. For scouts, the high-end comparisons are Cal Ripken Jr. if he sticks at shortstop and Scott Rolen if he moves to third. Oscar Tejeda, Ryan Dent and Yamaico Navarro all are ready for Class A and need time at shortstop as well, so it's unclear what Middlebrooks' assignment will be for 2008. He may begin the year in extended spring and then play shortstop for Lowell in June.
The Red Sox slotted Nick Hagadone at No. 18 and Dent at No. 19 on their draft board, but didn't have a pick in the first round after giving theirs up as compensation for free agent Julio Lugo, so they weren't sure they'd get either player. They got both, however, taking Hagadone 55th overall and Dent 62nd. Dent had starred on the showcase circuit the previous year, helping the Reds' scout team win the World Wood Bat Association championship. He signed two days before the Aug. 15 signing deadline for $571,000, which was slightly over slot and $500 less than Hagadone's bonus. Dent was one of the best athletes in the draft. He can go from the right side of the plate to first base in 4.1 seconds, and he's also strong enough to drive balls into the gaps. He could develop 15-20 homer power in time. He has a quick stroke and sound hitting mechanics, and he should hit for average if he tones down his aggressiveness. Despite his speed and athleticism, he's not smooth at shortstop. His actions, range and arm are all just average, unlike the rest of his physical tools. The Red Sox see him as a shortstop and would like to play him there, but they also have more shortstops than they know what to do with. With Oscar Tejeda ticketed for low Class A, Dent will get the bulk of his playing time at second base if he's assigned there as well. Center field is another option for him, though Boston definitely will keep him in the infield for now.
To sign Almanzar out of the Dominican Republic last summer, Boston gave him a $1.5 million bonus, a club record for a Latin American player. He's the son of former big leaguer Carlos Almanzar, who pitched in the Red Sox system in 2007. Michael is lanky and athletic, with the bat speed and leverage to hit for a ton of power once he matures physically. He's just 6-foot-5 and 180 pounds now, so there's room for him to add a significant amount of strength--and he already had legitimate gap power as a 16-year-old. He has a good load and trigger in his swing, and though there's some bat wrap in the back of his stroke, it doesn't hamper him. Almanzar played shortstop in the Dominican but will play third base in pro ball. He runs well, but at his size he'd almost certainly outgrow shortstop. He's more fluid and has better actions at the hot corner. He has the arm to make the longer throws, as it's plenty strong. He needs to remember to keep his elbow up so his tosses will be more accurate. Almanzar will need plenty of time to develop because he's so young and skinny. Given his background in baseball and his makeup, Boston believes he can handle an assignment to the Gulf Coast League this year at age 17.
No one with the Red Sox is quite comparing Rizzo to Lars Anderson yet, but for the second year in a row, they're excited about a high school first baseman out of the most recent draft. Rizzo had performed well with wood bats on the showcase circuit, yet his $325,000 price tag caused him to slide in the 2007 draft. Boston anted up on the Aug. 15 signing deadline and got more than it bargained for. They knew he had raw strength and usable power, but they didn't realize he had such an advanced approach. Rizzo surprised them even more with his better-than-expected agility at first base. Though he's easily a below-average runner, he has soft hands and moves well around the bag. He also pitched in high school, so his arm is an asset at first base. In the Red Sox' minds, getting Rizzo more than makes up for not signing Alabama high school first baseman Hunter Morris, their second-round pick. They like Rizzo's maturity, too, and think he'll be able to handle low Class A in 2008.
In the final two months last season, the Red Sox finally started to see glimpses of the pitcher they thought they were getting when they spent the 26th overall pick and a $4.4 million big league contract on Hansen in 2005. After making some adjustments to his mechanics and mental approach, Hansen had a 1.23 ERA and a 25-9 K-BB in his final 22 innings, and he again started flashing the slider that made him so dominant in college. It's still inconsistent, but Boston hadn't seen that killer slider since he turned pro. He also worked with a 93-96 mph fastball that had good life down in the strike zone. Before Jonathan Papelbon emerged as the Red Sox' closer in 2006, there was talk that Hansen might assume that role in his first full pro season. The pressure got to Hansen, who kept tinkering with his mechanics while trying to find the slider that had deserted him. He started throwing with more effort and a lower arm slot, and it hurt his fastball command. Now he's back up to a true three-quarters angle and looking more like his old self. Hansen did hit a couple of speed bumps after his resurgence, missing three weeks in August after he banged his forearm when he slipped and fell against a nightstand. He also left the Arizona Fall League early to have surgery to correct his sleep apnea. As soon as Hansen gets more consistent with his slider, he'll be pitching in Boston again.
No pitcher struggled with the hitting environment at Lancaster last year more than Bard did. After surrendering four runs in 2 2/3 innings in his first start, he completely lost his confidence and stopped challenging hitters. He gave up 21 walks and 19 runs over 10 2/3 innings in his next four starts, then went on the disabled list for what was described as a triceps injury but may have been a mental health break as much as anything. He spent some time in extended spring training before being shipped to low Class A. Bard wasn't much better at the lower level, as he continued to fall out of whack with his mechanics, lose his release point and miss the strike zone. He did a better job of repeating his delivery in Hawaii Winter Baseball, but still has a considerable way to go to find consistent command. The Red Sox will remain patient because Bard has an electric arm even if he can't harness it. He throws 96-98 mph without breaking a sweat, breaking bats with his combination of velocity and heavy life. He never has had a reliable breaking ball. He's now working with a slurvy pitch that's more curve than slider, and while it's a plus offering at times, he doesn't locate it very well. His changeup is less dependable than his breaking ball. Bard posted a 1.08 ERA in Hawaii, though he still walked 15 batters and hit five in 17 innings. Though they drafted him as a starter--and gave him a $1.55 million bonus--the Red Sox are starting to think they should just put him in the bullpen. He seems to challenge hitters more and just let his pitches go in that role, and he has a history of success in shorter stints in venues such as Team USA, the Cape Cod League and his relief role in Hawaii.
The only unsettled long-term position on the Red Sox is catcher, where there's no clear heir apparent to Jason Varitek. Wagner is the leading candidate to fill that role, as he has the most well-rounded game among a group of catching prospects that also includes Dusty Brown, Jon Egan, George Kottaras, Jon Still and Tyler Weeden. A ninth-round pick in 2005, Wagner has improved in each of his seasons in the system. He initially stood out with his work behind the plate. Wagner may not have a plus defensive tool, but he's solid across the board. With average arm strength and a hair-trigger release, he threw out 35 percent of basestealers in 2007. He also gets the job done as a receiver and game-caller. When Wagner first signed, his stance was too spread out and he had a defensive swing. He now stands more upright and has become more aggressive without sacrificing any of his tremendous plate discipline. Lancaster did help his power numbers (which included a career-high 14 homers) but he did hit .281 with 36 doubles the previous season. He's not going to be an offensive force, but Wagner will hit at least enough to be a big league backup. He's a typical catcher in that he doesn't run well. His blue-collar makeup helps Wagner get the most out of his abilities. He'll move up to Double-A for 2008.
On May 19, Bates became the first player in the 64 seasons of the high Class A California League to hit four homers in a game. Teammate Brad Correll matched him five weeks later, another indication of how ridiculous Lancaster can get. For someone like Bates, who has legitimate hitting ability and power, it's an opportunity to put up crazy numbers, and he did just that, leading the league with a .456 on-base percentage and posting a 1.048 OPS. Bates has a patient approach, waiting for a pitch he can pound and using the whole field. His biggest issue at the plate is that he takes an exceedingly long stride and doesn't always get his front foot down in time, messing up his stroke. It's a compact swing at times and long at others. He has good pitch recognition and hammers fastballs and offspeed pitches alike. Bates' value rests totally with his bat. He's a below-average runner and athlete who's working hard to become an acceptable first baseman. He needs to adjust his stride after Double-A pitchers ate him up at the end of 2007, and he'll return to Portland to begin the season.
Richardson competed on ESPN's reality show "Knight School," where Texas Tech students tried to make coach Bob Knight's basketball team as a walk-on. Richardson would have won the competition if he had been able to join the team, but that would have conflicted with his baseball responsibilities. Though he was a lefthander with an 89-92 mph fastball, he lasted until the fifth round in 2006 because he was a one-pitch pitcher and not the easiest guy to see because he pitched his home games in Lubbock. Richardson gets terrific extension out front of his delivery, and his low three-quarters arm slot adds deception. He has continued to get swings and misses with his heater in pro ball. His fastball plays above its velocity more than any other starter's in the system, with the exception of Justin Masterson. Richardson even tamed Lancaster in four starts at the end of the season--including three at home, with five no-hit innings in his last outing. Because he relied almost exclusively on his fastball in college, his secondary pitches are still works in progress. His curveball has loopy break and not much power, while his changeup is inconsistent. He may try to replace the changeup with a splitter in the future. The Red Sox envision him as a starter and will continue that development path, though it's easy to see him becoming a reliever and working mostly off his fastball. He could open the year in Double-A.
The Red Sox knew Place's hitting mechanics would need an overhaul, but they couldn't resist his athleticism and chose him with the 27th overall pick in the 2006 draft. He signed for $1.3 million. Place struggled in his first full season, which wasn't unexpected. Place came into pro ball with a funny load to his swing, with his hands starting in the middle of his body and circling back. Boston has fixed his load and spread out his stance, giving him a shorter stride and better balance in his lower half. He further smoothed out his swing in Hawaii Winter Baseball, albeit with similar results. Now it's up to him to make consistent contact so he can take advantage of his above-average raw power and speed. Place has lots of bat speed and power to all fields, though he'll get pull-conscious at times. Besides his speed, he also has the instincts and route-running ability to play center field. He has a strong, accurate arm that would easily play in right field if needed. Lancaster would boost Place's numbers and confidence, but he may repeat low Class A this year.
While Daisuke Matsuzaka and Hideki Okajima were helping to pitch Boston to a World Series championship, the club made another Asian investment on a smaller scale. In June, the Red Sox signed Lin out of Taiwan for $400,000. He held his own as an 18-year-old in his U.S. debut while showing off some exciting tools. Chief among them are his plus-plus arm, his instincts and above-average play in center field and his plus speed. Lin has some ability at the plate, too. He holds his hands high and employs a big leg kick, drilling line drives to the gap. He has some strength and projectable power, and he does a nice job of imparting backspin on the ball. Lin generally uses a whole-field approach, though he sometimes gets pull-conscious. He'll need to adjust to breaking balls down and away, which led to many of his strikeouts in his debut. Boston could challenge him by sending him to low Class A in 2008.
Of all the hitters who thrived at Lancaster last year, none could top Bell. He was named MVP, rookie of the year and all-star game MVP in the California League, which he led in batting (.370), slugging (.665), homers per at-bat (one every 14.6) and plate appearances per strikeout (9.7). He likely would have paced the Cal League in several counting stats had he not been promoted to Double-A in early July, as he was tops in all three triple-crown categories at the time. Bell clearly benefited from playing his home games at Clear Channel Stadium, but the Red Sox say he's not a fluke. He has a short lefthanded swing, tremendous plate discipline and a willingness to use the entire field, so there's no reason he can't keep hitting for average. His power was inflated by Lancaster, but he has enough juice to hit 10-15 homers per year under normal conditions. Bell has solid speed and plays a better center field than Boston thought, and he has an average arm. He may be more fourth outfielder than regular, but his ability can take him to the majors. He was slowed by back and quadriceps injuries once he got to Double-A, and he'll head back there to start 2008.
With Wily Mo Pena rotting on their bench, the Red Sox shipped him to the Nationals in a three-way deal that netted Carter from the Diamondbacks, who got righthander Emiliano Fruto from Washington. Carter is similar in many ways to Pena, as he's a defensively challenged slugger who may find at-bats hard to come by in Boston. Carter was a top recruit when he arrived at Stanford, but he left as a 17th-round pick in 2004 following a disappointing, injury-plagued career. He has been anything but disappointing in pro ball, reaching Triple-A in his second full season and putting up a career .906 OPS. With tremendous bat speed, Carter can knock a ball out of any part of any park. He also has the discipline to wait until pitchers challenge him before turning his bat loose. The rest of his game is substandard. He's a well below-average runner who hasn't thrown well since having surgery to repair a torn labrum in college. The Red Sox played Carter solely at first base after the trade, though he did see extensive time in the outfield during his time in the Arizona system. He's bad in both spots and really best suited to become a DH. Boston protected him on its 40-man roster but has no way to get him at-bats. The best Carter can hope for is to serve as a lefty bat off the bench, and he may be looking at a third straight year in Triple-A.
Jones pitched with a misdiagnosed broken arm as a sophomore at Florida State, eventually developing a fracture through his ulnar bone in his elbow. He had a pin inserted into the elbow to prevent further damage, then decided not to redshirt and went just 4-3, 5.05 as a junior in 2005. Given his performance and medical history, he went undrafted. Jones pitched well that summer in the Cape Cod League, however, and the Red Sox signed him as a nondrafted free agent. Jones' stuff won't wow anyone, but he gets outs with his 87-88 mph fastball. His heater peaks at 91 but still gets a lot of swings and misses because he's deceptive, throws downhill and can spot it on both sides of the plate. He also uses a slow curveball with deep break and a changeup but mostly works with his fastball. Jones is an especially versatile reliever because he gets righthanders out and can pitch up to three innings in an outing. He'll start 2008 in Triple-A and should make his major league debut at some point during the summer.
Engel's Baylor commitment led most teams to believe he was unsignable in 2005, but the Red Sox took him in the fifth round and signed him for slot money ($154,000) plus money to cover his tuition if he does attend Baylor ($96,000). He hit just .243 in his first two pro seasons and didn't reach full-season ball until his third, but he had a nice little breakout in 2007. He made adjustments to his approach and to his swing, showing more patience while putting his hands more out in front of his body, giving him a better trigger. He has decent power, though his best tool is plus-plus speed that he's still learning to use on the basepaths. He has enough range to play center field and enough arm to play in right, but he was a left fielder at Greenville because Jason Place and Josh Reddick were also there. Engel endured a scary incident at the end of May, when he fouled a ball off the plate and it bounced up and hit him in the jaw. But after spending 10 days on the disabled list, he returned and kept on hitting. He'll enjoy playing in hospitable Lancaster this year.
Yet another shortstop prospect who emerged for the Red Sox in 2007, Navarro slid over to third base once Oscar Tejeda was promoted to Lowell. Navarro is more offensive-minded than most of the other young shortstops. He squares up fastballs well and already shows some opposite-field power. Navarro takes violent cuts and chases pitches, and Boston would like him to use his two-strike approach (more selectivity, shorter swing) throughout his at-bats. He has good speed but doesn't always run hard, and he needs to mature and show more professionalism. Navarro isn't as fluid as some of the other shortstops, but he has the range and arm to make most of the plays. He needs work on balls in the hole. Navarro is slated to play with Tejeda again this year in low Class A, and Tejeda once again will man shortstop.
With shoulder problems putting Jonathan Papelbon's career as a closer in jeopardy, the Red Sox were searching for a new closer in spring training last year. Given his spectacular success down the stretch with Rice and in his pro debut the year before, there was talk that Cox might even take over the role at some point during his first full season. That didn't happen, of course, and not just because Papelbon proved healthy enough to keep the job. After finally finding a compact delivery and a three-quarters arm slot that not only worked for him but also produced spectacular results, Cox lost them again in 2007. He missed time in Double-A and again after a demotion to low Class A with hamstring strains, and he never got his mechanics back. The Red Sox tried everything, even hitting him groundballs at third base like the Rice coaching staff had. By the end of the year, Cox had regained the 92-94 mph velocity on his fastball, but it didn't have its previous ride and sink. His wipeout slider also remained AWOL. The logical next step for him in 2008 would be high Class A, but Boston doesn't want to expose him to Lancaster while he's struggling. He's an enigma with huge upside, but the fact remains that in five years of college and pro ball, he has dominated for just four months.
Source: https://www.baseballamerica.com/teams/1003/boston-red-sox/organizational/?year=2008 (retrieved 2019-04-26)
microRNAs (miRNAs) are a class of small RNAs (19-25 nucleotides in length) processed from double-stranded hairpin precursors. They negatively regulate gene expression in animals by binding, with imperfect base pairing, to target sites in messenger RNAs (usually in 3' untranslated regions), thereby either reducing translational efficiency or triggering transcript degradation. Considering that each miRNA can regulate, on average, the expression of several hundred target genes, the miRNA apparatus participates in controlling the expression of a large portion of the mammalian transcriptome and proteome. As a consequence, miRNAs are expected to regulate various developmental and physiological processes, such as the development and function of many tissues and organs. Given the strong impact of miRNAs on biological processes, mutations affecting miRNA function are expected to play a pathogenic role in human genetic diseases, just as mutations in protein-coding genes do. In this review, we provide an overview of the evidence available to date that supports the pathogenic role of miRNAs in human genetic diseases. We will first describe the main types of mutation mechanisms affecting miRNA function that can result in human genetic disorders, namely: (1) mutations affecting miRNA sequences; (2) mutations in the recognition sites for miRNAs harboured in target mRNAs; and (3) mutations in genes that participate in the general processes of miRNA processing and function. Finally, we will also describe the results of recent studies, mostly based on animal models, indicating the phenotypic consequences of miRNA alterations on the function of several tissues and organs. These studies suggest that the spectrum of genetic diseases possibly caused by mutations in miRNAs is wide and is only starting to be unravelled.
microRNAs (miRNAs) are a class of single-stranded RNAs (ssRNAs), 19-25 nucleotides (nt) in length, generated from hairpin-shaped transcripts. They control the expression levels of their target genes through an imperfect pairing with target messenger RNAs (mRNAs), mostly in their 3' untranslated regions (3' UTRs). The biogenesis of miRNAs involves a complex protein system that includes members of the Argonaute family, Pol II-dependent transcription and the two RNase III proteins, Drosha and Dicer. miRNAs are first transcribed in the nucleus as long transcripts, known as primary miRNA transcripts (pri-miRNAs), which can sometimes contain multiple miRNAs [3, 4]. Few pri-miRNA transcripts have been studied in detail, but increasing evidence suggests that miRNAs are regulated and transcribed like protein-coding genes.
In brief, within the nucleus, Drosha first forms a microprocessor complex with the double-stranded RNA-binding protein DGCR8. It then processes the pri-miRNAs into a smaller, stem-loop miRNA precursor of ~70 nucleotides (pre-miRNA). pre-miRNAs are exported, in turn, across the nuclear membrane and into the cytoplasm by the Exportin-5 complex [8–10]. These pre-miRNAs are further cleaved by Dicer, producing a 19- to 25-nucleotide RNA duplex. These duplexes are then incorporated into a ribonucleoprotein complex (RNP) called the RISC-like complex [11, 12], referred to as the miRNA-induced silencing complex (miRISC). Only one strand of the miRNA duplex, known as the mature miRNA, is incorporated into the miRISC complex, while the other strand, the miRNA-star (miRNA*), is usually degraded, although miRNAs* have recently been found to play a role similar to that of their cognate miRNAs. Within the miRISC complex, miRNAs bind to their mRNA targets and regulate gene expression at the translational level [13, 14], at the transcript level [15–17], or both. A crucial role in the recognition of the target mRNA by the miRNA is played by the so-called seed region, composed of six to seven nt, which shows perfect complementarity between a miRNA and its target. miRNAs can be located in intergenic (40%) or intragenic (60%) regions. Intragenic miRNAs are located within other transcriptional units, which are termed host genes. The vast majority of intragenic miRNAs lie within the introns of their host genes, and only a minority (10%) lie within exonic regions, usually in non-protein-coding host genes. Interestingly, it has been demonstrated that many intronic miRNAs and their host genes are co-regulated and co-transcribed from a common promoter [20–22].
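The processing cascade just described can be caricatured as a few string-slicing steps. The sketch below is purely illustrative: the sequence, cut positions and lengths are hypothetical placeholders, not real Drosha or Dicer cleavage coordinates.

```python
# Toy model of miRNA biogenesis as successive sequence-processing steps.
# All sequences, cut positions and lengths below are hypothetical,
# chosen only to illustrate the pri -> pre -> duplex -> mature flow.

def crop_pri_to_pre(pri, hairpin_start, hairpin_len=70):
    """Drosha/DGCR8 (microprocessor) step: excise the ~70-nt stem-loop."""
    return pri[hairpin_start:hairpin_start + hairpin_len]

def dice_pre_to_duplex(pre, arm_len=22):
    """Dicer step: cut the hairpin into a ~22-nt duplex.
    One arm comes from the 5' side of the stem, the other from the 3' side."""
    return pre[:arm_len], pre[-arm_len:]

def select_guide(arm_5p, arm_3p, guide_is_5p=True):
    """RISC loading: one strand becomes the mature miRNA (guide);
    the other, the miRNA* (passenger), is usually degraded."""
    return (arm_5p, arm_3p) if guide_is_5p else (arm_3p, arm_5p)

pri = "ACGU" * 60                        # hypothetical 240-nt pri-miRNA
pre = crop_pri_to_pre(pri, 80)           # ~70-nt pre-miRNA hairpin
arm_5p, arm_3p = dice_pre_to_duplex(pre)
mature, star = select_guide(arm_5p, arm_3p)
print(len(pre), len(mature))             # 70 22
```

The point of the toy is only the data flow: each enzymatic step consumes the previous product and yields a shorter RNA, with strand selection as the final branch.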
miRNAs are implicated in a wide range of basic biological processes, including development, differentiation, apoptosis and proliferation [23, 24]. Since the discovery of the strong impact of miRNAs on biological processes, it has been hypothesized that mutations affecting miRNA function may have a pathogenic role in human diseases. A large body of evidence has already shown that aberrant miRNA expression is implicated in most forms of human cancer [25–27], but fewer studies have established a clear link between miRNAs and human genetic disorders. Initially, there were two main (and contrasting) arguments against the hypothesis of miRNAs as genes responsible for human genetic diseases: (1) each miRNA plays such a basic role in the regulation of gene expression, and consequently in the regulation of basic cellular processes, that a significant alteration of its function is not compatible with cell survival and ultimately with life; and (2) considering the great deal of redundancy in miRNA actions, a significant alteration of the function of a single miRNA may only give rise to subtle modifications in the cellular transcriptome and proteome, unable to significantly perturb biological processes and ultimately lead to a disease phenotype.
Our aim in this review is to provide an overview of the evidence available to date that supports the pathogenic role of miRNAs in human genetic diseases, with a particular focus on monogenic disorders. To achieve this goal, we will first describe the types of mutations affecting miRNA function that can result in human monogenic disorders, giving some recently described examples. In the second part of this review, we will give a broader picture of the hypothesized involvement of miRNAs in the pathogenesis of human monogenic diseases, based on results obtained in vivo from the analysis of several animal models characterized either by the global perturbation of miRNA pathways or by the perturbation (inactivation or overexpression) of single miRNAs.
Given the mechanisms of action of miRNAs (see above), three main types of mutation mechanisms affecting miRNA function can be envisaged (Figure 1): (1) mutations affecting primarily miRNAs, either point mutations in the mature sequence or larger mutations (that is, deletions or duplications of the entire miRNA locus); (2) mutations in the 3' UTR of mRNAs that can lead to the removal or to the de novo generation of a target recognition site for a specific miRNA; and (3) mutations in genes which participate in the general processes of miRNA processing and function and, therefore, are predicted to impact on global miRNA function.
Schematic diagram summarizing the main types of miRNA mutations with a potential aetiopathogenic role in monogenic disorders (see text for further details).
Like protein-coding loci, miRNA loci can also be subject to large mutations, such as deletions or duplications. To date, there are no examples of such mutations clearly associated with human mendelian diseases. However, a careful analysis of the genomic organization of miRNAs reveals that a number of intragenic miRNAs are localized within host genes (see above) whose mutations are responsible for human genetic disorders. By analysing the mutation spectrum previously described for the latter disease genes in the Human Gene Mutation Database (HGMD), we found evidence that some mutations do, indeed, significantly affect one or more miRNAs (Table 1). This is the case, for instance, in certain intragenic deletions responsible for Duchenne muscular dystrophy, choroideraemia and Dent disease, among others, which are also predicted to encompass some miRNAs. It is of crucial importance to confirm these predictions and to determine whether or not the deletion of these miRNAs plays a role in the phenotype observed. Furthermore, several miRNA loci are also either deleted or duplicated (Table 1) in some well-known human aneuploidy syndromes, and there is initial evidence of their contribution to the pathogenic mechanisms of the complex manifestations of these disorders.
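The screen described above, checking whether a reported intragenic deletion also removes a miRNA hosted in the same gene, reduces to a simple interval-overlap test. A minimal sketch follows; the coordinates and miRNA names are hypothetical placeholders, not real HGMD or genome-browser data.

```python
# Check which miRNA loci fall inside a reported deletion.
# All coordinates and miRNA names below are hypothetical placeholders.

def mirnas_lost_by_deletion(deletion, mirna_loci):
    """Return miRNAs whose locus overlaps the deleted interval.

    deletion: (chrom, start, end)
    mirna_loci: dict mapping name -> (chrom, start, end)
    """
    d_chrom, d_start, d_end = deletion
    lost = []
    for name, (chrom, start, end) in mirna_loci.items():
        # Half-open intervals: overlap iff start < d_end and d_start < end
        if chrom == d_chrom and start < d_end and d_start < end:
            lost.append(name)
    return sorted(lost)

# Hypothetical example: a deletion spanning part of a host gene
loci = {
    "miR-A": ("chrX", 31_200_000, 31_200_090),   # inside the deletion
    "miR-B": ("chrX", 31_900_000, 31_900_085),   # outside it
    "miR-C": ("chr2", 31_200_000, 31_200_090),   # different chromosome
}
deletion = ("chrX", 31_100_000, 31_500_000)
print(mirnas_lost_by_deletion(deletion, loci))   # ['miR-A']
```

Using half-open intervals keeps the overlap condition to a single comparison pair and avoids off-by-one errors at deletion breakpoints.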
Duan and colleagues, in 2007, described a single nucleotide polymorphism (SNP) within the seed region of miR-125a. Through a series of in vitro analyses, the authors demonstrated that this SNP in miR-125a, in addition to reducing miRNA-mediated translational suppression, significantly altered the processing from pri-miRNA to pre-miRNA. Although this SNP has not been associated with a disease status, these data suggest, for the first time, that SNPs that reside within miRNA genes may, indeed, impair miRNA biogenesis and alter target selection and, therefore, have a potentially profound biological effect.
The first example of point mutations in the mature sequence of a miRNA with an aetiopathogenic role in a human mendelian disease was recently reported by Mencía et al. They identified two different nucleotide substitutions in the seed region of human miR-96 in two Spanish families affected by an autosomal dominant form of deafness, DFNA50. Both mutations, miR-96 (+13G>A) and (+14C>A), were absent in several unrelated normal-hearing Spanish controls and segregated with the hearing impairment in both families. miR-96, together with miR-182 and miR-183, is transcribed as a single polycistronic transcript and is reported to be expressed in the inner ear. For this reason, the authors also carried out a mutation screening of miR-182 and miR-183 in the same cohort of patients tested for miR-96. They did not find any potential mutation, although this does not exclude the possibility that the latter two miRNAs may be involved in the pathogenesis of other forms of deafness. The fact that both of the above families manifested the hearing loss postlingually indicated that probably neither of the two miR-96 mutations resulted in impaired development of the inner ear. Instead, they could have had an impact on the regulatory role that miR-96 plays in the hair cells of the adult cochlea, which maintain the gene expression profiles required for normal function. In vitro experiments showed that both mutations impaired, but did not abrogate, the processing of miR-96 to its mature form, although an additional indirect effect of the miR-96 mutations on the expression of miR-182 and miR-183 cannot be excluded. Furthermore, a luciferase reporter assay confirmed that both mutations were able to affect the targeting of a subset of selected miR-96 target genes, mostly expressed in the inner ear.
In contrast, no significant gain of function was associated with these two mutations, at least for the potential newly acquired miR-96 targets investigated. In addition, after an ophthalmologic examination, no ocular phenotype was observed in individuals carrying mutations in miR-96 (age range 2 to 66 years), suggesting either that its specific targets in the retina, a site in which miR-96 is also strongly expressed, are not critical for its function or that the translation of these targets was not markedly affected.
The finding of a single base change (A>T) in the seed region of miR-96 in a mouse mutant (diminuendo) with a progressive hearing loss phenotype provided additional support for the conclusion that a single base change in miR-96 is the causative mutation behind the hearing loss phenotype in both man and mouse. In particular, the diminuendo mutant showed progressive hearing impairment in heterozygotes and profound deafness in homozygotes, associated with hair cell defects. Lewis and colleagues suggested that the degeneration observed in homozygotes could be a consequence of a prior dysfunction of the hair cells. Bioinformatic analysis indicated that the mutation has a direct effect on the expression of many genes, including transcription factor genes, that are directly required for hair cell development and survival. The large number of genes whose expression is affected by miR-96 suggests that the mechanism explaining the effects of the mutation may not be simple but, rather, may be the result of a combination of different small effects that act in concert to cause hair cell dysfunction.
In animal cells, most miRNAs form imperfect hybrids with sequences in the 3'-UTR, with the miRNA 5'-proximal 'seed' region (positions 2-8) providing most of the pairing specificity [33, 34]. However, evidence is also accumulating that miRNAs may target mRNA coding regions. Generally, miRNAs inhibit protein synthesis either by repressing translation or by bringing about deadenylation and degradation of mRNA targets. Since more than 700 miRNAs have been identified in the human and mouse genomes, and considering that each miRNA can regulate, on average, the expression of 100-200 target genes [38, 39], the whole miRNA apparatus seems to participate in controlling the expression of a significant proportion of the mammalian gene complement.
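In computational terms, the canonical seed rule described above amounts to searching a 3'-UTR for the reverse complement of miRNA positions 2-8. A minimal sketch follows; the miRNA sequence is modeled on miR-1, but the UTR is an invented string used only to demonstrate the scan.

```python
# Find canonical 7-mer seed matches (miRNA positions 2-8) in a 3' UTR.
# The UTR sequence is invented; the miRNA is modeled on miR-1.

RC = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):
    """Reverse complement of the seed (positions 2-8, 1-based)."""
    seed = mirna[1:8]
    return "".join(RC[nt] for nt in reversed(seed))

def find_seed_matches(mirna, utr):
    """Return 0-based start positions of seed-site matches in the UTR."""
    site = seed_site(mirna)
    hits, i = [], utr.find(site)
    while i != -1:
        hits.append(i)
        i = utr.find(site, i + 1)
    return hits

mirna = "UGGAAUGUAAAGAAGUAUGUAU"               # miR-1-like sequence
utr = "AAAACAUUCCAAAAGGAAUGUAAACAUUCCUUU"      # invented 3' UTR
print(seed_site(mirna))                        # ACAUUCC
print(find_seed_matches(mirna, utr))           # [3, 23]
```

Real target-prediction tools layer much more on top of this (site context, conservation, 3'-supplementary pairing), but the 7-mer seed scan is the core filter they all share.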
It is conceivable that some sequence variations falling within the 3'-UTR of an mRNA may alter miRNA recognition sites, either by disrupting functional miRNA target sites or by creating aberrant ones. Both types of sequence variation may have deleterious effects, either when the affected miRNA-mRNA pair is endowed with a biologically relevant (and non-redundant) role or when an illegitimate miRNA target site is created in an mRNA that is under selective pressure to avoid target sites for that particular miRNA (that is, in the case of the so-called anti-targets).
One of the first animal disorders with Mendelian transmission reported to be caused by dysregulation of a specific miRNA-mRNA target pair was the Texel sheep model. The Texel sheep phenotype is characterized by an inherited muscular hypertrophy that is more pronounced in the hindquarters. Clop et al. demonstrated that the myostatin (GDF8) gene of Texel sheep carries a G to A transition in the 3'-UTR that creates a target site for miR-1 and miR-206, which are highly expressed in skeletal muscle. This sequence change leads to translational inhibition of the myostatin gene and, hence, is responsible for the muscular hypertrophy of Texel sheep.
There are now several examples of sequence variations in the 3'-UTR of mRNAs that alter miRNA recognition sites and have been suggested to play a pathogenic role in human genetic diseases. The first was reported by Abelson et al., who identified two independent occurrences of the identical sequence variant in the binding site for the miRNA hsa-miR-189 (now termed miR-24*) in the 3'-UTR of the SLITRK1 mRNA in familial cases of Tourette's syndrome, a developmental neuropsychiatric disorder characterized by chronic vocal and motor tics. This 3'-UTR variant was proposed to increase the repression of SLITRK1 by hsa-miR-189 (miR-24*). It must be underlined, however, that the involvement of SLITRK1 in Tourette's syndrome has subsequently been questioned by other reports [43–47]. The second example comprises two different point mutations in the 3'-UTR of the REEP1 gene, which have been associated with an autosomal dominant form of hereditary spastic paraplegia (SPG31) [48, 49]. These mutations, which alter the sequence of a predicted target site for miR-140, were found to segregate with the disease phenotype and were not detected in a large set of human controls. These data strongly suggest a pathogenic role for impaired miR-140-REEP1 binding in some SPG31 families, although so far no functional data have been provided to consolidate this hypothesis.
Georges and colleagues addressed, in a more systematic way, the potential implications of sequence variations in the 3'-UTR of mRNAs for the pathogenesis of human diseases. They demonstrated, through SNP analysis, that sequence variations creating or destroying putative miRNA target sites are abundant in the human genome and suggested that they might be important effectors of phenotypic variation. A list of additional sequence variations altering putative miRNA recognition sites, with a potential role in human disease, can be found in a review by Sethupathy and Collins. The authors critically reviewed a number of studies claiming an association between polymorphisms/mutations in miRNA target sites (poly-miRTSs) and human diseases, with special emphasis on possible biases and confounding factors. They concluded that only a few presented rigorous genetic and functional evidence, and therefore suggested a set of concrete recommendations to guide future investigations of putative disease-associated poly-miRTSs.
As previously described, a number of different proteins are involved in the processing of miRNAs. Mutations altering the function of these proteins are predicted to cause a global alteration of miRNA function. This aspect is exploited, for instance, in the experimental inactivation of Dicer, which is used to assess the biological consequences of the global perturbation of miRNA activity in whole organisms or in specific tissues/cell types (see also below). Complete loss-of-function mutations of certain key members of the miRNA processing pathway (such as Drosha and Dicer) are expected to be incompatible with life and, therefore, are not believed to play a role in the pathogenesis of human monogenic disorders. However, there are two human diseases characterized by mutations in genes involved in miRNA processing/activity, namely DiGeorge syndrome and Fragile X syndrome. The DGCR8 gene, which maps to chromosomal region 22q11.2, is commonly deleted in DiGeorge syndrome, a condition characterized by cardiovascular defects, craniofacial defects, immunodeficiency and neurobehavioral alterations. As previously mentioned, DGCR8 is a component of the Drosha complex and its haploinsufficiency in DiGeorge syndrome patients may have a potential impact on miRNA processing. However, also based on the results of the targeted inactivation of the corresponding gene in mouse, no data thus far point to a functional effect of DGCR8 haploinsufficiency on miRNA biogenesis.
The second example is Fragile X syndrome. The product of the FMR1 gene, whose loss of function is responsible for this condition, is a selective RNA-binding protein. It has been proposed that the FMRP protein may function as a translational repressor of its mRNA targets at synapses by recruiting the RISC complex along with miRNAs and by facilitating the recognition between miRNAs and a specific subset of their mRNA targets. This interaction is suggested to be important in the process of synaptic plasticity, which is largely compromised in Fragile X syndrome patients [54, 55]. However, this hypothesis requires further investigation. In conclusion, there is no evidence so far to support a direct role of altered global miRNA processing in human hereditary disorders.
The number of cases in which mutations in miRNAs and in miRNA targets have proven to be firmly associated with monogenic disorders is still limited (see above). However, we expect the contribution of miRNAs and related pathways to the pathogenesis of these conditions to grow in the near future, following both a better knowledge of their biological function and the advancement of high-throughput mutation detection approaches [56–58]. We will now consider, mainly on the basis of results obtained from the analysis of animal models, the diseases in which miRNAs may have a pathogenic role.
The heart is the first organ to form and to function during vertebrate embryogenesis. Perturbations in normal cardiac development and function result in a variety of cardiovascular diseases, which are the leading cause of death in developed countries.
The first indication of the global involvement of miRNAs in heart development and function was derived from the analysis of conditional knockout mice carrying a cardiac-specific inactivation of the Dicer enzyme. As described above, Dicer plays a key role in miRNA biogenesis and its inactivation is predicted to cause a general deficiency of the mature forms of all miRNAs.
Chen and colleagues reported that cardiac-specific knockout of the Dicer gene led to rapidly progressive dilated cardiomyopathy, heart failure and postnatal lethality. Dicer mutant mice showed misexpression of cardiac contractile proteins and profound sarcomere disorder. Functional analyses indicated significantly reduced heart rates and decreased fractional shortening in Dicer mutant hearts. Furthermore, this study demonstrated, for the first time, the essential role of Dicer in cardiac contraction and also indicated that miRNAs play a critical role both in normal cardiac function and under pathological conditions.
Moreover, da Costa and colleagues found that an inducible deletion of Dicer in the adult mouse heart results in a severe alteration of myocardial histopathology, suggesting a crucial role for this enzyme in ensuring the integrity of the postnatal heart. Interestingly, Dicer depletion in the juvenile heart provoked an overall tendency to arrhythmogenesis and less marked myocyte hypertrophy, whereas its inactivation in the adult myocardium gave rise to myocyte hypertrophy and angiogenic defects. These findings imply differences in the overall biological role of miRNAs between the juvenile and the adult myocardium.
The generation of cardiac disease-like phenotypes in animal models may be caused not only by a global alteration of miRNA function but also by the dysfunction of specific miRNAs. For instance, miR-1-2 appears to be involved in the regulation of diverse cardiac and skeletal muscle functions, including cellular proliferation, differentiation, cardiomyocyte hypertrophy, cardiac conduction and arrhythmias. miR-1, together with another heart-specific miRNA (miR-133a), is transcribed from a duplicated bicistronic locus (miR-1-1/miR-133a-2 and miR-1-2/miR-133a-1) whose copies share identical mature miRNA sequences. Mice lacking miR-1-2 present a spectrum of abnormalities, ranging from ventricular septal defects and early lethality to cardiac rhythm disturbances. These mice also featured a striking cell-cycle abnormality in myocytes, leading to hyperplasia of the heart with nuclear division persisting postnatally. Remarkably, the persistence in these mice of the other identical copy of miR-1 (that is, miR-1-1) did not compensate for the loss of miR-1-2, at least for many aspects of its function. While it is likely that mice lacking both miR-1-1 and miR-1-2 will have even more severe abnormalities, the range of defects upon the deletion of miR-1-2 alone highlighted the ability of miRNAs to regulate multiple diverse targets in vivo. The subtle dysregulation of numerous developmental genes may contribute to the embryonic defects observed in miR-1-2 mutants. These included: (1) Hrt2/Hey2, a member of the Hairy family of transcriptional repressors that mediates Notch signalling, which can itself cause heart disease; and (2) Hand1, a bHLH transcription factor involved in ventricular development and septation that, in combination with Hand2 (a paralog of Hand1), is known to regulate expansion of the embryonic cardiac ventricles in a gene dosage-dependent manner.
Furthermore, in miR-1-2 mutants, the observed abnormality in the propagation of cardiac electrical activity, despite normal anatomy and function, was correlated with the upregulation of the direct target Irx5, a transcription factor, resulting in ventricular repolarization abnormalities and a predisposition to arrhythmias.
Jiang et al. added one more piece to the puzzle of the miR-1/miR-133a cluster. They extensively characterized genetically engineered mice deficient for either miR-133a-1, miR-133a-2, or both, as well as mice overexpressing miR-133a. While miR-133a-1 and miR-133a-2 seemed to have redundant functions and their individual deletion did not cause obvious cardiac abnormalities, targeted deletion of both miRNAs resulted in cardiac malformations and embryonic or postnatal lethality. miR-133a double knockout mice displayed two distinct lethal phenotypes: (1) large ventricular septal defects (VSDs) and dilated right ventricles and atria, leading to death shortly after birth; or (2) survival into adulthood, with no VSDs but dilated cardiomyopathy (DCM), cardiac fibrosis and heart failure. Surprisingly, miR-133a deficiency did not lead to hypertrophic cardiomyopathy, as one would have expected from previous studies in which miR-133a-antagomir treatment induced cardiac hypertrophy in mice. Several genes involved in cardiomyocyte cell cycle control, such as Cyclin D1, Cyclin D2 and Cyclin B1, were found to be significantly upregulated in miR-133a-deficient hearts, as were several smooth muscle genes, such as smooth muscle actin, transgelin, calponin I and the myogenic transcription factor SRF.
In a further attempt to dissect the effects of miR-133a on cardiomyocyte proliferation, Liu et al. overexpressed miR-133a under the control of the cardiac β-myosin heavy chain promoter. Surprisingly, transgenic animals died by embryonic day 15.5 (E15.5) due to VSDs and diminished cardiomyocyte proliferation, which resulted in ventricular walls consisting of only two to three cell layers that were unable to fulfill the hemodynamic needs of the developing mouse.
Morton et al. reported that miR-138 is expressed in specific domains of the zebrafish heart and is required to establish appropriate chamber-specific gene expression patterns. Disruption of miR-138 function led to the expansion into the ventricle of gene expression normally restricted to the atrio-ventricular valve region and, ultimately, to disrupted ventricular cardiomyocyte morphology and cardiac function. Furthermore, the authors demonstrated that miR-138 helps establish discrete domains of gene expression during cardiac morphogenesis by targeting multiple members of the retinoic acid signaling pathway.
van Rooij and colleagues described cardiac hypertrophy and failure in a mouse model overexpressing miR-195, which is generally upregulated in hypertrophic human hearts. Overexpression of miR-195 under the control of the α-myosin heavy chain (Mhc) promoter initially induced cardiac growth with disorganization of cardiomyocytes, which progressed to a dilated heart phenotype by 6 weeks of age. More striking was the dramatic increase in size of individual cardiomyocytes in miR-195 transgenic mice compared to normal mice. Furthermore, ratios of heart weight to body weight were also dramatically increased in miR-195 transgenic (Tg) animals compared to wild-type littermates, indicating that overexpression of miR-195 was sufficient to stimulate cardiac growth. Thus, the cardiac remodelling induced in the miR-195 Tg animals was specifically caused by the functional effects of this miRNA rather than by a general nonspecific effect of miRNA overexpression, suggesting that increased expression of miR-195 induces hypertrophic signalling, leading to cardiac failure.
Based on all the previously described findings, it is tempting to speculate on a possible role of specific miRNAs in human genetic forms of heart hypertrophy and failure. This hypothesis is also supported by the dysregulation of miRNA expression observed in several human cardiovascular diseases [71–73].
Multiple lines of evidence indicate the potential role of miRNAs in neuronal cell development and maturation. Both the mouse and human brain express a large spectrum of distinct miRNAs compared with other organs [74, 75]. Therefore, the implications of dysregulation of miRNA networks in human diseases affecting the CNS are potentially enormous.
In recent years, different conditional Dicer null mouse lines in the brain have been generated. They have provided initial insight into the in vivo role of miRNAs in the mammalian CNS and particularly in the neuronal maintenance of the mouse brain [76–79].
Schaefer and colleagues described the phenotypic characterization of Dicer null mice in Purkinje cells of the cerebellum. They inactivated Dicer exclusively in post-mitotic Purkinje cells by using the Purkinje cell-specific Pcp2 promoter to drive Cre recombinase. This inactivation led to the relatively rapid disappearance of cerebellar-expressed miRNAs, followed by a slow degeneration of Purkinje cells. The loss of Dicer and the decay of miRNAs had no immediate impact on Purkinje cell function, as assessed by electrophysiological studies and analysis of animal locomotion. However, the continuous lack of miRNAs eventually led to Purkinje cell death and ataxia, suggesting that a long-term absence of miRNAs in these cells results in a neurodegenerative process.
In a recent work, the inactivation of Dicer in the cortex and hippocampus beginning at embryonic day 15.5 had dramatic effects on cellular and tissue morphology and led to gross brain malformations, including microcephaly, increased brain ventricle size and reduced white matter tracts, resulting in early postnatal death. Furthermore, mutant mice were ataxic with visible tremors during motility. Ataxic gait was detected by postnatal day 14-15, but often occurred as early as postnatal day 12. Dicer mutant animals also displayed hind limb clasping, typical of animals with motor impairment. Therefore, loss of miRNA function in some mouse brain regions during late development results in a significantly decreased lifespan as a consequence of severe malformations as well as motor impairments due to increased cortical apoptosis.
To determine the role of miRNAs in dopaminoceptive neurons, Cuellar et al. ablated Dicer in mice by using a dopamine receptor-1 (Dr-1) promoter-driven Cre. The mutant animals displayed a range of phenotypes including ataxia, front and hind limb clasping, reduced brain size and smaller neurons. Surprisingly, dopaminoceptive neurons lacking Dicer survived throughout the life of the animal, in contrast with other mouse models in which neurodegeneration was observed in the absence of Dicer.
miRNAs have also been studied in early neurogenesis during the development of the mammalian cerebral cortex and the switch of neural stem and progenitor cells from proliferation to differentiation. Dicer ablation in neuroepithelial cells at embryonic day (E) 9.5 resulted in massive hypotrophy of the postnatal cortex and death of the mice shortly after weaning. Remarkably, the primary target cells of Dicer ablation, the neuroepithelial cells and the neurogenic progenitors derived from them, were unaffected by miRNA depletion with regard to cell cycle progression, cell division, differentiation and viability during the early stage of neurogenesis, and only underwent apoptosis starting at E14.5, suggesting that progenitors are less dependent on miRNAs than their differentiated progeny.
miRNA function has also been studied in another part of the CNS, the retina. Damiani et al. described a partial ablation of Dicer in the developing mouse retina using a Cre line under the Chx10 promoter, a gene mostly expressed in retinal progenitors and in specific adult retinal interneurons. These mice showed no visible impact of the ablation on early postnatal retinal structure and function: retinal lamination appeared normal and all expected retinal cell types were represented. However, as observed for the other Dicer null mutants, progressive and widespread structural and functional abnormalities were detected, culminating in the loss of photoreceptor-mediated responses to light and extensive retinal degeneration. Therefore, the observation that progressive retinal degeneration occurs after removal of Dicer raises the possibility that miRNAs are involved in retinal neurodegenerative disorders.
In summary, although removing Dicer is conceptually a crude experimental approach, the aforementioned results support the hypothesis that defects in the miRNA regulatory network in the brain are a potential cause of neurodegenerative disease.
A functional role for miRNAs in more specific neurological processes is emerging, and their dysfunction could have direct relevance for our understanding of neurodegenerative disorders. This conclusion is supported by several in vitro gain- and loss-of-function experiments: for example, upregulation has been mimicked by introducing artificial miRNAs into primary neurons in culture, and loss of function has been induced with antisense oligonucleotides [82–85].
The next challenge will be to characterize in depth, in vivo, the individual miRNAs and miRNA families that are predicted to contribute to proper CNS function. An initial step towards this goal is the recent study by Walker and Harland, who showed through loss-of-function experiments that, in Xenopus laevis, miR-24a is necessary for proper neural retina development, regulating apoptosis by targeting Caspase 9.
Vertebrates have evolved complex genetic programmes that simultaneously regulate the development and function of hematopoietic cells, resulting in the capacity to activate specific responses against invading foreign pathogens while maintaining self-tolerance. Recent studies indicate that miRNAs are emerging as major players in the molecular circuitry that controls the development and differentiation of haematopoietic lineages.
Genetic disruption of different steps in miRNA biogenesis in mice has highlighted the key role of miRNAs during haematopoiesis. Dicer ablation in the T-lineage, whilst not abolishing the development of T-lymphocytes, affected their functionality [87, 88]. Interestingly, ablation of Dicer in regulatory T cells (Treg cells) resulted in a much more severe phenotype. Mice lacking Dicer expression in Treg cells failed to differentiate functional Treg cells and developed a severe autoimmune disease, leading to death within the first few weeks of life.
Knocking out Dicer activity in early B-cell progenitors caused a block at the pro-B cell stage of the differentiation process leading to mature activated B-cells. Gene-expression profiling revealed a miR-17-92 signature in the 3'-UTRs of genes upregulated in Dicer-deficient pro-B cells; the proapoptotic molecule Bim, a top miR-17-92 target, was also highly upregulated. Surprisingly, B cell development was partially rescued by ablation of Bim or by transgenic expression of the prosurvival protein Bcl-2.
In mice, the specific roles of single miRNAs in the development and function of the immune system are starting to be elucidated through targeted deletion approaches. The pioneering knockout of miR-155 in mice (the first mouse knockout for a single miRNA) revealed an essential role for this miRNA in acquired immunity. Although miR-155 null mice developed normally, analysis of the immune system revealed that miR-155 depletion led to pleiotropic defects in the function of B cells, T cells and dendritic cells. These mice were unable to gain acquired immunity in response to vaccination, demonstrating that miR-155 is indispensable for normal adaptive immune responses [91, 92].
Another functional example derives from the study of Ventura et al. (2008), who demonstrated that the miR-17-92 cluster is involved in controlling B-lymphocyte proliferation. Deletion of this miRNA cluster was lethal in mice, resulting in lung hypoplasia, ventricular septal defects and impairment of the pro-B to pre-B transition. Absence of miR-17-92 led to increased levels of the pro-apoptotic protein Bim and inhibited B cell development at the pro-B to pre-B transition. Furthermore, while ablation of miR-106b-25 or miR-106a-363 (the two paralogous clusters) had no obvious phenotypic consequences, compound mutant embryos lacking both miR-106b-25 and miR-17-92 died at mid-gestation. Conversely, overexpression of the miR-17-92 cluster in mice led to lymphoproliferative and autoimmune diseases associated with the production of self-reactive antibodies.
Finally, Johnnidis et al. described the generation of a knockout mouse for miR-223, which highlighted its role in granulocyte differentiation. The myeloid-specific miR-223 negatively regulated progenitor proliferation and granulocyte differentiation. Moreover, mutant mice had an expanded granulocytic compartment resulting from a cell-autonomous increase in the number of granulocyte progenitors. These data support a model in which miR-223 acts as a fine-tuner of granulocyte production and the inflammatory response.
The fact that miRNAs are involved in the modulation of T cell selection, T cell receptor sensitivity as well as Treg cell development in normal immune responses, suggests that these molecules may also be involved in the development of immune system disorders of genetic origin such as immunodeficiencies or autoimmune diseases.
In mouse, conditional inactivation of Dicer has been achieved in several other tissues in order to study the global function of miRNAs [23, 96–101]. Using transgenes to drive Cre expression in discrete regions of the limb mesoderm, Harfe et al. found that removal of Dicer caused developmental delays, due in part to massive cell death as well as to dysregulation of specific gene expression, and led to the formation of a much smaller limb. Strikingly, however, the authors did not detect defects in basic patterning or in tissue-specific differentiation of Dicer-deficient limb buds. This class of skeletal defects was previously observed in mice with compound mutations in the Prx1 and Prx2 genes.
To better understand the role of miRNAs in skin and hair follicle biology, Andl and colleagues generated mice carrying an epidermal-specific Dicer deletion. These mice presented stunted and hypoproliferative hair follicles. Normal hair shafts were not produced in the Dicer mutant, and the follicles lacked stem cell markers and degenerated. In contrast to the decreased follicular proliferation, the epidermis became hyperproliferative. These results reveal the critical role played by Dicer in the skin and the key role that miRNAs may play in epidermal and hair-follicle development and function. Moreover, the existence of skin-specific miRNAs involved in normal epidermal and follicular development, such as the miR-200, miR-19 and miR-20 families, indicates that their therapeutic expression or inhibition might also be relevant to epidermal pathology.
To study Dicer function in the later events of lung formation, Harris and collaborators inactivated Dicer in the mouse lung epithelium using a Shh-Cre allele. As a result, the mutant lung presented a few large epithelial pouches as opposed to the numerous fine branches seen in a normal lung. Significantly, phenotypic differences between mutant and normal lungs were apparent even before an increase in epithelial cell death could be detected, leading the authors to propose that Dicer may play a specific role in regulating lung epithelial morphogenesis independent of its requirement for cell survival.
Dicer activity is essential for skeletal muscle development during embryogenesis and postnatal life. O'Rourke and colleagues (2007) showed that Dicer inactivation in skeletal muscle resulted in lower levels of muscle-specific miRNAs. Moreover, Dicer muscle mutants died perinatally and were characterized by skeletal muscle hypoplasia. The reduced skeletal muscle, in turn, was accompanied by abnormal myofibre morphology. The skeletal muscle defects associated with loss of Dicer function were explained by increased apoptosis. Furthermore, the decrease in muscle mass in Dicer mutants was strikingly similar to the phenotypes associated with muscular dystrophies and aged skeletal muscle. This finding suggests that, in humans, DICER mutations, or disrupted miRNA-mediated gene regulation, may contribute to skeletal muscle myopathy and age-related sarcopenia.
The study of Pastorelli and co-workers demonstrated that loss of Dicer in the developing mouse reproductive tract, under the control of Amhr2-Cre-mediated deletion, resulted in morphologic and functional defects in the reproductive tracts of female but not of male mice (before 3 months of age).
Recently, Sekine and colleagues described the conditional ablation of Dicer in the mouse liver. This resulted in prominent steatosis and in the depletion of glycogen storage. The Dicer-deficient liver exhibited increased expression of growth-promoting genes and robust expression of fetal stage-specific genes. The consequences of Dicer elimination included both increased hepatocyte proliferation and overwhelming apoptosis.
Finally, two different Dicer knockout strategies demonstrated that miRNAs are required for the development and differentiation of the sensory epithelia and for the maintenance of the sensory neurons of the inner ear. Based on studies carried out in animal models, it is clear that miRNA dysfunction may lead to severe alterations in the function of all tissues/organs analysed so far. In the majority of the aforementioned cases, the aberrant phenotypes observed are the consequence of a global impairment of miRNA processing (that is, Dicer knockout approaches), a condition that is highly unlikely to contribute to the pathogenesis of human genetic diseases. Nevertheless, the fact that in some organs (that is, the heart, the eye and the immune system) the dysfunction of single miRNAs may underlie phenotypes strongly resembling those observed in human disease suggests that miRNAs, just like protein-coding genes, should be considered potential candidates in the pathogenesis of human genetic disorders, including monogenic forms.
MicroRNAs are emerging as key regulators of the cell transcriptome due to their ability to finely tune gene dosage. In the last few years, they have been shown to be involved in the regulation of many cellular processes, and their role in the proper differentiation and function of tissues and organs is only starting to be unravelled. It is also becoming increasingly clear that miRNAs, similarly to protein-coding genes, may harbour mutations leading to human genetic conditions, even 'classical' monogenic forms. The number of cases in which mutations in miRNAs, or in their targets, have been convincingly shown to have a pathogenic role in human genetic diseases is still limited. This may be explained not only by the recent characterization of miRNAs at the genomic level, which will now allow the appropriate analyses to be carried out, but also by the fact that the 3'-UTRs of mRNAs have, until recently, been generally neglected as potential sources of sequence variations with a pathogenic effect in genetic diseases. However, both the improvement of experimental procedures aimed at the identification of mutations, based on efficient sequencing protocols, and the increasing knowledge of miRNA function are expected to fill these gaps, underscoring the role played by miRNAs in the pathogenesis of human genetic disorders in the coming years.
We apologize to our colleagues whose insightful work was not included due to size constraints. We are grateful to Luciana Borrelli for editing the original manuscript. This work was supported by a grant from the Italian Telethon Foundation.
NM, VAG and SB wrote the review manuscript. The authors read and approved the final manuscript.
Best part of today’s podcast, FREE book download!
Guest: Episode 44 is just Meb.
In the stock market, the most successful large investors—particularly hedge fund managers—represent the house. These managers like to refer to their top investments as their “best ideas.” In today’s podcast, you will learn how to farm the best ideas of the world’s top hedge fund managers. Meb tells us who they are, how to track their funds and stock picks, and how to use that information to help guide your own portfolio. In essence, you will learn how to play more like the house in a casino and less like the sucker relying on dumb luck.
So how do you do it? Find out in Episode 44.
Welcome Message: Welcome to the Meb Faber Show where the focus is on helping you grow and preserve your wealth. Join us as we discuss the craft of investment and uncover new and profitable ideas all to help you grow wealthier and wiser. Better investing starts here.
Disclaimer: Meb Faber is the co-founder and chief investment officer at Cambria Investment Management. Due to industry regulations, he will not discuss any of Cambria’s funds on this podcast. All opinions expressed by podcast participants are solely their own opinions and do not reflect the opinions of Cambria Investment Management or its affiliates. For more information, visit cambriainvestments.com.
Sponsor: Today’s podcast is sponsored by YCharts. YCharts is a web-based investing research platform that I’ve been subscribing to for years. In addition to providing overall market data that offers investors powerful tools like stock and fund screening and charting analysis with Excel integrations, it’s actually one of the few sites that calculates both shareholder yield as well as 10-year PE ratios for stocks, 2 factors that are notoriously hard to find elsewhere. The YCharts platform is fast, easy to use, and comes at a fraction of the price of larger institutional platforms. Plans start at just 200 bucks a month, and if you visit go.ycharts.com/meb you can access a free trial and, when you do, you’ll receive up to 500 bucks off an annual subscription. That’s go.ycharts.com/meb.
Meb: Hey, podcast listeners. This is Meb…here solo today. I’m in somewhat of a foul mood after Virginia got bounced, yet again, from the NCAA tournament by almost 30 points. The good news, I was watching the game on Catalina so I wasn’t too depressed out on the ocean having a few beers, but I thought today we’d do something fun. First, I’m gonna give you guys a gift. I’m gonna make available to every podcast listener a free copy of my most recent book. This is called “Invest with the House: Hacking the Top Hedge Funds.” This is a really interesting book and topic, particularly for me. It’s something that goes back all the way to the late ’90s, which was when I started working on this research. I think there’s a lot of misunderstanding in this space, so I really wanted to talk about the hedge fund space.
If you guys haven’t started watching “Billions,” it’s really a wonderful show. I know I’m a little late to the game there, but it’s a pretty great show. And so there’s a renewed, rekindled interest in hedge funds. So I wanted to read a few chapters from this book and see if you guys like it, but you can go download a free copy at freebook.mebfaber.com and you’ll get a PDF and then you can read that anywhere. So, we’ll get started. I’m gonna read but I’m also gonna interject a little bit, maybe with some comments, some stories, etc. And if you guys like this, I don’t know, we’re trying a lot of experimental things. You know, this could even be a weekly feature where we profile some of these managers, their holdings, what are they buying, what are they up to, ones you shouldn’t follow, ones you should. Let us know. Shoot us feedback at the mebfabershow.com.
So, chapter one, we start with that quote and it’s called “The casino can be beat.” Stock picking is hard, really, really hard. The odds are stacked against you. My friends at Longboard Asset Management completed a study called The Capitalism Distribution that examined stock returns from the top 3000 stocks from 1983 to 2007. By the way, J.P. Morgan has also published a paper on this, as has another academic; we’ll link to them in the show notes. We also had a great podcast with Eric Crittenden [SP] of Longboard in one of the earliest podcast episodes. Check those out. Anyway, they found that 64% of stocks underperform the broad stock market index. Thirty-nine percent of stocks were unprofitable investments. Think about that for a second, almost half. If you just picked a stock, threw a dart against a wall, almost half are unprofitable investments.
Nineteen percent of stocks lost at least 75% of their value and 25% of stocks were responsible for all of the market’s gains. Simply picking a stock out of a hat means you have a 64% chance of underperforming a basic index fund and a 39% chance of losing money. Not only is it hard to pick stocks, but you’re also up against the most talented investors in the world. People like Ray Dalio, who’s the founder of Bridgewater Associates, the world’s largest hedge fund. Dalio is fond of comparing stock market investing to a poker game and his description brings to mind the old saying that “If you sit down at a poker table and you don’t know who the sucker is, then you’re the sucker.” Dalio has spent oodles of time and money to make sure he’s not the sucker.
With a superior stable of research investment talent, Dalio figures he can beat most of the other players at the table, and he does. His Bridgewater Fund posts investment returns that make others jealous. He does it year after year. Here’s what’s really interesting though, he’s not the only one. A special few have done it as well, beating the market year after year. They don’t all do it in the same way or with the same investments. Some have done it better than others and some eventually falter, but the fact is it happens and it does so with some consistency. Now, we make two assumptions that are vital to the arguments in this podcast and book. One, there are active managers that can beat the market, i.e. the market is not completely efficient, and two, superior active managers can be identified ahead of time.
These two concepts are difficult for many investors to accept. There’s a general feeling that the market can’t be beat and it is tough to get past that belief. The big challenge is separating luck from skill, but would anyone deny that some people are better than others at stock picking? Just like any other profession, the investment field has top experts who are paid handsomely for what they do. Warren Buffett of Berkshire Hathaway certainly comes to mind, one of the most famous stock pickers of all time with an estimated net worth of more than $70 billion. He’s also one of the richest people in the world. The 2014 Berkshire Hathaway annual report indicates that the per share market value of the company has increased at a compounded rate of 21% since 1965 compared to an average of about 10% for the S&P. The outperformance is striking.
In fact, there was a businessman from Singapore in 2014 who paid over $2 million in a charity auction to have lunch with Buffett. But a lot of people don’t know that it’s possible to learn some of Buffett’s wisdom for a lot less. In fact, it’s possible to learn what stocks he’s buying and selling for free. One of the most basic principles of the U.S. stock market is transparency, and it’s a characteristic that has helped make our stock market so attractive to investors around the world. Of course it isn’t always transparent and there are noticeable lapses, scandals, and shenanigans. But in one particular area, transparency works very well, and that is the area that forms the data source for this book. Under SEC rules, any professional fund manager with more than $100 million in U.S. listed assets must report their stock holdings.
That means great stock pickers, such as Warren Buffett, must disclose their stock picks. You may already be aware of this, but many are not. Thanks to the internet, you can now look up any of these fund holdings online from the SEC website. It’s one of the most valuable sources of market information around. It is simple and easy to access and it gives you a window into the trading activity of the greatest managers. Sadly, not many investors take advantage of it. Instead, most get their investment information from their brokers or TV talking heads or they pick up a stock tip from a friend or neighbor. As a recent TIAA-CREF study illustrates, people actually spend more time picking a restaurant or researching which TV to buy than they do planning their retirement investments.
But consider what you get when you examine these SEC filings. You have access to the stock picks made by fund managers who often spend millions of dollars and every waking moment thinking and obsessing about the financial markets. If you think this statement is an exaggeration, note that there are hedge fund managers who lease satellites to track department store traffic and the resulting sales estimates. These stock picks are the result of painstaking work done by people significantly more capitalized than you, who have way more resources than you, and who, if you select the right ones, are way better than you at picking stocks. The best ones know everything there is to know about a company before they invest.
Lee Ainslie, portfolio manager at Maverick Capital, whom we examine later in the book, had this to say about how obsessive Julian Robertson of Tiger Management was when examining companies: “Julian was maniacal on the importance of management. ‘Have you done your work on management?’ ‘Yes, sir.’ ‘Where did the CFO go to college?’ ‘Um, um…’ ‘I thought you did your work?’ He wanted you to know everything there was to know about the people in the companies you invested in.” This is a competition. Do you know where the CFO went to college? Do you even know who the CFO is? Do you even know what a CFO is? In case you don’t, by the way, chief financial officer.
So to go back to the poker analogy, examining SEC filings is like getting a peek at the cards held by these investment managers. It’s a great way to learn from some of the brightest investing minds in the world. Would you rather play with them or against them? This book and podcast will begin by examining a case study: how an investor could’ve bought Buffett’s stock picks to great success. We will examine the performance of his stock picks in the past and determine how well they performed, a process called backtesting. This can tell you how you might have fared if you had piggybacked on Buffett’s stock picks in the past. While it doesn’t tell you how a manager will perform in the future, it does give you a record of performance from which you can draw your own conclusions.
Logic suggests that a manager who outperforms consistently must be pretty good at what he does. Will he do it again next year? No one ever knows for sure but, again, logic suggests the odds are in your favor if you select and follow a manager who has a demonstrated record of success and then prudently add some of his picks to your own portfolio. Buffett is an obvious choice to start with. He’s the first of 20 of the best investors in the world whose backgrounds and track records we’ll examine. I provide a brief overview of the process of following these star managers along with some case studies that demonstrate the managers’ stock picks in detail and how the portfolios would’ve performed since the year 2000.
You can then build a stable of these managers and use them as your own personal Idea Farm for stock ideas to research and possibly implement in your own portfolio. The process I outline is an effective way to track and potentially copy the stock picks of some of the best stock pickers in the world. Let’s get started. “Berkshire Hathaway: Warren Buffett and Charlie Munger,” chapter two. “Techniques shrouded in mystery clearly have value to the purveyor of investment advice. After all, what witch doctor has ever achieved fame and fortune by simply advising, ‘Take two aspirins’?” That’s Warren.
But while Buffett has spent his entire life making money through value investing, Graham ended up reconsidering some of the basic tenets of the practice. Graham decided that the investment world had changed so much over time that the markets had become much more efficient, making it too difficult to make money by looking for undervalued stock gems. He began to adopt the efficient market hypothesis, which holds that the market is so efficient that stock prices always incorporate and reflect all relevant information, which makes it all but impossible to beat the market through stock selection. Graham discusses his conversion to market efficiency in an article from “The Financial Analysts Journal” in 1976. “I am no longer an advocate of elaborate techniques of security analysis in order to find superior value opportunities.
This was a rewarding activity, say, 40 years ago when our textbook “Graham and Dodd” was first published, but the situation has changed a great deal since then. In the old days, any well-trained security analyst could do a good professional job of selecting undervalued issues through detailed studies, but in light of the enormous amount of research now being carried on, I doubt whether, in most cases, such extensive efforts would generate sufficiently superior selections to justify their cost. To that very limited extent, I’m on the side of the efficient market school of thought now generally accepted by the professors.” And it’s funny, Graham came to this conclusion prior to the advent of the internet, Bloomberg, and other modern research tools.
It is my view that Buffett is correct on this point and, for proof, one need look no further than his investment record or the records of any number of other successful managers who employ a similar value investing style that seeks to capitalize on market inefficiency. Today, an investor who wants exposure to Buffett’s investing acumen can invest in any number of mutual funds that share the Buffett investment style. When he closed his early investment partnership in 1969, he advised his investors to place money in the Sequoia Fund, which reopened in 2008 for the first time since 1995 and which, by the way, became the subject of a bunch of media scrutiny over their Valeant investment, which became a huge concentrated position. Maybe we’ll talk about that later but I wanna skip over that for now.
The Tweedy, Browne family of funds is another good example. In fact, several employees of the old Graham-Newman partnership founded the firm. While Buffett has gone on to deploy hedge fund techniques such as currency and commodity trading, merger arbitrage, convertible arbitrage, catastrophe bonds, PIPEs, and private equity, he’s mostly known for his stock picks. There’ve been numerous books that have tried to define exactly how Buffett goes about selecting his investments. The American Association of Individual Investors and Validea [SP] Capital Management have developed screens that are designed to find companies that Buffett would buy based on criteria he’s promoted through decades of public speaking, annual reports, and prior transactions.
AQR Capital even published a white paper entitled “Buffett’s Alpha” that attempts to distill his process down to a single algorithm. Some investors simply buy Berkshire Hathaway stock, gaining access to his portfolio management skills, exposure to the operations of an insurance conglomerate, and an entry into the Berkshire Hathaway annual shareholder meeting, which I highly recommend attending, by the way. But why not just buy what Warren buys? We set out in this chapter to examine whether following Berkshire Hathaway’s investments through government filings could offer the investor the opportunity to piggyback on Buffett’s stock picks and consequently achieve outsized returns. We will get there shortly, but first a little background.
In 1975, Congress added Section 13(f) to the Securities Exchange Act of 1934. This measure required the manager of every institutional fund with assets under management over $100 million to report its holdings to the SEC once a quarter. Congress enacted this legislation to improve the disclosure and transparency of these big firms with the hope of increasing confidence in the financial markets. In the early days, accessing these records, called form 13F or form 13F-HR, was difficult and tedious. These days, the forms are uploaded to the SEC website and an investor can view the holdings, delayed 45 days after quarter end. By reviewing the 13Fs you can see and dissect the holdings of every manager from Soros to Klarman to Carl Icahn to Warren Buffett, all for free.
The SEC maintains these filings in its EDGAR database and posts the electronic versions of 13F filings within a day of receiving them. Other websites, including Edgar Online, Bloomberg, FactSet, and LionShares, aggregate the information in more usable and searchable formats, often for a fee. The electronic data go back to late 1999, although the archives in Washington, D.C. contain paper records that go back further. There are also a lot more websites in the back of the book under resources. Remember, the book’s free at freebook.mebfaber.com, where you can look these up. So to reach the Berkshire filing page, all an investor’s gotta do is visit the SEC website, search under company name for Berkshire Hathaway, and a laundry list of filings pops up.
You can search through them for any of the 13Fs or you can narrow it by typing 13F in the type box. Since they’re published within 45 days after a quarter end, the quarter that ended June 30th, 2016 would be available around August 15th. Examining this 13F from Berkshire reveals a laundry list of longtime Buffett holdings you’ll be familiar with, such as Coca-Cola, AmEx, and Wells Fargo. The SEC filing format is a little difficult to read and comprehend. Again, a number of websites publish the current holdings, like Whale Wisdom, in a much more readable format. And this information is, indeed, interesting but can it be of any value? After all, the data is 45 days stale when you see it and the manager may well not even own a particular stock by the time the 13F is posted.
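As a concrete sketch of the EDGAR lookup just described, here is one way to build the company-browse URL that lists a filer's 13Fs. The query parameters mirror the SEC's company-search form, and CIK 0001067983 is Berkshire Hathaway's filer identifier; treat the exact parameter names as an assumption worth verifying against the SEC site before relying on them:

```python
# Build the SEC EDGAR company-browse URL that lists a filer's 13F
# submissions. CIK 0001067983 is Berkshire Hathaway's SEC identifier.
from urllib.parse import urlencode

def edgar_13f_url(cik):
    params = {"action": "getcompany", "CIK": cik, "type": "13F", "count": "40"}
    return "https://www.sec.gov/cgi-bin/browse-edgar?" + urlencode(params)

print(edgar_13f_url("0001067983"))
```

Pasting the printed URL into a browser shows the same laundry list of filings you get from the manual search.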
In addition, he may have added a stock at the start of the 90-day reporting cycle, so a new stock could have been purchased as long as 135 days ago. To further muddy the waters, some managers game the system by omitting certain recently acquired holdings and then filing an amended 13F form later. But even with all these delays, there’s plenty of rich data here that you can use by sticking with managers who have a long holding period. The delay in reporting time should not be a major factor in your own performance if you’re trying to piggyback them. In Buffett’s case, he has stated that his favorite holding period is forever, so turnover should not be a big issue.
The major value added in the investing process from the managers we’ll examine in this book is actually in stock picking, not in market timing. The portfolios I’ll track are long only, while most hedge funds are short or long/short and also use derivatives to hedge or leverage their ideas. Since those positions do not show up in the 13F filing, they will not concern us here. So here’s the methodology. One, download all the 13F quarterly filings back to 2000. Two, create historical stock portfolios including all stocks that are no longer traded due to de-listings, buyouts, mergers, bankruptcies, etc. We also include all dividends: cash, stock, special, etc. Three, we equal weight the top 10 holdings with a 10% weight for each stock.
In reality, if there are more than 10 holdings I simply use the 10 biggest, as the majority of a manager’s performance should be driven by his largest holdings. Investors could also weight the holdings similarly to how the manager weights them in his portfolio, but let’s just use a simple example for this book and podcast. In reality, it actually doesn’t matter that much. Four, rebalance, add, and delete holdings quarterly, and calculate performance as of the 20th day of the month to allow all filings to arrive. As for the backtesting, it’s not realistic for an individual investor to go and do this work on their own. Even finding historical stock databases is problematic. The good news is I’ve done this for you. You can follow along in the pages that follow.
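The equal-weighting step of the methodology can be sketched in a few lines of Python. This is a minimal illustration; the tickers and market values below are hypothetical, not taken from an actual filing:

```python
# Minimal sketch of step three above: from one quarter's 13F snapshot,
# keep the 10 largest positions by reported market value and
# equal-weight them at 10% each. The holdings data is hypothetical.

def clone_weights(holdings, top_n=10):
    """holdings maps ticker -> reported market value; returns ticker -> weight."""
    biggest = sorted(holdings, key=holdings.get, reverse=True)[:top_n]
    weight = 1.0 / len(biggest)  # 10% each when there are 10 names
    return {t: weight for t in biggest}

# One hypothetical quarterly filing (market values in $ millions):
q_filing = {"KHC": 25000, "WFC": 22000, "KO": 18000, "IBM": 12000,
            "AXP": 11000, "AAPL": 7000, "PSX": 6000, "USB": 3500,
            "MCO": 2900, "DAL": 2500, "GM": 1000, "VZ": 800}

weights = clone_weights(q_filing)
print(len(weights), round(sum(weights.values()), 6))  # 10 names, weights sum to 1
```

Repeating this each quarter on the new filing, then trading into the new weight list, is the whole rebalancing loop described above.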
So using the methodology that we just presented, the simulated results for the period 2000 to 2016 are found here. Let’s see, I’ve actually updated this through 2016. The book only went through 2014, but we had a laundry list of holdings. You’ve got Kraft, Wells Fargo, Coke, IBM, AmEx, Phillips 66, all these good dudes, Apple, the big news lately, of course, the airlines, which they’ve started buying. First observation is how mediocre the returns have been for U.S. stocks over the past 16 years, which is right around 5% a year. That’s much less than the historical 10% that we’ve experienced back to 1900. How did the Berkshire portfolio do? It did 9.7%. Drawdowns were roughly the same: the Berkshire portfolio drew down 43%, while the S&P had a 50.9% drawdown.
Buffett’s equity selections outperformed the indices quite substantially. Volatility was reasonable, which is a little bit surprising given that the portfolio only contained 10 holdings. If you ran a mutual fund with these numbers, you’d be one of the best performing managers in the United States. Again, that’s Buffett outperforming by five percentage points per year since 2000. There’s another study by some academics titled “Imitation is the Sincerest Form of Flattery: Warren Buffett and Berkshire Hathaway,” and it found that a similar method to ours would have resulted in returns over 10 percentage points higher than the S&P if you went all the way back to 1976. A more recent paper by AQR, entitled “Buffett’s Alpha,” found similar results.
So one question many readers and listeners often ask is how the cloning strategy performs versus just buying Berkshire stock. It turns out, the good news is either strategy works great and beats the S&P by about four to five percentage points per year. Note that the long-term outperformance holds even though Buffett and Berkshire have underperformed the S&P since the bottom in 2009. In fact, this clone has underperformed the S&P 500 in 7 of the last 10 years. We were unsure if 2016 was gonna be an underperforming year or not, but I think he squeaked out, barely, by the end of the year by like 10 basis points. So he’s underperformed in 7 of the last 10 years, which is a massive, massive amount.
All right, so now we have a decent base case upon which to build. In the next chapter, we’re gonna examine some of the pros and cons of following 13Fs. I like to be honest about any investment approach, and you wanna look back historically and make sure that you know both the good and the bad. “My mantra is diversity. I clone my mentors. I copy everything they do and then I innovate on top of it.” That’s Henry Markram. So to summarize some of the differences in managing a portfolio based on 13F filings versus allocating an investment to an active hedge fund manager, the following list may be helpful.
Pros: One, access. Many of the best hedge funds are not open to new investment capital and if they are, many have high investment requirements, in many cases, in excess of $10 million. As Mark Yusko of Morgan Creek Capital says…and by the way, he’s got a great podcast episode earlier in the year…”We don’t wanna give money to people that want our money. We want to give it to people that don’t want our money.” A 13F tracking strategy allows investors to follow otherwise inaccessible managers.
Pro number two, transparency. The investor controls and is aware of the exact holdings at all times. If an investor were following the hedge fund Galleon Group during its insider trading scandal, the investor could simply sell all his or her stocks rather than waiting to redeem their allocation.
Pro number three, liquidity. The investor can trade out of positions at any time versus the monthly, quarterly, or multi-year lockup periods at hedge funds. Hedge funds have other special provisions like gates, which can be put up to prevent investors from withdrawing money immediately. Many investors were gated during the financial crisis when they wanted to withdraw their investments.
Pro number four…and this is a biggie…lower fees. Most hedge funds charge high fees. The standard is a 2% management fee and a 20% performance fee. Funds of funds layer on an additional 1% and 10%. The fees associated with managing a 13F portfolio are simply the investor’s routine brokerage costs, and that’s it, and that is a big deal.
Risk targeting is pro number five. Investors can control the hedging and leverage to suit their risk tolerances. A number of hedge funds have blown up as a result of excessive leverage or derivatives.
Pro number six, fraud avoidance. Investors own and independently custody their assets, thus completely avoiding any custody risk like those in the Madoff scheme in which investors lost billions.
And there’s a great paper on this topic called “Rules of Prudence for Individual Investors” by Mark Kritzman of Windham Capital…trying to get Mark on in a future episode. The paper [inaudible 00:26:28] that taxes have a significant impact on returns for the taxable investor. A hedge fund needs to return about 19% to deliver the same after-tax return as a stock index that returns about 10%. This is due to the high turnover resulting in capital gains as well as large performance fees for the hedge fund. But to be honest, there are also some potential negatives to not actually letting the fund manager run the portfolio on his or her own terms.
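As a back-of-the-envelope check of that roughly-19%-to-match-10% claim, here is a sketch of the arithmetic. The fee and tax rates below are my own illustrative assumptions (2-and-20 fees, a ~39.6% short-term capital gains rate on the fund's high-turnover gains, a ~20% long-term rate on a buy-and-hold index), not figures taken from Kritzman's paper:

```python
# Back-of-the-envelope after-tax comparison. All rates are assumptions
# for illustration: 2% management fee, 20% performance fee, 39.6%
# short-term gains tax on the fund, 20% long-term gains tax on the index.

def hedge_fund_after_tax(gross, mgmt=0.02, perf=0.20, tax=0.396):
    net_of_mgmt = gross - mgmt              # management fee off the top
    net_of_perf = net_of_mgmt * (1 - perf)  # performance fee on the gain
    return net_of_perf * (1 - tax)          # short-term gains tax

def index_after_tax(gross, tax=0.20):
    return gross * (1 - tax)                # long-term gains tax only

print(round(index_after_tax(0.10), 4))       # index returning 10% gross
print(round(hedge_fund_after_tax(0.19), 4))  # fund needs ~19% gross to match
```

Under these assumptions the index keeps about 8% after tax, and a hedge fund grossing 19% lands in the same neighborhood, which is the point the paper makes about fee-and-tax drag.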
So, here are some cons. One, lack of expertise in portfolio management. The investor does not have access to the same timing and portfolio trading capabilities as the manager. To be honest, this could also be a benefit if the manager is good at picking stocks but terrible at timing or position sizing.
Con number two, inexact holdings. Crafty hedge fund managers have some tricks to avoid revealing their holdings on 13Fs; moving positions off the books at the end of the quarter is one of them. The lack of short sale and futures reporting means that the results will differ from hedge fund results. Managers can also get rare exemptions from reporting stocks on 13F filings.
Con number three, the 45-day delay in reporting. The delay in reporting will affect the portfolio by various amounts for various funds. At worst, an investor could own a position that the hedge fund manager sold out of a long time ago. Disclosure of a new holding by some famous hedge funds, like Greenlight Capital, can also cause the stock to move sharply before an investor has time to build a position.
Con number four, high turnover strategies. Managers who employ pairs trading, or other strategies that trade frequently, are poor candidates for 13F replication.
Con number five, arbitrage strategies. 13F filings may make it look like a manager is long a stock when, in reality, he’s using an “arb” strategy. The short hedge will not show up in the 13F.
Con number six, inconsistent manager skill. Like any active strategy, some managers lose their desire or skill over time. How do you determine when to cut a manager from your stable of funds? Also note, just because you’re investing alongside a great manager, that does not spare you from painful drawdowns. The strategy is still a long-only stock strategy that will experience similar losses to the broad stock market. And remember, the stock market twice declined by about half in the 2000s, and all the way back in the Great Depression stocks declined over 80%.
However, we do tackle in the book some hedging ideas to potentially reduce volatility and drawdowns, if you want to read it. So there are some investment styles to avoid. I can’t tell you how many times I’ve heard people on TV talking about a handful of 13Fs from the following managers and styles when it makes, really, no sense whatsoever to follow them. So an investor needs to be really careful when using these filings and understand both the strengths and the weaknesses. Since there are literally thousands of hedge funds and mutual fund managers to choose from, how does one go about narrowing the list of managers? This is not an easy question to answer and, unfortunately, an intimate knowledge of the hedge fund space is a big advantage.
However, I’ll outline a few of the criteria to look for as well as a selection of managers I admire to get you started. Funds to avoid include those that fit the following criteria: short-bias or short-only funds. Since shorts don’t show up in 13F filings, it’s impossible to track what your hedge funds are doing with them unless they disclose them publicly. An example is Kynikos Associates and Jim Chanos. High-turnover trading…if a fund trades too much, quarterly filings with a 45-day delay will not accurately reflect what the fund is holding. I focus primarily on value investors in this book, which typically have lower turnover. It is a bit fuzzy as to what level of turnover is too much. In general, less is better. An example of this would be something like Stevie Cohen’s SAC or Point72. Next, black box funds.
While Renaissance’s Medallion has certainly performed head and shoulders above almost every hedge fund in existence, and that’s after a 4% and 40% fee structure, it’s shrouded in mystery. It also trades lots of derivatives, so that’s something you wanna avoid. A similar cousin is global macro and other CTA-type funds; many trade futures, forwards, and currencies, and most CTAs are under this umbrella. Someone like John Paulson made a lot of money on housing, which is impossible to replicate, so groups like Soros or David Harding’s Winton don’t make a lot of sense. And lastly, we already mentioned this, but arbitrage and pair trading don’t make sense to clone; Farallon [SP] Capital Management is one example.
So let’s talk about a few frequently asked questions real quick, and then we’ll profile a couple managers and then slowly start to wind this down. Here are the questions we hear most from people, and I think this is important because it will probably answer a lot of the questions you have as well. So number one, “Holdings are reported 45 days after the quarter, so you may be buying a stock the manager no longer even owns. The delay makes it impossible to follow these managers, right?” The answer is, first, remember that all the simulated results mentioned in this book already include the effects of using the delayed data.
Also recall that if you put in enough time and careful analysis up front, you’re likely only going to be tracking funds with lower turnover in the first place. However, way back in 2012, I did a study to try and quantify the effect of the 45-day lag. There are some inherent biases no matter how you chop up the data, like how many funds to include, long/short only or the entire universe, whether to include dead funds, whether to regress returns based on turnover, yada, yada, but I looked at about 20 funds that I’ve been following for years on my blog. I compared rebalancing on the 13F filing date to rebalancing at the end of the prior quarter.
So basically, a look-ahead bias investors don’t have. It shows how a portfolio constructed without the 45-day delay compares to a portfolio built with publicly available information. Tests go back to 2000 and examine total return data with no transaction costs. So does it matter? A little. There was wide variation across the funds, which is to be expected; the delay ranged anywhere from a three percentage point penalty for a few funds to a two percentage point benefit. Overall, the friction from the delay averaged about 1.5 percentage points per annum, so not that bad. Another aside: it doesn’t matter a whole lot when you rebalance after disclosure, as long as you just do it at some point.
Question two, “Shorts don’t show up on a manager’s disclosures, so you’re not really replicating the fund, right?” And ditto for futures, which are also undisclosed. This is important because you’re only replicating the fund’s long stock positions. A firm like The Baupost Group, which we talk about later, may have most of its assets in real estate or distressed debt and only a fraction in equities. So clone portfolios will have serious tracking error in comparison to the underlying fund. However, in many cases, the clones and hedged versions of the clones perform similarly or, in some cases, superior to the underlying fund, and fees are a big reason why.
Question three, “Why shouldn’t I just pick the top stock? Isn’t that a manager’s best idea?” We see a lot of people make this mistake. We found that the top pick is usually the worst performer out of the top 10 holdings. We discuss the topic more in depth later in the book, but by the time a position becomes the largest holding, it is often due to appreciation and not necessarily conviction.
Question number four, “What funds should I track? Why can’t I track the whole universe?” We actually think tracking the entire hedge fund universe is a great idea for a short fund. An investor doesn’t want the broad market exposure, or beta, of hedge funds, which is likely to simply be S&P 500-like in nature. Investors want the alpha in hedge funds, and tracking the thousands of hedge funds, most of which are not long-term oriented value stock pickers, is a really, really, really bad idea. You may also run the risk of being invested in stocks with a high concentration of fund ownership, which imposes liquidation risk in the case of market stress. Look to the recent Goldman VIP fund, which I think is a wonderful short and a terrible idea for an ETF. As far as what funds to track, we outlined 20 funds in the book as well as dozens of funds in the first book and on the blog. Build your own list of favorites through research and always through reading.
Question five, “Can I just filter the stocks by market cap, sector, momentum, etc.? What about a small cap bias?” The answer is yes, you can, but realize that part of the benefit of tracking these managers is their ability to go anywhere. Also realize that any [inaudible 00:34:43] to the portfolio will have the resulting impact of potentially making it less diversified or sector biased. Some funds are inherently sector-focused, which is slightly different; examples are health care funds like RA [SP], Baker Brothers, OrbiMed, Palo Alto, etc.
Six, and this is a tough one we’ve talked about a few times on the podcast, is “How do I know when to stop following a manager?” There are a lot of ways, and this is where it’s a little bit more subjective and hard, and domain expertise really helps. Look for things like style drift, lost enthusiasm, resting on their laurels, a nasty divorce, too many assets, going to jail, or newer, younger, and hungrier managers taking over. There are lots of reasons, but the criteria are subjective and it’s tough.
Number seven, and last, “Doesn’t piggybacking on these managers make them angry? Aren’t you stealing their ideas?” As I said, “Actually, I think these managers should be sending me cases of champagne.” In the book I actually said, “I’d prefer tequila,” but I think I’ve gone back to champagne or beer. Shockingly, none of them have yet. And why should they be angry? By definition, people following 13Fs would be buying what these managers are selling, at some point. We’re now gonna take a look at some of my favorite managers’ track records.
There’s no specific screening requirement to arrive at these funds; rather, it’s a combination of years of study combined with qualitative as well as quantitative analysis. Another 15 fund profiles are included in the appendix, with somewhat shorter investment track records. I’ll offer a very brief introduction to each manager as well as the backtested performance, current holdings, and the most recent filing. They’re in alphabetical order, but it seems fitting we start with the top performing fund and the first profile, David Tepper’s Appaloosa Management.
Let’s move on to our first of two manager profiles, Appaloosa Management’s David Tepper. He’s got a quote that starts the chapter: “The key is to wait. Sometimes the hardest thing to do is to do nothing.” You’d expect any fund that takes its name from a distinctive breed of leopard-spotted horse to stand out from the crowd. Appaloosa Management does just that, in large part because of the unique and idiosyncratic investing style of its founder, David Tepper. Appaloosa has grown into one of the more influential and storied hedge funds, but its founder grew up in a modest neighborhood in Pittsburgh.
His accountant father hit the jackpot in 1986 with a winning lottery ticket. The payoff was $30,000 per year, a windfall for the elder Tepper at the time. These days, David Tepper earns more than that in an hour. He topped the 2014 Rich List for hedge fund manager compensation, published by Institutional Investor’s Alpha magazine, which estimated his 2013 earnings at $3.5 billion. It was the second year in a row he came out number one. What makes Tepper worth that much? A $20 billion hedge fund that he founded in Short Hills, New Jersey in 1993, Appaloosa Management regularly turns out returns that delight his investors and wow analysts. His flagship fund, Appaloosa One, has produced an estimated 29% net annualized gain since its launch in July 1993.
Tepper’s not shy about tooting his own horn. “I hope for it to be recognized that in the past 20 years I, arguably, have the best record and therefore may be the best of this generation,” he commented in an interview. Round faced and jovial, he projects the air of a film character actor, the simple but sincere sidekick to a leading man. His diction retains the imprint of the working-class neighborhood where he grew up, so that when he says “the markets” it comes out as “da markets.” He once described himself as just a regular upper-middle-class guy who happens to be a billionaire. But while his pronunciation may not be perfect, his pronouncements and investments tend to be spot on. Wall Street views him as an investment guru worthy of emulation.
These days, he has the power to move markets with a few choice words. When he was a guest on the CNBC program “Squawk Box” in May of 2013, he offered a long and detailed explanation of why he thought markets were headed higher. S&P futures had been trading lower before he spoke. By the end of the day the S&P had risen 17 points, a bump many attributed to a “Tepper rally.” While Tepper is closely watched for his views on equity markets, his forte is actually debt. Early in his career, before he was head of the high-yield desk at Goldman, Tepper worked as a finance analyst at Republic Steel Corporation of Ohio. It was there, in the midst of that financially insolvent steel company, that Tepper learned to navigate the complex credit structure of a distressed company, a skill that would later come to define so much of his investing strategy.
By 1993, Tepper had acquired enough capital, aided by a partial cash infusion from his Goldman Sachs colleague Jack Walton, to open Appaloosa Management. The general aim of the fund was to draw on his expertise by emphasizing investments in bankruptcies and distressed debt situations through a 70:30 debt-equity allocation in global publicly traded markets. But beyond those loose restrictions, the fund was open to any investing opportunity, and Tepper prided himself on being sector-agnostic, event-driven, and often unorthodox. He has a reputation for taking bets contrary to conventional market wisdom, often earning windfall returns while others were nursing losses.
“The point is markets adapt, people adapt,” he once said. “Don’t listen to all the crap out there.” His style relies on macroeconomic and market analysis that he combines with deep and thorough research into specific investment opportunities. While he’s maintained the distressed debt specialty in the strategy, he’s ventured into other fields, sometimes taking a major position in a company and becoming an activist investor, pushing for changes to enhance shareholder value. In recent years, some of his best returns have come from equities, leading other equity investors and analysts to closely monitor his portfolio. Part of his strategy is to move against the grain.
Turnaround situations are his strength, such as when he bought the sovereign debt of Argentina in 1995 when most investors sought cover from the financial crisis or, similarly, when he purchased futures on the South Korean currency in 1997 as most investors were pulling out of Asian markets. Tellingly, Tepper defines his approach with statements like, “We lead the herd. The street follows us, we don’t follow the street.” And, “We’re consistently inconsistent, and it’s one of the cornerstones of our success.” Some of his most famous bets at Appaloosa were buying debt for pennies on the dollar in big bankruptcies, including Algoma Steel, Enron, WorldCom, and Conseco.
He often wildly shifts around sectors; he is the textbook definition of an opportunistic investor. A lot of his success is owed to well-timed trades, like the financial sector in ’09 to 2011. So maybe the best tactic, when tracking Tepper, is to pay attention to what he says at any given moment but keep an even closer eye on what he does with his portfolios. So, what do they look like? If you pull out a printout of Tepper’s portfolio…and we’ve included this up to 2016, since the book only goes up to 2014…which has names like Allergan, Google, Facebook, Allstate, and Pfizer, it has performed a whopping 19% per year compared to 4.9% for the S&P. That’s the highest we have in the book, and some pretty astonishing outperformance as well. So a really interesting one, and to contrast that with Buffett, who’s underperformed in 7 of the last 10 years in the U.S., this Tepper portfolio has outperformed. I may have to go back to the tape on this one, but it’s something like 13 of the last 16 years, so pretty incredible.
Next we’re gonna move on to one of the most classic, famous value investors on the planet, and this is the Baupost Group’s Seth Klarman. We have a quote, to start, from Seth: “In capital markets, price is set by the most panicked seller at the end of a trading day. Value, which is determined by cash flows and assets, is not. In this environment, the chaos is so extreme, the panic selling so urgent, there’s almost no possibility that sellers are acting on superior information. Indeed, in situation after situation, it seems clear that fundamentals do not factor into their decision-making at all.” “The Intelligent Investor,” Ben Graham’s definitive book on value investing, was selling in paperback for $12.97 on Amazon in November 2014.
Since founding Baupost Group in 1983, Klarman has grown it into a hedge fund giant, managing over $30 billion. His flagship fund has churned out more than 17% annual returns since its founding, handily beating the S&P 500 and doing it while often holding 40% or more of its assets in cash. Like many value investors, Klarman likes to slowly build up concentrated bets, and he accepts long holding periods of three to five years. For example, Baupost spent three years amassing a 35% ownership stake in Idenix Pharmaceuticals of Cambridge, Mass. When Merck and Co. announced a $4 billion takeover in June 2014, he realized nearly a billion dollars in profits.
He went on to say that, “Some people are born with the nerve and intuition to be great investors. For me, it is natural. For a lot of other people it is fighting human nature.” In “Margin of Safety,” Klarman credits success to the Graham and Dodd model, claiming that one must be willing to walk away from an alluring investment if, through careful scrutiny and review, the investment does not provide sufficient room for error. Klarman honed his natural ability at value investing while working as an intern for two years at Mutual Shares Corporation under the tutelage of Max Heine and Michael Price.
A Harvard MBA, Klarman was soon recruited by one of his former professors there to run a family office. That led Klarman to launch Baupost in 1983 with $27 million, its name combining parts of the names of the families being represented. These days, its clients include Harvard University itself, along with Yale and Stanford. Klarman has a special knack for complex transactions that often come with limited liquidity. He has purchased real estate that was acquired by the U.S. government in the savings and loan collapse of the 1990s, dabbled in Parisian office buildings, and drilled into Russian oil companies.
Baupost made a killing in the aftermath of Bernie Madoff’s massive Ponzi scheme by buying claims from victims who figured they stood little chance of fully recovering their losses. Baupost bought $230 million worth of claims for $74 million, then saw its investment more than double in value after a favorable court ruling on the distribution of certain assets. Although Klarman seems to delight in fishing for opportunity in obscure and complex deals, he is no slouch when it comes to stock picking. He runs concentrated portfolios, as evidenced by his positions in…and we’ll update this…the third quarter of 2014. The top five represent the lion’s share of invested assets, and that’s true today: the market value in the top 10 positions as of the last filing is 77%.
As a long-term investor, Klarman doesn’t spend much time monitoring the daily movement of markets. His office features a desk piled high with papers, a computer, and some half-filled water bottles, with no Bloomberg terminal, the device with access to market data that traders rely upon. Klarman runs Baupost with the same kind of deliberate planning. Rather than divide up his analysts according to specific sectors of the market like pharma, financials, or oil, he assigns them to general areas of investment opportunity instead. Some focus on distressed debt, while others are oriented toward post-bankruptcy equity, and still others work on spin-offs and index-fund deletions, and so on.
These processes allow Klarman to remain diligent about mispriced securities, over-leveraged companies, and misguided selling. And while Klarman cautions the investor against the uncertainties in the market and identifies the current economic environment as the most alarming in his lifetime, he still believes there are real opportunities to make sound investments. Klarman prides himself as much on not losing money as he does on making it. He’s only had two negative years…may have to update that…’92 and 2008. Also note that he invests in other assets besides stocks, including real estate, bonds, and cash, and his top 10 clone would’ve had a 5 [inaudible 00:48:33] since 2000, because remember we’re talking about [inaudible 00:48:34] equities here.
When Charlie Rose asked Klarman to name his biggest mistakes, “The Sage of Boston” thought for a moment and came up empty. “I’ve never really screwed up a lot. Knock on wood.” How many investors who have been at it for three decades can say that? In summing up his investment philosophy, he said, “I would be buying what other people are selling. I would be buying what is loathed and despised.” So what’s he buying these days? I printed out his recent 13F, and it shows that if you go back to 2000, his performance is similar to Buffett’s: 10.2% versus 4.9% for the S&P, with similar, or a little higher, volatility.
One of the cool things about his portfolio is that you end up with a hugely different holdings list than you see with other hedge funds. So for example, there are some names on here that many have probably never heard of: ViaSat (VSAT), Synchrony Financial, Allergan (that was one we just mentioned), 21st Century Fox, PBF Energy, Taro Resources, Theravance Biopharma, Colony Northstar, Chinari [SP] Energy, all sorts of these. But one of the interesting parts is you can also invest in Klarman and Baupost during a fairly large drawdown. He had a pretty terrible year in the latter half of 2014 and in 2015, so he’s at levels that you haven’t seen since back in 2013 and probably, arguably, not quite a 50% drawdown, but not too far off, I believe. Anyway, one of my favorite investors…a really interesting one to follow.
We were gonna talk a little bit about fund groups and strategies, but I’m gonna cut this short and we’re gonna see what everyone thinks about this sort of really long podcast. If you all really like it, let us know, or if you hate it. We’ve got, you know, another dozen profiles in the front part of the book and another 20 in the back. We could start doing this weekly on a different day than Wednesday, adding them on at some point. Let us know what you think, positive or negative feedback, at mebfabershow.com. We’re gonna do a summary and implementation, a real quick summary of some ideas here. A reminder, you can download a free book for this podcast.
So let’s do the final two chapters and then we’ll shut this down. I always like to read research paper and book summaries in bullet format, maybe because I like to skip to the end, kind of like this podcast. Hopefully you enjoy the fascinating world of many of these fund managers, and the ideas presented here will be a great starting point for more research and stock ideas. You can always follow along with my favorite ideas, as well, on The Idea Farm. So we’re gonna condense this 200-plus page book into less than 10 bullet points.
1. It is very simple to track holdings of institutional fund managers using 13F filings submitted quarterly to the SEC.
2. Following a subset of fund managers can lead to new investment ideas. Additionally, investment portfolios can be constructed tracking a hedge fund’s long portfolio performance without many of the traditional drawbacks of allocating to private funds.
3. Because value managers have long-term holding periods and low turnover, the 45-day delay in reported holdings should not be a significant drawback.
4. Case studies are presented examining 20 value investors, with backtested results for the portfolios since 2000.
5. Results indicate that by tracking and rebalancing portfolios quarterly, an investor can effectively replicate the long holdings of value hedge funds without paying the high hedge fund fees.
6. Following the top value hedge funds can result in excess returns with in-line volatility compared with the equity and hedge fund indices.
7. An investor could invest in multiple managers…we didn’t touch on this today…to create his or her own fund of funds, again, without paying an additional layer of fees. Additional applications include constructing hedged portfolios and leveraged portfolios, as well as sector portfolios.
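The bullets above lean on two mechanics: quarterly rebalancing and the 45-day reporting delay for 13F filings. Here's a minimal sketch of the resulting trade calendar. The function name and the simple "quarter-end plus 45 days" rule are assumptions for illustration; actual SEC deadlines shift when the 45th day falls on a weekend or holiday.

```python
from datetime import date, timedelta

def rebalance_dates(year):
    """Earliest safe rebalance dates for a 13F-tracking portfolio.

    13Fs are due within 45 days of each quarter-end, so the clone
    can only trade on the new holdings after that window closes.
    """
    quarter_ends = [date(year, 3, 31), date(year, 6, 30),
                    date(year, 9, 30), date(year, 12, 31)]
    return [q + timedelta(days=45) for q in quarter_ends]

# e.g. the March 31 filings are all in by mid-May,
# and the December 31 filings by mid-February of the next year.
print(rebalance_dates(2017))
```

Because value managers turn over their books slowly (bullet 3), trading on this lagged calendar still captures most of what the funds hold.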
So let’s talk a little bit about implementation: how does one go about implementing these strategies? First, you can track any one manager, or build your own hedge fund of funds by choosing a group of your favorite managers. We demonstrate that you could replicate most managers with their top 5 holdings, so even if you follow 20 funds that’s a fairly reasonable list of 100 stocks, and then if you exclude the top holding as a sub-optimal pick, that reduces the number to 80 stocks. However, it is very important to pay attention to the commissions as well as the spreads that an investor would pay to execute this portfolio. Thankfully, there are a number of brokerages that charge reasonable transaction costs, as well as plenty that do not. Some brokerages to explore include Interactive Brokers, Motif, Folio, TD Ameritrade and, one that doesn’t charge any commissions at all, Robinhood.
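As a rough illustration of the mechanics just described (taking each fund's top 5 holdings and optionally dropping the largest position as a sub-optimal pick), here's a minimal sketch in Python. The fund names and tickers are placeholders rather than real 13F data, and the equal-weighting choice is an assumption, not the book's prescribed method:

```python
def build_clone(fund_holdings, top_n=5, skip_top_pick=True):
    """Combine each fund's largest positions into one equal-weighted clone.

    fund_holdings: dict mapping fund name -> list of tickers ordered by
    position size (largest first), as reported on the latest 13F.
    skip_top_pick: drop each fund's #1 holding, since the top pick is
    often the worst performer of the top 10.
    """
    tickers = []
    for fund, holdings in fund_holdings.items():
        start = 1 if skip_top_pick else 0
        tickers.extend(holdings[start:start + top_n])
    unique = sorted(set(tickers))      # dedupe picks shared across funds
    weight = 1.0 / len(unique)         # equal weight across the clone
    return {t: weight for t in unique}

# Placeholder data: two hypothetical funds with their top holdings.
holdings = {
    "Fund A": ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"],
    "Fund B": ["BBB", "GGG", "HHH", "III", "JJJ", "KKK"],
}
clone = build_clone(holdings)
```

With 20 funds and 5 names each, this yields the roughly 80-to-100-stock list mentioned above, minus whatever overlap exists between managers.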
There are some other good sites that track 13F holdings, including Whale Wisdom and Insider Monkey, and newsletters such as “Market Folly” and “Super Investor Insight.” For those who don’t wanna track and trade 13F strategies, there are a handful of funds, public and private, that are managed by professional investors tracking 13F strategies. A very enterprising researcher with time on their hands could find a stock database without survivorship bias…Norgate’s a great one, by the way…and piece together backtests from publicly available data. Other databases include Bloomberg, the SEC, and FactSet. Be forewarned, it is a tedious process.
So I’m gonna wind down today. In the appendix, there are other resources. Remember, freebook.mebfabershow.com to download this. You’ve got the websites for 13Fs, you’ve got a list of conferences to learn more, and you’ve got a list of books that profile top hedge fund managers. Even cooler, there’s suggested reading from top hedge fund managers including John Griffin’s Blue Ridge, Bill Ackman’s Pershing Square, Seth Klarman’s Baupost, David Einhorn’s Greenlight, Buffett, and Dan Loeb’s Third Point. All of those have recommended reading.
So we’re gonna wind this down. Everyone, thanks for taking the time to listen today. Again, Jeff and I always welcome feedback and questions through the mailbag at feedback@themebfabershow.com. As a reminder, you can always find the show notes and the other episodes at mebfaber.com/podcasts. Please subscribe to the show on iTunes and if you’re enjoying the podcast, leave us a review. Thanks for listening, friends, and good investing.
Sponsor: Today’s podcast is sponsored by The Idea Farm. Do you want the same investing edge as the pros? The Idea Farm gives small investors the same market research usually reserved for only the world’s largest institutions, funds, and money managers. These are reports from some of the most respected research shops in investing. Many of them cost thousands and are only available to institutions or investment professionals but now they’re yours with The Idea Farm subscription. Are you ready for an investing edge? Visit theideafarm.com to learn more.
Explaining the dramatic rise of incarceration in the United States has been surprisingly difficult. Theories abound, but they are continually defeated by the vastness and complexity of the American criminal justice system. For a time, the prime suspect was the War on Drugs, which President Obama described as “the real reason our prison population is so high.” Numerically, this never made sense, given that drug offenders are a small fraction of state prisoners.1 Mandatory minimums and three-strikes laws were tangible reforms that attracted a great deal of attention. But as causal explanations they, too, wither under scrutiny. “There’s not a lot of evidence that the amount of time spent in prison has changed that much,” as law professor John Pfaff recently observed.2 Even landmark pieces of federal legislation—think of the 1994 Violent Crime Control and Law Enforcement Act, which dogged Hillary Clinton during her 2016 campaign—probably had minor statistical impact.
For now, the vanguard theory is that criminal prosecutors, at some point in the late twentieth century, changed their behavior. “The primary driver of incarceration is increased prosecutorial toughness when it comes to charging people, not longer sentences,” Pfaff wrote.3 This idea, most fully developed by Pfaff and by the late Harvard professor William Stuntz, rests on data from the National Center on State Courts, which appear to show that even as crime (and arrests) declined between 1994 and 2008, the number of felony cases filed by prosecutors sharply increased. The incarceration rise was a three-decade phenomenon (roughly 1975–2005), and it is not clear how well the prosecutor behavior thesis explains the early years, where relevant data are weaker or nonexistent. But it remains the best explanation we have.
This theory has significant practical implications. It suggests that prison reformers have a daunting task ahead of them, because their efforts must be applied at the level of thousands of different counties. This realization drew Stuntz’s attention to what economists call the agency problem, but in the context of county governance. In The Modern Corporation and Private Property (1932), Adolf Berle and Gardiner Means noted that the management of business corporations had become detached from the ownership of those corporations. Stuntz observed that American criminal justice “faces the same governance problem, but in worse form.”4 Since prosecutors are elected at the county level, suburbanization has given residents outside of cities—for whom crime is no longer a main concern—a great deal of power over urban criminal justice. These suburban voters are often indifferent to calls for reform, and the racial composition of this arrangement may explain some of the inertia.
The prosecutor behavior theory also invites a rethinking of much of the intellectual history behind shifts in U.S. crime policy. Typically underemphasized in standard accounts is the fact that crime rates began soaring in the same decade (1960–1970) that saw the prison population decline by more than twenty-five thousand people. The United States, at the time, had a relatively lenient justice system. (It wasn’t Nixon or Reagan but Hannah Arendt who issued the complaint, fairly representative of the late 1960s, that “the odds in favor of the criminal are so high that the constant rise in criminal offenses seems only natural.”5) The consequence was a surge of populist anger: street crime became a nationwide political issue that simmered for over a generation. The incarceration boom, however, wasn’t entirely a reaction to popular sentiment. Muddying that explanation is a curious fact, puzzled over by Pfaff and others: the public became more punitive, but prosecutors and policymakers even more so. Both started thinking about prisons differently. To some unquantifiable degree, the intellectual history of the era contributed to penal expansion.
This is true despite the fact that criminologists tend to be pessimistic about their influence on public policy, even in comparison to other sociologists. A spelunker in the jstor archives will frequently come across titles like “Why Criminology Is Irrelevant” and hear complaints about “the limited influence of scientific knowledge on criminal justice policy.”6 This pessimism is easy to understand. In recent decades the most politically influential “criminologists” tended to be interlopers and malcontents, and the two that loom the largest, perhaps, are Robert Martinson and James Q. Wilson. The first was a charismatic Freedom Rider who nearly destroyed the “rehabilitative ideal” or “treatment” ethos, eventually recanting his views before taking his own life. The second was the most eminent conservative political scientist of his generation, a man who helped construct what is now called “mass incarceration” out of the rubble of Martinson’s career.
In the 1960s and early 1970s, a number of infamous social science “reports” not only polarized Left and Right but often widened divides between centrist liberals and the Far Left as well. Examples would include the 1965 Moynihan Report, a leaked Johnson administration paper that warned about changes in African American family structure; the 1966 Coleman Report, a study commissioned by the U.S. Department of Education to scrutinize educational disparities between whites and blacks; and the 1967 Kerner Commission, an investigation into the causes of the Watts, Chicago, and Newark riots. Although the label is now mostly associated with foreign policy, it was in the heated controversies over these reports that the term neoconservative began to take on real meaning—referring to a small but influential group of Democrats who began to break ranks with their former political allies over the validity and interpretation of these studies.
The Korean War interrupted his studies; then McCarthyism interrupted his Army service. The attorney general had placed the Socialist Youth League on a list of subversive organizations, and Martinson was brought before a military tribunal after disclosing his membership. He was a lowly private, but the trial was remarkably thorough. As he wrote to a friend, “They backed up this charge with a huge, five-pound secret file, containing . . . all the assiduous diggings of the FBI about my lurid past.”10 The outcome was a triumph of bureaucratic inscrutability. At first dismissed from the Army as an “undesirable,” he received a letter eighteen months later granting him an honorable discharge.
In the 1950s, the most bourgeois social scientist in America could have endorsed this set of principles. The twenty-two-year-old Martinson was describing the conventional wisdom of his own country, what legal scholar Francis Allen dubbed “the rehabilitative ideal”: the notion that the purpose of penal sanctions should be “to effect changes in the characters, attitudes, and behavior of convicted offenders,” so as to strengthen the social defense against bad behavior and to contribute to the offenders’ own welfare.12 This ideal was exactly what Martinson would become famous for attacking.
Reading through the statements of the rehabilitative ideal, one notes a recurring pattern beyond the assertion of the ideal itself. There is a tendency to acknowledge that—while the rehabilitation of criminals is the nominal goal—correctional institutions often fail to live up to this ideal, and instead have come to embody more callous tasks like simple punishment or the warehousing of criminals. The historian David Rothman’s analysis of “the decline from rehabilitation to custodianship” in nineteenth-century prisons sounds almost exactly like the Lyndon B. Johnson Crime Commission’s analysis of contemporary ones. For well over a century, the social science literature followed the same wearying pattern, in which criminal justice professionals loudly announce the goal of rehabilitation and develop institutions for its realization. The next generation, having seen that these institutions diverged from their founding ideals, deplores the gap between creed and practice in the American prison system and, once again, loudly announces the goal of rehabilitation, then develops new institutions and procedures for its realization. Few bodies of professional literature are as numbingly repetitious as penal science between 1870 and 1970.
Soon after 1970, though, the pattern abruptly stops. Not that prisoner rehabilitation efforts changed much in practice—genuine attempts at prisoner reform had always been sporadic and those efforts tended to lose their robustness quickly. But the ideal of rehabilitation largely faded from view. As Allen writes, the significance of the 1970s is the unprecedented degree to which the rehabilitative ideal “suffered defections” not only from prison wardens, politicians, the media, and the public, “but also from scholars and professionals in criminology, penology, and the law.”17 The decline of the rehabilitative ideal was just that—the decline of a professional ideal.
One should not be misled by the starry-eyed connotations of the word “ideal.” The decline of an ideal can be an event of enormous practical significance. Even if American prisons only haphazardly offered therapeutic programs for inmates, the rehabilitative ideal nonetheless influenced the everyday reality of criminal justice, at least until the 1980s. It stimulated the creation of many noteworthy innovations: parole and probation, juvenile custody, mental institutions, and systems of “indeterminate” sentencing. All of these innovations were at bottom what Marc Plattner called the “practical concomitant[s]” of the rehabilitative ideal.18 Even if they failed to work as intended, they may have served a limiting function, diverting offenders away from the cell block.
Martinson conducted a two-year study of “Treatment Ideology and Correctional Bureaucracy” in California youth prisons for his doctoral thesis. Although he swathed his normally vivid prose in the suffocating blanket of academic argot, a growing cynicism is not hard to discern: “Is the bureaucratization of a complex of ‘total institutions’ within a post-adjudicatory system an ominous development? . . . What are the implications of the permeation of ‘people-changing’ ideologies into complex correctional organizations?”20 His thesis was like most: it promptly sank into obscurity. Google Scholar gives it a single citation. What made his career, and ultimately destroyed it, was Governor Nelson Rockefeller’s need to do something—or look like he was doing something—about skyrocketing levels of crime in New York.
The Rockefeller Committee launched its research effort amid growing doubts about the rehabilitative function of prisons. Leftists and conservative hard-liners formed the left and right flanks of what was, in effect, a pincer movement. They attacked rehabilitation for contradictory reasons. Typically, the Right argued that it was an ineffective waste of resources, while the Left thought it was insidiously effective. The great object of rehabilitation was the “moral regeneration” of the inmate, but the Left at the time proudly rejected the moral values of the liberal center. Thus Jessica Mitford, the author of Kind and Usual Punishment (1973), located the true purpose of treatment in making disaffected “lower-class persons” conform to a “Christian middle-class ethic.” For Mitford, as for much of the Left, rehabilitative treatment was—“and ever shall be”—a mechanism for exerting control over the inmate population while “assuaging the public conscience” with compassionate-sounding talk about rejecting punishment in favor of rehabilitation. It was often the case, Mitford argued, that “dangerous” criminals in need of “reform” turned out to be “the political nonconformist, the malcontent, [or] the inmate leader of an ethnic group.” Concern for socialization and moral improvement, therefore, was an insidious cover for the biddings of power.
Initially, state officials suppressed the results of the Rockefeller study. They considered the findings a disturbing threat to rehabilitation programs they were committed to sustaining, with or without empirical sanction. Too much money and too many correctional jobs hung in the balance. As the study began to acquire “something of an underground reputation” among criminal justice professionals, a New York attorney teamed up with Martinson to get a subpoena issued in a Bronx Supreme Court case. Subsequently, the state relented and gave permission to publish. The results were twofold: a gargantuan, seven-hundred-page volume, and a shorter, punchier 1974 article in the Public Interest—“What Works?—Questions and Answers about Prison Reform.”28 Martinson wrote the article independently. Lipton took a less pessimistic view of the results, so Martinson took it upon himself to provide an unvarnished account of what the team had found.
This became known as the “nothing works” doctrine, and in this crude form it was frequently invoked by policymakers. Almost everyone, Left or Right, who had grievances with the criminal justice system cited it abundantly, and with vindictive satisfaction. “I’m sure you’ll find the Martinson piece especially interesting,” Irving Kristol, an editor of the Public Interest, wrote to liberal journalist Tom Wicker. “Just don’t let it depress you too much!” Mainstream criminology also registered an instant change. As early as 1976, Stuart Adams observed a “visible impact on criminal justice practitioners and opinion leaders alike,” a phenomenon which he dubbed “Martinson-shock.” Science magazine reported that William Saxbe, attorney general under Nixon and Ford, had been “strongly influenced” by the report, and the dean of the University of Chicago Law School used the keynote address at the 1975 Congress of Corrections to urge his colleagues to set aside—at least for the time being—the goal of rehabilitation.30 All told, Martinson provoked a lengthy period of soul-searching in the profession, one which opened it up to new ideas, especially those coming from the Right.
Martinson’s view that the rehabilitative ideal was the linchpin of the prison system was widely held at the time. It struck many criminal justice experts as something close to a logical inexorability that, in its absence, prisons would start to disappear. As it turned out, they read too much into what one academic called “the philosophical problem” of the treatment ethos: “If rehabilitation is the object, and if there is little or no evidence that available correctional systems will produce much rehabilitation, why should any offenders be sent to any institutions?”36 Many criminologists drastically underestimated the possibility of any alternative justifications.
The academic who described this “philosophical problem” was James Q. Wilson, a distinguished political scientist who brought “What Works?” to the editors of the Public Interest.37 Wilson was one of the top experts on bureaucracy and public administration who regularly served as an adviser to Republican presidents. When he died in 2012, the obituaries highlighted his “broken windows” theory, codeveloped with George Kelling, which put forth a hypothesis about crime (“disorder and crime are . . . inextricably linked, in a kind of developmental sequence”) and a policing strategy to go with it. Almost none of the tributes, however, mentioned his earlier foray into criminology, though it was probably more significant.
Wilson’s rhetorical style was well suited to his political niche. His commonsensical manner gave him authority not only with the educated layman but with a new generation of conservatives. Irving Kristol once remarked that the self-imposed task of neoconservatives was “to explain to the American people why they are right, and to the intellectuals why they are wrong.” He was at least half right. The early neoconservatives were influential because they could explain social science, not necessarily “to the American people,” but to important people on the right and in the center. Or at least they could explain why social science was on their side.
Wilson’s contribution to “policy analysis” was the argument that we could reduce the crime rate by sending more people to prison. At a 1975 White House seminar for the benefit of Attorney General Edward Levi, a notetaker wrote down his overriding suggestion: “increase prison intake > decrease crime rate.”46 It may sound odd to call this an innovative thought, but it was. “I remember being shocked at seeing his article [“Lock ’Em Up and Other Thoughts on Crime,” a selection from Thinking about Crime published in the New York Times], because it was so completely new,” Charles Murray recalls. “Among academic elites, hardly anyone . . . was on his side at that time.”47 Prisons have traditionally served three functions: rehabilitation, deterrence, and “incapacitation” (or physically restraining offenders so they cannot offend again). In a conceptual landscape newly flattened by Martinson’s assault on the rehabilitative ideal—at a time when prisons were a practice in search of a policy—Wilson sought to elevate the third and most basic function, something which had seldom received scholarly attention.
The “incapacitation” campaign forked into an academic debate (the National Academy of Sciences got involved) and an informal policy persuasion. The latter was much more important. At a time of criminological turmoil and confusion, Wilson’s clarity was a tonic. The GOP leadership in Pennsylvania distributed Thinking about Crime to all Republican members of the legislature for their use in criminal justice planning.53 Governor Jerry Brown of California, a Democrat, ordered the book via air mail shortly before giving a major crime speech. After seeing the result, Wilson wrote to a colleague, “If he were a scholar, I’d accuse him of plagiarism!”54 In their book Incapacitation (1995), the criminologists Franklin Zimring and Gordon Hawkins flag a 1975 Gerald Ford speech at a Yale Law School convocation dinner as a symbolic turning point—a “remarkable example of presidential prescience and sophistication.”55 They were apparently unaware that the speech, almost from start to finish, was based on Wilson’s ideas.
It is still possible to argue that the first doubling of prison and jail capacity from its post-1960s low point of about 500,000, in the face of the great crime wave, was justified by necessity and by the gains in reduced victimization and fear that came from locking up some very high-rate serious criminals. But there is nothing to be said for the more-than-doubling from that already historically high level.
Wilson himself was more equivocal. Fêted for contributing to the great crime decline beginning in the 1990s, he even received the Presidential Medal of Freedom in 2003.57 But his later years were marked by a certain defensiveness. He agreed that America had too many people behind bars, at least for certain categories of crime, but he also complained that his critics neglected the advantages of the prison system as it currently stood.
In 2008, as a guest blogger at The Volokh Conspiracy, Wilson faced criticism from commenters about the size of the U.S. prison population, which by then had swollen to historic proportions. He tried to deflect attention away from himself, arguing that the American justice system had diverged from Europe because the United States was more democratic. “American policies were driven by public opinion while British ones were shaped by elite preferences,” he wrote.58 While there may be something to this argument, there can be no doubt that, at least for a crucial period in the late twentieth century, public opinion and elite preferences in America converged on the benefits of “incapacitation,” thanks in large part to Wilson himself.
Martinson’s final years, in contrast to Wilson’s, amounted to one of the sadder dénouements in the annals of social science. He maintained a belief in the evils of prison, but as the 1970s dragged on—and as the emerging punitive turn became dimly visible—he anxiously tried to find noninstitutional ways to break the crime wave and undercut the hard-liners.
By the end of the decade, Martinson faced an even greater humiliation. He recanted the view which had made him famous, acknowledging—on the basis of academic critiques and his own new research—that his previous work had been “misleading.”63 Few people noticed. Mainstream outlets did not report on his turnabout, and the “nothing works” meme lived on. They also didn’t print anything when, on August 11, 1979, Martinson leapt to his death from the fifteenth floor of his Manhattan apartment.
This article originally appeared in American Affairs Volume II, Number 3 (Fall 2018): 144–66.
1 Oliver Roeder, “Releasing Drug Offenders Won’t End Mass Incarceration,” FiveThirtyEight, July 17, 2015.
2 John F. Pfaff, Locked In: The True Causes of Mass Incarceration and How to Achieve Real Reform (New York: Basic Books, 2017), 6.
4 William J. Stuntz, The Collapse of American Criminal Justice (Cambridge: Belknap Press, 2011), 38.
5 Hannah Arendt, On Violence (New York: Harcourt, Brace & World, 1970), 98. Arendt was also worried that lackluster police efficiency would make police brutality more likely.
6 James Austin, “Why Criminology Is Irrelevant,” Criminology and Public Policy 2, no. 3 (July 2003): 557–64; Michael Tonry, “Evidence, Ideology, and Politics in the Making of American Criminal Justice Policy,” Crime and Justice 42, no. 1 (August 2013): 1–18.
7 Nathan Glazer, “Introduction,” in The Public Interest on Crime and Punishment, ed. Nathan Glazer (Lanham: University Press of America, 1984), xi.
9 Irving Krauss to 1st Lt. Erwin Friedman, February 6, 1955. This letter was part of the trial documents that were kindly provided to me by Robert’s son Michael C. Martinson. There were also letters on Martinson’s behalf from Norman Thomas and Robert Nisbet, among others.
10 Robert Martinson to “Friend,” October 20, 1956. This letter was included in the trial documents.
11 Robert Magnus, “Anti-Labor Despotism in the Stalin Legal Code,” Labor Action 13, no. 29 (July 18, 1949): 4.
12 Francis A. Allen, The Decline of the Rehabilitative Ideal: Penal Policy and Social Purpose (New Haven: Yale University Press, 1981), 2.
13 Nils Gilman, Mandarins of the Future: Modernization Theory in Cold War America (Baltimore: Johns Hopkins University Press, 2003), 16.
14 Allen, 19. In context, Allen is speaking more about institutions like schools and the family.
15 Marc F. Plattner, “The Rehabilitation of Punishment,” Public Interest 44 (Summer 1976): 104.
16 This passage draws from Jessica Mitford, “Prisons: The Menace of Liberal Reform,” New York Review of Books 18, no. 4 (March 9, 1972).
19 Robert Martinson, “Prison Notes of a Freedom Rider,” Nation (January 6, 1962): 4–6; Adam Humphreys, “Robert Martinson and the Tragedy of the American Prison,” Ribbonfarm (December 15, 2016). Humphreys is a Canadian filmmaker with a forthcoming documentary on Martinson and his legacy.
20 Robert Martinson, “Treatment Ideology and Correctional Bureaucracy: A Study of Organizational Change” (PhD diss., University of California, Berkeley, 1968).
21 Statement by Governor Nelson A. Rockefeller at public hearings of the Governor’s Special Committee on Criminal Offenders, held at the New York County Lawyers’ Association, New York City, October 14, 1966, Folder 1721, Box 47, N. A. Rockefeller Gubernatorial Records, Rockefeller Archive Center.
22 Author interview with Douglas Lipton on June 4, 2017.
23 An irony is that the academic term “carceral state,” which has arisen in scholarly discussions of mass incarceration, derives from the final chapter of Foucault’s Discipline and Punish. The argument of that chapter is that the prison as a specific location has become relatively less important, since the boundaries between prison and society have started to blur. Foucault suggests that disciplinary ideologies are growing insidiously stronger and diffusing through multiple institutions. At least in the American case, the hegemony of treatment collapsed as soon as his book went to print, and the prison as a specific, singular institution was about to become more important than it had ever been.
24 Robert Nisbet, “Many Tocquevilles,” American Scholar 46, no. 1 (Winter 1977): 67.
25 Robert Martinson, “The Age of Treatment: Some Implications of the Custody-Treatment Dimension,” Issues in Criminology 2, no. 2 (Fall 1966): 275–93.
26 Charles Murray, Losing Ground: American Social Policy, 1950–1980 (New York: Basic Books, 1984), 148.
27 Joseph F. Spillane, Coxsackie: The Life and Death of Prison Reform (Baltimore: Johns Hopkins University Press, 2014), 200.
28 Robert Martinson, “What Works? Questions and Answers About Prison Reform,” Public Interest (Spring 1974): 22–54; Douglas Lipton, Robert Martinson, and Judith Wilks, The Effectiveness of Correctional Treatment: A Survey of Treatment Evaluation Studies (New York: Praeger, 1975). I also draw on Humphreys for this paragraph.
29 The segment carried the unambiguous title “It Doesn’t Work,” Mike Wallace/CBS, 60 Minutes, August 24, 1975; see also Sasha Abramsky, American Furies: Crime, Punishment, and Vengeance in the Age of Mass Imprisonment (Boston: Beacon Press, 2007), 43–58.
30 Irving Kristol to Tom Wicker, April 3, 1974, Folder 34, Box 22, Series: The Public Interest, Irving Kristol Papers, Wisconsin Historical Society; Stuart Adams, “Evaluation: A Way Out of Rhetoric,” in Rehabilitation, Recidivism, and Research, ed. Matthew Matlin (Hackensack: National Council on Crime and Delinquency, 1976), 75–91; Constance Holden, “Prisons: Faith in ‘Rehabilitation’ Is Suffering a Collapse,” Science 188, no. 4190 (May 23, 1975): 815–17; Norval Morris, Keynote Address, 105th Congress of Corrections, Louisville, August 18, 1975.
31 Francis T. Cullen, “The Twelve People Who Saved Rehabilitation,” Criminology 43, no. 1 (2005): 1–42.
32 Ted Palmer, “Martinson Revisited,” in Rehabilitation, Recidivism, and Research, ed. Matthew Matlin (Hackensack: National Council on Crime and Delinquency, 1976), 41–62.
33 Robert Martinson, “California Research at the Crossroads,” in Rehabilitation, Recidivism, and Research, ed. Matthew Matlin (Hackensack: National Council on Crime and Delinquency, 1976), 63–74.
34 Irving Kristol to Robert Martinson, December 16, 1975, Folder 34, Box 22, Series: The Public Interest, Irving Kristol Papers, Wisconsin Historical Society.
35 Martinson’s four-part series “The Paradox of Prison Reform” ran from April 1, 1972 to April 29, 1972 in the New Republic.
36 James Q. Wilson, Thinking about Crime (New York: Basic Books, 1975), 170–71.
37 Nathan Glazer: “I did recall but vaguely that it was brought to us by Jim Wilson” (email message to author, June 23, 2017).
38 “To turn burglars into Rotarians” is Mark Kleiman’s phrase, used in a discussion of Wilson: “Thinking About Punishment: James Q. Wilson and Mass Incarceration,” NYU, Marron Institute of Urban Management, Working Paper #11, June 26, 2014.
39 Wilson, Thinking about Crime, 172–73.
40 James Q. Wilson, “Politics, Crime, and Society,” Discourses: Papers on Politics, Policy, and Political Theory 1 (Chicago: Loyola University of Chicago, 1975). A transcript of a talk given on October 21, 1975.
41 M. J. Sobran Jr., “Clarity About Crime, Class,” National Review (August 29, 1975): 948.
42 Wilson had more respect for economists because they aren’t focused on causality “in any fundamental sense.” He was influenced by Gary Becker’s rational-actor theory of crime, although he embraced a weaker version of it. Wilson combined this with an interest in “constitutional”—i.e., biological—factors, something evident in Thinking about Crime but greatly expanded on in Crime and Human Nature (1985), coauthored with Richard Herrnstein. Critics seized on the combination as logically contradictory, which is silly. Both can be true—or what’s more important to Wilson, both can be useful—to varying degrees and in various circumstances. Both have the effect of starkly individualizing the criminal—divorcing him from social context—which is why academics like to call Wilson “neoliberal” almost as much as neoconservative.
43 For causal vs. policy analysis, see Wilson, Thinking about Crime, 43–63.
45 Box 9, Folder “Crime Message (1) – (2),” James E. Connor Files. Gerald R. Ford Presidential Library; the president himself thought Wilson’s ideas were “most interesting & helpful.” Box C17, Folder “Presidential Handwriting 3/29/1975 (1),” Presidential Handwriting File. Gerald R. Ford Presidential Library; Ford wrote about Wilson’s influence in his memoir A Time to Heal (New York: Harper & Row, 1979), 269.
46 Box 3, Folder “James Q. Wilson,” Robert A. Goldwin Files, Gerald R. Ford Presidential Library.
47 Charles Murray, email message to author, February 6, 2013.
48 Author interview with Shlomo Shinnar on July 13, 2018; Shlomo Shinnar and Reuel Shinnar, “The Effects of Criminal Justice on the Control of Crime: A Quantitative Approach,” Law and Society Review 9, no. 4 (Summer 1975): 581–612. The two-thirds cut in the crime rate would be for a set of “serious” crimes. Reuel Shinnar had been working on this topic since the 1960s, but was rebuffed by the criminological establishment until the political environment changed. In fairness to the Shinnars, a study sponsored by the National Academy of Sciences concluded that while their study rests on simplified assumptions, it represents “the best approach to estimating the incapacitative effect to date.” Jacqueline Cohen, “The Incapacitative Effect of Imprisonment: A Critical Review of the Literature,” in Deterrence and Incapacitation: Estimating the Effects of Criminal Sanctions on Crime Rates, eds. Alfred Blumstein, Jacqueline Cohen, and Daniel Nagin (Washington, D.C.: National Academy of Sciences, 1978), 187–243.
49 Wilson, Thinking about Crime, 200–1.
50 James Q. Wilson, “How Crowded Prisons Throw Sentencing out of Whack,” Washington Star (November 21, 1976): F-1.
51 Box 3, Folder “James Q. Wilson,” Robert A. Goldwin Files. Gerald R. Ford Presidential Library.
53 See Jerome G. Miller, Search and Destroy: African-American Males in the Criminal Justice System (Cambridge: Cambridge University Press, 1996), 272.
54 Box 3, Folder “James Q. Wilson,” Robert A. Goldwin Files, Gerald R. Ford Presidential Library.
55 Franklin E. Zimring and Gordon Hawkins, Incapacitation: Penal Confinement and the Restraint of Crime (New York: Oxford University Press, 1995), 18. Significantly, Zimring and Hawkins wrote in the mid-1990s that incapacitation “now serves as the principal justification for imprisonment in the American criminal justice system,” whereas it hadn’t before. Emphasis mine.
56 Kleiman, “Thinking about Punishment,” accessed online. Kleiman notes that he has come to realize that his calculation was flawed. It didn’t calculate the benefits of what was supposed to be a cost-benefit calculation.
57 Studies on the effect of prison on the decline of the crime rate vary widely in their estimates. “The best scholars,” according to Wilson, attribute 25–30 percent of the decline to imprisonment. Some scholars give it much less credit.
58 James Q. Wilson, “What Do We Get From Prison?,” The Volokh Conspiracy (blog), June 9, 2008.
59 Lee Wohlfert, “Criminologist Bob Martinson Offers a Crime-Stopper: Put a Cop on Each Ex-Con,” People, February 23, 1976; Patricia Masterman, “Abolish Parole Board, Expert Urges at Hearing,” Amarillo Globe-Times, November 7, 1975, 1–2.
60 Robert Martinson to Irving Kristol, December 21, 1975, Folder 34, Box 22, Series: The Public Interest, Irving Kristol Papers, Wisconsin Historical Society.
61 Irving Kristol to Robert Martinson, January 13, 1976, Folder 34, Box 22, Series: The Public Interest, Irving Kristol Papers, Wisconsin Historical Society.
62 Mirko Bagaric, Dan Hunter, and Gabrielle Wolf, “Technological Incarceration and the End of the Prison Crisis,” Journal of Criminal Law and Criminology 108, no. 1 (Winter 2018): 73–135.
63 Robert Martinson, “New Findings, New Views: A Note of Caution Regarding Sentencing Reform,” Hofstra Law Review 7, no. 2 (Winter 1979): 243–58.
Timothy Crimmins is a graduate student in history at the University of Chicago.
In our wins, our defensive numbers are among the best in the league, Carlisle said.Wade Miller led the Trailblazers with 18 points, Andre Wilson added 12.Is not about being more nervous.
AAA is one of the more expensive programs, especially when you get into their premium plans.
And it’s not just interest rates.Check with your insurer to see their minimum grade requirements, and keep that glowing report card handy.Related: NHL News & Notes: Price, NHL Stars wholesale nfl jerseys of the Week & More In today’s News & Notes, the Vancouver Canucks and Ottawa Senators have made a trade, Alex Ovechkin has decided to forego attending the All-Star Game and Frederik Andersen is still out with an injury.Jane interjects.
2015 SEASON: Started all 13 games he played for the Chargers in his final season with the franchise…led the Chargers secondary with 75 tackles , adding 0 sacks, six passes defensed and one fumble recovery, before being placed on injured reserve on 12…They’re https://www.cheapnfljerseystousa.com going to call upon you, whether it’s to play center, guard or tackle �?just depending on who is healthy, who is playing, who is not playing.The undocumented immigrant accused in the murders of five people across two states has been found dead in a St.
I’m a very aggressive fighter.It marks the second time since 1986 a team was held to one field goal and the 3 percent shooting was the worst.Chicago : Posted five tackles and forced one fumble …If anybody can do it, he can get it done.
Suites of advanced safety systems are being included as standard equipment on more and more mainstream vehicles.24 Chris Duhon, New York ?Games this year where I thought I was going good and had zero points sometimes, last year I’d have five in a game.To search for players who were born on a certain date, for example all players born on December 25, choose the month, day and year with the drop down boxes and then choose the ‘Full Date Search’ option.Best interest this be my first meeting with Mr Rooney ever as Antonio Brown the man not AB84 the player in locker ?!Bank on Murray being a bench rusher all Fantasy season long.
Moore’s presence on the ice Saturday is obviously a step in the right direction in terms of his recovery, but he’ll need to be cleared for contact before rejoining the lineup.So how can you become that person who’s always given the lowest car insurance quotes and pays the best rate?He’s very consistent, very active, got a strong will, and he’s a smarter fighter than people expect.When firefighters arrived at the vacant building, they found heavy flames.Payton would bring in a much needed passing game for the Mav?s who average an NBA worst of 17 assists a game.
With an additional week between the end of the combine and the start of free agency, teams waited to use their free agency tags until after the combine ended.White and the Bills secondary will match-up against the Jets and quarterback Sam Darnold next week, who threw four interceptions Sunday.It is because on Monday the Panthers named Joel Quenneville their coach, bringing on board the three-time Stanley Cup champion with the Chicago Blackhawks who has more wins than every coach in NHL history other than Scotty Bowman .
Proceeds will benefit the You Can Play Project.He also had just 238 yards receiving, a low since his 12-game season of 2015.Everybody has supporters.Fiat does include general qualification standards, including mechanical standards, maintenance standards, appearance standards, and detail standards, when considering a vehicle for certification.They’re double-teaming, both on pick-and-rolls and on pin-downs, coach Rick Carlisle said.
Mar 6 1 PM Mark Cuban is strongly considering a run for President of the United States in 2020 as a third party candidate.That wasn’t the case in 2000.The NHL described the incident as noted below: As the video shows, McDavid is back-checking through center as the Islanders enter the zone on the rush.The fuel-efficient CX-5 has a well-appointed interior that is stylish and offers plenty of passenger space in the front row.The fact he has three years to go means that any side looking to sign Varane would have to pay through the teeth to sign him.
But it was his 1999 and 2000 seasons, his only full seasons as a Panther, when he made his mark.Our driver profile for the annual state-by-state study of car insurance premiums is a 40-year-old man with good credit and a clean driving record.Seattle is 6, which means the five-win teams as of right now are outside the playoff picture looking in.News and World Report offer resources to help.
And he was really good when he got them.But in the playoffs, as long as you’re playing the right way and playing hard and setting a good example for the next line following, it goes a long way.Pacquiao, whose pro career stretches back 24 years, showed he still has the speed that carried him over his spectacular career.She did not call me, recounted Espinoza.There are powertrain limited warranties and bumper-to-bumper warranties, and several automakers offer combinations of both.
Bjork is day-to-day after sustaining an upper-body shoulder injury during this past Sunday’s game against AHL Hershey, Mark Divver of The Providence Journal reports.Stiff or not, I’m playing, Harris said.If you would like to search for all players born on a certain day, for example all players born on December 25th https://www.newjerseysch.com in any year, choose the month and day with the drop down boxes and then choose the ‘Month and Day Search’ option.If you don’t you’re just making a dumb insurance mistake.Because Super Prime interest rates are so low, and the market so competitive, they don’t make much money off those car loans.
As a result, the veteran closes out the 2018 campaign with a career-best 79 points and tied a career high with 32 goals.The fans and media wanted to define what I was.Sometimes they’ll magically be able to find a bit more room to negotiate, especially if it is near the end of the month or you’re very close to a deal.Days like today are extremely important.For that, he won the Outland and Lombardi awards, becoming the first player in cheap jerseys SEC history to do so.It is certainly possible that a dealer will get you an order or an allocation at a competitive discount-that way you can get a price break once the car hits the lots.
Note: The RealPlayer is free while the RealPlayer Plus has a charge associated with it.
Luc Robitaille.Many restaurants offer great deals on appetizers, pasta, and more.They shoot 39% from 3-point range, which is good for 46th in the nation.Sunday vs Michigan State Player Spotlight Miles Bridges averages 18 points per game while playing 30 minutes per night this season.Which players were all over the stat sheets?
He carried the ball 7 times for an average of 7 yards per carry and ended with 48 yards.Chicago Bears https://www.newcheapjerseysshop.com at New York Giants Cleveland Browns at Houston Texans, 1 p.m.Useful Pitching Statistics Houston has a 16 overall mark this year.Even more remarkably, the league gave the Cavs compensatory first-round picks in 1983, 1985 and 1986, presumably to keep the franchise afloat after the ownership change.Edwin Encarnacion is really not hitting for average, but he is making contact when he does get a hit – he leads the league in home runs.Despite the diagnosis, he so far has avoided surgery and even was a limited participant at Friday’s session, leading to a questionable designation.
They have returned 5 kicks for 104 yards on special teams, ranking 15th in kick return yardage.Heat shooting guard Dwyane Wade has played in 29 games against the Hornets in his career and has averaged 20 points, 5 assists and 5 rebounds in those games.They allow 30 shots to their opposition per contest and have a team save percentage of 92%.He is playing with a tremendous swagger, and he is excellent defensively on his new team.As a team, the Sabres have a total of 116 goals scored while they’ve given up 100 goals to this point.
Playing for the first time in nine days, the Badgers shook off an early challenge from Grambling State to beat the Tigers 84.Copyright 2018 by AP.They gave guys chances to do things that maybe they haven’t had in the past.No Credit Card.We’re going to continue to work with him.Redick once again outscores Celtics star and likely future Hall of Famer Ray Allen in the shooting guard matchup on Wednesday night, then the Magic will head back to central Florida with a 2 lead in this series.
When, at age 41, he concluded his 23-season career in 2004 — having starred in his own selfless way for the Hartford Whalers, Pittsburgh Penguins, Carolina Hurricanes and, very briefly, Toronto Maple Leafs — his 1 points were the fourth most in NHL history; he trailed Wayne Gretzky, Mark Messier and Gordie Howe.They have a very healthy pool of talent to draw on and a solid system in place.To search for players who were born on a certain date, for example all players born on December 25, choose the month, day and year with the drop down boxes and then choose the ‘Full Date Search’ option.BYU was a 20-point underdog in that game and won by eight, proving that Gonzaga is not a machine, but a group of men.
Will 2019 bring about change?In fact, I probably wouldn’t bet that they will.But there is one guy who can ruin it for everyone – Martin Truex Jr.NHL ref Daniel O’Rourke caddied for Rank.Of those first downs, 5 came on the ground while 15 came through the air.
The move wasn’t as hard to make as it would be for some teams because the Senators have the luxury of two high-level goalies.That’s what we’re looking at.Zach Johnson The 2007 Masters champ has not been particularly sharp lately, but he showed signs of rediscovering his form last time out at Colonial.Issues with a new system aren’t particularly surprising.Gabriel Carlsson 2.
Often, there are even cheap jerseys dual screens.He added another game-winning goal in a 2 win against the Edmonton Oilers on Feb.As NFL offenses continue to evolve and incorporate more passing and more read-option facets, NFL defensive coordinators are tasked with keeping up.The 2019 Sentra starts at $17, an increase of $800 from the outgoing model.
Matt Niskanen has struggled immensely, but lately Dmitry Orlov has had a rough few games as well.The Bruins began to rebuild as they entered the 1980s.He finished the series with just a single assist.Now the Beavers have to scrape themselves off the deck and really start rebuilding what is, right now, the worst team in the league.
Coming into this game, then, it’s probably more reasonable to have an opinion of Darnold based cheap jerseys on his performance in the first two games and not on the ridiculous hype that surrounded him heading into the season.Recorded 30 points , five rebounds, two assists and a steal at Toronto …It seems premature right now, though, to assume that everything that needs to turn out well will immediately.
Ran for a two-point conversion https://www.newcheapjerseys.us.com at Oak.He is signed through the 2019 season and the Broncos have no real succession plan.Derick Brassard has accumulated 47 total points this season for the Rangers.Previously, only Tiger Woods and Loren Roberts had won in consecutive years at Bay Hill.That legendary Saturday in 1993 is the last time the Irish were ranked No.
Catholic Boy, Jonathan Thomas, Javier Castellano, 8: This has been the best three-year-old since Justify was retired, having won three straight, with the last two at the classic distance.DeMarre Carroll sat again.There’s just so many .
Their defense surrenders a 36% shooting percentage and a 31% average from behind the 3-point line, ranking them 2nd and 31st in those defensive statistics.No Salesman.The Bruins give up 80 points per game on the road this season and they have surrendered 64 points per contest in their last 3 games.
The Bears have been penalized on offense 65 times for 546 yards so far this year, which has them 20th in the NFL in penalties.The Blue Demons score 71 ppg at home and they have averaged 69 points per contest in their last 3 games.They have an average scoring differential of -12 in their past 3 contests and at home this season they have a difference of -2 points per game.
They also allow a 42% shooting percentage and a 37% average from behind the arc ranking them 103rd and 278th in those defensive categories.
Teams are hitting .248 against the bullpen and they’ve struck out 355 hitters and walked 145 batters.
But they are not so you can’t expect the same level of output.
Jordan Spieth ‘s memorable triumph at last year’s Travelers Championship, where he holed a bunker shot for birdie to win the tournament in a playoff, makes it easy to forget he struggled heading into the tournament, finishing tied for 35th at the U.S.
But then he broke his foot. Odorizzi is +8000 for the Cy. You’re also not alone there.
Helped the Vikings claim the NFC North title in 2017 in his 2nd season with the team…The 1st player drafted from Texas-San Antonio in the history of the fledgling football program…Invited to 2016 Scouting Combine and to play in East-West Shrine Game but did not compete in game…Selected with Vikings 2nd choice in the 6th round .
Good Bet Week 11: Indianapolis versus New England. At first glance getting the Patriots plus points might feel like a steal. They’re both on the trip for a reason, Cassidy added. BPI represents how many points above or below average a team is. However, that’s not necessarily what spring practices are about, especially for rookie quarterbacks.
Justifying the Patriots’ rare long-term, high-dollar, free-agency expense, Gilmore earned his second Pro Bowl nod. They are 3rd in the league in team earned run average at 3. Part of what makes college football so great is that you enter every season with so much uncertainty and so many questions.
Same thing with Brian and Isaiah. We’ll just see how he feels tomorrow. The Rangers as a unit have 1 base hits, including 209 doubles and 153 homers. They rank 17th in MLB with 8 hits per game. They also allow a 41% shooting percentage and a 35% average from behind the arc, ranking them 125th and 209th in those defensive statistics.
Now that you’re back down in civilization, it’s time to relax for a bit… Thursday, September 15, Wrigley Field. Probable Pitchers: Jimmy Nelson vs. They give up a walk 2 times per 9 innings while they strike out 8 per 9. Of those first downs, 7 came on the ground while 13 came through the air. Detroit has walked 26 times this season and they’ve struck out 64 times as a unit. It’s so Stanford it hurts.
4) Rookie receivers: Much of their work might come with backup quarterbacks working behind a patchwork offensive line, but it’s always a worthwhile watch when rookie skill players debut.Markel Starks is a point guard that can score in bunches, and he should get a decent amount of playing time as a freshman.
AFC West teams. Teams are hitting .264 against the bullpen while being struck out 32 times and walking 11 times this season. The Jaguars average 17 yards per kick return and have a long of 31 yards this season. The Rams allow 110 rushing yards per game on 26 carries for an average of 4 yards per carry, which ranks their rush defense 20th in the NFL. The Bruins have been able to rush for 220 yards per game and they are scoring 33 points per game.
Kris Bryant comes into this game hitting .293 and his on-base percentage is at .387. No. 2 Ohio State, No. 3 Clemson and No. 7 Louisville were all on the brink of losses but somehow managed to sneak by their opponents and keep their playoff hopes alive for another week. Be sure to also check out the Eagle Eye In The Sky podcast on the Philadelphia Eagles podcast channel on iTunes. Starting pitcher Hisashi Iwakuma had a 16 record with an earned run average of 4 and a WHIP of 1.
The Blackhawks have 7 losses in OT and 2 of them occurred in a shootout. Now he always jokes that I owe him a percentage of my contract – though luckily I haven’t had to pay him yet! A Trace McSorley injury and overly conservative James Franklin game-calling helped, but that’s nobody’s problem in Lexington. BIG PICTURE Michigan State: After a first half that disgusted Izzo, the Spartans bore down late and won their first of the new year after going 6 in December. You don’t know how long the best players will be on the field.
Missouri turns it over an average of 15 times per 100 possessions and they steal the ball on 8% of their opponents’ possessions. That translated into numerous bets on Chicago to win outright. Ariza and his agent, Aaron Mintz of CAA, had hoped that the Los Angeles Lakers could complete a deal with the Suns. Jaroslav Halak has 12 wins and 12 losses this season in net for the Islanders. Ottawa Senators Betting Trends: The Ottawa Senators are 32 straight up; the Ottawa Senators are against the over/under. Dallas Stars Betting Trends: The Dallas Stars are 27 straight up; the Dallas Stars are against the over/under. Ottawa Senators Injuries: 10/16 LW Clarke MacArthur (concussion) out for season; 02/17 G Andrew Hammond (hip) out for season; 02/17 RW Bobby Ryan (finger) out indefinitely. Dallas Stars Injuries: 11/16 C Mattias Janmark (knee) expected to miss 5 months; 03/17 LW Antoine Roussel (hand) out indefinitely.
Luke Voit went 4-for-4 with two homers, four runs scored and three RBI in Wednesday’s win over the Red Sox. Voit took David Price deep twice in this one, knocking a solo shot in the fourth inning before adding a two-run homer in the sixth frame. The Bobcats ended up flying into Chicago’s Midway Airport and taking a charter bus to Milwaukee. Grummond needed a playmaking guard, and Cousy fit the bill.
That mark is significantly worse than the man he replaced, Dan Hope, who was 16 in his first three years. I also found the original transmission. Tony’s Pick: Take the Dodgers -174.
The best part is covering a successful team who has been in the Super Bowl in three of my five years covering the team. New customers who are interested in checking out the one-of-a-kind short track action can save up to $20 per ticket if they buy before the November or February cutoff dates. Wonnum (ankle) is questionable Saturday vs. Akron; 11/18 DB Jaycee Horn (ankle) is questionable Saturday vs. Akron; 11/18 DB Steven Montac (undisclosed) is questionable Saturday vs. Akron; 11/18 DB Jamyest Williams (shoulder) is out for season; 11/18 DB Javon Charleston (foot) is out for season; 09/18 WR OrTre Smith (knee) is out for season; 09/18 OL Jovaughn Gwyn (foot) is out for season; 08/18 DL Jesus Gibbs (knee) is out indefinitely; 08/18 OL Wyatt Campbell (knee) is out indefinitely; 08/18 DB Tavyn Jackson (medical) is out indefinitely; 08/18 RB Caleb Kinlaw (knee) is out for season. Useful Offensive Statistics: The Akron Zips are 125th in the country in points scored with 224. Out of his four finals appearances, he’s won the Summoner’s Cup three times.
With the recent influx of younger talent via free agency, the draft, and undrafted free agents, how many new starters do you anticipate to be on the field week 1 in Los Angeles? That has almost certainly knocked him out of Top 2 consideration, but now it remains to be seen who will be willing to take the risk. Jimmy Butler had 26 points, seven rebounds and eight assists. Advanced Statistics: Nashville has a Corsi percentage of 49% as they’ve tried 950 shots on goal while at full strength while they have surrendered 966 shots on net when they have a full team on the ice.
There is of course the small matter of Johnson’s contract with the Scottish Rugby Union, which both Clyne and Castle said they wanted to treat in the right manner. Golladay showed potential as a longtime starter. He has 25 goals this season with 231 shots on goal giving him a shooting percentage of 11%.
15 came from running the ball and 9 came from a pass.
San Jose Sharks Betting Trends: The San Jose Sharks are 43 straight up; the San Jose Sharks are against the over/under. Los Angeles Kings Betting Trends: The Los Angeles Kings are 48 straight up; the Los Angeles Kings are against the over/under. San Jose Sharks Injuries: No key injuries to report. Los Angeles Kings Injuries: 10/15 D Matt Greene (shoulder) out indefinitely; 04/16 D Alec Martinez (lower body). They have punched their ticket to The Big Dance with fortuitous wins over Georgetown over the last two weeks. The Fighting Illini were 71st in yards per play allowed with 5.
Parents should use appropriate parental discretion in determining whether to grant authorization to minor children to access the Services. He had a total of 55 assists on the season and averaged 24 minutes played per game. The probable starters are Kendall Graveman for the Athletics and Cole Hamels for the Rangers. That is a much tougher schedule, which explains why Houston has such a big advantage in the odds. Combined for just 14 points on 5-of-21 shooting.
The Islanders have given up 68 power play opportunities and surrendered 14 goals on those chances for a penalty kill percentage of 79%, ranking them 17th when short-handed. To find all players born within a certain month and year, for example all players born in December of 1985, choose the month and year with the drop down boxes and then choose the ‘Month and Year Search’ option. Buffalo Bulls quarterback Tyree Jackson could have very well decided to enter the 2019 NFL Draft. Thomas Greiss has 3 wins and 1 loss on the year in net for the Islanders. Defensive Statistics: The Nebraska Cornhuskers rush defense allowed 460 attempts for 2 yards last year, putting them in 111th place in D-1 against the run.
Adebayo scored 11 points while adding 10 rebounds, two assists, two steals and a block in 38 minutes during Saturday’s 112 win over the Grizzlies.He received an invite to the NFL Scouting Combine, where he’ll have an opportunity to improve his stock.They possess the ball 59 times per 40 minutes and their effective field goal percentage for the year is 58%.
Thirteen teams have lower odds. He maintains a slugging percentage of .415 with an OPS+ of 97. The opening line for this game has Toledo as 7 point favorites and the over under has been set at 138.
Carlson, a Massachusetts native, is the first U.S.-born defenseman to rack up at least 50 assists in one NHL season since Brian Leetch earned 58 assists for the Rangers in 2000. It’s just amazing how things go in life and how things work, Nagy said. It was the first buzzer-beating game-winning field goal for Green in 11 NBA seasons, and it was the third such shot by a Rockets player against the Suns over the last six seasons. U20 Euro Championship A. Under the hood lies a 1JZ swap that’s been beefed up to produce 350 hp.
They all parred the first playoff hole, the par-4th. Head-to-head, the over is 4 in the last 5 meetings in Sacramento. He has electric speed, oozes creativity and could wind up being a real catalyst for David Krejci. Weaver pulled the seats and began disassembling the car, but just never had the time to put it back together.
As they say, look at the schedule! As one major league exec noted: Guys like Andrew Miller, Daniel Murphy and David Robertson were smart to grab fair deals and not hold out for a few extra bucks or another year. Cargo capacity in the 2019 Cherokee is 25 cubic feet behind the rear seats or 27 cubic feet with the cargo floor lowered. Brandon Meriweather.
I’m guessing the fact that they were without several key players on defense doesn’t matter much, either. Jordan Clarkson averages 14 points per game in 23 minutes per contest this season.
He has 574 shots against him and he’s surrendered 37 goals. He has 29 hits this year along with 19 runs scored and 19 RBI. When did you first start dancing cheerleading? They rank 3rd in MLB with 9 hits per game.
Your tax rate, you know, let’s say, from zero to $75 may be ten percent or 15 percent, et cetera. They have walked 3 men per 9 innings while striking out 9 per 9. Favorite Quote: The future belongs to those who believe in the beauty of their dreams. They averaged 30 shots per contest and had a team shooting percentage of 9%. Physical problems are suddenly piling up for Federer, who withdrew from a tournament for a fourth time this season. Florida Gators Schedule Analysis: The Gators start out with a bang, opening against Michigan in Arlington.
What more can a sports bettor ask for? These two also opened the 2010 season against one another, with USC winning 49 SU. Useful Team Statistics: Notre Dame is 200th in the nation with 74 points per contest this season. This holiday marks the night that Prophet Muhammad travelled from Mecca to Jerusalem and ascended to heaven. They’ve scored 121 goals and surrendered 100 for a goal differential of +21.
I’m surprised he can enter this race, though, because last I checked he was still running – very slowly – towards the finish line at Churchill Downs.Gilbert responded to rumors that spread following his cancellation of renovations to Quicken Loans Arena.
Limerick showed in their round-robin clash that they have no fear of Cork. The over under was set at 7. Whitey finally crossed into the postseason in 1973. He had a total of 101 yards on 6 receptions for an average of 17 yards per catch in the game. If you would like to search for all players born on a certain day, for example all players born on December 25th in any year, choose the month and day with the drop down boxes and then choose the ‘Month and Day Search’ option.
They had a tough loss, they felt they didn’t play well, so there was a little bit of anger in their game. Any commercial use or distribution without the express written consent of AP is strictly prohibited. Burns coached for the Vikings for 24 seasons. He has 30 hits this year along with 15 runs scored and 11 RBI. The real limitation of the handling and braking components, though, remains the tires.
The bullpen has given up 342 hits on the season and have a team earned run average of 2. Entered the NFL as a college free agent with Baltimore on May 4, and was claimed off waivers by San Francisco on July 31… Von Miller – Comeback Player of the Year may be more likely for the Broncos’ linebacker, even though only three defenders have won in the 41-year history of that award. In an illegal market, there is no such visibility.
Another interesting value bet is whether a No. The Mets have operated confusingly since their 2015 World Series appearance, but the fact that they’re willing to add a bat and contract like Cano’s could bode well for their willingness to compete for a title soon. He has 44 strikeouts over his 38 innings pitched and he’s given up 39 hits.
As a team, they are batting .243, good for 19th in the league while putting together a .223 average at home. Like the Sox, the Braves had one of the best starting rotations in baseball through most of last year. Their team WHIP was 1 while their FIP as a staff was 4. From 3-point territory they shoot 35%, which is good for 186th in Division 1.
They were crowned the… One of the decisions Tampa Bay Lightning general manager Steve Yzerman faces this summer is who will back up goaltender Andrei Vasilevskiy next season. The all-new Luxury Suites at Raymond James Stadium are the cornerstone of the three-phase $150 million renovation project that has transformed the Buccaneers gameday experience. His FG percentage is 47% and his free throw percentage is at 85%. The rest of the year should be devoted to evaluating what other players on the roster have the potential to become building blocks. He has allowed 9 hits per nine innings and his FIP stands at 3.
If the Heat had gone up three games to one, it would’ve been a pretty tall order for Dirk Nowitzki and company to fight back. And given Coughlin’s age, how much time on the sideline does he really have left anyway? But how much? | 2019-04-24T20:35:00Z | https://www.elitecheapjerseysusa.com/archives/tag/cheap-jerseys |
Death comes to all of us at some point. Have you thought about how you would feel when the time comes for you to die? Have you considered if you would have any regrets about how you led your life?
A palliative nurse who counseled dying patients in the final weeks of their lives took the liberty of recording the most common regrets among them. Many of her patients’ regrets were revealing statements like: wishing they didn’t work so hard, wishing they had the courage to express their feelings, and wishing they had stayed in touch with their friends.
I believe in learning from the experiences of others. Having the insights of people who have lived to the end of their lives is strikingly helpful in living our best lives. Rather than reiterate the details of their regrets, I’m going to share them briefly and provide suggestions on how we can ensure that these regrets don’t become our regrets on our deathbeds someday. While we can’t change our past, we can change the present and the future. How our lives pan out from here is dependent on what we do starting today.
Are you living the life you have always wanted for yourself? Or are you simply living a life based on what others expect of you?
Many people today live their life around the expectations of others. Among my friends, many of them often make decisions based on what their partners or what other people want, rather than because of what they want or believe. Among my one-to-one coaching clients, they often complain about being trapped in careers they dislike because they chose careers which were deemed acceptable by their peers and family, rather than pursuing career paths that interested them.
I was raised in an oppressive manner by my parents and by my education system. While I have never faulted anyone for such an upbringing because I believe my parents and teachers came from a place of good intention, I did grow up feeling repressed. I would do things to conform to what others wanted for me, rather than doing things I wanted to do, and this made me very unhappy most of the time.
Being raised this way made me realize the importance of living a life true to myself. When I was in my early 20s, I began to come into my own, steadily making decisions and acting in a way that was truer to who I was as an individual. When I realized I was no longer in love with my corporate career, I quit and moved on to pursue my true passion to help others grow. When I felt it was time to do what I love to do, I readily started my personal development business (which I continue to run today), by way of my blog Personal Excellence. When I realized I had friendships which were no longer compatible with the person I had become, I immediately let them go rather than keep up a pretense.
Regret #2: I wish I didn’t work so hard.
Our modern society is one which drowns itself in busy-work. People are busier than ever, working twelve-hour workdays, and sometimes even longer. Parents rarely have time for their kids, and instead relegate care-taking duties to daycares, nannies and grandparents. People rarely have enough time for relationships or personal activities, often prioritizing their work ahead of everything else because it’s their livelihood. For some, work forms the core part of their identity.
There’s no such thing as “not having enough time.” It’s only a matter of what you set as your priorities. If you don’t have enough time for your relationships, it means that you are not making them a priority. If you missed your anniversary with your lover, it’s only because you deemed the anniversary as less important than whatever it is you had to do at that time. If you consistently miss your gym classes, it’s only because you are not committed to staying in shape, even if you claim otherwise.
Everyone has the same amount of time every day, be it successful entrepreneurs like Bill Gates, financial moguls like Warren Buffett, top athletes like Serena Williams, or inspirational leaders like Oprah Winfrey. It’s silly to think of yourself as not having enough time relative to others, because these go-getters are making productive leaps ahead every day even though they have the same amount of time at their disposal as you do.
Make a conscious choice on what you want to spend time on. What do you value the most in life? Are you spending your time in line with your priorities? If you answer no to the latter question, it means there is a misalignment between your desires and your actions.
Is there someone you like? Are you afraid to open your heart to him or her? Have there been times when you closed your heart to love because you were afraid of what would happen if you opened yourself up to it?
You aren’t alone. I have quite a few friends who are single, not because they are inadequate (in fact they are high achievers, great lookers in their own right, with great personalities to boot), but because they are closed off to love. They repeatedly dismiss opportunities to meet new people and expand their social circles. Whenever there is a guy or girl they take a fancy to, they choose not to act on their desires, instead finding one billion and one excuses why this person is not “the one” for them.
If you are afraid of expressing your feelings, ask yourself, “What is there to lose?” or “What’s the worst that could happen?” I believe in wearing your heart on your sleeve and being true to yourself, rather than hiding your feelings. At worst, the person will reject you and you will realize that your feelings had been misdirected all along.
But wait, is that really a worst-case scenario? Because now you will know the truth and be able to move on, rather than lingering around a one-sided romance. On the other hand, if the person reveals similar feelings, you will then be on the way to building a budding romance. Either way you will be grateful that you acted on your feelings rather than hiding behind a facade out of a mental fear of being rejected.
Friendships are often put on the back burner relative to other things, such as one’s career, romantic relationships, financial goals, and personal agendas.
Why? Because we tend to think friendships will stay afloat even when we do not give them due attention. As such, many of us take our friends for granted, often pushing back social appointments in the name of work, cancelling on friends at the last minute, or simply not putting in the due effort to meet up with friends face to face.
Rather than wait for your friends to initiate a get-together, why not take the first step? Many of my social appointments and gatherings are self-initiated. My proactive behavior has encouraged my friends to reciprocate in terms of putting in more effort to build our friendships. I don’t think there’s a need to wait on other people to meet up; it takes two hands to clap and you can always be the one to get things moving.
As you reach out to friends, there will be people who do not reciprocate your efforts. That’s okay. Don’t take it to heart; sometimes people have different priorities and there’s no need to force a connection if it’s not working out. Simply move on to the friends who are reciprocating your efforts. You will build more authentic and fruitful connections this way.
Are you deeply unhappy? Are you always complaining about little things that go wrong? Are you always harping on the things you don’t have or things you have missed out on, rather than appreciating the things you do have today and the things you have gained?
Too many people are deeply unhappy not because of their place in life, but because of their misperceptions about what it takes to be happy. If anything, many of these unhappy people are highly affluent and privileged; they have a comfortable place to live, a stable job, a regular disposable income, a healthy social network, and a family to return home to.
It’s as John Galbraith mentioned in The Affluent Society: “Despite the increasing wealth of the society, people are not happier – in fact, they have become unhappier.” Why? Their unhappiness isn’t due to a lack of material wealth, but because they have flawed perceptions of what it takes to be happy. They think happiness comes from material goods or financial wealth, when these things are simply means to live a better quality life, rather than vehicles of happiness itself.
Recognize that happiness is a choice. Many people relegate their happiness to external factors. They think they can only be happy if they achieve X, Y, and Z or if criteria X, Y, and Z are satisfied.
Of course, the problem is that these criteria are entirely untrue. Happiness doesn’t happen when those things are achieved; happiness is something you can experience now, in this moment, if you allow it to happen. You CAN be happy now if you want to be. The question is: Do you?
How do you feel about these five common regrets of the dying? What would you regret not fully doing, being or having in your life? Please leave a comment below and share your thoughts.
Author Bio: Celestine Chua writes at Personal Excellence on how to achieve excellence in life. If you like this article, check out: 101 Things To Do Before You Die and 8 Habits of Highly Productive People. Get her free e-book delivered to your inbox by signing-up for her free newsletter.
These regrets on dying remind me of one of Og Mandino’s scrolls in The Greatest Salesman in the World: “Live this day as if it is your last”. The meaning is to live life fully. Make every second sweat. Extract every bit of juice from this morsel of a minute. Make every hour contribute to your life – as per your priorities. I think, by setting the right priorities and following the 5 ways to avoid the regrets, anyone can really live a life without regrets. Thanks for this beautiful post.
It took a cancer diagnosis for me to ‘wake up’ to the fact that I am not going to be here forever. Lucky for me, I discovered this before I reached my deathbed, and I can ask those questions and set myself straight so I do not have regrets when the time comes. Thank you for reminding us to live life right now… for ourselves, not to please anyone else. That work thing is a hard one though. So many are scared of losing their job and they say ‘yes’ to their boss, when they would be better off saying ‘no’.
It is a frightening thought to look back and wish you didn’t work so hard. I value passion but I noticed a part of me was being consumed by work. I began to prioritize it over my friends. Luckily though, now that I’m back in California for a month, I realize that I’d much rather spend time with my loved ones and closest friends (who are also loved) than work.
Wonderful post. Right now I’m working intently on #1. It is hard to get out of the mold you were raised in, but so worthwhile.
Great post! I can only imagine that having regrets during the process of dying is the pits.
In fact, I knew I would regret not spending more time with my children, which is why I switched careers last year to give myself more freedom to help them and watch them grow.
My biggest regret would be giving in to fear and conforming to a life expected of me by others. If we listen to everyone else now, we’ll have to answer to ourselves later.
Great, insightful post. Thanks for sharing.
I have read the story about the nurse many times, but every single time I love the lessons learned from this nurse!
All of these lessons are so true, but I think the point about money can dictate the rest of the lessons. This happened to me when I got out of college…I was only focused on money, so I got a job and focused all of my energy on making money and climbing the corporate ladder.
I made money, but along the way I lost a lot of myself. Unfortunately, it wasn’t until a tragedy in my life that I made a complete change. That is when I started living true to myself… got a job I enjoyed… started working on projects that had more meaning… spending more time with my family, etc. Now hopefully I won’t have the same regrets as most people!
One of the biggest challenges is to have the capacity to live your own life. Being there in service to others because they are supposedly needy is so often a big hurdle to overcome. If you have a good perception of your passions, or a strong understanding of personal purpose, it may be easier. But I tell you, to escape the ropes of responsibility is very, very hard and if you aren’t careful, your life as you wish it, simply drifts away.
I ask myself every week if I’m on the right path. What if this is my last week? Am I happy with the way I spend most of my time and energy?
I don’t think it’s feasible to live absolutely every day as if it were the last one, although we should aim to do so. However, when too many days in a row don’t feel right, we should do something about it.
I love your last tip, i.e. that happiness is a choice. I have made this choice in the past years and it feels great. I do have my bad days of course, but this simple attitude shift has made a big difference in how I feel on a daily basis.
What a great post. I just turned 57 and have been doing nothing but lamenting about what I did not do when I was younger, what I am stuck doing now and how I feel I have no time left to do anything. It is awful to wake up in the morning and feel like you have no future and no hope.
For the last 25 years I have been successful in the insurance industry. But funny enough, I have always been afraid to prospect because of my perception of what others will think of me. (I never got over the Woody Allen stuck in the elevator with the insurance agent thing! lol) I get referrals, but I want to really expand my business and work for as long as I am able. To do this I will have to prospect in many ways and take risks I have not taken in many years. I have not taken on this challenge because I tell myself I am too old to do this, people will tell me my ideas won’t work, and they will laugh at an “old” guy going for it.
What it all comes down to is living with purpose. How do you want to live your life? Not only ask what you want to experience, but also what values you want your life to reflect. Spend time doing meaningful work that contributes to the world, surround yourself with caring and supportive people and love them back. Look at the bigger picture so you won’t let the small daily hassles define you. Sometimes it helps to wonder, what would you like your eulogy to say about you and your life?
Thank you for sharing great tips!
The point about time resonated with me. I hear excuses from my significant other and from my father about how they’re just “too busy” to call or to relax. We all get the same 24 hours–what you do with them reveals where your values lie.
Great post! But I found your last sentence (The think happiness from material goods. …) quite conflicting and confusing, because as you rightly pointed out, when the means to live a quality and happy life is absent, then how can you be happy?
This really hit home, especially #3, expressing your feelings. Several years ago, I was secretly in love with someone but I never told him because of our complicated personal situations. It took the death of two family members to wake me up to the realization that I needed to tell everyone how much they meant to me.
I ended up spilling out all my feelings to him in an email. I had absolutely no expectation of anything, I just wanted him to know. He immediately responded, saying he was flattered but he didn’t feel the same way and that he viewed me as more of a sister. It was a blow to my ego, but I have never had any regrets about doing it.
It has been a number of years and I have moved on. He still isn’t involved with anyone. If I happen to run into him, he barely acknowledges me. It hurts a little because we used to be so close. But I’m happy I’m able to express my feelings and risk rejection than to be an emotional coward who can’t let anyone in, even a formerly close friend.
Knowing what I know now, I would still have sent that email.
Great post. I am 44 with major regrets as to where I am professionally. My passions have all but left me, so I spent 60 days working on getting myself in tune again. All of a sudden I find myself getting braver and acting with more clarity of purpose. I have given myself this year to finish up this career phase, then I am moving on to what I really want to do.
I have a lot of personal regrets like #3. Thankfully they are in the past. However I do need to appreciate more about what I have, and also what I do NOT have.
Letting myself be happier I struggle with, I do a great deal of goal setting. Its all about the timeline and the time spent, not about how you did it. Big mistake.
I love this post I really do, but one of the major things I struggle with is that idea that more money and success won’t make me happy when I really feel that not only will it make me happier but it will make me eligible for a relationship again.
I’ve had a few relationships, all of which ended with me alone and unhappy, and while I put my all into each relationship and trusted them with my heart…in the end I felt that the fact these relationships didn’t work proved to me that I’m not good enough, and that maybe when I make 100k annually, and maybe I’m a little thinner, these things will play out better.
I’m much like your friends in the sense I’ve pretty much given up on love because for ME it feels like the smart thing to do to avoid pain, and rejection in this regard. I’ve put all my effort into going toward my career because I DO feel that its the only thing that is actually good about me or of value.
I don’t want to think this way, I want to feel that women place more value on a significant other than material things–I see this from time to time but I feel like for ME that I have to meet some unrealistic expectation to be accepted. I have a lot of friends and people who support me as friends but its been some time since I’ve even thought about getting in a relationship.
I still feel like I need to make 200k a year just to seem worthy for anyone at this point.
This post could be helpful, just might take some time.
This is beautiful and on time! Thank you!!!
I regret not taking a leave of absence from work earlier to attend to my mother who was ill. She died the morning I submitted the paperwork. I’m not going to make the same mistake with my father, even though my siblings are.
I also regret that I didn’t spend the time up front to identify career options more suited to my interests. Instead, I took the passive approach and the first job that allowed me to support myself. I’ve tried to get my nieces and nephews to expand their horizons but I fear they are making the same mistake, taking the more familiar path that requires the least amount of effort.
The idea of “wishing I hadn’t worked so hard” is so foreign to me. My willingness to work really hard is one of my qualities that I am proudest of. I don’t always like it, but I know that if I want to accomplish any of my dreams, I need to do it anyway. #1 is one that I totally agree with. My parents keep questioning my desire to go to medical school/become a doctor, and keep pushing me to go the computer science route instead, but CS is something I do not want. #3: I used to lament the fact that I seemed to have no luck with the guys, even when I did express my feelings. Now, I do mostly stay to myself and don’t “put myself out there,” but I have also come to the realization that right now is just not the right time in my life for me to be looking.
Perhaps you have said this in other ways but I have found that the deepest regret of those left after “Their special someone” has died is not telling the deceased that they love them. This regret is particularly evident between daughters and parents.
The opposite must also apply, people who are dying have not told those they do love that they love them in any meaningful way. The old, and mostly imagined, hurts or ego slights, continue to be in the way. I do not want to die leaving any ambiguity in the minds of those I love that I love them unconditionally. There can be no regrets if that is the case.
Regret #2 is a big one in my life and not easily solved. I have started working at regret #’s 3 and 4. “lost” friends are often surprised to hear from me. Whether or not it is sustainable when they don’t make the effort remains to be seen. Having said that, it makes me feel good about myself so I will continue regardless.
Well, I could say that I have mostly not lived my life how others would want me to, which means I haven’t done that much with it, being lazy; I’ve learned that expressing my feelings usually upsets someone, (although that’s probably more down to how I express them I suppose), I certainly don’t work too hard; I don’t have a huge number of friends and those I have I see once a week – school friends, who knows where they are by now, that’s 30 years ago!
So really, it’s down to the last one. I know that I find life to be so imperfect, even though I know rationally that it never will be, never can be, part of me still wants it to be, hopes it will be. And there is the conflict, the constant niggling stone in my shoe. (Don’t give me some facile platitude about taking my shoes off!) I know that it is a major flaw, and so far, the only thing that gives me any kind of relief from it is Dharma, Buddhism. Buddha said, life is suffering, and this honesty, rather than some airy fairy blather, made me pay attention. And then he said, mind is the creator of all. So I know that my suffering is created by my own mind, my own perceptions. So, the work of my life is to work on my mind and change it. Not sure if I’ll do it before I die, (another thing Buddha said is, we can die at Any Time), but I don’t think there’s anything else worth striving for, ultimately, whatever else in life I do or have, however many friends or family I have.
I have been reading your blog for over 6 months now, and it’s a great read, everyday.
I just want to say how much it has helped me keep focused on what is important in life, what I want to achieve, the baby steps to take in order to achieve it, and how it all fits into the big picture & bigger purpose in life. I’m generally a happy person, thank God…but it has pumped even more happiness into everything I do, and helped me reconnect with myself & others on so many levels.
I just want to emphasize point 3 about expressing your feelings. I think we all need to realize that we need love in our lives. No matter how successful we may be or become, if we don’t fill our life with love, we will always have this empty hole inside of us.
Thanks, for all the hard work & passion you & Marc put into this.
Yep, I totally agree that doing something is almost always better than doing nothing. Be brave, a bruised heart is better than one that never sees the light.
I have difficulty working on both #1 and #4. While I believe I am getting better at living a life I like and taking responsibility for it, it comes with the price of giving up on many friendships. I am from a country with heavy social ties, so living an independent life is difficult. I will give you an example. On Sundays, everyone goes to church but not me. It is not because I don’t believe in God but because there is so much social pressure that comes with it later that I know I can’t handle. If you live outside your country, going to church is usually the only way to socialize. In the past 5 years I have been a foreigner, every church I was invited to always ended up with some divisions based on race, status, etc., which is why I am not too excited about going there. I also prefer not going to some occasional parties for the same reasons, so it is no wonder I don’t have too many friends. I would have liked to have some friends that understand me, but those are usually found online and not someone I can hang out with.
Most of us do! However, I think balance is the key. My ex-husband was an overachiever and made making money his top priority. Now I have a man who adores me but despite his efforts has been jobless for the past three years after losing his franchise. Even though the attraction and love are still there after eight years, the financial situation is putting a lot of stress on our relationship. I have been supportive, patient and understanding because I love him and he is worth it. Unfortunately, I am running out of patience and growing frustrated because he is taking way too long to find his path. So even though people say money can’t buy happiness, in my case, it would at least help me keep it! Again, it’s all about balance.
Good reflective post. I found a thread of responsibility running through the article. If we take responsibility for ourselves and our present state, we find a lot more pleasure and satisfaction in life.
So true also that happiness and gratitude are a choice. We may feel bad about our present condition, but there is a 99.99% chance there are others in the world in worse circumstances. Consciously looking for and concentrating on what makes us grateful will go a long way toward keeping these regrets from coming up during the dying process.
Wonderful post… Today is my 53rd birthday. I took the day off work (first time ever having my birthday off, and I think I will make it an annual ‘event’) I am spending most of the day the way I want to (and envisioned)…. being on the internet was not one of them, but I do love these posts. I am looking forward to spending the latter part of the day with 2 very important friends. As always, there is room for reflection and what one wants life to be. I am surprised to find myself at 53 still seeking more, and knowing much has not yet been fulfilled. And yet, I am so very blessed. Reading ‘One Thousand Gifts’ is an eye opener to begin really seeing that there is much all around us to be appreciative of. I love the # that reminds us to be open to love and to express it to others. Connections and friendship really do make a life rich.
Perhaps we should focus not on working less hard at everything, but concentrating our efforts after careful consideration on what is truly important and lasting.
For #1: I think that if you are lucky enough to have some kind of expectations, boundaries, limitations or guidelines given to you by parents or teachers, then it is possible to break those constraints. Breaking the rules is SO important to growth. However, if you are unlucky and you don’t have guidelines, then you will always try to either recreate some guidelines or continually hurt yourself and others. Living the life you want takes admitting your errors and taking full responsibility for them. Intelligence and humility are key.
Powerful stuff. The subject of death is always one of the best ways to start seeing things from a bigger perspective for me. I will ponder these “regrets” for sure which give them a more serious weight than just nice ideas. Thanks.
Thank God for wonderful authors and their inspiring ideas. Cheers to life and no more regrets!
Wouldn’t my 5 biggest regrets of dying be, I’m dead, still dead, still dead, dead, and you guessed it… I still regret, I’m dead.
@All: Thank you for sharing your thoughts and stories with us. I think Celes did a wonderful job covering these common regrets, and I’m glad so many of you connected with her insights.
@Inge: You are an inspiration. I’m so happy you beat cancer and are here to share your beautiful story with the world.
@Cornelius: Agreed. If you wake up too many mornings in a row and you are unhappy with what you have planned for the day, it’s time to re-plan your life.
@JK: Knowing is better than not knowing. You made the right choice. Carry on with your head held high.
@David Rapp: It sounds to me like you’re making significant progress. You’re on the right track, that’s for sure.
@Mr_Baseball: Just take it one step at a time. Don’t rush into a relationship. Make friends and see where things lead. The right relationship will come along at the right time.
@Dana: I’d have a conversation with your nieces and let them know what you’ve learned.
@Melayahm: If you’ve found Buddhism and it’s working for you, keep practicing. Honestly, I connected with your sentiments. I deal with them differently, but I know where you’re coming from.
@Joesph: Great addition. Thanks for the kind words too.
@Daniel: What about social groups based on hobbies and interests? Perhaps meetup.com?
@Lisa: Happy birthday! Thanks for the added insight.
I had been doing most of these things that people regret… until I came down with breast cancer. That was the changing point of my life. I realized that I wanted to change and started step by step to actually do it. I started to live a life putting myself first and stop letting others expectations be my guiding force. I stopped trying to please everyone else, except myself.
Before I got cancer I worked too hard for something someone else wanted, and it nearly killed me. My husband also did this in the corporate world to keep us afloat when I was sick, and it nearly killed him also. He retired and we don’t live nearly as extravagantly as we did, but we have what we need. We have since come together in such a spiritual way that it saved us, our family and our marriage. We cocooned for quite a while during this time and had to put some friendships on hold to get ourselves together, and we are now reconnecting with our true friends. I now tell all the people in my life how I feel and how much they mean to me. I feel much more unconditional love and am happier for it. I am much happier now, but still don’t do what I really want to do. I’m still caught up in the idea that I can’t do the fun stuff until the work’s done. I need to find the balance to do both.
One thing that I never regretted was living a simpler life to be able to stay home and raise my children. We did without a lot of “things” to make this happen, but it was the best decision I ever made. I was there in the moment, not just making videos of all those special moments in their lives or hearing about them from others. I was involved in their school and lives. This ended up giving me children who cared for me during my illness, who make time for me in their lives and tell me how much they love me. This taught them what was important in life. I think this was the greatest gift I ever gave them and myself. Life’s not perfect, but I’m happier now than I’ve ever been.
After my husband died I realized that we had spent a lifetime seldom straying outside our comfort zones. We were afraid of failure. So I spent a year saying Yes to every opportunity to experience the new and the different, including climbing into tall trees and reaching the ground via a zip wire (I have a fear of heights and avoid ladders).
The surprise and joy and sadness of this is that I did not fail and I now have a range of active hobbies and new friends to share them with. The one I enjoy most was one we were saving for retirement, dancing, or rather, partner dancing. My husband stopped work because of illness and died before retirement age.
I now trust my capacity to learn and change, and, guess what? I wish WE had been able to do this many years ago. So to all of you starting careers I would say, don’t give up your job, don’t stop striving for success, add happiness and breadth of experience into your life goals. My husband said that “This life we have is not a rehearsal”. I think it is extraordinarily difficult to really live like that.
@ Dana – I share a similar regret, wishing I had taken a leave of absence to care for my mom when she was terminally ill. I empathize with the pain you must be feeling. I wish you peace.
It can be so hard to express one’s true feelings.
“What if I’m laughed at?” or “what if I look stupid?” are thoughts that can have a huge amount of control over our actions.
We have to move past those thoughts into action if we want to live without regret. That way when we lay in our deathbed we don’t think about the “what if’s…” in life.
I teared up reading through all these posts, it is sad that we all seem to have it hard wired in ourselves that it is easier to accept we might fail than succeed.
Thanks for the great article and posts.
All five of these regrets hit close to home but #2 is the one I personally need to balance the most – working too hard. On the positive side of working too much, I love my work and it’s rewarding. When you love what you do, it doesn’t feel like work. When work makes you happy, you cover regret #5. But when you work all the time, there’s no balance with the rest of life, and you miss out on regret #4 – staying in touch with friends. In a perfect world all five of these regrets are important. It’s interesting how most of us all share the same regrets. It’s also interesting that these are very simple things to do that will ultimately create so much happiness. Thanks for sharing!
We create, interpret, and experience stories every day, whether we realize it or not. Our brains are constantly receiving input and stringing things together in order for us to make sense of the world. While our brains create countless stories, only the few great ones stay with us. These make us cry, laugh, or embrace a new perspective.
Understanding how our brains interpret the world can help us become better storytellers. That’s where neuroscience comes in. The field of neuroscience covers anything that studies the nervous system, from studies on molecules within nerve endings to data processing, to even complex social behaviors like economics.
So let’s put our brains to the test. Take a look at this image for a few seconds. What do you see?
We know very little about this scene. But because our brains crave structure, we still try to see the story. We take things we know—boxing gloves, children, and a corner man—and try to infer what the unknown might be.
A good story takes us from the Known to the Unknown. This simple premise is the key to telling stories for the brain. Let’s apply this concept to a comic. Why a comic? Comics are similar to data stories in that they present a sequence of panes containing different data points that lead you through a story.
(Comic panels: “Election year is coming up.” … “Dying in Canada = real.”)
What did we do in the course of reading the comic? We’re going to look at some basic brain anatomy to understand what our brain does when reading something like this.
As you look at the comic, the prefrontal cortex in your frontal lobe kicks into gear, and your brain’s cognitive control goes to work. You're also processing data that comes into your brain as visual input. From your eyes, that data is sent to the primary visual cortex at the back of your brain and onward along two processing streams: the "what" and the "where" pathways.
The "what" pathway (in purple) uses detailed visual information to identify what we see. It pieces together the lines and figures that add up to the comic's characters. It also recognizes the letters and words, and helps decipher their meaning with the help of additional cortical regions like Wernicke's Area, a part of our language system.
The "where" pathway (in green) processes where things are in space. We know this data stream is important and active during reading because adults with reading disabilities like dyslexia often have disrupted functioning of this pathway.
So when we're interpreting visual information, we're activating quite a bit of our brains to make sense of the data we're presented.
Things get more complex from there, because as we interpret the stories we see, even more brain areas become active. Part of the way we comprehend stories is through a simulation of what we see. So you can potentially activate parts of your brain involved in motor control or your sense of touch.
And imagine if you connect emotionally to the story you're reading. You'll be activating areas of your brain involved in emotion (the limbic system). So when reading a good story, whether it's prose, a comic strip, or a data-driven story, you have the potential to get almost global activation of your brain. And the most impactful and memorable stories are those that engage us most.
Now that we know some of the anatomy, let’s look at the behavioral applications of what we know. Take a look at the figures and read them from left to right. Which one is not like the others? We can quickly see which figure is out of place. Our eyes jump right to it.
How did we know which one was the oddball figure without anyone telling us what it looked like? We had already established a baseline that our initial figure was the normal figure. And when the outlier was presented, we knew right away that it didn't belong.
This experiment is a common attentional-process test called the oddball paradigm. A baseline is presented through repetition, then an oddball is presented. This should remind you of the Known-to-Unknown formula I mentioned earlier. By creating a strong baseline, we are prepared for the oddball—an unexpected twist or climax—when it occurs, and we enjoy it.
Our brain processes information based on our prior experience of the input. Below is a figure of an ERP, or event-related potential. ERPs are averaged waveforms that measure electrical activity from your scalp. We can use them to measure the speed of attentional processing.
In the left figure, we see the brain's response to standard stimuli (each tick mark is 100ms). We have relatively flat lines after the initial peak. The flat lines are expected because standard stimuli are essentially noise, and our mind zones out once they have been normalized.
The figure on the right shows the oddball—or target—tone, with a peak at 300ms (also known as the P300). This peak comes from our brain detecting the oddball and concluding that this is the item to pay attention to. The peak is only possible because a clear baseline was established.
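The averaging that produces an ERP can be sketched in a few lines. This is an illustrative simulation, not real EEG data: the Gaussian "P300" bump at 300ms, its amplitude, and the noise level are all assumptions made for the demo.

```python
import math
import random

random.seed(1)

def trial(oddball, n_ms=600):
    """One simulated trial sampled once per millisecond: noisy baseline,
    plus a Gaussian bump centered at 300 ms when the stimulus is an oddball."""
    return [
        (5.0 * math.exp(-((t - 300) ** 2) / (2 * 30 ** 2)) if oddball else 0.0)
        + random.gauss(0, 2)
        for t in range(n_ms)
    ]

def erp(trials):
    """Average the trials sample-by-sample; the noise cancels out,
    while the time-locked signal survives."""
    n = len(trials)
    return [sum(tr[i] for tr in trials) / n for i in range(len(trials[0]))]

oddball_erp = erp([trial(True) for _ in range(100)])
standard_erp = erp([trial(False) for _ in range(100)])

peak_ms = max(range(len(oddball_erp)), key=lambda i: oddball_erp[i])
print(f"oddball ERP peaks at {peak_ms} ms")
```

With enough trials the oddball average shows a clear peak near 300ms, while the standard average stays flat — the same contrast as the two figures described above.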
The example above shows us we have to lay down a good foundation and logical progression to get to our peak. Without structure, our audience will experience our story as noise and tune out, like our figure on the left.
When creating your own stories, remember that the brain craves structure and loves oddballs. The brain processes information by taking information it already knows to infer what a new piece of information might be. Therefore, making it as easy as possible for the brain to understand the story is key to delivering a successful climax or twist.
Now that you have some basic understanding of brain anatomy and neuroscience, try applying the lessons learned to your data stories. Create dashboards that engage the senses through pleasing designs, shapes, color, text, and interactivity. Embrace the oddball paradigm by clearly establishing a baseline before delivering your findings. That way, the audience’s mind will be primed to attend to it. And their brains will help them remember your story as one of the few good ones.
The bagged trees algorithm is a commonly used classification method. By resampling our data and creating a tree for each resampled dataset, we can get an aggregated vote for the classification prediction. In this blog post I will demonstrate how bagged trees work, visualizing each step.
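As a minimal, stdlib-only sketch of the bagging idea: each "tree" below is just a one-threshold decision stump fit on a bootstrap resample, and the prediction is the majority vote. The toy 1-D dataset and the number of trees are my own illustrative assumptions (a real run would use full decision trees, e.g. via scikit-learn).

```python
import random
from collections import Counter

random.seed(0)

# Toy 1-D dataset: mostly class 0 below ~5 and class 1 above,
# with one mislabeled point to make the vote interesting.
X = [1, 2, 3, 4, 4.5, 5.5, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1,   0,   1, 1, 1, 1]

def fit_stump(xs, ys):
    """One 'tree': the single threshold that best separates the classes,
    predicting class 1 for values at or above it."""
    best_t, best_correct = xs[0], -1
    for t in xs:
        correct = sum((x >= t) == bool(c) for x, c in zip(xs, ys))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def bagged_stumps(X, y, n_trees=25):
    """Fit one stump per bootstrap resample of the data."""
    n = len(X)
    stumps = []
    for _ in range(n_trees):
        idx = [random.randrange(n) for _ in range(n)]  # sample with replacement
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def predict(stumps, x):
    """Aggregate the stumps' votes and return the majority class."""
    votes = Counter(int(x >= t) for t in stumps)
    return votes.most_common(1)[0][0]

stumps = bagged_stumps(X, y)
print(predict(stumps, 2.0), predict(stumps, 8.0))
```

Each bootstrap sample yields a slightly different threshold, and the majority vote smooths out the noise from any single resample — the same mechanism that makes bagged trees more stable than a single tree.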
Conclusion: Other tree aggregation methods differ in how they grow trees, and some compute a weighted average. But in the end we can visualize the result of the algorithm as borders between classified sets, shaped as connected perpendicular segments, as in this 2-dimensional case. In higher dimensions these become multidimensional rectangular pieces of hyperplanes that are perpendicular to each other.
Contributed by Bin Lin. He took NYC Data Science Academy's 12-week full-time Data Science Bootcamp program between Jan 11th and Apr 1st, 2016. The post was based on his second class project (due in the 4th week of the program).
Explore food price changes over time from 1974 to 2015.
Compare food price changes to All-Items price changes (All-items include all consumer goods and services, including food).
Compare Consumer Food Price Changes vs. Producer Price Changes (producer price changes are the average change in prices paid to domestic producers for their output).
Missing data: There are 2 missing values in the column of "Eggs".
Missing data: There are 25 missing values in the column of "Processed.fruits.vegetables".
The high share of nonalcoholic beverages/soft drinks (6.7%) seems concerning, as high consumption of soft drinks might pose a health risk.
The Consumer Price Index (CPI) is a measure that examines average change over time in the prices paid by consumers for goods and services. It is calculated by taking price changes for each item in the predetermined basket of goods and averaging them; the goods are weighted according to their importance. Changes in CPI are used to assess price changes associated with the cost of living.
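The weighted-average calculation described above can be sketched directly. The basket items, weights, and prices below are made-up numbers for illustration, not actual CPI data:

```python
# Hypothetical basket: weights sum to 1; p0/p1 are prices in two periods.
basket = {
    "bread": {"weight": 0.40, "p0": 2.00, "p1": 2.20},  # +10%
    "milk":  {"weight": 0.35, "p0": 3.00, "p1": 3.00},  # unchanged
    "eggs":  {"weight": 0.25, "p0": 2.00, "p1": 2.50},  # +25%
}

# Index change = each item's relative price change, weighted by importance.
cpi_change_pct = 100 * sum(
    item["weight"] * (item["p1"] / item["p0"] - 1) for item in basket.values()
)
print(round(cpi_change_pct, 2))
```

Note how the heavily weighted bread contributes more to the index than an equal percentage change in a lightly weighted item would — that weighting is what makes the CPI reflect the cost of living rather than a plain average of prices.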
As I was looking at the food price changes, I noticed that there was a dramatic increase during the late 70s. After reviewing the history of the 1970s, I found that a lot happened during that period, including the "Great Inflation".
To view the food price changes for each category in a year, I created a bar chart in the Shiny app. Users can select a year from the slider; the chart will show the food price changes of each category for that year. I actually created two bar charts side-by-side in case users want to compare the food price changes between any two years.
A quick look at the year 2015 shows that the price of "Eggs" had the biggest increase, while the price of "Pork" dropped the most. In fact, many food categories dropped in price. Compared to 2015, the year 2014 had fewer categories with price drops; that year, the price of "Beef and Veal" had the biggest increase.
Food price changes mostly align with all-items price changes.
Food price inflation has outpaced the economy-wide inflation in recent years.
According to the United States Department of Agriculture (USDA), changes in farm-level and wholesale-level PPIs are of particular interest in forecasting food CPIs. Therefore, I created a chart to show Overall Food Price Changes vs. Producer Price Changes. Users can choose one or more producer food categories.
From the chart, food price changes mostly align with producer price changes. However, farm-level milk, farm-level cattle, and farm-level wheat seem to have fluctuated since the year 2000, and they didn't affect the overall food price change that much. Though the impact on the overall food price was small, I suspect they might have impacted individual food categories. I would like to add a new drop-down list to allow users to select food categories from the consumer food categories.
To see the relationship among the different categories in terms of price changes, I created a correlation tile map.
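The tile map is built from pairwise Pearson coefficients between the categories' yearly price-change series. A small hand-rolled version of that coefficient, applied to two toy series invented for the demo:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy yearly price changes (%) for two categories -- illustrative values only.
beef = [2.1, 4.0, -1.5, 3.2, 0.8]
pork = [1.8, 3.5, -1.0, 2.9, 0.5]
print(round(pearson(beef, pork), 3))
```

Computing this coefficient for every pair of categories gives the matrix of values that the tile map colors; in R the same result comes from `cor()` on the price-change columns.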
Looking ahead to 2016, ERS predicts food-at-home (supermarket) prices to rise 2.0 to 3.0 percent - a rate of inflation that remains in line with the 20-year historical average of 2.5 percent. For future work, I would love to fit a time-series model to predict the price changes for the coming five years.
Again, this project was done in Shiny, and most of the information in this blog post came from the Shiny app: https://blin02.shinyapps.io/food_price_changes/.
There are many ways to choose features with given data, and it is always a challenge to pick up the ones with which a particular algorithm will work better. Here I will consider data from monitoring performance of physical exercises with wearable accelerometers, for example, wrist bands.
The data for this project come from this source: http://groupware.les.inf.puc-rio.br/har.
In this project, researchers used data from accelerometers on the belt, forearm, arm, and dumbbell of a few participants. The participants were asked to perform barbell lifts correctly, marked as "A", and incorrectly with four typical mistakes, marked as "B", "C", "D", and "E". The goal of the project is to predict the manner in which they did the exercise.
There are 52 numeric variables and one classification variable, the outcome. We can plot density graphs for the first 6 features, which are in effect smoothed-out histograms.
We can see that the data behave in complicated ways. Some of the features are bimodal or even multimodal. These properties could be caused by participants' different sizes, training levels, or something else, but we do not have enough information to check. Nevertheless it is clear that our variables do not follow a normal distribution, so we are better off with algorithms that do not assume normality, like trees and random forests. We can visualize how these algorithms work as finding vertical lines that divide the areas under the curves such that the areas to the right and to the left of the line are significantly different for different outcomes.
If, for some feature, at least one pair of outcome classes satisfies this criterion, then the feature is chosen for prediction. As a result I got 21 features for a random forests algorithm, which yielded 99% accuracy both on the model itself and on a validation set. I checked how many variables we would need for the same accuracy with PCA preprocessing, and it was 36. Mind you, those variables are scaled and rotated, and we still use the same original 52 features to construct them, so more effort is needed to construct a prediction and to explain it. The method above is easier to interpret, since areas under the curves represent numbers of observations.
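The post does not include its code (and the original analysis was presumably done in R), but the selection idea can be sketched in pure Python: keep a feature when, for at least one pair of classes, the class means are separated by more than the pooled spread, so that a single vertical threshold splits the areas under the two density curves. Function names and the separation factor below are my own choices, not the author's.

```python
from itertools import combinations
from statistics import mean, stdev

def separable(a, b, factor=1.0):
    """Crude separability test: class means differ by more than
    `factor` times the average of the two standard deviations."""
    gap = abs(mean(a) - mean(b))
    pooled = (stdev(a) + stdev(b)) / 2
    return gap > factor * pooled

def select_features(data_by_class, n_features):
    """Keep a feature if at least one pair of classes is separable on it."""
    chosen = []
    for f in range(n_features):
        # one column of values per class for this feature
        columns = {c: [row[f] for row in rows]
                   for c, rows in data_by_class.items()}
        if any(separable(columns[i], columns[j])
               for i, j in combinations(columns, 2)):
            chosen.append(f)
    return chosen
```

On a toy dataset where only the first feature separates the classes, `select_features` returns `[0]`.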
Contributed by Paul Greeh. Paul took the NYC Data Science Academy 12-week full-time Data Science Bootcamp program between Sept 23 and Dec 18, 2015. This post was based on his first class project, due in the 2nd week of the program.
Analyse fuel economy ratings in the automotive industry.
Compare vehicle efficiency of American automotive manufacturer, Cadillac with the automotive industry as a whole.
Compare vehicle efficiency of American automotive manufacturer, Cadillac, with self declared competition, the German luxury market.
What further comparisons will display insight into EPA ratings?
Import FuelEconomy.gov data and filter the rows needed for analysis. Then remove all zeros included in the city and highway MPG data, as these would skew results, replacing them with NA so that calculations are not performed on data that is not present.
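That cleaning step can be sketched in a few lines; the original work was presumably done in R with NA values, so this Python version uses `None` as the stand-in, and the field names are hypothetical.

```python
def clean_mpg(records):
    """Replace zero MPG readings with None (the NA of this sketch) so
    they are excluded from averages instead of skewing them toward zero."""
    cleaned = []
    for rec in records:
        rec = dict(rec)  # copy so the input is untouched
        for field in ("city_mpg", "highway_mpg"):
            if rec.get(field) == 0:
                rec[field] = None
        cleaned.append(rec)
    return cleaned

def mean_ignoring_na(records, field):
    """Average a field, skipping missing (None) values."""
    vals = [r[field] for r in records if r[field] is not None]
    return sum(vals) / len(vals)
```

Averaging `[20, 0, 30]` naively gives 16.7; after cleaning, the zero is excluded and the mean is 25.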
Visualize city and highway EPA ratings of the entire automotive industry.
How have EPA ratings for city and highway improved across the automotive industry as a whole?
Note: there is no need to include combined MPG, as it is simply a percentage-based calculation, defaulting to a 60/40 city/highway split but adjustable on the website.
The data visualization shows relatively poor EPA ratings throughout the 1980s, 1990s, and early-to-mid 2000s, with the first drastic improvement occurring around 2008. One significant event in this period was the recession hitting America. Consumers having less disposable income, along with increased oil prices, likely fueled competition to develop fuel-efficient powertrains across the automotive industry as a whole.
Visualize Cadillac's city and highway EPA ratings with that of the automotive industry.
How does Cadillac perform when compared to the automotive industry as a whole?
Cadillac was chosen as a brand of interest because they are currently redefining their brand as a whole. It is important to analyze past performance to have a complete understanding of how Cadillac has been viewed for several decades.
In 2002, Cadillac dropped to its lowest performance. Why did this occur? Because the entire fleet was made up of the same 4.6L V8 mated to a 4-speed automatic transmission, or as some would say, a slush-box. Cadillac's image at this time was of a retirement vehicle to be shipped to its owner's new retirement home in Florida: a soft ride, smooth power delivery and no performance. With the latest generation of Cadillacs being performance-oriented, beginning with the LS2-sourced CTS-V and now including the ATS-V and CTS-V along with several other V-Sport models, a rebranding is crucial in order to appeal to a new market of buyers.
It is also interesting to note that although more performance models are being produced, fuel efficiency is not lagging: the gap noted above has decreased even as the number of performance models has increased, two trends not often found to align.
How does Cadillac perform when compared with the German Luxury Market?
"Mr. Ellinghaus, a German who came to Cadillac in January from pen maker Montblanc International after more than a decade at BMW, said he has spent the past 11 months doing 'foundational work' to craft an overarching brand theme for Cadillac's marketing, which he says relied too heavily on product-centric, me-too comparisons."
Despite comments made by Mr. Ellinghaus, the end goal is for consumers to compare Cadillac with Audi, BMW and Mercedes-Benz. The fact that this is already happening is a huge success for a company which, only ten years ago, would never have been mentioned in the same sentence as the German luxury market.
The data visualization shows that Cadillac is rated on par with its German competitors and, at the same time, has not had any significant dips, unlike all other manufacturers. The continued increase in performance combined with rebranding signify that Cadillac is on a path to success.
Every manufacturer has its strengths and weaknesses. It is important to assess and recognize these attributes to best determine where an increase in R&D spending is needed and where to maintain a competitive advantage for the consumer by vehicle class.
In what vehicle class is Cadillac excelling or falling behind?
The above data visualization displays the delta between Cadillac and the average (Audi, BMW, Mercedes-Benz) fuel economy ratings. Positive can then be considered above the average competition and negative, below the average competition.
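That delta is a simple per-class subtraction; a sketch follows, with all MPG values invented for illustration rather than taken from the EPA data.

```python
# Hypothetical average MPG by vehicle class (values illustrative only)
cadillac = {"Compact": 23.1, "Midsize": 24.0, "Large": 20.5}
germans = {  # one entry each for Audi, BMW, Mercedes-Benz
    "Compact": [25.0, 24.5, 23.7],
    "Midsize": [25.2, 24.8, 24.1],
    "Large":   [21.0, 20.2, 19.9],
}

# Positive delta: above the competitor average; negative: below it
delta = {cls: round(cadillac[cls] - sum(v) / len(v), 2)
         for cls, v in germans.items()}
```

With these made-up numbers, Cadillac would sit 1.3 MPG below the German average in the Compact class and 0.13 MPG above it in the Large class.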
There is a lack of performance across all vehicle classes. The reason may be that the same powertrains are being used across multiple chassis.
There is a clear improvement in EPA ratings as federal emission standards drive innovation for increased fleet fuel economy. It is important for automotive manufacturers to continue innovation and push for increased efficiency.
There have been many variations on this theme: defining big data with the 3 Vs (or more, including velocity, variety, volume, veracity and value), as well as other representations such as the data science alphabet.
It was published in a scholarly paper entitled Computing in the Statistics Curricula (PDF document). Enjoy!
Guest blog post by Martijn Theuwissen, co-founder at DataCamp. Other R resources can be found here, and R Source code for various problems can be found here. A data science cheat sheet can be found here, to get you started with many aspects of data science, including R.
Learning R can be tricky, especially if you have no programming experience or are more familiar working with point-and-click statistical software versus a real programming language. This learning path is mainly for novice R users that are just getting started but it will also cover some of the latest changes in the language that might appeal to more advanced R users.
Creating this learning path was a continuous trade-off between being pragmatic and exhaustive. There are many excellent (free) resources on R out there, and unfortunately not all could be covered here. The material presented here is a mix of relevant documentation, online courses, books, and more that we believe is best to get you up to speed with R as fast as possible.
Data Video produced with R: click here and also here for source code and to watch the video. More here.
R is rapidly becoming the lingua franca of Data Science. Having its origins in academia, you will spot it today in an increasing number of business settings as well, where it competes with commercial incumbents such as SAS, Stata and SPSS. Each year R gains in popularity, and IEEE listed R in its top ten languages of 2015.
This implies that the demand for individuals with R knowledge is growing, and consequently learning R is a smart career investment (according to this survey, R is even the highest-paying skill). This growth is unlikely to plateau in the next few years, with large players such as Oracle and Microsoft stepping up by including R in their offerings.
Nevertheless, money should not be the only driver when deciding to learn a new technology or programming language. Luckily, R has a lot more to offer than a solid paycheck. By engaging with R, you will become part of a highly diverse and interesting community. R is used for a diverse set of tasks in fields such as finance, genomic analysis, real estate, paid advertising, and much more, and all of these fields are actively contributing to the development of R. You will encounter a diverse set of examples and applications on a daily basis, keeping things interesting and giving you the ability to apply your knowledge to a diverse range of problems.
Before you can actually start working in R, you need to download a copy of it on your local computer. R is continuously evolving and different versions have been released since R was born in 1993 with (funny) names such as World-Famous Astronaut and Wooden Christmas-Tree. Installing R is pretty straightforward and there are binaries available for Linux, Mac and Windows from the Comprehensive R Archive Network (CRAN).
Once R is installed, you should consider installing one of R's integrated development environments as well (although you could also work with the basic R console if you prefer). Two fairly established IDEs are RStudio and Architect. In case you prefer a graphical user interface, you should check out R-commander.
DataCamp’s free introduction to R tutorial and the follow-up course Intermediate R programming. These courses teach you R programming and data science interactively, at your own pace, in the comfort of your browser.
The swirl package, a package with offline interactive R coding exercises. There is also an online version available that requires no set-up.
On edX you can take Introduction to R Programming by Microsoft.
The R Programming course by Johns Hopkins on Coursera.
R puts a big emphasis on documentation. The previously mentioned Rdocumentation is a great website to look at the documentation of different packages and functions.
There are numerous blogs & posts on the web covering R, such as KDnuggets and R-bloggers.
One of the main reasons R is the favorite tool of data analysts and scientists is because of its data visualization capabilities. Tons of beautiful plots are created with R as shown by all the posts on FlowingData, such as this famous facebook visualization.
If you want to get started with visualizations in R, take some time to study the ggplot2 package, one of the (if not the) most famous packages in R for creating graphs and plots. ggplot2 makes intensive use of the grammar of graphics, and as a result is very intuitive to use (you're continuously building up parts of your graphs, so it's a bit like playing with Lego). There are tons of resources to get you started, such as this interactive coding tutorial, a cheat sheet and an upcoming book by Hadley Wickham.
If you want to see more packages for visualizations see the CRAN task view. In case you run into issues plotting your data this post might help as well.
Next to the “traditional” graphs, R is able to handle and visualize spatial data as well. You can easily visualize spatial data and models on top of static maps from sources such as Google Maps and Open Street Maps with a package such as ggmap. Other great packages are choroplethr, developed by Ari Lamstein of Trulia, and the tmap package. Take this tutorial on Introduction to visualising spatial data in R if you want to learn more.
Note that these resources are aimed at beginners. If you want to go more advanced you can look at the multiple resources there are for machine learning with R. Books such as Mastering Machine Learning with R and Machine Learning with R explain the different concepts very well, and online resources like the Kaggle Machine Learning course help you practice the different concepts. Furthermore there are some very interesting blogs to kickstart your ML knowledge, like Machine Learning Mastery or this post.
One of the best ways to share your models, visualizations, etc. is through dynamic documents. R Markdown (based on knitr and pandoc) is a great tool for reporting your data analysis in a reproducible manner through HTML, Word, PDF, ioslides, etc. This 4-hour tutorial on Reporting with R Markdown explains the basics of R Markdown. Once you are creating your own markdown documents, make sure this cheat sheet is on your desk.
HTML widgets allow you to create interactive web visualizations such as dynamic maps (leaflet), time-series data charting (dygraphs), and interactive tables (DataTables). If you want to learn how to create your own watch this tutorial by RStudio.
Another technology making a lot of noise recently is Shiny. With Shiny you can make your own interactive web applications in R such as these. There is a whole learning portal dedicated to building your own Shiny applications.
Lately, there is a lot of focus on how to run R in the cloud. If you want to do this yourself, you can have a look at tutorials such as running R on AWS, the R programming language for Azure, and RStudio Server on Digital Ocean.
Once you have some experience with R, a great way to level up your R skillset is the free book Advanced R by Hadley Wickham. In addition, you can start practicing your R skills by competing with fellow Data Science Enthusiasts on Kaggle, an online platform for data-mining and predictive modelling competitions. Here you have the opportunity to work on fun cases such as this titanic data set.
To end, you are now probably ready to start contributing to R yourself by writing your own packages. Enjoy!
About this book: Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language.
This is a guest repost by Jacob Joseph.
An outlier is an observation or point that is distant from other observations or points. But how would you quantify the distance of an observation from other observations to qualify it as an outlier? Outliers are also described as observations whose probability of occurring is low. But, again, what constitutes "low"?
There are parametric and non-parametric methods for identifying outliers. Parametric methods assume some underlying distribution, such as the normal distribution, whereas non-parametric approaches have no such requirement. Additionally, you could do a univariate analysis, studying a single variable at a time, or a multivariate analysis, studying more than one variable at the same time.
Which approach and which analysis is the right answer? Unfortunately, there is no single right answer. It depends on the end purpose of identifying such outliers. You may want to analyze the variable in isolation, or use it among a set of variables to build a predictive model.
Let’s try to identify outliers visually.
How can we identify outliers in the Revenue?
We shall try to detect outliers using parametric as well as non-parametric approach.
The x-axis, in the above plot, represents the Revenues, and the y-axis the probability density of the observed Revenue values. The density curve for the actual data is shaded in pink, the normal distribution in green, and the log-normal distribution in blue. The probability density for the actual distribution is calculated from the observed data, whereas for the normal and log-normal distributions it is computed from the observed mean and standard deviation of the Revenues.
Outliers could be identified by calculating the probability of the occurrence of an observation or calculating how far the observation is from the mean. For example, observations greater/lesser than 3 times the standard deviation from the mean, in case of normal distribution, could be classified as outliers.
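The three-standard-deviation rule just described can be sketched in a few lines of plain Python (this is a generic illustration, not the author's code):

```python
from statistics import mean, stdev

def three_sigma_outliers(values):
    """Parametric rule: flag observations more than 3 standard
    deviations from the mean (assumes roughly normal data)."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > 3 * s]
```

For example, in a sample of thirty 10s and a single 100, only the 100 lies more than three standard deviations from the mean.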
The above plots show the shift in location or the spread of the density curve based on an assumed change in mean or standard deviation of the underlying distribution. It is evident that a shift in the parameters of a distribution is likely to influence the identification of outliers.
Let’s look at a simple non-parametric approach like a box plot to identify the outliers.
In the box plot shown above, we can identify 7 observations, which could be classified as potential outliers, marked in green. These observations are beyond the whiskers.
In the data, we have also been provided information on the OS. Would we identify the same outliers if we plot the Revenue by OS?
In the box plot above we are doing a bivariate analysis, taking two variables at a time, which is a special case of multivariate analysis. There seem to be 3 outlier candidates for iOS and none for Android, due to the difference in the distribution of Revenues between Android and iOS users. So, analyzing the Revenue variable on its own (univariate analysis) we identified 7 outlier candidates, which dropped to 3 when a bivariate analysis was performed.
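The box-plot rule (observations beyond the whiskers, i.e. beyond 1.5 times the IQR from the quartiles) and its per-group variant can be sketched as follows. This is a generic illustration rather than the post's code, and `statistics.quantiles` requires Python 3.8+.

```python
from statistics import quantiles

def boxplot_outliers(values):
    """Tukey's rule: flag values beyond 1.5 * IQR from the quartiles."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

def outliers_by_group(rows, key, value):
    """Bivariate variant: apply the univariate rule within each group
    (e.g. Revenue split by OS)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row[value])
    return {g: boxplot_outliers(v) for g, v in groups.items()}
```

Splitting by group can change the verdict: a value that is extreme overall may be typical within its own group, which mirrors how the post's 7 univariate candidates dropped to 3 under bivariate analysis.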
Both Parametric as well as Non-Parametric approach could be used to identify outliers based on the characteristics of the underlying distribution. If the mean accurately represents the center of the distribution and the data set is large enough, parametric approach could be used whereas if the median represents the center of the distribution, non-parametric approach to identify outliers is suitable.
Dealing with outliers in a multivariate scenario becomes all the more tedious. Clustering, a popular data mining technique and a non-parametric method could be used to identify outliers in such a case.
One of the most important tasks in machine learning is classification (a.k.a. supervised machine learning). Classification is used to make an accurate prediction of the class of entries in a test set (a dataset whose entries have not yet been labelled) with a model constructed from a training set. You could think of classifying crime in the field of pre-policing, classifying patients in the health sector, or classifying houses in the real-estate sector. Another field in which classification is big is Natural Language Processing (NLP), the field of science with the goal of making machines (computers) understand (written) human language. Think of text categorization, sentiment analysis, spam detection and topic categorization.
For classification tasks there are three widely used algorithms: Naive Bayes, Logistic Regression / Maximum Entropy, and Support Vector Machines. We have already seen how Naive Bayes works in the context of sentiment analysis. Although it is more accurate than a bag-of-words model, it assumes conditional independence of its features. This simplification makes the NB classifier easy to implement, but it is also unrealistic in most cases and leads to lower accuracy. A direct improvement on the NB classifier is an algorithm which does not assume conditional independence but tries to estimate the weight vectors (feature values) directly.
This algorithm is called Maximum Entropy in the field of NLP and Logistic Regression in the field of Statistics.
Maximum Entropy might sound like a difficult concept, but actually it is not. It is a simple idea which can be implemented with a few lines of code. But to fully understand it, we must first go into the basics of Regression and Logistic Regression.
Regression analysis is the field of mathematics where the goal is to find a function which best correlates with a dataset. Let's say we have a dataset containing datapoints x_1, x_2, ..., x_n. For each of these (input) datapoints there is a corresponding (output) y-value. Here the x-datapoints are called the independent variables and y the dependent variable: the value of y depends on the value of x, while the value of x may be freely chosen without any restriction imposed on it by any other variable.
The goal of regression analysis is to find a function which can best describe the correlation between x and y. In the field of machine learning, this function is called the hypothesis function and is denoted as h(x).
If we can find such a function, we can say we have successfully built a regression model. If the input data lives in a 2D space, this boils down to finding a curve which fits through the datapoints. In the 3D case we have to find a plane, and in higher dimensions a hyperplane.
If the result looks like the figure on the left, then we are out of luck: the points appear to be distributed randomly and there is no correlation between x and y at all. However, if it looks like the figure on the right, there is probably a strong correlation and we can start looking for the function which describes it.
For a single input variable, the simplest hypothesis is the linear one, h(x) = theta_0 + theta_1 * x, where theta_0 and theta_1 are the parameters of our model.
Evaluating the results from the previous section, we may find them unsatisfying: the function does not correlate with the datapoints strongly enough, so our initial assumption is probably incomplete. Taking only the studying time into account is not enough: the final grade also depends on how much the students slept the night before the exam. Now the dataset contains an additional variable representing the sleeping time, so each datapoint has the form (x_1, x_2, y), where x_1 indicates how many hours a student has studied, x_2 how many hours he has slept, and y the final grade.
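A minimal gradient-descent fit for this two-variable hypothesis h(x) = t0 + t1*x1 + t2*x2 might look like the sketch below. The data, learning rate, and function names are illustrative assumptions, not the blog's implementation.

```python
def fit_linear(xs, ys, lr=0.02, epochs=20000):
    """Batch gradient descent for h(x) = t0 + t1*x1 + t2*x2,
    where x1 = hours studied and x2 = hours slept (hypothetical data)."""
    t = [0.0, 0.0, 0.0]
    n = len(xs)
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for (x1, x2), y in zip(xs, ys):
            err = t[0] + t[1] * x1 + t[2] * x2 - y  # prediction error
            grad[0] += err
            grad[1] += err * x1
            grad[2] += err * x2
        # move each parameter against its averaged gradient
        t = [ti - lr * g / n for ti, g in zip(t, grad)]
    return t

def predict(t, x1, x2):
    return t[0] + t[1] * x1 + t[2] * x2
```

On noise-free synthetic grades generated as y = 1 + 2*x1 + 0.5*x2, the fitted hypothesis recovers the generating function to within a small tolerance.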
See the rest of the blog here, including Linear vs Non-linear, Gradient Descent, Logistic Regression, and Text Classification and Sentiment Analysis.
I wrote a blog post inspired by Jamie Goode's book "Wine Science: The Application of Science in Winemaking".
In this book, Goode argues that a reductionist approach cannot explain the relationship between the chemical ingredients and the taste of wine. Indeed, we know that not all high-alcohol wines are excellent, although in general high-alcohol wines are believed to be good. Usually the taste of wine is determined by a complicated balance of many components, such as sweetness, acidity, tannin and density, which are given by corresponding chemical entities.
However, I think (and probably many other data science experts would agree) that this is not a limitation of the reductionist approach, but a limitation of univariate modeling. To illustrate this, I performed a series of multivariate modeling experiments with random forests and other models on the "Wine Quality" dataset from the UCI Machine Learning Repository.
As a result, a random forest classifier predicted the tasting score of wine better than intuitive univariate modeling, and it also revealed some hidden and complicated dynamics between chemical ingredients and taste. I believe that modern multivariate modeling such as machine learning can reveal more of the complicated relationship between the chemical ingredients and the taste of wine.
See my blog post below for more details.
An open-source exploration of the city's neighborhoods, nightlife, airport traffic, and more, through the lens of publicly available taxi and Uber data.
Images are clickable to open hi-res versions.
The original post covers a lot more detail, and for those who want to pursue more analysis on their own, everything in the post (the data, software, and code) is freely available. Full instructions to download and analyze the data yourself are available on GitHub.
For today’s post we use the crimtab dataset available in R: data on 3000 male criminals over 20 years old undergoing their sentences in the chief prisons of England and Wales. The 42 row names ("9.4", "9.5", ...) correspond to midpoints of intervals of finger lengths, whereas the 22 column names correspond to the (body) heights of the 3000 criminals, listed below.
"142.24" "144.78" "147.32" "149.86" "152.4" "154.94" "157.48" "160.02" "162.56" "165.1" "167.64" "170.18" "172.72" "175.26" "177.8" "180.34" "182.88" "185.42" "187.96" "190.5" "193.04" "195.58"
Note: the resulting components of the pca object from the above code are the standard deviations and the rotation. From the standard deviations we can observe that the first PC explains most of the variation, followed by the other PCs. The rotation contains the principal component loadings matrix, whose values give the weight of each variable along each principal component.
In the second principal component, PC2 places more weight on 160.02 and 162.56 than on the 3 features 165.1, 167.64 and 170.18, which are less correlated with them.
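For readers who want to replicate the mechanics outside R, the same standard deviations and rotation that `prcomp` reports can be computed from an SVD of the centered data. The sketch below uses NumPy on synthetic two-column data standing in for the height/finger-length table; the simulated values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for crimtab-style data: two correlated measurements
heights = rng.normal(166, 6, 300)                    # cm
fingers = 0.07 * heights + rng.normal(0, 0.3, 300)   # cm, mostly linear in height
X = np.column_stack([heights, fingers])

# PCA via SVD of the centered data (what R's prcomp does internally)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
sdev = s / np.sqrt(len(X) - 1)          # standard deviations of the PCs
rotation = Vt.T                         # loadings matrix
explained = sdev**2 / np.sum(sdev**2)   # proportion of variance per PC
```

Because the two columns are strongly correlated, the first component captures almost all of the variance, just as PC1 dominates in the crimtab analysis.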
Following the Mediator scandal, France adopted in 2011 a Sunshine Act. For the first time we have data on the presents and contracts awarded to health care professionals by pharmaceutical companies. Can we use graph visualization to understand these dangerous ties?
Pharmaceutical companies in France and in other countries use presents and contracts to influence the prescriptions of health care professionals. This has posed ethical problems in the past.
In France, 21 persons are currently prosecuted for their role in the Mediator scandal, a drug that was recently banned. Some of them are accused of having helped the drug manufacturer obtain an authorization to sell its drug and later fight its ban in exchange for money.
In the US, GlaxoSmithKline was ordered to pay $3 billion in the largest health-care fraud settlement in US history. Before the settlement, GlaxoSmithKline had paid various experts to fraudulently market the benefits of its drugs.
Such problems arose in part because of a lack of transparency in the ties between pharmaceutical companies and health-care professionals. With open data now available can we change this?
Regards Citoyens, a French NGO, parsed various sources to build the first database documenting the financial relationships between health care providers and pharmaceutical manufacturers.
That database covers the period from January 2012 to June 2014. It contains 495,951 health care professionals (doctors, dentists, nurses, midwives, pharmacists) and 894 pharmaceutical companies. The contracts and presents represent a total of €244,572,645.
The original data can be found on the Regards Citoyens website.
The data is stored in one large CSV file. We are going to use graph visualization to understand the network formed by the financial relationships between pharmaceutical companies and health care professionals.
Now the data is stored in Neo4j as a graph (download it here). It can be searched, explored and visualized through Linkurious.
Unfortunately, names in the data have been anonymized by Regards Citoyens following pressure from the CNIL (the French Commission nationale de l’informatique et des libertés).
Who is Sanofi giving money to?
Let’s start our data exploration with Sanofi, the biggest French pharmaceutical company. If we search for Sanofi in Linkurious, we can see that it is connected to 57,765 professionals. Let’s focus on the 20 of Sanofi’s contacts who have the most connections.
19 doctors among Sanofi’s top 20 connections.
In a click, we can filter the visualization to focus on the doctors. We are now going to color them according to their region of origin.
Region of origin of Sanofi’s 19 doctors.
Indirectly, the health care professionals Sanofi connects to via presents also tell us about its competitors. Let’s look at who else has given presents to the health care professionals befriended by Sanofi.
Sanofi’s contacts (highlighted in red) are also in touch with other pharmaceutical companies.
Zooming in, we can see Sanofi is at the center of a very dense network, next to Bristol-Myers Squibb, Pierre Fabre, Lilly and AstraZeneca, for example. According to the Sunshine dataset, Sanofi is competing with these companies.
We can also see an interesting node: a student who has received presents from 104 pharmaceutical companies, including companies that are not direct competitors of Sanofi.
Why has he received so much attention? Unfortunately all we have is an ID (02b0d3726458ef46682389f2ac7dc7af).
Sanofi could identify the professionals its competitors have targeted and perhaps target them too in the future.
Who has received the most money from pharmaceutical companies in France?
Neo4j includes a graph query language called Cypher. Through Cypher we can compute complex graph queries and get results in seconds.
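The post does not show the actual Cypher query, but the aggregation it performs (sum the payments per professional, then rank them) is easy to sketch in plain Python. The records below are hypothetical, with amounts chosen to echo the totals quoted in the post.

```python
from collections import defaultdict

# Hypothetical (doctor_id, company, amount_in_euros) records
gifts = [
    ("2d92eb1e", "St Jude Medical", 70231),
    ("2d92eb1e", "OtherCo", 7249),
    ("aabbccdd", "CompanyA", 1200),
    ("aabbccdd", "CompanyC", 800),
]

# Equivalent of a Cypher aggregation: sum amounts per doctor, rank them
totals = defaultdict(int)
for doctor, _company, amount in gifts:
    totals[doctor] += amount

top = max(totals, key=totals.get)  # doctor with the highest total
```

In a real deployment this grouping would run inside Neo4j via Cypher's `sum()` aggregation rather than in application code.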
The doctor behind the ID 2d92eb1e795f7f538556c59e48aaa7c1 has received 77 480€ from 6 pharmaceutical companies.
The relationships are colored according to the money they represent. St Jude Medical has given over €70,231 to Dr 2d92eb1e795f7f538556c59e48aaa7c1.
Perhaps next time they receive a prescription from Dr 2d92eb1e795f7f538556c59e48aaa7c1, his patients would like to know about his relationship with St Jude Medical. Unfortunately today the Sunshine data is anonymous.
We can also find the most generous pharmaceutical company.
Novartis Pharma has awarded €12,595,760 to various entities.
The 5 entities receiving the most money from Novartis.
When we look closer, we can see that the 5 entities which have received the most money from Novartis Pharma are 5 NGOs.
24f3287da6ab125862249416bc91f9c4 has received €75,000.
Come meet us at GraphConnect in London, the biggest graph event in Europe. It is sponsored by Linkurious, and you can use "Linkurious30" to register and get a 30% discount!
The Sunshine dataset offers a rare glimpse into the practice of pharmaceutical companies and how they use money to influence the behavior of health care professionals. Unfortunately for citizens looking for transparency, the data is anonymized. Perhaps it will change in the future?
K-Clique Percolation - a clique-merging algorithm. Given a parameter k, the algorithm produces k-clique clusters and merges (percolates) them as necessary.
DP Clustering - a seed-growth approach to finding dense subgraphs, similar to MCODE, but with an internal representation of edge weights and a different stopping condition.
IPCA - a modified DPClus algorithm which focuses on maintaining the diameter of a cluster (defined as the maximum shortest distance between all pairs of vertices) rather than its density.
CoAch - a combined approach which first finds a small number of cliques as complex cores and then grows them.
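As a concrete illustration of the first algorithm in this list, here is a minimal k-clique percolation sketch in pure Python. It brute-forces clique enumeration, so it only suits small graphs, and it is my own toy version rather than the implementation from any of the cited papers.

```python
from itertools import combinations

def k_clique_communities(edges, k):
    """Find all k-cliques, then merge (percolate) cliques that share
    k-1 nodes into communities."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    # brute-force enumeration of k-cliques (fine for small graphs)
    cliques = [frozenset(c) for c in combinations(nodes, k)
               if all(b in adj[a] for a, b in combinations(c, 2))]

    # union-find over cliques: two cliques percolate if they share k-1 nodes
    parent = list(range(len(cliques)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) >= k - 1:
            parent[find(i)] = find(j)

    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return list(comms.values())
```

Two triangles sharing an edge percolate into one community, while a disconnected triangle stays separate.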
In the original article, these visualizations are interactive, and you will find out which software was used to produce them.
For my submission to HackCambridge I wanted to spend my 24 hours learning something new in accordance with my interests. I was recently introduced to protein interaction networks in my Bioinformatics class, and during my review of machine learning techniques for an exam I noticed that we study many supervised methods but no unsupervised methods other than k-means clustering. Thus I decided to combine the two interests by clustering protein interaction networks with unsupervised clustering techniques and communicating my learning, results, and visualisations using the Beaker notebook.
The study of protein-protein interactions (PPIs) determined by high-throughput experimental techniques has created large sets of interaction data and a new need for methods that allow us to discover new information about biological function. These interactions can be thought of as a large-scale network, with nodes representing proteins and edges signifying an interaction between two proteins. In a PPI network, we can potentially find protein complexes or functional modules as densely connected subgraphs. A protein complex is a group of proteins that interact with each other at the same time and place, creating a quaternary structure. Functional modules are composed of proteins that bind each other at different times and places and are involved in the same cellular process. Various graph clustering algorithms have been applied to PPI networks to detect protein complexes or functional modules, including several designed specifically for PPI network analysis. A select few of the most famous and recent topological clustering algorithms were implemented based on descriptions from papers and applied to PPI networks. Upon completion it was recognized that it is possible to apply these to other interaction networks such as friend groups on social networks, site maps, or transportation networks, to name a few.
This post offers a few glimpses of the insights obtained from a case in which predictive analytics helped a Fortune 1000 client unlock the value in the huge log files of their IT support system. As quick background, a large organization was interested in actionable, value-added insights from the thousands of records logged in the past, as they were seeing expenses increase with no corresponding gain in productivity.
As most of us know, in these business scenarios end users are most interested in the strange, unusual, out-of-the-ordinary findings that regular reports cannot capture. Hence the data scientist's job does not end at surfacing unusual insights; it also requires digging deeper for root causes and suggesting the best possible actions for immediate remedy (knowledge of the domain, or of best practices in the industry, helps a lot). As mentioned earlier, only a few of those insights are shown and discussed here, and all of the analysis was carried out with R: R 3.2.2, RStudio (a favorite IDE), and the ggplot2 package for plotting.
The first graph below is a time-series calendar heat map, adapted from Paul Bleicher, showing the number of tickets raised each day over every week of each month for the last year (green and its lighter shades represent lower numbers, while red and its shades represent higher numbers).
If you observe the graph carefully, it is evident that, except for April and December, every month shows a sudden increase in the number of tickets raised on the last Saturdays and Sundays; this is most clearly visible at the quarter ends of March, June, and September (and also in November, which is not a quarter end). One can regard this as unusual behavior, since the numbers rise on non-working days. Before going into further detail, consider one more graph (below), which depicts solved duration in minutes on the x-axis, with the time taken by each category shown as a horizontal timeline plot.
The solved-duration plot shows that, of all the records analyzed, 71.87% belong to the "Request for Information" category and were solved within a few minutes of being raised (which is why no line is visible for this category compared with the others). What actually happened here was a kind of gaming of the system, enabled by a lack of automation. In simple terms, it was found that proper documentation and guidance did not exist for many of the applications in use; this gap was exploited to inflate ticket counts (even requests for basic information were pushed as tickets at month ends and quarter ends, creating month-end openings that were then closed immediately). The case discussed here is one among many that were presented, together with immediate and easily actionable remedies.
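The aggregation behind such a calendar heat map is straightforward to reproduce. The snippet below is a hypothetical Python re-creation of that step using made-up timestamps (the original analysis used R and ggplot2): count tickets per (month, weekday) pair and look for the weekend spikes.

```python
# Hypothetical re-creation (in Python) of the aggregation behind the
# calendar heat map: count tickets per (month, weekday) from timestamps.
# The timestamps below are made up; the original analysis used R/ggplot2.
from collections import Counter
from datetime import datetime

raw_tickets = [
    "2015-03-28 09:14", "2015-03-28 10:02", "2015-03-29 11:45",
    "2015-03-30 08:30", "2015-06-27 14:05", "2015-06-28 16:40",
]

counts = Counter()
for ts in raw_tickets:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    counts[(dt.strftime("%B"), dt.strftime("%A"))] += 1

for (month, weekday), n in sorted(counts.items()):
    print(f"{month:<9} {weekday:<9} {n}")
```

Fed with a full year of real ticket timestamps, the same counts become the cell values of the heat map, with the color scale mapped onto `n`.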
I have spent many hours planning and executing an in-company self-service BI implementation, which enabled me to gain several insights. Now that the ideas have matured and been field-proven, I believe they are worth sharing. No matter how far you are in toying with potential approaches (possibly you are already in the thick of it!), I hope my attempt at describing feasible scenarios provides a decent foundation.
All scenarios presume that IT plays its main role by owning the infrastructure and managing scalability, data security, and governance. I have tried to elaborate on every aspect of each possible solution, setting aside the vendor's marketing claims.
Scenario 1. Tableau Desktop + departmental/cross-functional data schemas.
This scenario involves gaining insights by data analysts on a daily basis. They might be either independent individuals or a team. Business users’ interaction with published workbooks is applicable, but limited to simple filtering.
Fast response for complex ad-hoc business problems.
Most likely involves Tableau training on query performance optimisation on a particular data source (e.g. Vertica).
Create a “sandbox” that allows data analysts to query and collaborate on their own and without supervision. Further promotion of workbooks to production is welcome.
Scenario 2. Tableau Desktop + custom data marts.
In this scenario, business users are fully in charge of data analysis. IT provides custom data marts.
Licenses: Tableau Desktop + Server Interactors.
Self-publishing for further ad-hoc access across multiple devices.
Deliver training in 2-3 wisely structured sections with 2-3 week breaks, so business users have time to play with the software and develop a need for the new skills.
Focus on rich visualisations, not tables.
This scenario fully relies on data models published by data analysts and powerful Web Edit features of Tableau Server.
Could serve as a foundation for self-service BI adoption among C-Suite.
Any changes in the data model require development and republishing of a template.
Provide as much ad-hoc assistance as you can.
In my next post, I would like to throw light on some technical aspects and limitations of each scenario.
I highly appreciate any comments and look forward to hearing about your experience.
For the first time ever, it is possible to store petabytes of data on commodity hardware and process this data, as needed, in a fault-tolerant and incredibly quick fashion. Many of us fail to understand the full implications of this inflection point in the history of computing.
Storage costs are falling every year, to the point where a USB drive can now hold multiple gigabytes when, ten years ago, it could only store a few megabytes. Gigabit internet is being installed in cities all over the world. Spark uses in-memory distributed computation to run at roughly 10x the speed of MapReduce on gigantic datasets, and it is already used in production by Fortune 50 companies. Tableau, Qlik, MicroStrategy, Domo, and others have gained tremendous market share as companies that have implemented Hadoop components such as HDFS, HBase, Hive, Pig, and MapReduce are starting to wonder, "How can I visualize that data?"
Now think about VR, probably the hottest field in technology at this moment. It has been more than a year since Facebook bought Oculus for $2 billion, and we have seen Google Cardboard burst onto the scene. Applications from media companies like the NY Times are already becoming part of our everyday lives. This month at the CES show in Las Vegas, dozens of companies were showcasing virtual reality platforms that improve on the state of the art and allow for a motion-sickness-free immersive experience.
All of this combines into my primary hypothesis - this is a great time to start a company that would provide the capability for immersive data visualization environments to businesses and consumers. I personally believe that businesses and government agencies would be the first to fully engage in this space on the data side, but there is clearly an opportunity in gaming on the consumer side.
Personally, I have been so taken by the potential of this idea that I wrote a post in this blog about the “feeling” of being in one of these immersive VR worlds.
The post describes what it would be like to experience data with not only vision, but touch and sound and even smell.
Just think about the possibilities of examining streaming datasets, currently analyzed with tools such as Storm, Kafka, Flink, and Spark Streaming, as a river flowing under you!
The strength of the water can describe the speed of the data intake, or any other variable that is represented by a flow - stock market prices come to mind.
The possibilities for immersive data experiences are absolutely astonishing. The Caltech astronomers have already taken the first step in that direction, and perhaps there is a company out there that is already taking the next step. That being said, if this sounds like an exciting venture to you, DM me on twitter @Namenode5 and we can talk.
Great infographic about the big data / analytics / data science / deep learning / BI ecosystem, created by @Mattturk, @Jimrhao and @firstmarkcap.
In general usage, the term direct instruction refers to (1) instructional approaches that are structured, sequenced, and led by teachers, and/or (2) the presentation of academic content to students by teachers, such as in a lecture or demonstration. In other words, teachers are “directing” the instructional process or instruction is being “directed” at students.
While a classroom lecture is perhaps the image most commonly associated with direct instruction, the term encompasses a wide variety of fundamental teaching techniques and potential instructional scenarios. For example, presenting a video or film to students could be considered a form of direct instruction (even though the teacher is not actively instructing students, the content and presentation of material were determined by the teacher). Generally speaking, direct instruction may be the most common teaching approach in the United States, since teacher-designed and teacher-led instructional methods are widely used in American public schools. That said, it’s important to note that teaching techniques such as direct instruction, differentiation, or scaffolding, to name just a few, are rarely mutually exclusive—direct instruction may be integrated with any number of other instructional approaches in a given course or lesson. For example, teachers may use direct instruction to prepare students for an activity in which the students work collaboratively on a group project with guidance and coaching from the teacher as needed (the group activity would not be considered a form of direct instruction).
Establishing learning objectives for lessons, activities, and projects, and then making sure that students have understood the goals.
Purposefully organizing and sequencing a series of lessons, projects, and assignments that move students toward stronger understanding and the achievement of specific academic goals.
Reviewing instructions for an activity or modeling a process—such as a scientific experiment—so that students know what they are expected to do.
Providing students with clear explanations, descriptions, and illustrations of the knowledge and skills being taught.
Asking questions to make sure that students have understood what has been taught.
It should be noted that the term direct instruction is used in various proprietary or trademarked instructional models that have been developed and promoted by educators, including—most prominently—Direct Instruction, created by Siegfried Engelmann and Wesley Becker, which is an “explicit, carefully sequenced and scripted model of instruction,” according to the National Institute for Direct Instruction.
In recent decades, the concept of direct instruction has taken on negative associations among some educators. Because direct instruction is often associated with traditional lecture-style teaching to classrooms full of passive students obediently sitting in desks and taking notes, it may be considered outdated, pedantic, or insufficiently considerate of student learning needs by some educators and reformers.
That said, many of direct instruction’s negative connotations likely result from either a limited definition of the concept or a misunderstanding of its techniques. For example, all teachers, by necessity, use some form of direct instruction in their teaching—i.e., preparing courses and lessons, presenting and demonstrating information, and providing clear explanations and illustrations of concepts are all essential, and to some degree unavoidable, teaching activities. Negative perceptions of the practice tend to arise when teachers rely too heavily upon direct instruction, or when they fail to use alternative techniques that may be better suited to the lesson at hand or that may improve student interest, engagement, and comprehension.
While a sustained forty-five-minute lecture may not be considered an effective teaching strategy by many educators, the alternative strategies they may advocate—such as personalized learning or project-based learning, to name just two options—will almost certainly require some level of direct instruction by teachers. In other words, teachers rarely use either direct instruction or some other teaching approach—in actual practice, diverse strategies are frequently blended together. For these reasons, negative perceptions of direct instruction likely result more from a widespread overreliance on the approach, and from the tendency to view it as an either/or option, rather than from its inherent value to the instructional process.
Curriculum mapping is the process of indexing or diagramming a curriculum to identify and address academic gaps, redundancies, and misalignments for purposes of improving the overall coherence of a course of study and, by extension, its effectiveness (a curriculum, in the sense that the term is typically used by educators, encompasses everything that teachers teach to students in a school or course, including the instructional materials and techniques they use).
In most cases, curriculum mapping refers to the alignment of learning standards and teaching—i.e., how well and to what extent a school or teacher has matched the content that students are actually taught with the academic expectations described in learning standards—but it may also refer to the mapping and alignment of all the many elements that are entailed in educating students, including assessments, textbooks, assignments, lessons, and instructional techniques.
Generally speaking, a coherent curriculum is (1) well organized and purposefully designed to facilitate learning, (2) free of academic gaps and needless repetitions, and (3) aligned across lessons, courses, subject areas, and grade levels. When educators map a curriculum, they are working to ensure that what students are actually taught matches the academic expectations in a particular subject area or grade level.
Before the advent of computers and the internet, educators would create curriculum maps on paper and poster board; today, educators are far more likely to use spreadsheets, software programs, and online services that are specifically dedicated to curriculum mapping. The final product is often called a “curriculum map,” and educators will use the maps to plan courses, lessons, and teaching strategies in a school. For a related discussion, see backward design.
Vertical coherence: When a curriculum is vertically aligned or vertically coherent, what students learn in one lesson, course, or grade level prepares them for the next lesson, course, or grade level. Curriculum mapping aims to ensure that teaching is purposefully structured and logically sequenced across grade levels so that students are building on what they have previously learned and learning the knowledge and skills that will progressively prepare them for more challenging, higher-level work. For a related discussion, see learning progression.
Horizontal coherence: When a curriculum is horizontally aligned or horizontally coherent, what students are learning in one ninth-grade biology course, for example, mirrors what other students are learning in a different ninth-grade biology course. Curriculum mapping aims to ensure that the assessments, tests, and other methods teachers use to evaluate learning achievement and progress are based on what has actually been taught to students and on the learning standards that the students are expected to meet in a particular course, subject area, or grade level.
Subject-area coherence: When a curriculum is coherent within a subject area—such as mathematics, science, or history—it may be aligned both within and across grade levels. Curriculum mapping for subject-area coherence aims to ensure that teachers are working toward the same learning standards in similar courses (say, three different ninth-grade algebra courses taught by different teachers), and that students are also learning the same amount of content, and receiving the same quality of instruction, across subject-area courses.
Interdisciplinary coherence: When a curriculum is coherent across multiple subject areas—such as mathematics, science, and history—it may be aligned both within and across grade levels. Curriculum mapping for interdisciplinary coherence may focus on skills and work habits that students need to succeed in any academic course or discipline, such as reading skills, writing skills, technology skills, and critical-thinking skills. Improving interdisciplinary coherence across a curriculum, for example, might entail teaching students reading and writing skills in all academic courses, not just English courses.
In education, scaffolding refers to a variety of instructional techniques used to move students progressively toward stronger understanding and, ultimately, greater independence in the learning process. The term itself offers the relevant descriptive metaphor: teachers provide successive levels of temporary support that help students reach higher levels of comprehension and skill acquisition that they would not be able to achieve without assistance. Like physical scaffolding, the supportive strategies are incrementally removed when they are no longer needed, and the teacher gradually shifts more responsibility over the learning process to the student.
Scaffolding is widely considered to be an essential element of effective teaching, and all teachers—to a greater or lesser extent—almost certainly use various forms of instructional scaffolding in their teaching. In addition, scaffolding is often used to bridge learning gaps—i.e., the difference between what students have learned and what they are expected to know and be able to do at a certain point in their education. For example, if students are not at the reading level required to understand a text being taught in a course, the teacher might use instructional scaffolding to incrementally improve their reading ability until they can read the required text independently and without assistance. One of the main goals of scaffolding is to reduce the negative emotions and self-perceptions that students may experience when they get frustrated, intimidated, or discouraged when attempting a difficult task without the assistance, direction, or understanding they need to complete it.
As a general instructional strategy, scaffolding shares many similarities with differentiation, which refers to a wide variety of teaching techniques and lesson adaptations that educators use to instruct a diverse group of students, with diverse learning needs, in the same course, classroom, or learning environment. Because scaffolding and differentiation techniques are used to achieve similar instructional goals—i.e., moving student learning and understanding from where it is to where it needs to be—the two approaches may be blended together in some classrooms to the point of being indistinguishable. That said, the two approaches are distinct in several ways. When teachers scaffold instruction, they typically break up a learning experience, concept, or skill into discrete parts, and then give students the assistance they need to learn each part. For example, teachers may give students an excerpt of a longer text to read, engage them in a discussion of the excerpt to improve their understanding of its purpose, and teach them the vocabulary they need to comprehend the text before assigning them the full reading. Alternatively, when teachers differentiate instruction, they might give some students an entirely different reading (to better match their reading level and ability), give the entire class the option to choose from among several texts (so each student can pick the one that interests them most), or give the class several options for completing a related assignment (for example, the students might be allowed to write a traditional essay, draw an illustrated essay in comic-style form, create a slideshow “essay” with text and images, or deliver an oral presentation).
The teacher gives students a simplified version of a lesson, assignment, or reading, and then gradually increases the complexity, difficulty, or sophistication over time. To achieve the goals of a particular lesson, the teacher may break up the lesson into a series of mini-lessons that progressively move students toward stronger understanding. For example, a challenging algebra problem may be broken up into several parts that are taught successively. Between each mini-lesson, the teacher checks to see if students have understood the concept, gives them time to practice the equations, and explains how the math skills they are learning will help them solve the more challenging problem (questioning students to check for understanding and giving them time to practice are two common scaffolding strategies). In some cases, the term guided practice may be used to describe this general technique.
The teacher describes or illustrates a concept, problem, or process in multiple ways to ensure understanding. A teacher may orally describe a concept to students, use a slideshow with visual aids such as images and graphics to further explain the idea, ask several students to illustrate the concept on the blackboard, and then provide the students with a reading and writing task that asks them to articulate the concept in their own words. This strategy addresses the multiple ways in which students learn—e.g., visually, orally, kinesthetically, etc.—and increases the likelihood that students will understand the concept being taught.
Students are given an exemplar or model of an assignment they will be asked to complete. The teacher describes the exemplar assignment’s features and why the specific elements represent high-quality work. The model provides students with a concrete example of the learning goals they are expected to achieve or the product they are expected to produce. Similarly, a teacher may also model a process—for example, a multistep science experiment—so that students can see how it is done before they are asked to do it themselves (teachers may also ask a student to model a process for her classmates).
Students are given a vocabulary lesson before they read a difficult text. The teacher reviews the words most likely to give students trouble, using metaphors, analogies, word-image associations, and other strategies to help students understand the meaning of the most difficult words they will encounter in the text. When the students then read the assignment, they will have greater confidence in their reading ability, be more interested in the content, and be more likely to comprehend and remember what they have read.
The teacher clearly describes the purpose of a learning activity, the directions students need to follow, and the learning goals they are expected to achieve. The teacher may give students a handout with step-by-step instructions they should follow, or provide the scoring guide or rubric that will be used to evaluate and grade their work. When students know the reason why they are being asked to complete an assignment, and what they will specifically be graded on, they are more likely to understand its importance and be motivated to achieve the learning goals of the assignment. Similarly, if students clearly understand the process they need to follow, they are less likely to experience frustration or give up because they haven’t fully understood what they are expected to do.
The teacher explicitly describes how the new lesson builds on the knowledge and skills students were taught in a previous lesson. By connecting a new lesson to a lesson the students previously completed, the teacher shows students how the concepts and skills they already learned will help them with the new assignment or project (teachers may describe this general strategy as “building on prior knowledge” or “connecting to prior knowledge”). Similarly, the teacher may also make explicit connections between the lesson and the personal interests and experiences of the students as a way to increase understanding or engagement in the learning process. For example, a history teacher may reference a field trip to a museum during which students learned about a particular artifact related to the lesson at hand. For a more detailed discussion, see relevance.
Backward design, also called backward planning or backward mapping, is a process that educators use to design learning experiences and instructional techniques to achieve specific learning goals. Backward design begins with the objectives of a unit or course—what students are expected to learn and be able to do—and then proceeds “backward” to create lessons that achieve those desired goals. In most public schools, the educational goals of a course or unit will be a given state’s learning standards—i.e., concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education.
The basic rationale motivating backward design is that starting with the end goal, rather than with the first lesson chronologically delivered during a unit or course, helps teachers design a sequence of lessons, problems, projects, presentations, assignments, and assessments that result in students achieving the academic goals of a course or unit—that is, actually learning what they were expected to learn.
Backward design helps teachers create courses and units that are focused on the goal (learning) rather than the process (teaching). Because “beginning with the end” is often a counterintuitive process, backward design gives educators a structure they can follow when creating a curriculum and planning their instructional process. Advocates of backward design would argue that the instructional process should serve the goals; the goals—and the results for students—should not be determined by the process.
A teacher begins by reviewing the learning standards that students are expected to meet by the end of a course or grade level. In some cases, teachers will work together to create backward-designed units and courses. For a related discussion, see common planning time.
The teacher creates an index or list of the essential knowledge, skills, and concepts that students need to learn during a specific unit. In some cases, these academic expectations will be called learning objectives, among other terms.
The teacher then designs a final test, assessment, or demonstration of learning that students will complete to show that they have learned what they were expected to learn. The final assessment will measure whether and to what degree students have achieved the unit goals.
The teacher then creates a series of lessons, projects, and supporting instructional strategies intended to progressively move student understanding and skill acquisition closer to the desired goals of the unit.
The teacher then determines the formative-assessment strategies that will be used to check for understanding and progress over the duration of the unit (the term formative assessment refers to a wide variety of methods—from questioning techniques to quizzes—that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course, often for the purposes of modifying lessons and teaching techniques to make them more effective). Advocates typically argue that formative assessment is integral to effective backward design because teachers need to know what students are or are not learning if they are going to help them achieve the goals of a unit.
The teacher may then review and reflect on the prospective unit plan to determine if the design is likely to achieve the desired learning goals. Other teachers may also be asked to review the plan and provide constructive feedback that will help improve the overall design.
While backward-design strategies have a long history in education—going back at least as far as the seminal work Basic Principles of Curriculum and Instruction, by Ralph W. Tyler, published in 1949—the educators and authors Grant Wiggins and Jay McTighe are widely considered to have popularized “backward design” for the modern era in their book Understanding by Design. Since its publication in the 1990s, Understanding by Design has evolved into a series of popular books, videos, and other resources.
As a strategy for designing, planning, and sequencing curriculum and instruction, backward design is an attempt to ensure that students acquire the knowledge and skills they need to succeed in school, college, or the workplace. In other words, backward design helps educators create logical teaching progressions that move students toward achieving specific—and important—learning objectives. Generally speaking, strategies such as backward design are attempts to bring greater coherence to the education of students—i.e., to establish consistent learning goals for schools, teachers, and students that reflect the knowledge, skills, conceptual understanding, and work habits deemed to be most essential. For a related discussion, see curriculum mapping.
Backward design arose in tandem with the concept of learning standards, and it is widely viewed as a practical process for using standards to guide the development of a course, unit, or other learning experience. Like backward design, learning standards are a way to promote greater consistency and commonality in what gets taught to students from state to state, school to school, grade to grade, and teacher to teacher. Before the advent of learning standards and other efforts to standardize public education, individual schools and teachers typically determined learning expectations in a given course, subject area, or grade level—a situation that can, in some cases, give rise to significant educational disparities.
For related discussions, see achievement gap, equity, and high expectations.
Locus of control is a psychological concept that refers to how strongly people believe they have control over the situations and experiences that affect their lives. In education, locus of control typically refers to how students perceive the causes of their academic success or failure in school.
Students with an “internal locus of control” generally believe that their success or failure is a result of the effort and hard work they invest in their education. Students with an “external locus of control” generally believe that their successes or failures result from external factors beyond their control, such as luck, fate, circumstance, injustice, bias, or teachers who are unfair, prejudiced, or unskilled. For example, students with an internal locus of control might blame poor grades on their failure to study, whereas students with an external locus of control may blame an unfair teacher or test for their poor performance.
Whether a student has an internal or external locus of control is thought to have a powerful effect on academic motivation, persistence, and achievement in school. In education, “internals” are considered more likely to work hard in order to learn, progress, and succeed, while “externals” are more likely to believe that working hard is “pointless” because someone or something else is treating them unfairly or holding them back. Students with an external locus of control may also believe that their accomplishments will not be acknowledged or their effort will not result in success.
In special education, the locus-of-control concept is especially salient. Many educators believe that students with learning disabilities are more likely to develop an external locus of control, at least in part due to negative experiences they may have had in school. If their disabilities have made learning exceptionally difficult or challenging, and they have consequently experienced more failure than success in school, blaming other people and external factors can develop into a psychological coping mechanism (i.e., when someone or something else is always the cause, the students don’t need to take more responsibility over their success in school).
For related discussions, see growth mindset and stereotype threat.
Altering learning contexts: More structured, orderly, and supportive classrooms and learning environments are believed to benefit students with an external locus of control, while students with an internal locus of control often thrive in more unstructured learning environments.
Strengthening internal locus of control: Educators and specialists may also use a variety of strategies to encourage students to believe they have more control over their education and academic achievement, including techniques known as “attribution training.” Essentially, students are taught to internalize positive messages that tend to be intuitive to students with an internal locus of control. For example, the training may encourage students to say to themselves—out loud at first, then in a whispering voice, and then silently to themselves—that they can do the task they were assigned and that their hard work and effort will be rewarded with success.
Several questionnaires have been developed to help identify whether students tend toward an internal or external locus of control. Julian B. Rotter, the psychologist who originally developed the locus-of-control concept, created a widely used question-based assessment and a corresponding scale designed to identify where students are on the internal-external spectrum. The questionnaire offers a series of choices between two statements. For example, the respondent would choose between “I have often found that what is going to happen will happen” and “Trusting to fate has never turned out as well for me as making a decision to take a definite course of action.” Rotter’s assessment is one of a number of diagnostic tools and scales that may be used by psychologists and educators.
In education, the term standards-referenced refers to instructional approaches or assessments that are “referenced” to or derived from established learning standards—i.e., concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education. In other words, standards-referenced refers to the use of learning standards to guide what gets taught and tested in schools.
Standards-referenced tests, and other forms of standards-referenced assessment, are designed to measure student performance against a fixed set of predetermined learning standards. In elementary and secondary education, standards-referenced tests evaluate whether students have learned a specific body of knowledge or acquired a specific skill set described in a given set of standards. The terms standards-referenced test and criterion-referenced test are synonymous when the “criteria” being used are learning standards. The terms standards-referenced assessment and criterion-referenced assessment are similarly synonymous. (In education, assessment refers to the wide variety of methods that educators use to evaluate, measure, and document the academic readiness, learning progress, and skill acquisition of students, which includes tests and other methods of evaluation, such as graded assignments, demonstrations of learning, or formative assessments, for example.) For a more detailed discussion of standards-referenced testing, see criterion-referenced test.
A standards-referenced curriculum is a course of study that is guided by learning standards. In other words, the academic knowledge and skills taught in a school, or in a specific course or program, are based on learning standards, typically the learning standards developed and adopted by states. The standards determine the goals of a lesson or course, and teachers then determine how and what to teach students so they achieve the expected learning goals described in the standards. Depending on how broadly educators define or use the term, standards-referenced curriculum may refer to the knowledge, skills, topics, and concepts that are taught to students and/or to the lessons, units, assignments, readings, and materials used by teachers. For related discussions, see alignment, curriculum, coherent curriculum, learning objectives, and learning progression.
The distinction between standard-based and standards-referenced is often a source of confusion among educators and the public—in part because the terms are sometimes used interchangeably, but also because the distinction between the two is both subtle and nuanced. In brief, standards-referenced means that what gets taught or tested is “based” on standards (i.e., standards are the source of the content and skills taught to students—the original “reference” for the lesson), while standards-based refers to the practice of making sure students learn what they were taught and actually achieve the expected standards, and that they meet a defined standard for “proficiency.” In a standards-referenced system, teaching and testing are guided by standards; in a standards-based system, teachers work to ensure that students actually learn the expected material as they progress in their education.
Assessment: Say a teacher designs a standards-referenced test for a history course. While the content of the test may be entirely standards-referenced—i.e., it is aligned with the expectations described in learning standards—a score of 75 may be considered a passing score, suggesting that 25 percent of the taught material was not actually learned by the students who scored a 75. In addition, the teacher may not know what specific standards students have or have not met if only the scores on tests and assignments are summed and averaged. For example, a student may be able to earn a “passing” grade in a ninth-grade English course, but still be unable to “demonstrate command of the conventions of standard English grammar and usage when writing and speaking” or “demonstrate understanding of figurative language, word relationships, and nuances in word meanings”—two ninth-grade standards taken from the Common Core State Standards. If the teacher uses a standards-based approach to assessment, however, students would only “pass” a test or course after demonstrating that they have learned the knowledge and skills described in the expected standards. The students may need to retake a test several times or redo an assignment, or they may need additional help from the teacher or other educational specialist, but the students would need to demonstrate that they learned what they were expected to learn—i.e., the specific knowledge and skills described in standards.
Curriculum: In most high schools, students typically earn credit for passing a course, but a passing grade may be an A or it may be a D, suggesting that the awarded credit is based on a spectrum of learning expectations—with some students learning more and others learning less—rather than on the same learning standards being applied to all students equally. And because grades may be calculated differently from school to school or teacher to teacher, and they may be based on widely divergent learning expectations (for example, some courses may be “harder” and others “easier”), students may pass their courses, earn the required number of credits, and receive a diploma without acquiring the most essential knowledge and skills described in standards. In these cases, the curricula taught in these schools may be standards-referenced, but not standards-based, because teachers are not evaluating whether students have achieved specific standards. In standards-based schools, courses, and programs, however, educators will use a variety of instructional and assessment methods to determine whether students have met the expected standards, including strategies such as demonstrations of learning, personal learning plans, portfolios, rubrics, and capstone projects, to name just a few.
Grading: In a standards-referenced course, grading may look like it traditionally has in schools: students are given numerical scores on a 1–100 scale and class grades represent an average of all scores earned over the course of a semester or year. In a standards-based course, however, “grades” often look quite different. While standards-based grading and reporting may take a wide variety of forms from school to school, grades are typically connected to descriptive standards, not based on test and assignment scores that are averaged together. For example, students may receive a report that shows how they are progressing toward meeting a selection of standards. The criteria used to determine what “meeting a standard” means will be defined in advance, often in a rubric, and teachers will evaluate learning progress and academic achievement in relation to the criteria. The reports students receive might use a 1–4 scale, for example, with 3s and 4s indicating that students have met the standard. In standards-based schools, grades for behaviors and work habits—e.g., getting to class on time, following rules, treating other students respectfully, turning in work on time, participating in class, putting effort into assignments—are also reported separately from academic grades, so that teachers and parents can make distinctions between learning achievement and behavioral issues.
Criterion-referenced tests and assessments are designed to measure student performance against a fixed set of predetermined criteria or learning standards—i.e., concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education. In elementary and secondary education, criterion-referenced tests are used to evaluate whether students have learned a specific body of knowledge or acquired a specific skill set—for example, the curriculum taught in a course, academic program, or content area.
If students perform at or above the established expectations—for example, by answering a certain percentage of questions correctly—they will pass the test, meet the expected standards, or be deemed “proficient.” On a criterion-referenced test, every student taking the exam could theoretically fail if they don’t meet the expected standard; alternatively, every student could earn the highest possible score. On criterion-referenced tests, it is not only possible, but desirable, for every student to pass the test or earn a perfect score. Criterion-referenced tests have been compared to driver’s-license exams, which require would-be drivers to achieve a minimum passing score to earn a license.
Norm-referenced tests are designed to rank test takers on a “bell curve,” or a distribution of scores that resembles, when graphed, the outline of a bell—i.e., a small percentage of students performing poorly, most performing average, and a small percentage performing well. To produce a bell curve each time, test questions are carefully designed to accentuate performance differences among test takers—not to determine if students have achieved specified learning standards, learned required material, or acquired specific skills. Unlike norm-referenced tests, criterion-referenced tests measure performance against a fixed set of criteria.
Criterion-referenced tests may include multiple-choice questions, true-false questions, “open-ended” questions (e.g., questions that ask students to write a short response or an essay), or a combination of question types. Individual teachers may design the tests for use in a specific course, or they may be created by teams of experts for large companies that have contracts with state departments of education. Criterion-referenced tests may be high-stakes tests—i.e., tests that are used to make important decisions about students, educators, schools, or districts—or they may be “low-stakes tests” used to measure the academic achievement of individual students, identify learning problems, or inform instructional adjustments.
Well-known examples of criterion-referenced tests include Advanced Placement exams and the National Assessment of Educational Progress, which are both standardized tests administered to students throughout the United States. When testing companies develop criterion-referenced standardized tests for large-scale use, they usually have committees of experts determine the testing criteria and passing scores, or the number of questions students will need to answer correctly to pass the test. Scores on these tests are typically expressed as a percentage.
It should be noted that passing scores—or “cut-off scores“—on criterion-referenced tests are judgment calls made by either individuals or groups. It’s theoretically possible, for example, that a given test-development committee, if it had been made up of different individuals with different backgrounds and viewpoints, would have determined different passing scores for a certain test. For example, one group might determine that a minimum passing score is 70 percent correct answers, while another group might establish the cut-off score at 75 percent correct. For a related discussion, see proficiency.
Criterion-referenced tests created by individual teachers are also very common in American public schools. For example, a history teacher may devise a test to evaluate understanding and retention of a unit on World War II. The criteria in this case might include the causes and timeline of the war, the nations that were involved, the dates and circumstances of major battles, and the names and roles of certain leaders. The teacher may design a test to evaluate student understanding of the criteria and determine a minimum passing score.
While criterion-referenced test scores are often expressed as percentages, and many have minimum passing scores, the test results may also be scored or reported in alternative ways. For example, results may be grouped into broad achievement categories—such as “below basic,” “basic,” “proficient,” and “advanced”—or reported on a 1–5 numerical scale, with the numbers representing different levels of achievement. As with minimum passing scores, proficiency levels are judgment calls made by individuals or groups that may choose to modify proficiency levels by raising or lowering them.
To determine if students have learning gaps or academic deficits that need to be addressed. For a related discussion, see formative assessment.
To evaluate the effectiveness of a course, academic program, or learning experience by using “pre-tests” and “post-tests” to measure learning progress over the duration of the instructional period.
To evaluate the effectiveness of teachers by factoring test results into job-performance evaluations. For a related discussion, see value-added measures.
To measure progress toward the goals and objectives described in an “individualized education plan” for students with disabilities.
To determine if a student or teacher is qualified to receive a license or certificate.
To measure the academic achievement of students in a given state, usually for the purposes of comparing academic performance among schools and districts.
To measure the academic achievement of students in a given country, usually for the purposes of comparing academic performance among nations. A few widely used examples of international-comparison tests include the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS).
Criterion-referenced tests are the most widely used type of test in American public education. All the large-scale standardized tests used to measure public-school performance, hold schools accountable for improving student learning results, and comply with state or federal policies—such as the No Child Left Behind Act—are criterion-referenced tests, including the assessments being developed to measure student achievement of the Common Core State Standards. Criterion-referenced tests are used for these purposes because the goal is to determine whether educators and schools are successfully teaching students what they are expected to learn.
Criterion-referenced tests are also used by educators and schools practicing proficiency-based learning, a term that refers to systems of instruction, assessment, grading, and academic reporting that are based on students demonstrating mastery of the knowledge and skills they are expected to learn before they progress to the next lesson, get promoted to the next grade level, or receive a diploma. In most cases, proficiency-based systems use state learning standards to determine academic expectations and define “proficiency” in a given course, content area, or grade level. Criterion-referenced tests are one method used to measure academic progress and achievement in relation to standards.
To hold schools and educators accountable for educational results and student performance. In this case, test scores are used as a measure of effectiveness, and low scores may trigger a variety of consequences for schools and teachers.
To evaluate whether students have learned what they are expected to learn. In this case, test scores are seen as a representative indicator of student achievement.
To identify gaps in student learning and academic progress. Test scores may be used, along with other information about students, to diagnose learning needs so that educators can provide appropriate services, instruction, or academic support.
To identify achievement gaps among different student groups. Students of color, students who are not proficient in English, students from low-income households, and students with physical or learning disabilities tend to score, on average, well below white students from more educated, higher income households on standardized tests. In this case, exposing and highlighting achievement gaps may be seen as an essential first step in the effort to educate all students well, which can lead to greater public awareness and resulting changes in educational policies and programs.
To determine whether educational policies are working as intended. Elected officials and education policy makers may rely on standardized-test results to determine whether their laws and policies are working as intended, or to compare educational performance from school to school or state to state. They may also use the results to persuade the public and other elected officials that their policies are in the best interest of children and society.
The widespread use of high-stakes standardized tests in the United States has made criterion-referenced tests an object of criticism and debate. While many educators believe that criterion-referenced tests are a fair and useful way to evaluate student, teacher, and school performance, others argue that the overuse, and potential misuse, of the tests could have negative consequences that outweigh their benefits.
The tests are better suited to measuring learning progress than norm-referenced exams, and they give educators information they can use to improve teaching and school performance.
The tests are fairer to students than norm-referenced tests because they don’t compare the relative performance of students; they evaluate achievement against a common and consistently applied set of criteria.
The tests apply the same learning standards to all students, which can hold underprivileged or disadvantaged students to the same high expectations as other students. Historically, students of color, students who are not proficient in English, students from low-income households, and students with physical or learning disabilities have suffered from lower academic achievement, and many educators contend that this pattern of underperformance results, at least in part, from lower academic expectations. Raising academic expectations for these student groups, and making sure they reach those expectations, is believed to promote greater equity in education.
The tests can be constructed with open-ended questions and tasks that require students to use higher-level cognitive skills such as critical thinking, problem solving, reasoning, analysis, or interpretation. Multiple-choice and true-false questions promote memorization and factual recall, but they do not ask students to apply what they have learned to solve a challenging problem or write insightfully about a complex issue, for example. For a related discussion, see 21st century skills and Bloom’s taxonomy.
The tests are only as accurate or fair as the learning standards upon which they are based. If the standards are vaguely worded, or if they are either too difficult or too easy for the students being evaluated, the associated test results will reflect the flawed standards. A test administered in eleventh grade that reflects a level of knowledge and skill students should have acquired in eighth grade would be one general example. Alternatively, tests may not be appropriately “aligned” with learning standards, so that even if the standards are clearly written, age appropriate, and focused on the right knowledge and skills, the test might not be designed well enough to accurately measure achievement of the standards.
The process of determining proficiency levels and passing scores on criterion-referenced tests can be highly subjective or misleading—and the potential consequences can be significant, particularly if the tests are used to make high-stakes decisions about students, teachers, and schools. Because reported “proficiency” rises and falls in direct relation to the standards or cut-off scores used to make a proficiency determination, it’s possible to manipulate the perception and interpretation of test results by raising or lowering either the standards or the passing scores. And when educators are evaluated based on test scores, their job security may rest on potentially misleading or flawed results. Even the reputations of national education systems can be negatively affected when a large percentage of students fail to achieve “proficiency” on international assessments.
The subjective nature of proficiency levels allows the tests to be exploited for political purposes to make it appear that schools are either doing better or worse than they actually are. For example, some states have been accused of lowering proficiency standards on standardized tests to increase the number of students achieving “proficiency,” and thereby avoid the consequences—negative press, public criticism, large numbers of students being held back or denied diplomas (in states that base graduation eligibility on test scores)—that may result from large numbers of students failing to achieve expected or required proficiency levels.
If the tests primarily utilize multiple-choice questions—which, in the case of standardized testing, makes scoring faster and less expensive because it can be done by computers rather than human scorers—they will promote rote memorization and factual recall in schools, rather than the higher-order thinking skills students will need in college, careers, and adult life. For example, the overuse or misuse of standardized testing can encourage a phenomenon known as “teaching to the test,” which means that teachers focus too much on test preparation and the academic content that will be evaluated by standardized tests, typically at the expense of other important topics and skills.
The term student outcomes typically refers to either (1) the desired learning objectives or standards that schools and teachers want students to achieve, or (2) the educational, societal, and life effects that result from students being educated. In the first case, student outcomes are the intended goals of a course, program, or learning experience; in the second case, student outcomes are the actual results that students either achieve or fail to achieve during their education or later on in life. The terms learning outcomes and educational outcomes are common synonyms.
While the term student outcomes is widely and frequently used by educators, it may be difficult to determine precisely what is being referred to when the term is used without qualification, specific examples, or additional explanation. When investigating or reporting on student outcomes, it is important to determine precisely how the term is being defined in a specific educational context. In some cases, for example, the term may be used in a general or undefined sense (“Our school is working to improve student outcomes”), while in others it may have a specific pedagogical or technical meaning (“The student outcomes for this course are X, Y, and Z”).
Instructional outcomes: Schools and teachers may define student outcomes as the knowledge, skills, and habits of work that students are expected to acquire by the end of an instructional period, such as a course, program, or school year. In this sense, the term may be synonymous with learning objectives or learning standards, which are brief written statements that describe what students should know and be able to do. Teachers often establish instructional goals for a course, project, or other learning experience, and those goals may then be used to guide what and how they teach (a process that is sometimes called “backwards planning” or “backward design”). While the term student outcomes may be used in this sense, terms such as learning objective and learning target are more common.
Educational outcomes: The results achieved by schools may also be considered “student outcomes” by educators and others, including results such as standardized-test scores, graduation rates, and college-enrollment rates. In this sense, the term may be synonymous with student achievement, since achievement typically implies education-specific results such as improvements in test scores.
Societal and life outcomes: In some cases, the term student outcomes, and synonyms such as educational outcomes, may imply broader, more encompassing, and more far-reaching educational results, including the impact that education has on individuals and society. For example, higher employment rates, lower incarceration rates, better health, reduced dependency on social services, and increased civic participation—e.g., higher voting rates, volunteerism rates, or charitable giving—have all been correlated with better education.
Long-term English learner (or LTEL) is a formal educational classification given to students who have been enrolled in American schools for more than six years, who are not progressing toward English proficiency, and who are struggling academically due to their limited English skills. States, districts, and schools determine the criteria and student characteristics used to identify long-term English learners, but definitions and classification criteria may vary widely from place to place. Given that these students are typically identified after six or more years of enrollment in formal education, long-term English learners are most commonly enrolled in middle schools and high schools. While some long-term English learners come from immigrant families, the majority are American citizens who have lived most or all of their lives in the United States.
Generally speaking, long-term English learners struggle with reading, writing, and academic language—the oral, written, auditory, and visual language proficiency and understanding required to learn effectively in academic programs—and consequently they have fallen behind their English-speaking peers academically and have accumulated significant learning gaps over the course of their education. While many long-term English learners are bilingual and articulate in English, and many sound like native English speakers, they typically have limited writing and reading skills in both their native language and in English, and their academic-literacy skills in English are not as well developed as their social-language abilities.
The defining characteristic of long-term English learners is that their English-language deficits have grown more severe and consequential over time, which has negatively affected their ability to achieve their full academic potential. Many long-term English learners have also developed habits of social detachment, academic disengagement, or learned passivity, and while many aspire to attend college or pursue professional careers, the students may be unaware that their academic experiences are not adequately preparing them for these aspirations. Long-term English learners are also more likely to be held back or drop out of school.
The students may have gone without adequate English-language instruction for an extended period of time due to a family relocation or disruption in their formal schooling. For example, their family may have moved to another country or to a school system that was not equipped to adequately teach and support English-language learners, or the students may have come from a country experiencing political, social, or economic upheavals, which prevented them from attending school for long periods of time.
The students may have been enrolled in weak, poorly designed, or ineffective language-development programs that did not improve or accelerate their English proficiency. For example, many schools with small populations of English-language learners do not have the experience, expertise, or resources needed to create effective English-language instructional programs.
The elementary schools the students attended may not have been equipped to teach and support English-language learners, which may have delayed their acquisition of English proficiency, academic language, and the foundational knowledge and skills acquired by their English-speaking peers. For example, many teachers have not received training in the specialized instructional strategies required to teach English-language learners effectively, and some districts and schools do not have the resources needed to hire teachers with expertise in teaching English-language learners.
The students may have been misidentified by poorly designed diagnostic assessments or biased tests that led to their enrollment in inappropriate courses and programs. For example, the students may have been enrolled in special-education programs for native English speakers that did not help them develop their English proficiency, or they may have been identified as “struggling readers” rather than English-language learners who require specialized English-language instruction. For a related discussion, see test accommodations.
The families of long-term English learners may have been unable to advocate for their children due to cultural or linguistic barriers. For example, some immigrant parents may be unaccustomed to American schools and cultural expectations, and consequently they may not have the confidence or language ability needed to navigate school policies and request specialized services for their children.
The students may have experienced social, cultural, and linguistic isolation in school, or some may have experienced overt neglect, bias, or racism. For example, the “outsider status,” cultural exclusion, or sense of alienation that some long-term English learners feel could lead to a disinterest in school or in improving their English-language skills, while school programs and policies may intentionally or unintentionally limit or deny students access to specialized English-language instruction.
For more detailed discussions, including relevant reforms and debates, see academic language, dual-language education, English-language learner, and multicultural education.
In schools, common planning time refers to any period of time that is scheduled during the school day for multiple teachers, or teams of teachers, to work together.
In most cases, common planning time is considered to be a form of professional development, since its primary purpose is to bring teachers together to learn from one another and collaborate on projects that will lead to improvements in lesson quality, instructional effectiveness, and student achievement. Generally speaking, these improvements result from (1) the improved coordination and communication that occurs among teachers who meet and talk regularly, (2) the learning, insights, and constructive feedback that occur during professional discussions among teachers, and (3) the lessons, units, materials, and resources that are created or improved when teachers work on them collaboratively. While common planning time may be used for other purposes in some schools and situations—for example, staff members may use the time to coordinate an academic program or school-improvement initiative—the term is predominantly associated with teaching-related planning and work.
Discussing teacher work: Teachers may collectively review lesson plans or assessments that have been used in a class, and then offer critical feedback and recommendations for improvement.
Discussing student work: Teachers may look at examples of student work turned in for a class, and then offer recommendations on how lessons or teaching approaches may be modified to improve learning and the quality of student work.
Discussing student data: Teachers may analyze student-performance data from a class to identify trends—such as which students are consistently failing or underperforming—and collaboratively develop proactive teaching and support strategies to help students who may be struggling academically. By discussing the students they have in common, teachers can develop a stronger understanding of the specific learning needs and abilities of certain students, which can then help them coordinate and improve how those students are taught.
Discussing professional literature: Teachers may select a text to read, such as a research study or an article about a specialized instructional technique, and then engage in a focused conversation about the text and how it can help inform or improve their teaching techniques.
Creating courses and curriculum: Teachers may collaboratively work on lesson plans, assignments, projects, and new courses, such as an interdisciplinary course taught by two teachers from different subject areas (for example, an art-history course taught by an art teacher and a history teacher). Teachers may also plan or develop other types of learning experiences, such as capstone projects, demonstrations of learning, learning pathways, personal learning plans, or portfolios, for example.
Common planning time can be contrasted with “teacher preparation time” or “prep periods,” which are periods of time during the school day when individual teachers, typically working on their own, can plan and prepare for their classes, meet with students, or grade assignments. Common planning time could be considered an evolution of the traditional preparation period, and in recent decades there has been a growing movement in education to encourage more frequent and purposeful collaboration among educators.
Professional learning communities: A widely used professional-development strategy in schools, professional learning communities are groups of educators who meet regularly, share expertise, and work collaboratively to improve their teaching skills and the academic performance of their students. In some schools, the terms common planning time and professional learning community (or any of its many synonyms) may be used interchangeably, particularly when the time is largely or entirely devoted to activities commonly associated with professional learning communities. For a more detailed discussion, see professional learning community.
Teaming: Another widely used school-improvement strategy, teaming pairs a group of teachers (typically between four and six) with sixty to eighty students. The general goal of teaming is to ensure that students are well known by a core group of adults in the school, that their learning needs are understood and addressed, and that they receive the social, emotional, and academic support from teachers and staff that they need to succeed in school. Common planning time is often provided to teachers on a particular team to help them plan and coordinate team-related projects and work. For a more detailed discussion, see teaming.
While the common planning time concept is not typically an object of debate, skeptics may question whether the time will actually have a positive impact on student learning, whether teachers will use the time purposefully and productively, or whether students would be better served if teachers spent more of their time teaching. Since it is often extremely difficult, from a research perspective, to attribute gains in student performance to any one influence in a school (because so many potential factors can affect performance, including familial or socioeconomic dynamics outside of a school’s control), the benefits of common planning time may be difficult to measure objectively and reliably.
It is more likely, however, that common planning time will be criticized or debated when the time is poorly used or facilitated, when meetings become disorganized and unfocused, when teachers have negative experiences during meetings, and when the practice is perceived as a burdensome administrative requirement—rather than, say, an opportunity to improve one’s teaching skills. Like any school-improvement strategy or program, the quality of the design and execution will typically determine the results achieved. If meetings are poorly facilitated and conversations lapse into complaints about policies or personalities, or if educators fail to turn group learning into actual changes in instructional techniques, common planning time is less likely to be successful.
Teachers may assume more leadership responsibility or feel a greater investment in a school-improvement process.
The faculty culture may improve, and professional relationships can become stronger and more trusting if the faculty is interacting and communicating more productively.
More instructional innovation may take hold in classrooms and academic programs, and teachers may begin incorporating effective instructional techniques that are being used by colleagues.
Competing responsibilities and logistical issues can make the scheduling of regular common planning time difficult. Insufficient meeting time or irregularly scheduled time may then undermine the strategy and its intended benefits.
Inadequate training for group facilitators could produce ineffective facilitation, disorganized meetings, and an erosion of confidence in the strategy.
A lack of clear, explicit goals for common planning time can lead to unfocused conversations, misspent time, and general confusion about the purpose of the meetings.
A negative school or faculty culture could contribute to tensions, conflicts, factions, and other issues that undermine the potential benefits of common planning time.
A lack of observable, measurable progress or student-achievement gains can erode support, motivation, and enthusiasm for the strategy.
Highly divergent educational philosophies, belief systems, or learning styles can lead to disagreements that undermine the collegiality and sense of shared purpose typically required to make common planning time successful.
Social promotion is the practice of promoting students to the next grade level even when they have not learned the material they were taught or achieved expected learning standards. Social promotion is often contrasted with retention, the practice of holding students back and making them repeat a grade when they fail to meet academic expectations, or strategies such as proficiency-based learning, which may require students to demonstrate they have achieved academic expectations before they are promoted to the next grade level.
Generally speaking, the practice is called “social” promotion because non-academic factors and considerations, including societal pressures and expectations, influence promotion decisions. For example, educators and parents may not want to separate a young student from his or her friends or peer group, a school or community may not want a top athlete to lose his or her eligibility to play sports, or schools may not want to experience the consequences and public embarrassment that may result if significant numbers of students are held back. Considerations about the “socialization” of students—how they will learn to interact productively with peers and navigate social situations and expectations—also influence promotion decisions, particularly during the elementary grades. For example, educators may not want to damage a student’s self-esteem or put him or her at greater risk of suffering from the social, emotional, behavioral, and psychological problems often associated with grade retention. In these cases, promoting students, even though they did not meet academic expectations, is perceived to be in the best interests of the student. In a word, social promotion may result from a wide variety of educational, cultural, and socioeconomic causes—far too many to extensively catalogue here.
It should be noted that “social promotion” is largely used as a pejorative term, which complicates any attempt to define the concept since it carries negative connotations—the implication is that students are being promoted even though they are not academically ready and haven’t “earned” the promotion. The issue, however, is far more complicated in practice. While most debates about the topic are often framed as an either/or option between social promotion and grade retention, many observers have suggested that this dichotomy is both misleading and unhelpful, given that grade promotion may be the best option for students who have failed to meet academic expectations (because holding students back can have harmful effects), and grade retention is not the only solution to inadequate academic preparation (because schools can use a wide variety of academic support strategies to accelerate learning and help students catch up).
The central issue in social promotion is not the act of promotion, per se, but the problems associated with students progressing through the educational system when they haven’t learned what they were supposed to learn. The distinction here is subtle but significant. For example, are learning gaps growing over time and becoming more severe and consequential with each passing grade? Or are learning gaps being addressed and reduced over time, even though students have not met all academic expectations before being promoted from one grade to the next? In the first case, social promotion can have a variety of negative consequences for students and schools, while in the second case promotion is relatively harmless because the real problem—students not learning—is being addressed over time. When investigating or reporting on social promotion, it is important to scrutinize all the factors associated with the practice, including the benefits or consequences it may have for students in specific situations.
While there is no way to determine precisely how prevalent social promotion is in public schools, the practice appears to be both common and widespread, particularly among students of color, students from lower-income households, students who are not proficient in English, and students with disabilities—i.e., groups more likely to experience cultural, socioeconomic, and educational inequality. One national survey, for example, found that a majority of participating public-school teachers reported that they promoted academically underprepared students in the preceding year. In addition, scores on standardized tests indicate that learning gaps are both significant and persistent across grade levels throughout the United States.
Given that the causes and forms of social promotion are both many and complex, states, districts, and schools may use a wide variety of strategies to eliminate or reduce social promotion. For example, proficiency-based learning, and related strategies such as demonstrations of learning, may require students to demonstrate achievement of expected learning standards before they are promoted to the next grade level. High-stakes tests—which may trigger penalties for underperforming schools, teachers, and students—are another strategy, although an extremely controversial one, used by states and policy makers. For example, students who perform poorly on some standardized tests may not be promoted to the next grade level until they achieve a minimum score on a test.
Educators may also use any number of academic support and acceleration strategies to help students meet expected learning standards or catch up with their peers academically, which would therefore render social promotion unnecessary. For example, “early warning systems” are used by educators to proactively identify students who may be at greater risk of dropping out or struggling academically, socially, and emotionally in school. Most early warning systems consist of educators collecting and analyzing student data—such as test scores, course grades, failure rates, absences, and behavioral incidents—before students begin a grade, and then using that information to provide the most appropriate academic programming, support, and services to help the students succeed. Schools are also increasingly using a variety of strategies known as “expanded learning time” to increase the amount of time students are learning in school, outside of school, and during vacation breaks, which can help students who have fallen behind academically catch up with their peers.
Other methods used to reduce learning gaps focus on the underlying causes of student underperformance, including a lack of sufficient interest, motivation, ambition, or aspirations. In these cases, educators may employ strategies that attempt to engage student interests and ambitions or provide them with more “personalized” instruction, lessons, and support. Strategies such as differentiation, learning pathways, and personalized learning would be three representative examples.
Social promotion is a complex problem with complex causes. While low-performing schools and poor-quality teaching certainly contribute to inadequate academic preparation, larger cultural and socioeconomic forces—from racism to income inequality to disparity in educational attainment—significantly contribute to the achievement gaps and opportunity gaps that often play a part in social promotion. For these reasons, debates about social promotion tend to be multifaceted and nuanced, and the practice has far-reaching implications for schools and society—with no easy solutions.
Critics of social promotion tend to argue that promoting academically unprepared students not only does an injustice to the students, but it can exacerbate the problems associated with the practice. For example, students are likely to fall further and further behind academically, which can increase the chances that they will not catch up with their peers and drop out of school before graduating. In addition, teachers in higher grades may become increasingly burdened with underprepared students who will not only require more time, attention, and resources, but who will also be placed into courses alongside students who are ready for more challenging lessons and instruction. In these cases, the teacher’s job may become significantly more difficult, and the prepared students may not receive the attention and instruction they need and deserve. Or conversely, the unprepared students will require more of a teacher’s time and attention, which can negatively affect instructional quality for prepared students.
Critics of the practice may also argue that social promotion sends a variety of messages to students—for example, meeting expectations is optional, low-quality work is acceptable, or failure will still be rewarded—that could have negative long-term consequences for both the students and society. When social promotion becomes institutionalized or widespread, it can also mislead parents, policy makers, and the public into believing that students are making adequate progress in school and succeeding academically, when in fact their academic progress may be masking deeper, underlying problems in the system.
Critics may also argue that social promotion keeps students in the same type of educational settings, courses, and programs that are simply not working for them. In this case, if learning needs were being accurately diagnosed, the rationale goes, students could be placed into alternative courses and programs where they would be more likely to receive the kind of instruction and support they need to succeed in school (rather than being forced to retake the same courses that did not work the first time around, which may happen in the case of grade retention). Critics of social promotion may also cite research indicating that holding all students to high academic expectations increases their academic achievement and preparation, no matter where they started from, or that strong teaching and academic support can accelerate learning progress by months or years, thereby avoiding the need to socially promote students.
Those who support social promotion, or believe that it may be beneficial to students in certain cases, may argue that holding students back and making them repeat grades can have a variety of negative consequences: it will separate students from their natural peer group; it may increase the chances that they will struggle academically or drop out of school; it may increase the likelihood that they will suffer from low self-esteem, ridicule, and bullying; or it can increase the risk of social, emotional, behavioral, and psychological problems. In these cases, advocates of social promotion may cite research indicating that grade retention can have a variety of negative effects on students, and that in many cases grade retention does not work—students who are held back often never catch up with their peers, and they are at a greater risk of dropping out during adolescence.
In addition, the costs associated with grade retention can be significant, since holding students back effectively adds a year to the total cost of teaching those students (assuming the students remain in school). In this case, social promotion may result from financial pressures and logistical concerns, such as the increased costs and operational complexities associated with holding students back or providing them with the additional teaching and services they need to meet academic expectations.
In education, the term common standards predominately refers to learning standards—concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education—that are used to guide public-school instruction, assessment, and curricula within a country, state, school, or academic field. That said, there are different types of common standards in education that may be used in a variety of ways (see examples below).
In brief, standards are considered “common” when (1) a single set of standards is used throughout an education system, state, district, or school, and (2) when they are applied and evaluated in consistent ways, whether they are learning standards for students or professional standards for educators. For example, standardized tests are one method used to consistently evaluate whether students from different schools and states have achieved expected learning standards.
For more detailed discussions, including relevant debates, see learning standards, proficiency, and high expectations.
Subject-area learning standards: Both national and international organizations that represent specific academic fields and content areas often develop learning standards for their academic disciplines. Typically, committees of experts and specialists develop these learning standards, which are then publicly released for voluntary adoption and use by countries, states, districts, schools, or subject-area professional organizations. The standards developed by the National Council for the Social Studies and the American Alliance for Health, Physical Education, Recreation and Dance would be two examples. State and national governments and agencies also develop subject-area learning standards (see examples below).
International learning standards: Some international organizations representing groups of educators in specific academic disciplines throughout the world develop standards for learning or teaching in a specific academic field. The standards developed by the International Reading Association and the International Society for Technology in Education would be two examples.
National learning standards: Many countries, such as Canada and Singapore, use national learning standards to guide instruction in public schools—i.e., national governmental agencies are responsible for developing and overseeing the learning standards applied to public schools. In the United States, the Common Core State Standards for the subject areas of English language arts and mathematics are two sets of learning standards that have been adopted by a majority of states. Unlike in Canada and Singapore, the federal government does not play a role in developing these learning standards, but their widespread adoption by most states makes them a form of common standards used throughout the country. The Next Generation Science Standards would be another example similar to the Common Core State Standards.
State learning standards: State education agencies (i.e., departments of education) and state-based professional organizations also develop common academic standards for use within a particular state. All fifty states in the United States have established—through legislative action or state rules and requirements—learning standards for the major academic content areas (i.e., English language arts, mathematics, science, social studies, health, etc.). Recently, many states have incorporated the Common Core State Standards into their state learning standards, and many of those same states are participating in the development of the Next Generation Science Standards.
Professional standards: Many membership organizations for educators create common professional standards for their specific academic field or area of expertise—the National Science Teachers Association’s Standards for Science Teacher Preparation, the National Council for the Social Studies’ National Standards for Social Studies Teachers, and Learning Forward’s Standards for Professional Learning would be three examples. In addition to professional standards for teachers, professional standards have been developed for administrators and other school staff, such as guidance counselors, school psychologists, or athletic coaches. Professional standards may also be developed or adopted by state education agencies and other governing bodies, which then use the standards to guide job-performance evaluations or teaching licensure and certification, for example. Professional standards may be applied at the international, national, state, and organizational levels, and they typically describe expectations for competence, behavior, and professional growth.
Accreditation standards: Organizations and agencies that accredit schools, academic institutions, and teacher-education programs also develop and use common standards during the accreditation process. In these cases, the common standards may be used in the evaluation of schools and programs in a given state or region. For example, the New England Association of Schools and Colleges accredits public schools, career and technical education programs, and postsecondary institutions in the northeastern United States, and it uses different sets of standards for the different types of schools it accredits.
In education, learning objectives are brief statements that describe what students will be expected to learn by the end of a school year, course, unit, lesson, project, or class period. In many cases, learning objectives are the interim academic goals that teachers establish for students who are working toward meeting more comprehensive learning standards.
Defining learning objective is complicated by the fact that educators use a wide variety of terms for learning objectives, and the terms may or may not be used synonymously from place to place. For example, the terms student learning objective, benchmark, grade-level indicator, learning target, performance indicator, and learning standard—to name just a few of the more common terms—may refer to specific types of learning objectives in specific educational contexts. Educators also create a wide variety of homegrown terms for learning objectives—far too many to catalog here. For these reasons, this entry describes only a few general types and characteristics.
While educators use learning objectives in different ways to achieve a variety of instructional goals, the concept is closely related to learning progressions, or the purposeful sequencing of academic expectations across multiple developmental stages, ages, or grade levels. Learning objectives are a way for teachers to structure, sequence, and plan out learning goals for a specific instructional period, typically for the purpose of moving students toward the achievement of larger, longer-term educational goals such as meeting course learning expectations, performing well on a standardized test, or graduating from high school prepared for college. For these reasons, learning objectives are a central strategy in proficiency-based learning, which refers to systems of instruction, assessment, grading, and academic reporting that are based on students demonstrating understanding of the knowledge and skills they are expected to learn before they progress to the next lesson, get promoted to the next grade level, or receive a diploma (learning objectives that move students progressively toward the achievement of academic standards may be called performance indicators or performance benchmarks, among other terms).
Learning objectives are also increasingly being used in the job-performance evaluations of teachers, and the term student learning objectives is commonly associated with this practice in many states. For a more detailed discussion, including relevant reforms and debates on the topic, see value-added measures and student-growth measures.
Learning objectives are also a way to establish and articulate academic expectations for students so they know precisely what is expected of them. When learning objectives are clearly communicated to students, the reasoning goes, students will be more likely to achieve the presented goals. Conversely, when learning objectives are absent or unclear, students may not know what’s expected of them, which may then lead to confusion, frustration, or other factors that could impede the learning process.
School-year or grade-level objectives: In this case, learning objectives may be synonymous with learning standards, which are concise, written descriptions of what students are expected to know and be able to do at a specific stage of their education. Grade-level learning objectives describe what students should achieve academically by the end of a particular grade level or grade span (terms such as grade-level indicators or grade-level benchmarks may be used in reference to these learning objectives or standards).
Course or program objectives: Teachers may also determine learning objectives for courses or other academic programs, such as summer-school sessions or vacation-break programs. In this case, the objectives may be the same academic goals described in learning standards (in the case of a full-year course, for example), or they may describe interim goals (for courses that are shorter in duration).
Unit or project objectives: Teachers may determine learning objectives for instructional units, which typically comprise a series of lessons focused on a specific topic or common theme, such as a historical period. In the case of project-based learning—an instructional approach that utilizes multifaceted projects as a central organizing strategy for educating students—teachers may determine learning objectives for the end of a long-term project rather than a unit.
Lesson or class-period objectives: Teachers may also articulate learning objectives for specific lessons that compose a unit, project, or course, or they may determine learning objectives for each day they instruct students (in this case, the term learning target is often used). For example, teachers may write a set of daily learning objectives on the blackboard, or post them to an online course-management system, so that students know what the learning expectations are for a particular class period. In this case, learning objectives move students progressively toward meeting more comprehensive learning goals for a unit or course.
Descriptive statements: Learning objectives may be expressed as brief statements describing what students should know or be able to do by the end of a defined instructional period. For example: Explain how the Constitution establishes the separation of powers among the three branches of the United States government—legislative, executive, and judicial—and articulate the primary powers held by each branch. State learning standards, which may comprise a variety of learning objectives, are commonly expressed as descriptive statements.
“I can” statements: Teachers may choose to express learning objectives as “I can” statements as a way to frame the objectives from a student standpoint. The basic idea is that “I can” statements encourage students to identify with the learning goals, visualize themselves achieving the goals, or experience a greater sense of personal accomplishment when the learning objectives are achieved. For example: I can explain how the Constitution establishes the separation of powers among the three branches of the United States government—legislative, executive, and judicial—and I can articulate the primary powers held by each branch.
“Students will be able to” statements: “Students will be able to” statements are another commonly used format for learning objectives, and the abbreviation SWBAT may be used in place of the full phrase. For example: SWBAT explain how the Constitution establishes the separation of powers among the three branches of the United States government—legislative, executive, and judicial—and articulate the primary powers held by each branch.
In education, the term stakeholder typically refers to anyone who is invested in the welfare and success of a school and its students, including administrators, teachers, staff members, students, parents, families, community members, local business leaders, and elected officials such as school board members, city councilors, and state representatives. Stakeholders may also be collective entities, such as local businesses, organizations, advocacy groups, committees, media outlets, and cultural institutions, in addition to organizations that represent specific groups, such as teachers unions, parent-teacher organizations, and associations representing superintendents, principals, school boards, or teachers in specific academic disciplines (e.g., the National Council of Teachers of English or the Vermont Council of Teachers of Mathematics). In a word, stakeholders have a “stake” in the school and its students, meaning that they have personal, professional, civic, or financial interest or concern.
In some cases, the term may be used in a more narrow or specific sense—say, in reference to a particular group or committee—but the term is commonly used in a more general and inclusive sense. The term “stakeholders” may also be used interchangeably with the concept of a “school community,” which necessarily comprises a wide variety of stakeholders.
The idea of a “stakeholder” intersects with many school-reform concepts and strategies—such as leadership teams, shared leadership, and voice—that generally seek to expand the number of people involved in making important decisions related to a school’s organization, operation, and academics. For example, shared leadership entails the creation of leadership roles and decision-making opportunities for teachers, staff members, students, parents, and community members, while voice refers to the degree to which schools include and act upon the values, opinions, beliefs, perspectives, and cultural backgrounds of the people in their community. Stakeholders may participate on a leadership team, take on leadership responsibilities in a school, or give “voice” to their ideas, perspectives, and opinions during community forums or school-board meetings, for example.
Stakeholders may also play a role in community-based learning, which refers to the practice of connecting what is being taught in a school to its surrounding community, which may include local history, literature, and cultural heritages, in addition to local experts, institutions, and natural environments. Community-based learning is also motivated by the belief that all communities have intrinsic educational assets that educators can use to enhance learning experiences for students, so stakeholders are necessarily involved in the process.
Generally speaking, the growing use of stakeholder in public education is based on the recognition that schools, as public institutions supported by state and local tax revenues, are not only part of and responsible to the communities they serve, but they are also obligated to involve the broader community in important decisions related to the governance, operation, or improvement of the school. Increasingly, schools are being more intentional and proactive about involving a greater diversity of stakeholders, particularly stakeholders from disadvantaged communities and backgrounds or from groups that have historically been underserved by schools or that have underperformed academically, including English-language learners, students of color, immigrant students, and special-education students. In some cases, federal or state programs and foundation grants may encourage or require the involvement of multiple stakeholder groups in a school-improvement effort as a condition of funding.
Stakeholder-engagement strategies are also widely considered central to successful school improvement by many individuals and organizations that work with public schools. Because some communities may be relatively uninformed about or disconnected from their local schools, a growing number of educational reformers and reform movements advocate for more inclusive, community-wide involvement in a school-improvement process. The general theory is that by including more members of a school community in the process, school leaders can foster a stronger sense of “ownership” among the participants and within the broader community. In other words, when the members of an organization or community feel that their ideas and opinions are being heard, and when they are given the opportunity to participate authentically in a planning or improvement process, they will feel more invested in the work and in the achievement of its goals, which will therefore increase the likelihood of success.
In some cases, when schools make major organizational, programmatic, or instructional changes—particularly when parents and community members are not informed in advance or involved in the process—it can give rise to criticism, resistance, and even organized opposition. As a reform strategy, involving a variety of stakeholders from the broader community can improve communication and public understanding, while also incorporating the perspectives, experiences, and expertise of participating community members to improve reform proposals, strategies, or processes. In these cases, educators may use phrases such as “securing community support,” “building stakeholder buy-in,” or “fostering collective ownership” to describe efforts being made to involve community stakeholders in a planning and improvement process. In other cases, stakeholders are individuals who have power or influence in a community, and schools may be obligated, by law or social expectation, to keep certain parties informed about the school and involved in its governance.
English-language learners, or ELLs, are students who are unable to communicate fluently or learn effectively in English, who often come from non-English-speaking homes and backgrounds, and who typically require specialized or modified instruction in both the English language and in their academic courses.
Educators use a number of terms when referring to English-language learners, including English learners (or ELs), limited English proficient (LEP) students, non-native English speakers, language-minority students, and either bilingual students or emerging bilingual students. The proliferation of terms, some of which may be used synonymously and some of which may not, can create confusion. For example, the term English-language learner is often used interchangeably with limited English proficient student, but some school districts and states may define the terms differently for distinct classifications of students. Nonetheless, the federal government and many state governments have acknowledged that both terms refer to the same group of students—those with limited proficiency in English. When investigating or reporting on English-language learners, it is important to determine precisely how the term, or a related term, is being defined in a specific educational context. In some cases, for example, the terms are used in a general sense, while in others they may be used in an official or technical sense to describe students with specific linguistic needs who receive specialized educational services.
Generally speaking, English-language learners do not have the English-language ability needed to participate fully in American society or achieve their full academic potential in schools and learning environments in which instruction is delivered largely or entirely in English. In most cases, students are identified as “English-language learners” after they complete a formal assessment of their English literacy, during which they are tested in reading, writing, speaking, and listening comprehension; if the assessment results indicate that the students will struggle in regular academic courses, they may be enrolled in either dual-language courses or English as a second language (ESL) programs.
English-language learners may also be students who were formerly classified as limited English proficient, but who have since acquired English-language abilities that have allowed them to transition into regular academic courses taught in English. While assessment results may indicate that they have achieved a level of English literacy that allows them to participate and succeed in English-only learning environments, the students may still struggle with academic language. For this reason, the federal government requires schools and programs receiving federal funding for English-language-learner programs to monitor the academic progress of students and provide appropriate academic support for up to two years after they transition into regular academic courses.
English-language learners are not only the fastest-growing segment of the school-age population in the United States, but they are also a tremendously diverse group representing numerous languages, cultures, ethnicities, nationalities, and socioeconomic backgrounds. While most English-language learners were born in the United States, their parents and grandparents are often immigrants who speak their native language at home. In addition, English-language learners may face a variety of challenges that could adversely affect their learning progress and academic achievement, such as poverty, familial transiency, or non-citizenship status, to name just a few. Some English-language learners are also recently arrived immigrants or refugees who may have experienced war, social turmoil, persecution, and significant periods of educational disruption. In some extreme cases, for example, adolescent-age students may have had little or no formal schooling, and they may suffer from medical or psychological conditions related to their experiences (the term students with interrupted formal education, or SIFE, is often used in reference to this subpopulation of English-language learners).
On average, English-language learners also tend, relative to their English-speaking peers, to underperform on standardized tests, drop out of school at significantly higher rates, and decline to pursue postsecondary education. In school, they may also be enrolled—at significantly higher rates than their English-speaking peers—in lower-level courses taught by underprepared or less-experienced teachers who may not have the specialized training and resources needed to teach English-language learners effectively.
The increase in the number of English-language learners in public schools, coupled with the significant educational challenges faced by this student population, has led to numerous changes in curriculum, instruction, assessment, and teacher preparation. For example, states and national organizations have developed standards to guide curriculum and instruction in English-as-a-second-language programs, while customized teaching and learning materials for English-language learners are now routinely introduced into regular academic courses. In addition, assessments and standardized tests have also been adapted to more accurately measure the academic achievement of English-language learners, and the majority of states now use the same large-scale assessment—the World-Class Instructional Design and Assessment consortium’s ACCESS assessment (Assessing Comprehension and Communication in English State-to-State)—to identify English-language learners, place them into appropriate programs, and measure their academic progress and English acquisition. Teacher-preparation programs and certification requirements have also been modified to address relevant skills and training, and many states and national accrediting associations require formal training in the instruction of English-language learners. And in schools with significant populations of English-language learners, relevant experience, credentials, and training are often given priority during hiring and employment.
Dual-language education, formerly called bilingual education, refers to academic programs that are taught in two languages. While schools and teachers may use a wide variety of dual-language strategies, each with its own specific instructional goals, the programs are typically designed to develop English fluency, content knowledge, and academic language simultaneously.
English as a second language refers to the teaching of English to students with different native or home languages using specially designed programs and techniques. English as a second language is an English-only instructional model, and most programs attempt to develop English skills and academic knowledge simultaneously. It is also known as English for speakers of other languages (ESOL), English as an additional language (EAL), and English as a foreign language (EFL).
Sheltered instruction refers to programs in which English-language learners are “sheltered” together to learn English and academic content simultaneously, either within a regular school or in a separate academy or building. Teachers are specially trained in sheltered instructional techniques that may require a distinct licensure, and there are many different sheltered models and instructional variations.
Given the culturally sensitive and often ideologically contentious nature of the peripheral issues raised by the participation of non-English-speaking students in the American public-education system—including politicized debates related to citizenship status, English primacy, immigration reform, and employment and social-services eligibility for non-citizens—it is perhaps unsurprising that English-language learners, and the instructional methods used to educate them, can become a source of debate. For example, a significant number of states have adopted “English as the official language” statutes, and citizen referendums have passed in other states prohibiting dual-language instruction except in special cases.
The issues of citizenship status and fairness tend to be at the center of debates about English-language learners and the best ways to educate them. Critics often argue that the use of non-English languages in public schools (outside of world-language courses) deemphasizes the role of English as a source of linguistic and cultural unification. While critics generally do not object to bilingualism—the ability to speak two languages—they often contend that non-English instruction inhibits or delays the acquisition of English fluency (though a growing body of research indicates that strengthening reading, writing, speaking, and listening skills in students’ native languages can facilitate their acquisition of English).
While there is widespread agreement that English-language learners should become proficient in English, debates often center on issues related to cultural assimilation. Those who favor assimilation into American society tend to emphasize English-only policies and instruction, while those who favor acculturation tend to argue for the importance of maintaining bicultural identity and bilingual development. In addition, since English-language learners and dual-education programs may require additional funding, training, and staffing, debates about fairness and resource allocation may also arise.
For more detailed discussions, see dual-language education (for debates related to non-English instruction), multicultural education (for debates related to cultural education and assimilation), and test accommodations and test bias (for debates related to the assessment of English-language learners).
Other related entries include equity, learning gap, achievement gap, and opportunity gap.
In education, the term proficiency is used in a variety of ways, most commonly in reference to (1) proficiency levels, scales, and cut-off scores on standardized tests and other forms of assessment, (2) students achieving or failing to achieve proficiency levels determined by tests and assessments, (3) students demonstrating or failing to demonstrate proficiency in relation to learning standards (for a related discussion, see proficiency-based learning); and (4) teachers being deemed proficient or non-proficient on job-performance evaluations.
To understand how proficiency works in educational contexts, it is important to recognize that all proficiency determinations are based on some form of standards or measurement system, and that proficiency levels change in direct relation to the scales, standards, tests, and calculation methods being used to evaluate and determine proficiency. It is therefore possible, for example, to alter the perception of proficiency by lowering standards or cut-off scores on tests, or to overlook that two distinct—and therefore incomparable—proficiency systems are being compared side-by-side, even though different standards, tests, or calculation methods were used to determine proficiency (see Common systems vs. disparate systems below). Because the bar for proficiency can diverge significantly from system to system, state to state, test to test, school to school, and course to course, or from year to year when changes are made to learning standards and accompanying tests, proficiency in education may become a source of confusion, debate, controversy, and even deception.
High standards vs. low standards: One source of debate is related to the standards upon which a proficiency determination is based, and whether the standards are being applied consistently or fairly to produce accurate results. Some may argue, for example, that the standards or cut-off scores for “proficiency” on a given test are too low, and therefore the test results will only produce “false positives”—i.e., they will indicate that students are proficient when they are not. A test administered in eleventh grade that reflects a level of knowledge and skill students should have acquired in eighth grade would be one general example. Because reported “proficiency” rises and falls in direct relation to the standards used to make a proficiency determination, it’s possible to manipulate the perception and interpretation of test results by elevating or lowering standards. Some states, for example, have been accused of lowering proficiency standards to increase the number of students achieving “proficiency,” and thereby avoid the consequences—negative press, public criticism, large numbers of students being held back or denied diplomas (in states that base graduation eligibility on test scores)—that may result from large numbers of students failing to achieve expected or required proficiency levels.
Common systems vs. disparate systems: Since proficiency must be determined by some form of measurement system—whether it’s a certain percentage of correct answers on a test or a highly sophisticated mathematical algorithm, as with value-added measures used in teacher evaluation—proficiency determinations can be more or less accurate based on the quality of the system being used, or they can be comparable (when common systems are used) or incomparable (when disparate systems are used). Confusion may result when there is disagreement about the methods being used to determine proficiency, or when two different systems are being compared even though the results are not comparable in a valid or reliable way. For example, when the Common Core State Standards were adopted by a number of states, the states were then required to use different standardized tests, based on a different set of standards, to determine “proficiency” (i.e., the tests would measure achievement against the more recently adopted Common Core standards, as opposed to the learning standards formerly used by the states). In this case, both the standards and the tests used to measure proficiency have changed significantly, which makes any comparisons between the old system (student test scores from previous years) and the new system (student scores on the new tests) difficult or impossible. Advocates of the Common Core typically argue that the new standards will allow for more consistent comparisons of student performance across state lines—and thereby more reliably or usefully measure student learning—because “common” standards and “common” tests are being used.
Alignment vs. misalignment: Proficiency levels may also rise or fall in relation to the level of alignment between a test and the content actually taught to students. For example, if schools teach a selection of concepts and skills that are not evaluated on a given test, the results may produce a “false negative”—i.e., students may have learned what they were taught, but because the test evaluated content they were not taught, the results are misleading (proficiency is based on the content that was tested, not the content that was taught). The question of alignment and misalignment often arises in debates about learning standards. For example, when states adopt a new set of learning standards, teachers then have to “align” what they teach to the new standards. If the process of alignment is poorly executed or delayed, students may take tests based on the new standards even though what they were taught was still based on an older set of standards. The adoption of the Common Core State Standards by a majority of states has become a source of discussion and debate on this issue.
Learning vs. reporting: As described above, it may be possible for students to learn a lot (or very little) in schools but still appear to have learned very little (or a lot) due to the systems and standards being applied, or due to the misalignment of teaching and testing. Potential confusion and problems, therefore, may stem from the tendency of people to view test scores as accurate, absolute measures of learning, rather than relatively limited indicators of learning that may be potentially flawed or misleading. (For a related discussion, see measurement error.) For example, students may learn important skills in school such as problem solving and researching that are not specifically evaluated by tests, or they may have learned a large body of knowledge, just not the specific knowledge evaluated by a given test or assessment. In these cases, “proficiency” rates on tests—often reported as either percent proficient or proportion proficient—may present only a partial or misleading picture of what students have learned. It is for this reason, among others, that testing experts often recommend against making important decisions about students on the basis of a single test score.
Appropriate vs. inappropriate proficiency levels: Given the issues described above, proficiency determinations are also the object of debates related to the appropriateness or inappropriateness of a given proficiency scale, standard, or system. For example: Is it appropriate to hold a non-English-speaking student to the same proficiency standards, as measured by the same English-language tests, as a native-English-speaking student? Or, similarly, a recently immigrated student who has had very little formal education in her home country? (For a related discussion, see test bias.) Teacher evaluations are another object of debate and controversy on this issue, particularly when it comes to factoring student achievement into performance evaluations. Advocates of using student-achievement indicators, such as test scores, may argue that it is appropriate to consider student achievement, given that it’s a teacher’s job to improve student learning. If the academic achievement of their students is not considered, how is it possible to accurately or meaningfully evaluate teacher performance? Opponents may counter-argue, however, that student achievement is influenced by a host of factors outside of a teacher’s control, such as a student’s prior educational experiences, the socioeconomic status of the student’s parents, or the stability and support present in a student’s home environment. Consequently, it would be inappropriate to hold teachers accountable for factors that are beyond their influence or control. In these cases, proficiency systems and determinations may be debated or disputed when they are perceived to be biased, unfair, or inequitable by one group or another.
When used by educators, the term school community typically refers to the various individuals, groups, businesses, and institutions that are invested in the welfare and vitality of a public school and its community—i.e., the neighborhoods and municipalities served by the school.
In many contexts, the term encompasses the school administrators, teachers, and staff members who work in a school; the students who attend the school and their parents and families; and local residents and organizations that have a stake in the school’s success, such as school-board members, city officials, and elected representatives; businesses, organizations, and cultural institutions; and related organizations and groups such as parent-teacher associations, “booster clubs,” charitable foundations, and volunteer school-improvement committees (to name just a few). In other settings, however, educators may use the term when referring, more specifically, to the sense of “community” experienced by those working, teaching, and learning in a school—i.e., the administrators, faculty, staff, and students. In this case, educators may also be actively working to improve the culture of a school, strengthen relationships between teachers and students, and foster feelings of inclusion, caring, shared purpose, and collective investment.
The term school community also implicitly recognizes the social and emotional attachments that community members may have to a school, whether those attachments are familial (the parents and relatives of students, for example), experiential (alumni and alumnae), professional (those who work in and derive an income from the school), civic (those who are elected to oversee a school or who volunteer time and services), or socioeconomic (interested taxpayers and the local businesses who may employ graduates and therefore desire more educated, skilled, and qualified workers). Depending on the specific context in which the term is used, school community may have more or less inclusive—or more or less precise—connotations.
The “school community” concept is closely related to the concepts of voice and shared leadership, which generally seek to broaden the involvement of more individuals, and more diverse viewpoints, in the governance and programming of a school or district. The idea of a school community may also intersect with leadership teams and the development of mission and vision statements or school-improvement plans—all of which may involve students, parents, and other individuals who are not employed by a school. While the concept is related in some ways to professional learning communities, the “school community” concept is distinct (although the term “learning community” may refer to both school communities and professional learning communities).
The idea of a school community may also have an official, democratic connotation, given that the majority of public schools and districts are overseen by elected school boards or other governing bodies. School boards make and revise school policies, and they authorize certain governance decisions and activities—responsibilities that often extend to the development and approval of school-improvement proposals. In these cases, school-board members are elected to represent “the community” in a direct, official capacity.
Generally speaking, the growing use of school community reflects the recognition that schools, as public institutions supported by state and local tax revenues, are not only part of and responsible to the communities they serve, but they are also obligated to involve the broader community in important decisions related to the governance, operation, or improvement of the school. Increasingly, schools are being more intentional and proactive about involving a greater diversity of community members, particularly those from disadvantaged communities and backgrounds, or from groups that have historically been underserved by schools or that have underperformed academically, including English-language learners, students of color, immigrant students, and special-education students. In some cases, federal or state programs and foundation grants may encourage or require the involvement of multiple community groups in a school-improvement effort as a condition of funding.
Community-engagement strategies are also widely considered central to successful school improvement by many individuals and organizations that work with public schools. Because some communities may be relatively uninformed about or disconnected from their local schools, a growing number of educational reformers and reform movements in recent decades have advocated for more inclusive, community-wide involvement in an improvement process. The general theory is that by including more members of a school community in the process, school leaders can foster a stronger sense of “ownership” among the participants and within the broader community. In other words, when the members of an organization or community feel that their ideas and opinions are being heard, and when they are given the opportunity to participate authentically in a planning or improvement process, they will feel more invested in the work and in the achievement of its goals, which will therefore increase the likelihood of success.
In some cases, when schools make major organizational, programmatic, or instructional changes—particularly when parents and community members are not informed in advance or involved in the process—it can give rise to criticism, resistance, and even organized opposition. As a reform strategy, involving a variety of “stakeholders” from the broader community can improve communication and public understanding, while also incorporating the perspectives, experiences, and expertise of participating community members to improve reform proposals, strategies, or processes. Educators may use phrases such as “securing community support,” “building stakeholder buy-in,” or “fostering collective ownership” to describe efforts being made to involve community members in a planning and improvement process. In other cases, stakeholders are individuals who have power or influence in a community, and schools may be obligated, by law or social expectation, to keep certain parties informed about the school and involved in its governance.
The term alignment is widely used by educators in a variety of contexts, most commonly in reference to reforms that are intended to bring greater coherence or efficiency to a curriculum, program, initiative, or education system.
When the term is used in educational contexts without qualification, specific examples, or additional explanation, it may be difficult to determine precisely what alignment is referring to. In some cases, the term may have a very specific, technical meaning, but in others it may be vague, undecipherable jargon. Generally speaking, the use of alignment tends to become less precise and meaningful when its object grows in size, scope, or ambition. For example, when teachers talk about “aligning curriculum,” they are likely referring to a specific, technical process being used to develop lessons, deliver instruction, and evaluate student learning growth and achievement. On the other hand, some education reports, improvement plans, and policy proposals may refer to the “alignment” of various elements of an education system without describing precisely what might be entailed in the proposed alignment process. And, of course, some “alignments” may be practical, thoughtful strategies that produce tangible improvements in schools and student learning, while others may be unspecific “action items” that never get acted on, or they may be strategies that show promise in theory, but that turn out to be overly complex and burdensome when executed in states, districts, and schools.
Policy: Educators, reformers, policy makers, and elected officials may call for the “alignment of policy and practice.” For example, federal or state laws, regulations, and rules may not be enacted in districts or schools, or educators may not follow policies established by school boards and districts. Or enacted laws and regulations may contradict one another, leaving school leaders and teachers wondering which laws and rules they should follow. In addition, the interpretation and implementation of a given education policy in schools may diverge significantly from the guidance and objectives of a policy, which may then require modifications to—or the alignment of—the policy language and resulting “practices” used by educators. Generally speaking, the alignment of policy usually entails a process of refinement, iteration, clarification, and communication during the development, and following the adoption, of a new policy or set of policies.
Strategy: School leaders may work to “align” the organization and operation of a district or school, including how students are taught, with a given school-improvement plan, reform strategy, or educational model. In this case, the alignment process might entail a wide variety of reforms—from reallocating budgetary expenditures to restructuring school schedules to redesigning courses and lessons—in ways that are intended to achieve the objectives of the improvement plan, while also ensuring that its parts are working together coherently and effectively. For a related discussion, see action plan.
Learning Standards: Educators may work to “align” what and how they teach with a given set of learning standards, such as the Common Core State Standards or the subject-area standards developed by states and national organizations. In this case, modifications may be made to lessons, course designs, academic programs, and instructional techniques so that the concepts and skills described in the standards are taught to students at certain times, in certain sequences, or in certain ways. For related discussions, see learning progression and proficiency-based learning.
Assessment: Teachers may “align” assessments, standards, lessons, and instruction so that the assessments evaluate the material they are teaching in a unit or course. Test-development companies also “align” standardized tests to a state’s learning standards so that test questions and tasks address the specific concepts and skills described in the standards for a certain subject area and grade level. In individual cases, teachers may align assessments and lessons more or less precisely, but developers of large-scale standardized tests utilize sophisticated psychometric strategies intended to improve the validity and accuracy of the assessment results (although this is a source of ongoing debate). For a related discussion, see measurement error.
Curriculum: Educators may “align” curriculum in different ways, but perhaps the most common forms are (1) aligning curriculum—the knowledge, skills, topics, and concepts that are taught to students, and the lessons, units, assignments, readings, and materials used in the teaching process—with specific learning standards, and (2) aligning various curricula within a school, such as the curriculum for a particular course, with other curricula in the school to improve overall coherence and effectiveness. In the second case, for example, educators may align curricula by making sure that courses follow a logical learning sequence, within and across subject areas and grade levels, so that new concepts build on previously taught concepts. For a more detailed discussion, see coherent curriculum.
Professional Development: School leaders, educational experts, reform organizations, and government agencies may “align” professional development—such as training sessions, workshops, conferences, and resources—with the objectives of specific policies, improvement plans, or educational models. For example, state education agencies may provide training sessions for superintendents and principals to help them implement new teacher-evaluation requirements, or districts and schools may contract with experts and outside organizations to help their faculties learn new educational approaches or teaching techniques.
Defining interim assessment is complicated by the fact that educators use a variety of terms for these forms of assessment—such as benchmark assessment or predictive assessment—and the terms may or may not be used synonymously. It should also be noted that there is often confusion and debate about the distinctions between formative assessments and interim assessments, and educators may define the terms differently from school to school or state to state.
Generally speaking, interim assessments fall between formative assessment and summative assessment, and understanding their intended purpose requires an understanding of the basic distinctions between these two assessment strategies.
Formative assessments are used to collect detailed information that educators can use to improve instructional techniques and student learning while it’s happening. Summative assessments, on the other hand, are used to evaluate learning progress and achievement at the conclusion of a specific instructional period—usually at the end of a project, unit, course, semester, program, or school year. In other words, formative assessments are for learning, while summative assessments are of learning. Or as assessment expert Paul Black put it, “When the cook tastes the soup, that’s formative assessment. When the customer tastes the soup, that’s summative assessment.” It should be noted, however, that the distinction between formative and summative is often fuzzy in practice, and educators may hold divergent interpretations of and opinions on the subject.
Like formative assessments, teachers may use interim assessments to identify concepts that students are struggling to understand, skills they are having difficulty mastering, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support. But unlike formative assessments, interim assessments—depending on how they are designed and used—may allow for the comparison of student results across courses, schools, or states, and they may be used by school, district, and state leaders to track the academic progress of certain student populations. The distinction here is between assessments that are used on a daily basis by individual teachers during the instructional process (formative assessments), and either standardized or “common” assessments that are used by multiple teachers, schools, districts, or states, which allow student results to be compared.
A preliminary test developed by a company, organization, or consortium—such as the Smarter Balanced Assessment Consortium—that is intended to evaluate how well students are prepared for a standardized test that will be administered on a future date. In this case, results from the interim assessment would be used by school leaders and teachers to better prepare students for the future test.
A common literacy assessment or rubric that teachers develop to evaluate student learning progress in relation to expected reading standards. In this case, the assessment would be used by multiple teachers in a school or district, and it would be used in advance of a summative literacy assessment. What is “common” in this example is both the assessment being used and the reading standards it is based on. Results from the interim assessment would be used to better prepare students for future assessments.
Tests, assignments, or projects are used to determine whether students have learned what they were expected to learn. In other words, what makes an assessment “summative” is not the design of the test, assignment, or self-evaluation, per se, but the way it is used—i.e., to determine whether and to what degree students have learned the material they have been taught.
Summative assessments are given at the conclusion of a specific instructional period, and therefore they are generally evaluative, rather than diagnostic—i.e., they are more appropriately used to determine learning progress and achievement, evaluate the effectiveness of educational programs, measure progress toward improvement goals, or make course-placement decisions, among other possible applications.
Summative-assessment results are often recorded as scores or grades that are then factored into a student’s permanent academic record, whether they end up as letter grades on a report card or test scores used in the college-admissions process. While summative assessments are typically a major component of the grading process in most districts, schools, and courses, not all assessments considered to be summative are graded.
Summative assessments are commonly contrasted with formative assessments, which collect detailed information that educators can use to improve instruction and student learning while it’s happening. In other words, formative assessments are often said to be for learning, while summative assessments are of learning. Or as assessment expert Paul Black put it, “When the cook tastes the soup, that’s formative assessment. When the customer tastes the soup, that’s summative assessment.” It should be noted, however, that the distinction between formative and summative is often fuzzy in practice, and educators may have divergent interpretations and opinions on the subject.
Standardized tests that are used for the purposes of school accountability, college admissions (e.g., the SAT or ACT), or end-of-course evaluation (e.g., Advanced Placement or International Baccalaureate exams).
Culminating demonstrations of learning or other forms of “performance assessment,” such as portfolios of student work that are collected over time and evaluated by teachers or capstone projects that students work on over extended periods of time and that they present and defend at the conclusion of a school year or their high school education.
While most summative assessments are given at the conclusion of an instructional period, some summative assessments can still be used diagnostically. For example, the growing availability of student data, made possible by online grading systems and databases, can give teachers access to assessment results from previous years or other courses. By reviewing this data, teachers may be able to identify students more likely to struggle academically in certain subject areas or with certain concepts. In addition, students may be allowed to take some summative tests multiple times, and teachers might use the results to help prepare students for future administrations of the test.
It should also be noted that districts and schools may use “interim” or “benchmark” tests to monitor the academic progress of students and determine whether they are on track to mastering the material that will be evaluated on end-of-course tests or standardized tests. Some educators consider interim tests to be formative, since they are often used diagnostically to inform instructional modifications, but others may consider them to be summative. There is ongoing debate in the education community about this distinction, and interim assessments may be defined differently from place to place. See formative assessment for a more detailed discussion.
While educators have arguably been using “summative assessments” in various forms since the invention of schools and teaching, summative assessments have in recent decades become components of larger school-improvement efforts. As they always have, summative assessments can help teachers determine whether students are making adequate academic progress or meeting expected learning standards, and results may be used to inform modifications to instructional techniques, lesson designs, or teaching materials the next time a course, unit, or lesson is taught. Yet perhaps the biggest changes in the use of summative assessments have resulted from state and federal policies aimed at improving public education—specifically, standardized high-stakes tests used to make important decisions about schools, teachers, and students.
While there is little disagreement among educators about the need for or utility of summative assessments, debates and disagreements tend to center on issues of fairness and effectiveness, especially when summative-assessment results are used for high-stakes purposes. In these cases, educators, experts, reformers, policy makers, and others may debate whether assessments are being designed and used appropriately, or whether high-stakes tests are either beneficial or harmful to the educational process. For more detailed discussions of these issues, see high-stakes test, measurement error, test accommodations, test bias, score inflation, standardized test, and value-added measures.
The number of shares outstanding of the registrant’s common stock as of January 30, 2015 was 475,152,112.
(A) Professional Employer Organization (“PEO”) revenues are net of direct pass-through costs, primarily consisting of payroll wages and payroll taxes, of $7,217.4 million and $6,140.4 million for the three months ended December 31, 2014 and 2013, respectively, and $12,953.6 million and $11,087.6 million for the six months ended December 31, 2014 and 2013, respectively.
(A) As of June 30, 2014, $2,015.8 million of short-term marketable securities and $183.8 million of cash and cash equivalents are related to the Company's outstanding commercial paper borrowings (see Note 9).
The accompanying Consolidated Financial Statements and footnotes thereto of Automatic Data Processing, Inc. and its subsidiaries (“ADP” or the “Company”) have been prepared in accordance with accounting principles generally accepted in the United States of America (“U.S. GAAP”). The Consolidated Financial Statements and footnotes thereto are unaudited. In the opinion of the Company’s management, the Consolidated Financial Statements reflect all adjustments, which are of a normal recurring nature, that are necessary for a fair presentation of the Company’s results for the interim periods.
The preparation of financial statements in conformity with U.S. GAAP requires management to make estimates and assumptions that affect the assets, liabilities, revenue, expenses, and other comprehensive income that are reported in the Consolidated Financial Statements and footnotes thereto. Actual results may differ from those estimates. The Consolidated Financial Statements and all relevant footnotes have been adjusted for discontinued operations (see Note 3).
Interim financial results are not necessarily indicative of financial results for a full year. The information included in this Quarterly Report on Form 10-Q should be read in conjunction with the Company’s Annual Report on Form 10-K for the fiscal year ended June 30, 2014 (“fiscal 2014”).
In May 2014, the Financial Accounting Standards Board ("FASB") issued Accounting Standards Update ("ASU") 2014-09, "Revenue from Contracts with Customers," which outlines a single comprehensive model for entities to use in accounting for revenue arising from contracts with customers and supersedes most current revenue recognition guidance, including industry-specific guidance. ASU 2014-09 requires an entity to recognize revenue depicting the transfer of goods or services to customers in an amount that reflects the consideration to which the entity expects to be entitled in exchange for those goods or services. ASU 2014-09 will also result in enhanced revenue related disclosures. ASU 2014-09 is effective for fiscal years, and interim reporting periods within those years, beginning after December 15, 2016. The Company has not yet determined the impact of ASU 2014-09 on its consolidated results of operations, financial condition, or cash flows.
In April 2014, the FASB issued ASU 2014-08, "Reporting Discontinued Operations and Disclosures of Disposals of Components of an Entity." ASU 2014-08 requires that a disposal representing a strategic shift that has (or will have) a major effect on an entity’s financial results or a business activity classified as held for sale should be reported as discontinued operations. ASU 2014-08 also expands the disclosure requirements for discontinued operations and adds new disclosures for individually significant dispositions that do not qualify as discontinued operations. ASU 2014-08 is effective prospectively for fiscal years, and interim reporting periods within those years, beginning after December 15, 2014. The impact of ASU 2014-08 is dependent upon the nature of dispositions, if any, after adoption.
In July 2014, the Company adopted ASU 2013-11, “Presentation of an Unrecognized Tax Benefit When a Net Operating Loss Carryforward, a Similar Tax Loss, or a Tax Credit Carryforward Exists.” ASU 2013-11 requires netting of unrecognized tax benefits against a deferred tax asset for a loss or other carryforward that would apply in settlement of the uncertain tax position. The adoption of ASU 2013-11 did not have a material impact on the Company's consolidated results of operations, financial condition, or cash flows.
On September 30, 2014, the Company completed the tax free spin-off of its former Dealer Services business, which was a separate reportable segment, into an independent publicly traded company called CDK Global, Inc. ("CDK"). As a result of the spin-off, ADP stockholders of record on September 24, 2014 (the "record date") received one share of CDK common stock on September 30, 2014, par value $0.01 per share, for every three shares of ADP common stock held by them on the record date and cash for any fractional shares of CDK common stock. ADP distributed approximately 160.6 million shares of CDK common stock in the distribution. The spin-off was made without the payment of any consideration or the exchange of any shares by ADP stockholders.
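The distribution mechanics described above (one CDK share for every three ADP shares, with cash in lieu of fractional shares) can be sketched for a hypothetical holder. The 1:3 ratio comes from the filing; the holder's share count and the cash-in-lieu price per CDK share are illustrative assumptions, since the filing does not state a per-share cash-in-lieu price:

```python
# Sketch of the 1-for-3 spin-off distribution described above.
# The 1:3 ratio is from the filing; the holder's ADP share count and the
# cash-in-lieu price per CDK share are hypothetical illustrations.

def spin_off_distribution(adp_shares, ratio=3, cash_in_lieu_price=30.0):
    """Return (whole CDK shares received, cash paid for the fractional share)."""
    whole_shares, remainder = divmod(adp_shares, ratio)
    fractional_share = remainder / ratio              # e.g. 1/3 or 2/3 of a CDK share
    cash = round(fractional_share * cash_in_lieu_price, 2)
    return whole_shares, cash

# Hypothetical holder of 100 ADP shares on the record date:
# 100 // 3 = 33 whole CDK shares, plus 1/3 of a share paid in cash.
shares, cash = spin_off_distribution(100)
```

A holder whose position is an exact multiple of three receives no cash, which is why the filing notes cash is paid only "for any fractional shares."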
The Company recorded a decrease to retained earnings of $1.5 billion for the reduction in net assets of CDK related to the spin-off, offset by an increase to retained earnings of $825.0 million related to the cash dividend received from CDK as part of the spin-off. The spin-off, transitional, and on-going relationships between ADP and CDK are governed by the Separation and Distribution Agreement entered into between ADP and CDK and certain other ancillary agreements.
Incremental costs associated with the spin-off of $2.5 million for the three months ended December 31, 2014 and $45.3 million for the six months ended December 31, 2014 are included in discontinued operations on the Statements of Consolidated Earnings and are principally related to professional fees.
On February 28, 2014, the Company completed the sale of its Occupational Health and Safety services business ("OHS") for a pre-tax gain of $15.6 million, less costs to sell, and recorded such gain within earnings from discontinued operations on the Statements of Consolidated Earnings in the three months ended March 31, 2014. OHS was previously reported in the Employer Services segment.
Options to purchase 0.1 million shares of common stock for the three months ended December 31, 2013 and 0.1 million shares of common stock for the six months ended December 31, 2013 were excluded from the calculation of diluted earnings per share because their inclusion would have been anti-dilutive.
During the six months ended December 31, 2014, the Company sold notes receivable related to Dealer Services financing arrangements for a gain of $1.4 million. Refer to Note 7 for further information.
(A) Included within available-for-sale securities are corporate investments with fair values of $148.2 million and funds held for clients with fair values of $20,180.0 million. All available-for-sale securities were included in Level 2.
(B) Included within available-for-sale securities are corporate investments with fair values of $2,086.3 million and funds held for clients with fair values of $18,070.2 million. All available-for-sale securities were included in Level 2.
For a description of the fair value hierarchy and the Company's fair value methodologies, including the use of an independent third-party pricing service, see Note 1 "Summary of Significant Accounting Policies" in the Company's Annual Report on Form 10-K for fiscal 2014. The Company did not transfer any assets between Level 1 and Level 2 during the six months ended December 31, 2014 or the year ended June 30, 2014. In addition, the Company did not adjust the prices obtained from the independent pricing service. The Company has no available-for-sale securities included in Level 1 or Level 3 as of December 31, 2014.
At December 31, 2014, Corporate bonds include investment-grade debt securities with a wide variety of issuers, industries, and sectors, primarily carry credit ratings of A and above, and have maturities ranging from January 2015 to June 2023.
At December 31, 2014, U.S. Treasury and direct obligations of U.S. government agencies primarily include debt directly issued by Federal Home Loan Banks and Federal Farm Credit Banks with fair values of $4,530.1 million and $1,022.0 million, respectively. U.S. Treasury and direct obligations of U.S. government agencies represent senior, unsecured, non-callable debt that primarily carries a credit rating of Aaa, as rated by Moody's, and AA+, as rated by Standard & Poor's, and have maturities ranging from January 2015 through August 2024.
At December 31, 2014, asset-backed securities include AAA rated senior tranches of securities with predominantly prime collateral of fixed rate credit card, auto loan, and rate reduction receivables with fair values of $1,548.2 million, $364.4 million, and $205.3 million, respectively. These securities are collateralized by the cash flows of the underlying pools of receivables. The primary risk associated with these securities is the collection risk of the underlying receivables. All collateral on such asset-backed securities has performed as expected through December 31, 2014.
At December 31, 2014, other securities and their fair value primarily represent: AAA and AA rated supranational bonds of $383.0 million, AAA and AA rated sovereign bonds of $319.9 million, and AA rated mortgage-backed securities of $100.8 million that are guaranteed primarily by Federal National Mortgage Association ("Fannie Mae"). The Company's mortgage-backed securities represent an undivided beneficial ownership interest in a group or pool of one or more residential mortgages. These securities are collateralized by the cash flows of 15-year and 30-year residential mortgages and are guaranteed by Fannie Mae as to the timely payment of principal and interest.
Client funds obligations represent the Company's contractual obligations to remit funds to satisfy clients' payroll and tax payment obligations and are recorded on the Consolidated Balance Sheets at the time that the Company impounds funds from clients. The client funds obligations represent liabilities that will be repaid within one year of the balance sheet date. The Company has reported client funds obligations as a current liability on the Consolidated Balance Sheets totaling $34,461.1 million and $18,963.4 million as of December 31, 2014 and June 30, 2014, respectively. The Company has classified funds held for clients as a current asset since these funds are held solely for the purposes of satisfying the client funds obligations. The Company has reported the cash flows related to the purchases of corporate and client funds marketable securities and related to the proceeds from the sales and maturities of corporate and client funds marketable securities on a gross basis in the investing section of the Statements of Consolidated Cash Flows. The Company has reported the cash inflows and outflows related to client funds investments with original maturities of 90 days or less on a net basis within net increase in restricted cash and cash equivalents and other restricted assets held to satisfy client funds obligations in the investing section of the Statements of Consolidated Cash Flows. The Company has reported the cash flows related to the cash received from and paid on behalf of clients on a net basis within net increase in client funds obligations in the financing activities section of the Statements of Consolidated Cash Flows.
Approximately 82% of the available-for-sale securities held a AAA or AA rating at December 31, 2014, as rated by Moody's, Standard & Poor's and, for Canadian securities, Dominion Bond Rating Service. All available-for-sale securities were rated as investment grade at December 31, 2014.
Accounts receivable, net, includes the Company's trade receivables, which are recorded based upon the amount the Company expects to receive from its clients, net of an allowance for doubtful accounts. Notes receivable are recorded based upon the amount the Company expects to receive from its clients, net of an allowance for doubtful accounts and unearned income. The allowance for doubtful accounts is the Company's best estimate of probable credit losses related to trade receivables and notes receivable based upon the aging of the receivables, historical collection data, and internal assessments of credit quality, as well as conditions in the economy as a whole. The Company charges off uncollectable amounts against the reserve in the period in which it determines they are uncollectable. Unearned income on notes receivable is amortized using the effective interest method.
During the six months ended December 31, 2014, the Company sold notes receivable related to Dealer Services financing arrangements for $225.5 million. Although the sale of the notes receivable transfers the majority of the risk to the purchaser, the Company does retain a minimal level of credit risk on the sold receivables. The cash received in exchange for the notes receivable sold was recorded within the operating activities on the Statements of Consolidated Cash Flows and the gain on sale realized was recorded within Other income, net on the Statements of Consolidated Earnings (see Note 5).
The Company determines the allowance for doubtful accounts related to notes receivable based upon a specific reserve for known collection issues, as well as a non-specific reserve based upon aging, both of which are based upon history of such losses and current economic conditions. As of December 31, 2014 and June 30, 2014, there were no notes receivable that were specifically reserved; the entire notes receivable reserve balance was comprised of non-specific reserves.
(A) As a result of the sale of the notes receivable related to Dealer Services financing arrangements, the Company released $10.7 million of non-specific reserves that were accrued on the sold notes receivable, which was recorded in selling, general, and administrative expenses on the Statements of Consolidated Earnings.
The allowance for doubtful accounts as a percentage of notes receivable was approximately 2% as of December 31, 2014 and 5% as of June 30, 2014.
On an ongoing basis, the Company evaluates the credit quality of its financing receivables, utilizing aging of receivables, collection experience, and charge-offs. As events related to a specific client dictate, the credit quality of a client is reevaluated. Approximately 100% of notes receivable were current at December 31, 2014 and June 30, 2014.
(A) The goodwill balance at June 30, 2014 and December 31, 2014 is net of accumulated impairment losses of $42.7 million related to the Employer Services segment.
Other intangibles consist primarily of purchased rights, covenants, patents, and trademarks (acquired directly or through acquisitions). All of the intangible assets have finite lives and, as such, are subject to amortization. The weighted average remaining useful life of the intangible assets is 6 years (4 years for software and software licenses, 10 years for customer contracts and lists, and 3 years for other intangibles). Amortization of intangible assets was $38.2 million and $34.4 million for the three months ended December 31, 2014 and 2013, respectively, and $75.8 million and $70.1 million for the six months ended December 31, 2014 and 2013, respectively.
The Company has a $2.25 billion, 364-day credit agreement with a group of lenders that matures in June 2015. In addition, the Company has a five-year $2.0 billion credit facility and a five-year $3.25 billion credit facility maturing in June 2018 and June 2019, respectively, each with an accordion feature under which the aggregate commitment can be increased by $500.0 million, subject to the availability of additional commitments. The interest rate applicable to committed borrowings is tied to LIBOR, the effective federal funds rate, or the prime rate depending on the notification provided by the Company to the syndicated financial institutions prior to borrowing. The Company is also required to pay facility fees on the credit agreements. The primary uses of the credit facilities are to provide liquidity to the commercial paper program and funding for general corporate purposes, if necessary. The Company had no borrowings through December 31, 2014 under the credit agreements.
The Company’s U.S. short-term funding requirements related to client funds are sometimes obtained through a commercial paper program, which provides for the issuance of up to $7.5 billion in aggregate maturity value of commercial paper, rather than liquidating previously-collected client funds that have already been invested in available-for-sale securities. The Company’s commercial paper program is rated A-1+ by Standard & Poor’s and Prime-1 by Moody’s. These ratings denote the highest quality commercial paper securities. Maturities of commercial paper can range from overnight to up to 364 days. At December 31, 2014, the Company had no commercial paper outstanding. At June 30, 2014, the Company had $2,173.0 million of commercial paper outstanding, which was repaid on July 1, 2014. For the three months ended December 31, 2014 and 2013, the Company's average borrowings were $3.0 billion and $3.3 billion, respectively, at weighted average interest rates of 0.1%. For the six months ended December 31, 2014 and 2013, the Company's average borrowings were $3.1 billion and $3.2 billion, respectively, at weighted average interest rates of 0.1%. The weighted average maturity of the Company’s commercial paper issued during the three and six months ended December 31, 2014 approximated two days.
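The weighted average interest rates and maturities quoted above are dollar-weighted across individual commercial paper issuances. The issuance amounts, rates, and day counts below are hypothetical, since the filing reports only the aggregates; the example is constructed so the weighted average maturity comes out near the filing's "approximately two days":

```python
# Dollar-weighted average rate and maturity across commercial paper issuances.
# The issuance data is hypothetical; only the averaging method is illustrated.

def weighted_average(values, weights):
    """Average each value weighted by its face amount outstanding."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# (face amount in $ millions, interest rate, days to maturity) per issuance
issuances = [(1000.0, 0.0010, 1), (1500.0, 0.0012, 2), (500.0, 0.0009, 4)]
amounts = [a for a, _, _ in issuances]
avg_rate = weighted_average([r for _, r, _ in issuances], amounts)      # ~0.11%
avg_maturity = weighted_average([d for _, _, d in issuances], amounts)  # 2.0 days
```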
The Company’s U.S. and Canadian short-term funding requirements related to client funds obligations are sometimes obtained on a secured basis through the use of reverse repurchase agreements, which are collateralized principally by government and government agency securities, rather than liquidating previously-collected client funds that have already been invested in available-for-sale securities. These agreements generally have terms ranging from overnight to up to five business days. At December 31, 2014 and June 30, 2014, the Company had no obligations outstanding related to reverse repurchase agreements. For the three months ended December 31, 2014 and 2013, the Company had average outstanding balances under reverse repurchase agreements of $598.6 million and $402.0 million, respectively, at weighted average interest rates of 0.5% and 0.6%, respectively. For the six months ended December 31, 2014 and 2013, the Company had average outstanding balances under reverse repurchase agreements of $584.7 million and $465.7 million, respectively, at weighted average interest rates of 0.5%. In addition, the Company has $3.25 billion available on a committed basis under the U.S. reverse repurchase agreements.
Stock Options

Stock options are granted to employees at exercise prices equal to the fair market value of the Company's common stock on the dates of grant. Stock options are issued under a graded vesting schedule and have a term of 10 years. Options granted prior to July 1, 2008 generally vest ratably over five years and options granted after July 1, 2008 generally vest ratably over four years. Compensation expense is measured based on the fair value of the stock option on the grant date and recognized over the requisite service period for each separately vesting portion of the stock option award. Stock options are forfeited if the employee ceases to be employed by the Company prior to vesting.
Time-Based Restricted Stock and Time-Based Restricted Stock Units

Time-based restricted stock and time-based restricted stock units granted prior to the year ended June 30, 2013 ("fiscal 2013") are subject to vesting periods of up to five years and awards granted in fiscal 2013 and later are subject to a vesting period of two years. Awards are forfeited if the employee ceases to be employed by the Company prior to vesting.
Time-based restricted stock cannot be transferred during the vesting period. Compensation expense relating to the issuance of time-based restricted stock is measured based on the fair value of the award on the grant date and recognized on a straight-line basis over the vesting period. Employees are eligible to receive dividends on shares awarded under the time-based restricted stock program.
Time-based restricted stock units are settled in cash and cannot be transferred during the vesting period. Compensation expense relating to the issuance of time-based restricted stock units is recorded over the vesting period and is initially based on the fair value of the award on the grant date, and is subsequently remeasured at each reporting date during the vesting period. No dividend equivalents are paid on units awarded under the time-based restricted stock unit program.
Performance-Based Restricted Stock and Performance-Based Restricted Stock Units

Performance-based restricted stock and performance-based restricted stock units generally vest over a one to three year performance period and a subsequent service period of up to 26 months. Under these programs, the Company communicates "target awards" at the beginning of the performance period with possible payouts at the end of the performance period ranging from 0% to 150% of the "target awards." Awards are forfeited if the employee ceases to be employed by the Company prior to vesting.
Performance-based restricted stock cannot be transferred during the vesting period. Compensation expense relating to the issuance of performance-based restricted stock is recognized over the vesting period based on the fair value of the award on the grant date with subsequent adjustments to the number of shares awarded during the performance period based on probable and actual performance against targets. After the performance period, if the performance targets are achieved, employees are eligible to receive dividends during the remaining vesting period on shares awarded under the performance-based restricted stock program.
Performance-based restricted stock units are settled in either cash or stock, depending on the employee's home country, and cannot be transferred during the vesting period. Compensation expense relating to the issuance of performance-based restricted stock units settled in cash is recognized over the vesting period initially based on the fair value of the award on the grant date with subsequent adjustments to the number of units awarded during the performance period based on probable and actual performance against targets. In addition, compensation expense is remeasured at each reporting period during the vesting period based on the change in ADP stock price. Compensation expense relating to the issuance of performance-based restricted stock units settled in stock is recorded over the vesting period based on the fair value of the award on the grant date with subsequent adjustments to the number of units awarded based on the probable and actual performance against targets. Dividend equivalents are paid on awards settled in stock under the performance-based restricted stock unit program.
Employee Stock Purchase Plan The Company offers an employee stock purchase plan that allows eligible employees to purchase shares of common stock at a price equal to 95% of the market value for the Company's common stock on the last day of the offering period. This plan has been deemed non-compensatory, and therefore no compensation expense has been recorded.
The Company currently utilizes treasury stock to satisfy stock option exercises, issuances under the Company's employee stock purchase plan, and restricted stock awards. From time to time, the Company may repurchase shares of its common stock under its authorized share repurchase programs. The Company repurchased 5.2 million shares in the three months ended December 31, 2014 as compared to 1.4 million shares repurchased in the three months ended December 31, 2013 and the Company repurchased 5.7 million shares in the six months ended December 31, 2014 as compared to 5.6 million shares repurchased in the six months ended December 31, 2013. The Company considers several factors in determining when to execute share repurchases, including, among other things, actual and potential acquisition activity, cash balances and cash flows, issuances due to employee benefit plan activity, and market conditions.
Stock-based compensation expense attributable to CDK employees is included in discontinued operations and therefore not presented in the table above. For the three months ended December 31, 2013, such stock-based compensation expense was $5.9 million. For the six months ended December 31, 2014 and 2013, such stock-based compensation expense was $5.1 million and $9.8 million, respectively.
As a result of the spin-off of CDK, the number of vested and unvested ADP stock options, their strike price, and the number of unvested performance-based and time-based restricted shares and units were adjusted to preserve the intrinsic value of the awards immediately prior to the spin-off using an adjustment ratio based on the market close price of ADP stock prior to the spin-off and the market open price of ADP stock subsequent to the spin-off. Since these adjustments were considered to be a modification of the awards in accordance with ASC 718, the Company compared the fair value of the awards immediately prior to the spin-off to the fair value immediately after the spin-off to measure potential incremental stock-based compensation expense, if any. The adjustments did not result in an increase in the fair value of the awards and, accordingly, the Company did not record incremental stock-based compensation expense. Unvested ADP stock options, unvested restricted stock, and unvested restricted stock units held by CDK employees were replaced by CDK awards immediately following the spin-off. The stock-based compensation expense associated with the original grant of ADP awards to remaining ADP employees will continue to be recognized within earnings from continuing operations in the Company's Statement of Consolidated Earnings.
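The intrinsic-value-preserving adjustment described above can be illustrated with a short sketch: the option count is scaled up and the strike scaled down by the same price ratio, leaving intrinsic value unchanged. All figures below are hypothetical and not taken from the filing.

```python
# Hypothetical illustration of the spin-off adjustment described above:
# option counts and strike prices are scaled by a ratio chosen so that
# the awards' intrinsic value is unchanged. All numbers are invented.

def adjust_award(options, strike, close_before, open_after):
    """Scale the option count up and the strike down by the price ratio."""
    ratio = close_before / open_after        # pre-spin close / post-spin open
    adj_options = options * ratio
    adj_strike = strike / ratio
    return adj_options, adj_strike

# Hypothetical: stock closed at $80 before the spin-off, opened at $64 after.
opts, strike = adjust_award(options=1000, strike=50.0,
                            close_before=80.0, open_after=64.0)

# Intrinsic value (stock price minus strike, times option count) is preserved:
value_before = 1000 * (80.0 - 50.0)          # 30,000 before the spin-off
value_after = opts * (64.0 - strike)         # 30,000 after the adjustment
print(opts, strike, value_before, value_after)
```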
As of December 31, 2014, the total remaining unrecognized compensation expense related to non-vested stock options, restricted stock units, and restricted stock awards amounted to $8.3 million, $23.9 million, and $131.0 million, respectively, which will be amortized over the weighted-average remaining requisite service periods of 1.7 years, 1.4 years, and 1.6 years, respectively.
During the six months ended December 31, 2014, the following activity occurred under the Company’s existing plans, including the impacts related to the spin-off of CDK, described above.
The fair value of each stock option issued is estimated on the date of grant using a binomial option pricing model. The binomial model considers a range of assumptions related to volatility, risk-free interest rate, and employee exercise behavior. Expected volatilities utilized in the binomial model are based on a combination of implied market volatilities, historical volatility of the Company’s stock price, and other factors. Similarly, the dividend yield is based on historical experience and expected future changes. The risk-free rate is derived from the U.S. Treasury yield curve in effect at the time of grant. The binomial model also incorporates exercise and forfeiture assumptions based on an analysis of historical data. The expected life of the stock option grant is derived from the output of the binomial model and represents the period of time that options granted are expected to be outstanding.
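A binomial option pricing model of the kind described above can be sketched with a textbook Cox-Ross-Rubinstein tree. This is a simplified illustration only: it handles volatility, the risk-free rate, and dividend yield, but omits the exercise and forfeiture behavior that the Company's model also incorporates.

```python
import math

def crr_call_price(s, k, t, r, q, sigma, steps=200):
    """Textbook Cox-Ross-Rubinstein binomial price of a European call.
    s: spot, k: strike, t: years to expiry, r: risk-free rate,
    q: dividend yield, sigma: volatility. A sketch, not the filing's model."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))          # up-move factor
    d = 1 / u                                    # down-move factor
    p = (math.exp((r - q) * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoff at each terminal node (j up-moves), then roll back through the tree
    values = [max(s * u**j * d**(steps - j) - k, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Converges toward the Black-Scholes value as the step count grows.
print(round(crr_call_price(s=100, k=100, t=1.0, r=0.02, q=0.01, sigma=0.25), 2))
```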
(A) The weighted average fair values were adjusted to reflect the impact of the spin-off of CDK.
Net pension expense for the three months ended December 31, 2013 includes $1.2 million reported within earnings from discontinued operations on the Statement of Consolidated Earnings and net pension expense for the six months ended December 31, 2014 and 2013 includes $4.3 million and $2.4 million, respectively, reported within earnings from discontinued operations on the Statements of Consolidated Earnings. Included within pension expense related to discontinued operations for the six months ended December 31, 2014 were total one-time charges of $3.2 million for curtailment charges and special termination benefits directly attributable to the spin-off of CDK.
The effective tax rate for the three months ended December 31, 2014 and 2013 was 33.3% and 32.0%, respectively. The increase in the effective tax rate is due to the resolution of certain tax matters during the three months ended December 31, 2013, partially offset by income tax benefits in the three months ended December 31, 2014 related to the usage of foreign tax credits in a planned repatriation of foreign earnings and a change in tax law.
The effective tax rate for the six months ended December 31, 2014 and 2013 was 33.7% and 33.1%, respectively. The increase in the effective tax rate is due to the resolution of certain tax matters during the six months ended December 31, 2013, partially offset by income tax benefits in the six months ended December 31, 2014 related to the usage of foreign tax credits in a planned repatriation of foreign earnings, a change in tax law, and adjustments to the tax liability.
In June 2011, the Company received a Commissioner’s Charge from the U.S. Equal Employment Opportunity Commission (“EEOC”) alleging that the Company has violated Title VII of the Civil Rights Act of 1964 by refusing to recruit, hire, transfer, and promote certain persons on the basis of their race, in the State of Illinois from at least the period of January 1, 2007 to the present. The Company continues to investigate the allegations set forth in the Commissioner’s Charge and is cooperating with the EEOC’s investigation.
On July 18, 2011, athenahealth, Inc. filed a complaint against ADP AdvancedMD, Inc. (“ADP AdvancedMD”), a subsidiary of the Company. The complaint alleged that ADP AdvancedMD’s activities in providing medical practice management and billing and revenue management software and associated services to physicians and medical practice managers infringed two patents owned by athenahealth, Inc. The complaint sought monetary damages, injunctive relief, and costs. In November 2014, the parties entered into a settlement agreement to end this litigation and dismiss all claims and counterclaims. The terms of the settlement did not have a material adverse impact on the Company's results of operations, financial condition or cash flows.
The Company is subject to various claims and litigation in the normal course of business. When a loss is considered probable and reasonably estimable, the Company records a liability in the amount of its best estimate for the ultimate loss. Although the Company currently believes that resolving its outstanding claims, individually or in aggregate, will not have a material adverse impact on the consolidated financial statements, these matters involve complex issues subject to inherent uncertainty and there can be no assurance that these matters will be resolved in a manner not adverse to the Company.
Company’s services and products. The Company does not expect any material losses related to such representations and warranties.
CDK also had obligations related to purchase and maintenance agreements on software, equipment, and other assets, of which $2.9 million, $4.1 million, and $2.4 million relate to the fiscal years ending June 30, 2015, 2016, and 2017, respectively; these obligations were assumed by CDK as part of the spin-off.
The Company has obligations related to purchase and maintenance agreements on the software, equipment, and other assets that were disclosed in its Annual Report on Form 10-K for fiscal 2014. In December 2014, the Company extended the term of a contract, which resulted in incremental obligations of $43.1 million and $87.3 million for the fiscal years ending June 30, 2019 and June 30, 2020, respectively.
The Company transacts business in various foreign jurisdictions and is therefore exposed to market risk from changes in foreign currency exchange rates that could impact its consolidated results of operations, financial position, or cash flows. The Company manages its exposure to these market risks through its regular operating and financing activities and, when deemed appropriate, through the use of derivative financial instruments. The Company does not use derivative financial instruments for trading purposes. The Company had no derivative financial instruments outstanding at December 31, 2014 or June 30, 2014.
With these CISM practice questions, you can be well-prepared for the real exam. You can identify weak areas of IT security and strengthen them. Use these CISM exam questions to gauge your ability to weed out loopholes in a company's IT architecture and protect the company's infrastructure against threats. The CISM practice test can be taken multiple times and is free of cost. Answer Simplilearn's CISM questions today and get ready to become a certified IT security professional!
1. As an IS Manager, you would like to lay down clearly defined roles and responsibilities. What is the BEST benefit that you expect?
Ensure that your entire team complies with policies.
Your team knows what to do and when.
Everyone is clear about their responsibilities and the work that needs to be done.
Your team is more accountable.
2. Who would you look to enforce access rights to application data?
3. You need to get approval from senior management to implement a warm site. How can you BEST achieve this?
Present a business case with cost-benefit analyses.
Present how the warm site would be measured.
4. As an IS Manager you are developing IS Strategy for your organization. Which is the MOST important component of the strategy?
Adoption of an international control framework like COBIT, ISO, etc.
5. Which of the following is MOST important to understand when developing a meaningful information security strategy?
6. You are implementing IS policy within your organization. There is a sense of discomfort from within the organization about certain components of the policy. What is the BEST approach to counter this?
Publish documentation about the IS Policy, so that all staff are aware.
7. You have joined an organization recently as an IS Manager. You have requested a meeting with senior management to discuss the organization's network security. What would you present FIRST?
Infrastructure layout of the organization which highlights all protective devices installed.
Present a list of attacks that the organization may face on the network.
Present the first draft of your IS Policy.
Present the risk assessment report.
8. You are an IS Manager of an ecommerce portal. You have seen in the media about a new regulation that affects ecommerce transactions. What should you do FIRST?
Call for a meeting with all key stakeholders to discuss the regulation.
Inform the risk management team to analyze the regulation.
Check whether the controls in the existing ecommerce portal can address the regulation.
Inform your team of your plan to conduct a gap analysis.
9. Which of the following would help to change an organization's security culture?
establish security metrics and performance monitoring.
ensure that legal and regulatory requirements are met.
support the business objectives of the organization.
Provide cost-benefit to the organization.
Ensures all members of the IS Team understand corporate governance.
Ensure that all operations within the security team are consistent.
Ensure a faster response time for regulations.
able to provide a more integrated, holistic program.
an essential aspect of developing a security strategy.
a requirement for industry (ISO) certification.
a necessary component of organization governance.
Determine likely areas of noncompliance.
Determine likely areas of compliance.
Understand the threats to the business.
14. Which of the following requirements would have the lowest level of priority in information security?
explains the current risk profile.
to the extent that they impact the enterprise.
by developing policies that address the requirements.
to ensure that guidelines meet the requirements.
17. What would a security manager PRIMARILY utilize when proposing the implementation of a security solution?
19. Which of the following is the PRIMARY reason to change policies during program development?
The policies must comply with new regulatory and legal mandates.
Appropriate security baselines are no longer set in the policies.
The policies no longer reflect management intent and direction.
Employees consistently ignore the policies.
22. Which of the following is MOST likely to be discretionary?
23. What is the PRIMARY role of the information security manager in the process of information classification within an organization?
address the process for communicating a violation.
be straightforward and easy to understand.
be customized to specific groups and roles.
25. Which person or group should have final approval of an organization's information security policies?
data privacy directive applicable globally.
27. When implementing effective security governance within the requirements of the company\'s security strategy, which of the following is the MOST important factor to consider?
the responsibilities of organizational units.
29. Which of the following should drive the risk analysis for an organization?
30. Which of the following are seldom changed in response to technological changes?
31. Who is responsible for ensuring that information is categorized and that specific protective measures are taken?
32. Which of the following is the BEST approach to obtain senior management commitment to the information security program?
Describe the reduction of risk.
Present the emerging threat environment.
Demonstrate the alignment of the program to business objectives.
33. What will have the HIGHEST impact on standard information security governance models?
35. Which of the following is a key area of the ISO 27001 framework?
state only one general security mandate.
are aligned with organizational goals.
govern the creation of procedures and guidelines.
37. Which of the following is responsible for legal and regulatory liability?
38. Priority should be given to which of the following to ensure effective implementation of information security governance?
ability to mitigate business risks.
use of new and emerging technologies.
benefits in comparison to their costs.
invite an external consultant to create the security strategy.
allocate budget based on best practices.
define high-level business security requirements.
41. Which of the following would be the BEST approach to securing approval for information security expenditures?
determine the probability of success.
define the issues to be addressed.
43. Which of the following would be MOST helpful to achieve alignment between information security and organization objectives?
benchmark a number of successful organizations.
demonstrate potential losses and other impacts that can result from a lack of support.
inform management of the legal requirements of due care.
demonstrate support for desired outcomes.
assuming overall protection of information assets.
implementing security controls in products they install.
ensuring security measures are consistent with policy.
identify whether current controls are adequate.
communicate the new requirement to audit.
implement the requirements of the new regulation.
conduct a cost-benefit analysis of implementing the control.
focused on eliminating all risks.
a balance between technical and business requirements.
defined by the board of directors.
51. You are an IS Manager recently appointed. You now need to evaluate the data classification in the organization. Who would you talk to?
52. What would be the BEST outcome for any risk management process?
The business operations exceed their performance every year.
53. Which of the following is the MOST appropriate use of gap analysis?
It is used to identify gaps in information security tools.
It is used to identify gaps between the performance of your organization and that of your next best competitor.
54. As an IS Manager, which part of data classification would you consider MOST important?
55. You are an IS Manager discussing the implementation project plan for a new application to be rolled out with your organization's IT team. The IT team feels that, as this is a technology-based application, business managers and their team members need not be part of the project team. What should you do?
Discuss this with the business managers first.
Ask for the requirement document to check whether it is really a technology-based application.
Take it up with the Security Steering committee in the next scheduled meeting.
Take this issue up with the internal auditors.
56. As an IS Manager, you are considering upgrading and implementing controls to establish layered protection for your organization. Which is the MOST important consideration?
Controls that can be implemented by standard procedures.
Controls that can fail in different conditions.
Controls that do not display critical messages when they fail.
57. What mechanisms are used to identify weakness or threats that can affect a business critical application?
Run a disaster recovery scenario on the affected network.
58. As an IS Manager, how would you test the effectiveness of a control you implemented?
Staff can be easily trained on implementing and using the control.
59. Which of the following is NOT true of risk transfer?
Once the risk is transferred, responsibility for the risk is removed as well.
Risk is transferred from one party to another party.
Transfer of risk is a requirement that can be met with an insurance contract.
Risk transfer is a risk response technique adopted by many organizations.
60. As an IS Manager, you are designing the networks for your organization. From a risk perspective, which of the following requires your close attention?
The Local network is segregated into multiple virtual LANs (VLANs).
You are providing remote access to your staff as well as partners.
The CEO wants to install a wireless network in her conference room.
Study the organization chart of your company.
Analyze past and present financial details of the organization and group it by department managers.
62. What is the purpose of vulnerability assessment?
63. Which of the following would help management determine the resources needed to mitigate a risk to the organization?
64. You are in a meeting with CEO and the board and discussing implementation of a key control. One of the main agenda points is the investment for the control. What technique would you adopt to get their support?
Present reports from the penetration test carried out by an external party.
Present a report from Gartner indicating that the supplier of the control is on their niche player quadrant.
65. You have completed the risk assessment process of your organization and now are left with residual risks. However, the likelihood and impact of these risks are high. What is the BEST solution?
Mitigate the impact by purchasing insurance.
Transfer applications to a cloud provider.
66. As an IS Manager you need to determine the criticality and sensitivity of information assets. What would you carry out?
Security assessment which also includes social engineering tactics.
67. What is the purpose of carrying out a Business impact analysis?
Measure the criticality of the business function along with acceptable downtime and resources affected.
Calculate the cost of business outages caused by an impact.
Prepare the disaster recovery team and its organization chart.
68. A business critical system has a requirement to have an account that cannot be automatically locked by the system. What would be the BEST countermeasure to prevent a hacker from running a brute-force attack on the account?
Disallow access to this account.
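A common countermeasure for an account that must never auto-lock, as in question 68, is to throttle failed attempts with an increasing delay: the account stays usable, but a brute-force attack slows to a crawl. A minimal sketch, with an invented `ThrottledLogin` class and a plaintext password comparison purely for illustration:

```python
import time

class ThrottledLogin:
    """Sketch of exponential backoff on failed logins. Illustrative only:
    a real system would compare password hashes, not plaintext."""

    def __init__(self, password):
        self._password = password
        self._failures = 0
        self._next_allowed = 0.0   # timestamp of the next permitted attempt

    def attempt(self, guess, now=None):
        now = time.monotonic() if now is None else now
        if now < self._next_allowed:
            return "throttled"
        if guess == self._password:
            self._failures = 0
            return "ok"
        self._failures += 1
        # delay doubles with each failure: 1s, 2s, 4s, ... capped at 1 hour
        delay = min(2 ** (self._failures - 1), 3600)
        self._next_allowed = now + delay
        return "denied"

login = ThrottledLogin("s3cret")
print(login.attempt("aaaa", now=0.0))    # denied; next try allowed at t=1.0
print(login.attempt("aaab", now=0.5))    # throttled; still inside the delay
print(login.attempt("aaab", now=1.5))    # denied; delay grows to 2s
print(login.attempt("s3cret", now=4.0))  # ok; failure counter resets
```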
69. Which of the following would be of GREATEST importance to the security manager in determining whether to further mitigate residual risk?
70. A project manager is developing a developer portal and requests that the security manager assign a public IP address so that it can be accessed by in-house staff and by external consultants outside the organization's local area network (LAN). What should the security manager do FIRST?
before developing a business case.
at each stage of the software development life cycle (SDLC).
72. What is the TYPICAL output of a risk assessment?
structured query language (SQL) injection.
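SQL injection, named in the option above, is typically prevented with parameterized queries, which pass user input as data rather than concatenating it into the SQL text. A minimal sketch using Python's built-in `sqlite3` module, with an invented `users` table:

```python
import sqlite3

# Parameterized queries keep user input as data, never as SQL.
# The table and rows below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

malicious = "x' OR '1'='1"

# Vulnerable: string concatenation lets the injected OR clause match every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious).fetchall()

# Safe: the driver binds the whole string as a single literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(vulnerable)  # both rows leak through the injected clause
print(safe)        # no row is named "x' OR '1'='1", so nothing matches
```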
76. Which of the following measures would be MOST effective against insider threats to confidential information?
assess the risk of noncompliance.
prepare a status report for management.
78. Which of the following steps in conducting a risk assessment should be performed FIRST?
reviewing new laws and regulations.
a data leak prevention program.
a network intrusion detection system (IDS).
81. There is a time lag between the time when a security vulnerability is first published, and the time when a patch is delivered. Which of the following should be carried out FIRST to mitigate the risk during this time period?
82. During which phase of development is it MOST appropriate to begin assessing the risk of a new application system?
83. Which one of the following factors of a risk assessment typically involves the GREATEST amount of speculation?
84. Which of the following is the MOST important reason to include an effective threat and vulnerability assessment in the change management process?
To reduce the need for periodic full risk assessments.
To ensure that information security is aware of changes.
To ensure that policies are changed to address new threats.
treated as a distinct process.
conducted by the IT department.
86. Which of the following is the MOST important requirement for setting up an information security infrastructure for a new system?
87. A permissive controls policy would be reflected in which one of the following implementations?
Allows access unless explicitly denied.
IT systems are configured to fail closed.
Specifies individuals can delegate privileges.
Permits control variations with defined limits.
88. Which of the following is the BEST method to ensure the overall effectiveness of a risk management program?
89. Which of the following would be the BEST indicator of an asset's value to an organization?
90. In which phase of the development process should risk assessment be FIRST introduced?
91. Which of the following would be MOST relevant to include in a cost-benefit analysis of a two-factor authentication system?
92. Which is the BEST way to measure and prioritize aggregate risk deriving from a chain of linked system vulnerabilities?
93. A company recently developed a breakthrough technology. Since this technology could give this company a significant competitive edge, which of the following would FIRST govern how this information is to be protected?
94. Which of the following is the BEST quantitative indicator of an organization's current risk tolerance?
95. Which of the following is the MOST important element to consider when initiating asset classification?
as a mandate that requires organization compliance.
based on the level of risk they pose to the organization.
97. Which of the following authentication methods prevents authentication replay?
98. Which of the following groups would be in the BEST position to perform a risk analysis for a business?
the cost of security is higher in later stages.
information security may affect project feasibility.
information security is essential to project approval.
it ensures proper project classification.
101. Which of the following is the MOST important consideration when developing a service level agreement (SLA) to mitigate the risk that outsourcing will result in a loss to the business?
create a separate account for the programmer as a power user.
log all of the programmer's activity for review by their supervisor.
have the programmer sign a letter accepting full responsibility.
perform regular audits of the application.
103. Which of the following is the MAIN objective in contracting with an external company to perform penetration testing?
104. Which of the following is MOST effective in preventing weaknesses from being introduced into existing production systems?
105. Which of the following should be done FIRST when making a decision to allow access to the information processing facility (IPF) of an enterprise to a new external party?
the parties to the agreement can perform.
confidential data are not included in the agreement.
the right to audit is a requirement.
107. An organization's information security manager is planning the structure of the Information Security Steering Committee. Which of the following groups should the manager invite?
108. Which of the following is the BEST indicator that security awareness training has been effective?
109. Which of the following is the MOST important item to consider when evaluating products to monitor security across the enterprise?
110. Which of the following would raise security awareness among an organization's employees?
111. Which of the following is MOST effective for securing wireless networks as a point of entry into a corporate network?
112. Which of the following security controls addresses availability?
113. Which of the following is an advantage of a centralized information security organizational structure?
It is easier to promote security awareness.
It is easier to manage and control.
It is more responsive to business unit needs.
It provides a faster turnaround for security requests.
114. Which of the following is the MOST appropriate individual to implement and maintain the level of information security needed for a specific business application?
116. What is the MOST important reason for conducting security awareness programs throughout an organization?
an effective control over connectivity and continuity.
a service level agreement (SLA) including code escrow.
a business impact analysis (BIA).
118. Which of the following is the MOST appropriate individual to ensure that new exposures have not been introduced into an existing application during the change management process?
119. Which of the following devices should be placed within a DMZ?
120. Which of the following is the MOST important action to take when engaging third-party consultants to conduct an attack and penetration test?
121. Which of the following will BEST protect against malicious activity by a former employee?
are decrypted by the firewall.
may be quarantined by mail filters.
may be corrupted by the receiving mail server.
123. The effectiveness of virus detection software is MOST dependent on which of the following?
124. Which of the following is the BEST way to erase confidential information stored on magnetic tapes?
only at the beginning and at the end of the new process.
during the entire life cycle of the process.
at the appropriate point since timing of assessments will differ for processes.
depending upon laws and regulations.
127. What is the BEST policy for securing data on mobile universal serial bus (USB) drives?
128. Which of the following is the BEST approach to dealing with inadequate funding of the security program?
Require management to accept the increased risk.
Prioritize risk mitigation and educate management.
Reduce monitoring and compliance enforcement activities.
129. An organization that outsourced its payroll processing performed an independent assessment of the security controls of the third party, per policy requirements. Which of the following is the MOST useful requirement to include in the contract?
messages displayed at every logon.
an intranet web site for information security.
circulating the information security policy.
132. The data backup policy will contain which of the following?
133. Which of the following would be the BEST defense against sniffing?
134. Which of the following controls is MOST effective in providing reasonable assurance of physical access compliance to an unmanned server room controlled with biometric devices?
135. When considering outsourcing services, at what point should information security become involved in the vendor management process?
136. Which of the following will BEST ensure that management takes ownership of the decision making process for information security?
137. Which of the following, using public key cryptography, ensures authentication, confidentiality and nonrepudiation of a message?
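The classic pattern behind question 137 is to sign with the sender's private key (authentication and nonrepudiation) and encrypt with the recipient's public key (confidentiality). A toy sketch using textbook RSA with small primes, insecure by design and intended purely to make the key roles concrete:

```python
# Textbook RSA with toy primes -- NOT secure, purely illustrative.
# Requires Python 3.8+ for pow(e, -1, phi) (modular inverse).

def make_keys(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                # private exponent
    return (e, n), (d, n)              # (public key, private key)

alice_pub, alice_priv = make_keys(61, 53)   # sender's key pair
bob_pub, bob_priv = make_keys(67, 71)       # recipient's key pair

m = 42                                      # message as a small integer

signature = pow(m, alice_priv[0], alice_priv[1])  # sign with Alice's PRIVATE key
ciphertext = pow(m, bob_pub[0], bob_pub[1])       # encrypt with Bob's PUBLIC key

# Bob decrypts with his private key, then verifies with Alice's public key.
recovered = pow(ciphertext, bob_priv[0], bob_priv[1])
verified = pow(signature, alice_pub[0], alice_pub[1]) == recovered

print(recovered, verified)
```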
138. Which of the following is the MOST important consideration when implementing an intrusion detection system (IDS)?
140. What is the GREATEST risk when there is an excessive number of firewall rules?
141. Which item would be the BEST to include in the information security awareness training program for new general staff employees?
142. Which of the following is MOST effective in protecting against the attack technique known as phishing?
to a higher false reject rate (FRR).
to a lower crossover error rate.
to a higher false acceptance rate (FAR).
exactly to the crossover error rate.
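The crossover error rate referenced in the options above is the operating point where the false acceptance rate (FAR) and false rejection rate (FRR) are equal: tightening the threshold lowers FAR but raises FRR. A minimal sketch with invented match scores:

```python
# FAR falls and FRR rises as a biometric threshold is tightened; the
# crossover error rate is where the two curves meet. Scores are invented:
# higher score = stronger match.

genuine = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55]   # legitimate users
impostor = [0.65, 0.5, 0.45, 0.4, 0.3, 0.2]  # attackers

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # wrongly accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)     # wrongly rejected
    return far, frr

# Sweep thresholds and keep the one where FAR and FRR are closest.
best = min((t / 100 for t in range(0, 101)),
           key=lambda t: abs(rates(t)[0] - rates(t)[1]))
far, frr = rates(best)
print(best, far, frr)
```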
link policies to an independent standard.
146. Which of the following is the MOST important guideline when using software to scan for security exposures within a corporate network?
assess the problems and institute rollback procedures, if needed.
disconnect the systems from the network until the problems are corrected.
immediately uninstall the patches from these systems.
immediately contact the vendor regarding the problems that occurred.
148. The organization has decided to outsource the majority of the IT department with a vendor that is hosting servers in a foreign country. Of the following, which is the MOST critical security consideration?
Laws and regulations of the country of origin may not be enforceable in the foreign country.
A security breach notification might get delayed due to the time difference.
Additional network intrusion detection sensors should be installed, resulting in an additional cost.
The company could lose physical control over the server and be unable to monitor the physical security posture of the servers.
149. Which of the following is the BEST way to verify that all critical production servers are utilizing up-to-date virus signature files?
150. Which of the following is the MOST important element to ensure the successful recovery of a business during a disaster?
the information security steering committee.
customers who may be impacted.
data owners who may be impacted.
152. Which of the following application systems should have the shortest recovery time objective (RTO)?
153. Which of the following is the BEST mechanism to determine the effectiveness of the incident response process?
during the establishment of the plan.
once an incident has been confirmed by operations staff.
after fully testing the incident management plan.
after the implementation details of the plan have been approved.
conducting a business impact assessment.
conducting a table-top business continuity test.
selecting an alternate recovery site.
156. Which of the following recovery strategies has the GREATEST chance of failure?
use the test equipment in the warm site facility to read the tapes.
periodically retrieve the tapes from the warm site and test them.
have duplicate equipment available at the warm site.
inspect the facility and inventory the tapes on a quarterly basis.
159. Which of the following is MOST important when deciding whether to build an alternate facility or subscribe to a third-party hot site?
assess the impact of the loss and determine mitigating steps.
communicate the best practices in protecting laptops to all laptop users.
instruct the erring employees to pay a penalty for the lost laptops.
recommend that management report the incident to the police and file for insurance.
161. Which of the following actions should take place immediately after a security breach is reported to an information security manager?
162. At the conclusion of a disaster recovery test, which of the following should ALWAYS be performed prior to leaving the vendor's hot site facility?
163. What is the FIRST action an information security manager should take when a company laptop is reported stolen?
164. Which of the following is the MOST effective method to ensure that a business continuity plan (BCP) meets an organization's needs?
Require quarterly updating of the BCP.
Automate the survey of plan owners to obtain input to the plan.
Periodically test the cross-departmental plan with varied scenarios.
require the use of strong passwords.
install an intrusion detection system (IDS).
166. Which of the following should be performed FIRST in the aftermath of a denial-of-service attack?
167. When a significant security breach occurs, what should be reported FIRST to senior management?
require a stable, rarely changed environment.
be located on the network.
170. During the recovery process following a natural disaster, a server that hosts an important new customer-facing web service was among the last systems restored, resulting in significant lost sales. Which of the following is the BEST approach to prevent this from happening again?
Regularly review and update the business impact analysis (BIA).
Ensure that the sales department has representation on the recovery team.
Establish a warm site for recovery purposes.
171. When electronically stored information is requested during a fraud investigation, which of the following should be the FIRST priority?
173. When performing a business impact analysis (BIA), which of the following should calculate the recovery time and cost estimates?
174. Which of the following is the BEST way to verify that all critical production servers are utilizing up-to-date virus signature files?
always the best option for an enterprise.
often in conflict with effective problem management.
the basis for enterprise risk management (ERM) activities.
a component of forensics training.
176. Which of the following is the MOST important consideration for an organization interacting with the media during a disaster?
177. Which of the following is the MOST important to ensure a successful recovery?
178. Which of the following is the MOST important aspect of forensic investigations that will potentially involve legal action?
clear policies detailing incident severity levels.
broadly dispersed intrusion detection capabilities.
training employees to recognize security incidents.
effective communication and reporting processes.
180. Which of the following is MOST closely associated with a business continuity program?
181. A new e-mail virus that uses an attachment disguised as a picture file is spreading rapidly over the Internet. Which of the following should be performed FIRST in response to this threat?
182. Which of the following would be a MAJOR consideration for an organization defining its business continuity plan (BCP) or disaster recovery program (DRP)?
rebuild the server from the last verified backup.
place the web server in quarantine.
shut down the server in an organized manner.
rebuild the server with original media and relevant patches.
184. Which of the following should be the PRIMARY basis for making a decision to establish an alternate site for disaster recovery?
185. Why is 'slack space' of value to an information security manager as part of an incident investigation?
186. Which of the following MOST effectively reduces false-positive alerts generated by a security information and event management (SIEM) process?
187. Who would be in the BEST position to determine the recovery point objective (RPO) for business applications?
188. When performing a business impact analysis (BIA), which of the following would be the MOST appropriate to calculate the recovery time and cost estimates?
allow business processes to continue during the response.
allow the security team to assess the attack profile.
permit the incident to continue to trace the source.
examine the incident response process for deficiencies.
how an attack was launched on the network.
potential attacks on the internal network.
191. Which of the following terms and conditions represent a significant deficiency if included in a commercial hot site contract?
192. A serious vulnerability is reported in the firewall software used by an organization. Which of the following should be the immediate action of the information security manager?
193. The business continuity policy should contain which of the following?
identify people who have not followed the process.
identify equipment that is needed.
196. Which of the following should be determined FIRST when establishing a business continuity program?
intrusion detection system (IDS) capabilities.
198. A company has a network of branch offices with local file/print and mail servers; each branch individually contracts a hot site. Which of the following would be the GREATEST weakness in recovery capability?
make an image copy of the media.
200. Which of the following is the MOST significant risk of using reciprocal agreements for disaster recovery?
Both entities are vulnerable to the same threat.
The contract contains legal inadequacies.
The cultures of the organizations are not compatible.
One party has more frequent disruptions.
The correct option is D. Well-defined roles and responsibilities are a major requirement for accountability.
The correct option is D.As data custodians, security administrators are responsible for enforcing access rights to data. Data owners are responsible for approving these access rights. Business process owners are sometimes the data owners as well, and would not be responsible for enforcement. The security steering committee would not be responsible for enforcement.
The correct option is C. Business case development, including a cost-benefit analysis, is the most effective way to win the support of senior management. The remaining options cannot be used to garner support on their own and must always be accompanied by a business case. Keeping senior management informed of regulatory requirements may help gain support for initiatives, but it is well known that more than half of all organizations are not in compliance, so compliance arguments alone may not get senior management buy-in.
The correct option is A.Without defined objectives, a strategy-the plan to achieve objectives-cannot be developed. Time frames for delivery are important but not critical for inclusion in the strategy document. Similarly, the adoption of a control framework is not critical to have a successful information security strategy. Policies are developed subsequent to, and as a part of, implementing a strategy.
The correct option is D.Alignment of security with business objectives requires an understanding of what an organization is trying to accomplish. The other choices are all elements that must be considered, but their importance is secondary and will vary depending on organizational goals.
The correct option is B. The best way to ensure the organization adopts the information security policy is to get management support. Pressure from senior management will help to enforce the policy.
The correct option is D. A risk assessment is the tool that helps senior management understand high-level threats, probabilities and existing controls. The other options may follow the risk assessment presentation.
The correct option is C. It is BEST to FIRST check whether the existing controls can address the regulation. Subsequent steps can be taken only if the existing controls cannot address the regulation.
The correct option is B.Management support and pressure will help to change an organization's culture. Procedures will support an information security policy, but cannot change the culture of the organization. Technical controls will provide more security to an information system and staff; however, this does not mean the culture will be changed. Auditing will help to ensure the effectiveness of the information security policy; however, auditing is not effective in changing the culture of the company.
The correct option is D.The business objectives of the organization supersede all other factors. Establishing metrics and measuring performance, meeting legal and regulatory requirements, and educating business process owners are all subordinate to this overall goal.
The correct option is A. Non-alignment of corporate governance and security governance will result in potentially weak, duplicate or unnecessary controls. This can result in additional costs for the organization to rework and align security governance with corporate governance.
The correct option is A.A holistic model based on a systems approach can help clarify complex relationships and their interdependencies within an organization, and thus provide a more effective integration of people, processes and technology. While a systems approach is useful for developing a security strategy and for understanding the relationship between people, processes and technology, a systems approach is not essential nor is it a requirement for industry (ISO) certification.
The correct option is A. Business operations are the main driver for security activities. The information security manager must ensure that all activities carried out not only support the organizational objectives but also preserve the organization.
The correct option is A.Information security priorities may, at times, override technical specifications, which then must be rewritten to conform to minimum security standards. Regulatory and privacy requirements are government-mandated and, therefore, not subject to override. The needs of the business should always take precedence in deciding information security priorities.
The correct option is A.Management is primarily interested in security solutions that can address risks in the most cost-effective way. To address the needs of an organization, a business case should address appropriate security solutions in line with the organizational strategy.
The correct option is A.Legal and regulatory requirements should be assessed based on the impact of noncompliance or partial compliance balanced against the costs of compliance, the risk tolerance defined by management, and the extent and nature of enforcement. International standards may not address the legal requirements in question. Policies should not address particular regulations because regulations are subject to change. Policies should only address the need to assess regulatory requirements and deal with them appropriately based on risk and impact. Guidelines would normally not address regulations, although standards may address regulations based on management's determination of the appropriate level of compliance.
The correct option is C.The information security manager needs to prioritize the controls based on risk management and the requirements of the organization. The information security manager must look at the costs of the various controls and compare them against the benefit the organization will receive from the security solution. The information security manager needs to have knowledge of the development of business cases to illustrate the costs and benefits of the various controls. All other choices are supplemental.
The correct option is B.Business dependency assessment is a process of determining the dependency of a business on certain information resources. It is not an outcome or a product of effective security management. Strategic alignment is an outcome of effective security governance. Where there is good governance, there is likely to be strategic alignment. Risk assessment is not an outcome of effective security governance; it is a process. Planning comes at the beginning of effective security governance, and is not an outcome but a process.
The correct option is C.Policies must reflect management intent and direction. Policies should be changed only when management determines that there is a need to address new legal and regulatory requirements. Regulatory requirements typically are better addressed with standards and procedures than with high-level policies. Standards set security baselines, not policies. Employees not abiding by policies is a compliance and enforcement issue rather than a reason to change the policies.
The correct option is B.Information security projects should be assessed on the basis of the positive impact that they will have on the organization. Time,cost and resource issues should be subordinate to this objective.
The correct option is A.Privacy policies must contain notifications and opt-out provisions; they are a high-level management statement of direction. They do not necessarily address warranties, liabilities or geographic coverage, which are more specific.
The correct option is C.Policies define security goals and expectations for an organization. These are defined in more specific terms within standards and procedures. Standards establish what is to be done while procedures describe how it is to be done. Guidelines provide recommendations that business management must consider in developing practices within their areas of control; as such, they are discretionary.
The correct option is A.Defining and ratifying the classification structure of information assets is the primary role of the information security manager in the process of information classification within the organization. Choice B is incorrect because the final responsibility for deciding the classification levels rests with the data owners. Choice C is incorrect because the job of securing information assets is the responsibility of the data custodians. Choice D may be a role of an information security manager but is not the key role in this context.
The correct option is C.As high-level statements, information security policies should be straightforward and easy to understand. They are high-level and,therefore, do not address network vulnerabilities directly or the process for communicating a violation. As policies, they should provide a uniform message to all groups and user roles.
The correct option is C.Senior management should have final approval of all organization policies, including information technology (IT) security policies.Business unit managers should have input into IT policies, but they should not have authority to give final approval. The CISO would more than likely be the primary author of the policies and therefore is not the appropriate individual to approve the policies. The CIO should provide input into the IT security policies, but should not have the authority to give final approval.
The correct option is B.As a subsidiary, the local entity will have to comply with the local law for data collected in the country. Senior management will be accountable for this legal compliance. The policy, being internal, cannot supersede the local law. Additionally, with local regulations differing from the country in which the organization is headquartered, it is improbable that a groupwide policy will address all the local legal requirements. In case of data collected locally (and potentially transferred to a country with a different data privacy regulation), the local law applies, not the law applicable to the head office. The data privacy laws are country-specific.
The correct option is A.The goal of information security is to protect the organization's information assets. International security standards are situational, depending upon the company and its business. Adhering to corporate privacy standards is important, but those standards must be appropriate and adequate and are not the most important factor to consider. All employees are responsible for information security,but it is not the most important factor to consider.
The correct option is A.Information security exists to help the organization meet its objectives. The information security manager should identify information security needs based on organizational needs. Organizational or business risk should always take precedence. Involving each organizational unit in information security and establishing metrics to measure success will be viewed favorably by senior management after the overall organizational risk is identified.
The correct option is B.Although senior management should support and sponsor a risk analysis, the know-how and the management of the project will be with the security department. Quality management and the legal department will contribute to the project.
The correct option is C.Policies are high-level statements of objectives. Because of their high-level nature and statement of broad operating principles, they are less subject to periodic change. Security standards and procedures as well as guidelines must be revised and updated based on the impact of technology changes.
The correct option is B.Routine administration of all aspects of security is delegated, but top management must retain overall responsibility. The security officer supports and implements information security for senior management. The end user does not perform categorization. The custodian supports and implements information security measures as directed.
The correct option is D.A security program must be aligned to business objectives. Senior management will support the security program only when it helps achieve the business objectives. The security program will always try to reduce the risk; however, it has to be balanced against the cost and impact to the business. Reduction of risk alone cannot justify the security program from the senior management perspective. The security program monitors emerging threats; however, the threat environment itself cannot determine the mitigating activities. There are many ways to deal with threats. Senior management is primarily interested in how the security program can mitigate threats while supporting the ultimate business goals. While benchmarking against other enterprises can provide an approach, it does not necessarily mean that the approach is the best one for the enterprise.
The correct option is C. Information security governance models are highly dependent on the overall organizational structure. Some of the elements that impact organizational structure are multiple missions and functions across the organization, leadership and lines of communication. The number of employees and the distance between physical locations have less impact on information security governance models, since well-defined process, technology and people components intermingle to provide the proper governance. The organizational budget has less impact once good governance models are in place; in fact, good governance helps in the effective management of the organization's budget.
The correct option is D.Business owners are ultimately responsible for their applications. The legal department, compliance officer and information security manager all can advise, but do not have final responsibility.
The correct option is D.Operational risk assessment, financial crime metrics and capacity management can complement the information security framework, but only business continuity management is a key component.
The correct option is C.The most important characteristic of good security policies is that they be aligned with organizational goals. Failure to align policies and goals significantly reduces the value provided by the policies. Stating expectations of IT management omits addressing overall organizational goals and objectives. Stating only one general security mandate is the next best option since policies should be clear; otherwise, policies may be confusing and difficult to understand. Governing the creation of procedures and guidelines is most relevant to information security standards.
The correct option is C.The board of directors and senior management are ultimately responsible for all that happens in the organization. The others are not individually liable for failures of security in the organization.
The correct option is D.Planning is the key to effective implementation of information security governance. Consultation, negotiation and facilitation come after planning.
The correct option is A.The most fundamental evaluation criterion for the appropriate selection of any security technology is its ability to reduce or eliminate business risks. Investments in security technologies should be based on their overall value in relation to their cost; the value can be demonstrated in terms of risk mitigation. This should take precedence over whether they use new or exotic technologies or how they are evaluated in trade publications.
The correct option is D.All four choices are valid steps in the process of implementing a formal information security program; however, defining high-level business security requirements should precede the others because the implementation should be based on those security requirements.
The correct option is A.Justifying and obtaining approval for a security initiative is more likely to be successful if it is supported by a well-developed business case. Choices B, C and D will each be just one typical element of a business case.
The correct option is D.Without a clear definition of the issues to be addressed, the other components of a business case cannot be determined.
The correct option is C.A security program enabling business activities would be most helpful to achieve alignment between information security and organization objectives. All of the other choices are part of the security program and would not individually and directly help as much as the security program.
The correct option is D.The most effective approach to gain support from management for the information security program is to persuasively demonstrate how the program will help achieve the desired outcomes. This can be done by providing specific business support in areas of operational predictability and regulatory compliance, and by improving resource allocation and meaningful performance metrics. While benchmarking similar organizations can be helpful in some instances to make a case for management support of the information security program, benchmarking by itself is not sufficient. Requirements for the exercise of due care should also be covered by the desired outcomes.
The correct option is C.The first step in implementing information security governance is to define the security strategy based on which security baselines are determined. Adopting suitable security standards, performing risk assessment and implementing security policy are steps that follow the definition of the security strategy.
The correct option is D.Security responsibilities of data custodians within an organization include ensuring that appropriate security measures are maintained and are consistent with organizational policy. Executive management holds overall responsibility for protection of the information assets. Data owners determine data classification levels for information assets so that appropriate levels of controls can be provided to meet the requirements relating to confidentiality, integrity and availability. Implementation of information security in products is the responsibility of the IT developers.
The correct option is A.If current security practices and procedures already meet the new regulation, then there is no need to implement new controls.
The correct option is B.Information security should ensure that business objectives are met given available technical capabilities, resource constraints and compliance requirements. It is not practical or feasible to eliminate all risks. Regulatory requirements must be considered, but are inputs to the business considerations. The board of directors does not define information security, but provides direction in support of the business goals and objectives.
The correct option is B.Privacy protection is necessary to ensure that the receiving party has the appropriate level of protection of personal data. Change management primarily protects only the information, not the privacy of the individuals. Consent is one of the protections that is frequently, but not always, required. Encryption is a method of achieving the actual control, but controls over the devices may not ensure adequate privacy protection and, therefore, is a partial answer.
The correct option is C.The data owner is usually a member of management who has due care responsibility and will be held responsible for any neglect.The data custodian is responsible for preserving and protecting data confidentiality, integrity and availability based on the classification requirements assigned by the data owner.
The correct option is B. The best strategy for risk management is to reduce risk to an acceptable level, as this takes into account the organization's appetite for risk and the fact that it would not be practical to eliminate all risk. Ensuring business growth via risk management is not practical. No risk, external or internal, can be completely removed; every organization carries residual risk, which must be acceptable to the business.
The correct option is D.A gap analysis is most useful in addressing the differences between the current state and an ideal future state. It is not appropriate for evaluating other choices listed.
The correct option is C. The level of impact that would occur as a result of exposure is the KEY element to consider for data classification.
The correct option is A.Verifying the decision with the business units is the correct answer because it is not the IT function's responsibility to decide whether a new application modifies business processes. Other choices are not appropriate and may delay the project.
The correct option is B. Common failure modes in existing controls must be addressed by upgrading or adding controls that fail under different conditions. This allows for layered protection against the total risks to an organization.
The correct option is B.A security gap analysis can identify weakness of the security controls in place. Ideally this will cover not only the weakness of the technology but also of the process in place.
The correct option is C. Control effectiveness is achieved when a control works as intended. A facility for notification does not determine control effectiveness. Reliability is also not an indication of control strength; weak controls can be highly reliable, even if they are ineffective controls.
The correct option is A. Transferring risk is a risk response technique that reduces the impact, but does not eliminate responsibility. Please note that certain risks cannot be transferred.
The correct option is C.Of the list, wireless poses the maximum risk if not configured properly. Even if misconfigured, the other choices generally pose less risk.
The correct option is A. The starting point for identifying asset owners is to study and analyze all business processes. Business processes provide the input, the business logic of the process and the corresponding output. They also reveal accountabilities and controls, and hence provide the most accurate picture for assigning the ownership of information assets. The other choices do not provide information on asset owners.
The correct option is D.A network vulnerability assessment intends to identify known weaknesses and vulnerabilities in software and tools based on common misconfigurations and missing updates.
The correct option is B.The business impact analysis (BIA) determines the possible outcome of a risk and is essential to determine the appropriate cost of control. The risk analysis process provides comprehensive data, but does not determine definite resources to mitigate the risk as does the BIA. The risk management balanced scorecard is a measuring tool for goal attainment. A risk-based audit program is used to focus the audit process on the areas of greatest importance to the organization.
The correct option is A.In this scenario, presentation of the cost-benefit analysis can be used to justify investment in a specific control measure.
The correct option is A. When residual risk is high, the only practical solution is to purchase insurance. Purchasing insurance is also known as risk transference.
The correct option is D.The criticality and sensitivity of information assets can be determined by the impact of the probability of the threats exploiting weaknesses in the asset. This also takes into account the asset value.
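The quantitative view behind this explanation (asset value, threat probability and impact) is commonly expressed as annual loss expectancy (ALE), where SLE = asset value × exposure factor and ALE = SLE × annualized rate of occurrence. A minimal sketch; the asset value, exposure factor and occurrence rate below are illustrative assumptions, not figures from the question bank:

```python
# ALE sketch: SLE = asset value * exposure factor, ALE = SLE * ARO.
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Expected loss from a single occurrence of the threat."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss: SLE scaled by occurrences per year (ARO)."""
    return sle * aro

# Illustrative figures: a $200,000 asset, 25% of its value lost per incident,
# and the threat expected to occur once every two years (ARO = 0.5).
sle = single_loss_expectancy(asset_value=200_000, exposure_factor=0.25)
ale = annual_loss_expectancy(sle, aro=0.5)
print(sle, ale)  # 50000.0 25000.0
```

Comparing the ALE against the annual cost of a proposed control is the usual quantitative basis for the cost-benefit decisions discussed in the surrounding explanations.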
The correct option is A.The main purpose of a BIA is to measure the downtime tolerance, associated resources and criticality of a business function.
The correct option is B. A long, strong random password is difficult to crack because of the long time a brute force attack needs to break the account. Access to the account cannot be disallowed, as it is business critical. Isolating the system in a DMZ or enabling logging will not prevent a brute force attack.
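The "long time to break" claim can be made concrete by computing the keyspace a brute force attack must search. A rough sketch; the 94-symbol printable alphabet and the guess rate are illustrative assumptions:

```python
import math

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of candidate passwords a brute force attack must cover."""
    return alphabet_size ** length

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a uniformly random password, in bits."""
    return length * math.log2(alphabet_size)

# 8 vs. 12 random characters from a 94-symbol printable ASCII set:
print(round(entropy_bits(94, 8), 1))   # 52.4 bits
print(round(entropy_bits(94, 12), 1))  # 78.7 bits

# Average time to crack at an assumed 10^10 guesses per second:
avg_seconds = keyspace(94, 12) / 2 / 1e10
print(avg_seconds / (3600 * 24 * 365))  # on the order of a million years
```

Each extra random character multiplies the attacker's work by the alphabet size, which is why length dominates complexity rules in practice.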
The correct option is C. The security manager would be most concerned with whether residual risk would be reduced by a greater amount than the cost of adding additional controls. The other choices, although relevant, would not be as important.
The correct option is A. The information security manager cannot make an informed decision about the request without first understanding the business requirements of the developer portal. Performing a vulnerability assessment of the developer portal and installing an intrusion detection system (IDS) are best practices but are subsequent to understanding the requirements. Obtaining a signed nondisclosure agreement will not take care of the risks inherent in the organization's application.
The correct option is D. Performing risk assessments at each stage of the SDLC is the most cost-effective method because it ensures that vulnerabilities are discovered as soon as possible. A risk assessment performed before system development will not find vulnerabilities introduced during development. Performing a risk assessment at system deployment is generally not cost-effective and can miss a key risk. If performed prior to business case development, a risk assessment will not discover risk introduced during the SDLC.
The correct option is D. An inventory of risk is the output of a risk assessment. All other choices contribute to, or are subsequent to, the assessment.
The correct option is A. The authentication process is broken because, although the session is valid, the application should reauthenticate when the input parameters are changed. The review provided valid employee IDs, and valid input was processed. The problem here is the lack of reauthentication when the input parameters are changed. Cross-site scripting is not the problem in this case since the attack is not transferred to any other user's browser to obtain the output. Structured query language (SQL) injection is not a problem since input is provided as a valid employee ID and no SQL queries are injected to provide the output.
The correct option is C. Segregation of duties is primarily used to discourage fraudulent activities. Employee monitoring and enhanced compliance are secondary considerations. Supervision is not reduced, but facilitated.
The correct option is A. Inconsistent compliance can result from different factors but is often due to a lack of awareness. Assessing the risk of noncompliance will provide the information needed to determine the most effective remediation. If awareness is adequate, training may not help, and increased compliance enforcement may be indicated. A report may be warranted but will not directly address the issue, which is normally part of the information security manager's responsibilities. Increased enforcement is not warranted if the problem is a lack of effective communication about security policy.
The correct option is A. Risk assessment first requires one to identify the business assets that need to be protected before identifying the threats. The next step is to establish whether those threats represent business risk by identifying the likelihood and effect of occurrence, followed by assessing the vulnerabilities that may affect the security of the asset. This process establishes the control objectives against which key controls can be evaluated.
The correct option is D. A risk assessment should be conducted to determine new risks introduced by the outsourced processes. The other choices may or may not be identified as mitigating measures based on the risks determined by the assessment.
The correct option is C. Information classification must be conducted first. Only after data are determined critical to the organization can a data leak prevention program be properly implemented. User awareness training can be helpful, but only after data have been classified. Network intrusion detection is a technology that can support the data leak prevention program, but it is not a primary consideration.
The correct option is A. The best protection is to identify the vulnerable systems and apply compensating controls until a patch is installed. Minimizing the use of vulnerable systems and communicating the vulnerability to system users could be compensating controls but would not be the first course of action. Choice D does not make clear the timing of when the intrusion detection system (IDS) signature list would be updated to accommodate the vulnerabilities that are not yet publicly known. Therefore, this approach should not always be considered as the first option.
The correct option is A. Risk should be addressed as early in the development of a new application system as possible. In some cases, identified risks could be mitigated through design changes. If needed changes are not identified until design has already commenced, such changes become more expensive. For this reason, beginning risk assessment during the design, development or testing phases is not the best solution.
The correct option is D. The likelihood of a threat encountering a susceptible vulnerability can only be estimated statistically. Exposure, impact and vulnerability can be determined within a range.
The correct option is A. By assessing threats and vulnerabilities during the change management process, changes in risk can be determined and a risk assessment can be updated incrementally. This keeps the risk assessment current without the need to complete a full reassessment. Information security should have notification processes in place to ensure awareness of changes that might impact security. Policies should rarely require adjustment in response to changes in threats or vulnerabilities. While including an effective threat and vulnerability assessment may assist in maintaining compliance, it is not the primary reason for the change management process.
The correct option is C. Risk management activities are more likely to be executed as part of a business process. The scope of information security risk management encompasses more than IT processes. Communication alone does not necessarily correlate with successful execution of the process.
A Collection of Holiday Recipes ~ Food for the 12 Days!
Food plays an enormous role in our Christmas celebrations. We don’t just eat special foods on Christmas Eve or Christmas Day; we spread them out from December 24th through January 6th. We don’t always eat the same foods on the same day each year, but we do return to the same dishes, or, as with this year, a variation of them.
Every year we have Buffalo Style Chicken Wings. I think these are the best chicken wings in the universe and the only ones that come close are the ones from Buffalo Wild Wings. Seriously, these are good. Crispy, flavorful, and you make it as hot or mild as you like. We started out making these for Christmas Eve and then switched it over to New Year’s Eve. They’re great for Super Bowl Parties, if you do those kinds of things. They’re also wonderful as a special movie night “junk food” dinner meal.
I had to modify the 8 Minute Cheesecake this year to make it work for my husband and me. Same basic idea *but* I used heavy cream and whipped it myself in place of cool whip. I substituted Truvia for sugar in the recipe, omitted the crust altogether, and scooped the cheesecake filling into pretty 1/2 pint jelly jars. This cheesecake is wonderful and lovely and amazing and has become a Christmas Dinner Dessert tradition.
At least once during the 12 Days of Christmas we eat Cream Cheese and Bacon Scrambled Eggs. We don’t usually make them on Christmas Day as we don’t eat breakfast before church. These scrambled eggs are perfect though for a special brunch. I think I might serve them at Pascha.
This year we had French Toast Strata as part of our come-home-from-Christmas-Liturgy-starving-and-need-to-eat food. It’s a little different from our Breakfast Casserole: Strata. There are lots of variations that one can do but none are great choices for low-carb. It does make a great holiday brunch. I adore bacon but in the strata I prefer sausage. I think the sausage gives it more flavor.
We also enjoy stuffed mushrooms and fruit soup during the holiday season. I don’t have my stuffed mushroom recipe up on my blog so no link yet. These mushrooms are stuffed with cream cheese and bacon. You can omit the bacon and still have yummy vegetarian stuffed mushrooms. In addition to making a great appetizer for Christmas dinner these are wonderful to bring as “finger foods” to any holiday potluck gathering. Hmm, I think I need to get this recipe up on the blog. I’ve mentioned them before. I haven’t actually made the mushrooms in years though we eat them at both Thanksgiving and Christmas. The girls have taken over preparing the mushrooms. I like having teenagers who like to cook.
Another holiday food tradition, Fruit Soup, comes from my husband’s family. It has become our tradition to bring it to the Christmas Eve Lenten Potluck Supper at church. Nope, I don’t have the recipe on the blog. I can tell you that it has lots of different dried fruits such as apple rings, raisins, prunes, apricots and whatever else looks/sounds good tossed into a pot with water and allowed to soak overnight. Toss in some cinnamon sticks and cook for hours until you have this unappetizing, thick, fruity, sauce-like thing that passes as soup. It doesn’t look good but it does taste amazing. It is one of Supergirl’s most favorite things to eat. You can add sugar to sweeten it but we think it is plenty sweet on its own. We do not add any orange or lemon zest though you can certainly add that as well. We found that if we did add orange zest then we wanted to add sugar, and we prefer not to add sugar.
Do you have special foods that you only serve during the holidays?
Celebrating the 12 days of Christmas is more of a Western tradition than an Eastern tradition; however, many American Orthodox Christians view the time from December 25th through (and including) January 5th as the 12 Days of Christmas. January 6th is the Feast of Theophany.
In the West, January 6th is Epiphany. Epiphany commemorates the visit of the Magi, or Wise Men, and some view it as the 12th day of Christmas (Dec 26 would then be day 1). Theophany is the celebration of the Lord’s baptism in the Jordan river and the revealing of Jesus as the Son of God.
Liturgically the Orthodox church does not celebrate 12 Days. I think I read it is 6 Days. I think the “Leave-taking of the Nativity” is Dec 29 or Dec 30? January 1st is celebrated as the Circumcision of our Lord and it is also St. Basil’s Day.
For my family we love that celebrating the Incarnation of God can be stretched out. Without the Incarnation, there could be no redemption but that is a meaty topic for another time and place.
We begin our Christmas celebrations on December 24, the Eve of the Feast. Prior to Christmas Eve we are in preparation mode. We prepare ourselves spiritually through fasting (as well as prayer and almsgiving) for 40 days (November 15th through and including December 24th).
Our Parish serves the Royal Hours in the morning on the 24th. My priest jokes that they should not be called Royal Hours but rather Imperial Hours because the Emperor attended these services. There are three sets in the liturgical year, each served on the Eve of the Feast: Christmas, Theophany, and Pascha. Well, eve isn’t exact for Pascha. I believe they are served on Holy Friday.
Later, in the early evening, Vesperal Divine Liturgy of St. Basil is served. This service is beautiful and there are 8 Old Testament Readings. These are the readings that foretell the coming of the Messiah. You know, like the passage in Isaiah.
Immediately after the service we gather for a Lenten potluck. In the Slavic tradition there is a 12 course Holy Supper. We do have a quiet candlelight meal but it is not the traditional symbolic meal. You will find some regular items though such as pierogies, and fruit soup (because it’s what my husband makes).
We come home after church and sometimes open a present or two and do any finalizing for Christmas morning. Since we fast before liturgy we don’t have breakfast on Christmas morning. I have yet to serve the same thing on Christmas for the last 6 years! Maybe I’ll settle into a tradition when I have grandchildren.
If the girls are up early enough on Christmas morning we check stockings. This year? Nope. We had to race to get to church on time! The service is long. Matins starts at 9am and I think we pulled out of the parking lot to head home around Noon. Of course some time is spent after the service greeting one another and in some cases exchanging gifts. Now the feasting can begin!
I always serve beef on Christmas. Beef is the meat we don’t eat during the Nativity Fast. I also do not serve fish until after January 6th. There are certain foods that I only make a couple of times a year. This keeps them special for holidays.
We don’t stop celebrating on Dec 25th though. Honeybear takes time off from work and we spend time together as a family playing board games, eating special foods, playing Wii, watching DVDs, visiting with friends, or enjoying special outings.
God taking on human flesh: God becoming man so that man can become like God is a big deal. Humanity and all creation are redeemed because a Virgin gave birth. It is worth celebrating for more than one day.
One of the perks or benefits of being a member of the Schoolhouse Review Crew is a Yearly Membership to SchoolhouseTeachers.com. This incredible members only resource is filled with lesson plans from preschool to high school and covering everything from basic core subjects like grammar, spelling and math to history and geography. You’ll also find electives like foreign language, music (both theory and instrument!), filmmaking and more!
New members of the Schoolhouse Review Crew are currently writing reviews of SchoolhouseTeachers.com. Since I am not writing a review, I’m just going to drop this link right here so that you can go read *real* reviews of the Yearly Membership.
Read Reviews of SchoolhouseTeachers.com from the Schoolhouse Review Crew!
Since I love having a membership to SchoolhouseTeachers.com I just had to take a moment and tell you about this special deal that is happening right now!
My favorite lesson plan sections are the lapbooking and literature units. I recently discovered the Imperial Russia course and I am contemplating how to add that to Turtlegirl’s current studies since she has been studying the Russian language.
My favorite non-lesson-plan section is the planners. I blogged about the planners during my 5 Days of Back to Homeschool series this past fall.
So much good stuff over at SchoolhouseTeachers.com! Check it out!
I’m not sure, and I’m too lazy to look it up, but I think this is the first edition that I’ve done for the month of December.
1. One of the things I always look forward to in late November/early December is a delivery of a box of citrus fruit from Texas. Every year my husband’s parents order oranges, grapefruits and tangelos from Texas. These are the *best* I have ever tasted. I’ve already eaten two tangelos today!
2. We started making some dietary changes for health reasons in June 2013 when Honeybear had a mild heart attack. Now, we’ve got another unexpected health diagnosis and have to radically change the way we eat. Honeybear got blood drawn for cardiology follow-up and ended up with a diagnosis of Type 2 Diabetes. We’ve been watching *my* sugars for quite some time but his diagnosis seemed to come out of nowhere. The doctor joked that now we have his and her glucose meters.
3. We had a lovely but quiet Thanksgiving here with just the six of us. We played the 50th Anniversary Doctor Who edition of Trivial Pursuit. It can be played with just the cards and a special die or it can be played with the original TP game board. The colors match the more recent original TP board but work just as well with the old one. We gave the girls the Doctor Who Trivia game for Christmas last year. We ordered through Amazon but it was fulfilled and shipped from the U.K. How cool is that?!
4. We made and canned applesauce last week. I’m working on a separate blog post about our canning adventures. Look for it soon!
5. December 6th is Saint Nicholas day. Yes, there really is a Santa Claus, though the American chubby fellow bears little resemblance to the historical Bishop of Myra. Our church always has special activities on the first Sunday of December to celebrate the feast day for St. Nick. We gather around to hear the story of the Old Man and his three beautiful daughters who want to marry the handsome young men but have no money for a dowry. After the story, we get a visit from Saint Nicholas. Supergirl loves getting her little package from St. Nicholas.
Mama ~ Our God Is a Baby!
Sometimes children say the most profound and beautiful things. This afternoon Supergirl and I were hanging out with my kindle. She was chattering away and I was flipping through recipes. Suddenly she was quiet. She looked up at the Nativity Scene and said, “Jesus is a baby.” I nodded and replied yes, this is the time of year when we remember Jesus as a baby.
She pondered a moment longer and declared “Mama, our God is a baby!” That stopped me. I mean it isn’t new information but I had never quite thought about it in that light. Never took the time to think how amazing it is that God was a baby. Most of the time when we hear “God” we think of God the Father. Jesus is the Son. But Supergirl is spot on. Jesus is God so that means that our God is a baby.
So often I wonder if she understands what she hears at church or the Sunday school lessons or my inconsistent bible lessons and then she has a day like today. She does get it. It’s a paradox how she gets it on both a very basic and yet profoundly deep level at the same time. Jesus is God. Jesus is our Savior. Jesus is a baby. God is a baby who came to be our Savior. So very basic. So very deep.
And then? It’s back to life as I know it. “Mama, can I play Frozen now?” “No.” “But, mama, I want to play.” And later she says “You’re my best Mama, ever!” Why? Because I do spelling with her. She’s a precious reminder of everything good.
Last week I decided to make my Random 5 on Fridays in November random thankful thoughts because November is the month for gratitude posts and status updates.
1. Earlier this week we had beautiful sunshine. I am grateful for the sunshine! Today as I type this it is cold, gray, and rainy which is much more typical November weather for us. Living here in the rainy part of the PNW, I’ve come to appreciate the sunshine and not take it for granted like I did in other places I’ve lived.
2. This week on November 19th, Honeybear and I had our 20th Wedding Anniversary. I am grateful that we have made it this far. We’ve had more than our fair share of stress, problems, and disagreements. We’ve weathered military deployments, a child with serious health issues, and his heart attack in June of 2013. Somehow we’ve managed to stay together.
3. I am thankful for flowers! Our wedding flowers were Fire And Ice Roses which are bi-color red and white roses. When Honeybear buys me flowers he always tries to get bi-color or two-tone roses in remembrance of our wedding flowers. This picture doesn’t show it well but these roses are two-toned.
4. I am thankful for my daughters who took it upon themselves to decorate for our anniversary. They wanted to make it special. Turtlegirl drew a Happy Anniversary Message on our white board using fall colors. The colors didn’t turn out so well in the photo but I am sharing anyway because I love the bride and groom on top of the wedding cake. Tailorbear put up cranberry streamers and set up our little dinner table for two. Burgundy, Wine or Cranberry were the colors we aimed for with the wedding.
5. This week marked the 12th anniversary of my dad’s passing. I am grateful that he and I had a much better relationship the last few years of his life than we did when I was growing up. It’s like he was two different men. I miss him very much especially around the holidays.
It’s official. The 2014 Schoolhouse Review Crew Year has ended. I’m sad to see the year end but I’m going to enjoy a little break. I’m excited though because I’ve been invited back to the Crew for 2015!
End of the crew year means Blue Ribbon Awards. Last week I shared my family’s favorite products and vendors from the 2014 Crew Year. Today I’m highlighting some of the Schoolhouse Review Crew Favorites. The votes were tallied and these are some of the official winners! You’ll want to visit the Schoolhouse Review Crew Blog to see the full list of winners because I am just going to highlight a few of them here.
Our wonderful Assistant Director, Debra, put the voting form together for the crew to fill out. She organized all the vendors into 27 different categories! Did you know that the crew reviewed more than 115 products from over 50 vendors? I think there were something like 58 different review slots. That’s a lot of awesome products. I counted up the products the girls and I used and reviewed: 31. Some were just for mom, like the Motivated Moms e-planner and the Trident case that I am still using with my Kindle. Some were just for Supergirl, like Digital Heroes and Heroines of the past: American History. Tailorbear discovered that she loved learning science with Fascinating Biology and Turtlegirl finally got to study French with Middlebury Interactive Languages. We had a little something for everyone!
So now to a sampling of the Schoolhouse Review Crew Favorites! Note: links to products are either to my review or to the page where you can find schoolhouse crew reviews if my family did not review that vendor.
To see all of the categories and their winners click on the banner below!
I did not think of this last week but today it occurred to me that I could combine Random 5 on Friday with Thanksgiving-type thoughts. I see many friends on Facebook listing something they are grateful for every day during the month of November. I like that idea but I can’t keep up, so I’m going to write up things that I am grateful for on the remaining Fridays in November.
1. I am thankful for sunshine even if it is so cold outside. ~ The arctic air seems to have arrived here and temps are in the low to mid-40s during the day and in the 20s in the evenings. Yes, I know that’s not as cold as many other areas of the country, but that is cold for us. I do love the sunshine though, which is very rare for November.
3. I am thankful that we got the gas fireplace fixed. ~ I am so thankful that we have a fireplace. It didn’t really work last winter but it has been wonderful to turn the fireplace on, starting on some colder days in October. The girls have been doing some of their history work and playing games on the floor in the living room in front of the fire.
4. I am thankful for home canned tomatoes! ~ My friends, along with two of my daughters, and I canned boxes and boxes of tomatoes. I love having quarts and pints of tomatoes. This week I used tomatoes in the crockpot to make a yummy sausage pasta sauce and today I used a quart of tomatoes for the soup we’ll eat tonight.
5. I am thankful for the Schoolhouse Review Crew. ~ I am thankful for the crew, for the great products I get to review, for the awesome Crew Leadership, and especially for the friends I have made. Friends who have made me laugh, prayed for me, encouraged me and who might even read my blog sometimes!
What are you thankful for this week? In addition to my regular Random 5 link up, I’m also linking up to This Day Has Great Potential, a blog from another wonderful Crew friend!
Looking Back at the 2014 Schoolhouse Review Crew Year!
Yesterday I posted and linked up my final review for the 2014 Crew Year. How awesome that we got to end the year with a game! So, today I’m looking back over the year and remembering what amazing products my family got to use.
Favorite Literature Program: Oh my, it was so hard to choose just one to vote for in the official Blue Ribbon Awards, but since this is just a list of my family favorites and it’s my blog, I’m going to go ahead and have two favorites!
It’s Shakespeare and it’s Lightning Literature so of course we loved it! Boobear is a much bigger fan of Shakespeare than either Turtlegirl or Tailorbear but I was surprised by how much they enjoyed using and reviewing Lightning Literature Shakespeare Comedies & Sonnets.
I love love love love Progeny Press study guides. We have reviewed them multiple times over the several years I have been on the crew. It should not be a surprise that we could not decide between two wonderful and very different literature curricula. This was the first time that Supergirl used a Progeny Press guide and she loved meeting with mom to have mom read the book, The Courage of Sarah Noble, aloud. I think she loved the extra mommy daughter time and I know she loved building a log cabin out of pretzels and frosting! I loved that my high schoolers could use the Hunger Games Study Guide.
Favorite Math and Language Arts Supplements: This actually covers 4 or 5 of the official categories but I’m combining them because my two favorite products can both be used as Math and LA supplements!
In 2014 I got to review IXL for the third time! This year they have a new math app for grades k-6 that works for the newer Kindle Fires. You’ll have to read the review to see all the other neat stuff but Supergirl adores being able to practice math on Mama’s Kindle. The LA portion of the website can be used on the Kindle using the browser!
The Learning Palette makes my list of favorites! I love the wrap-ups but I adore, and Supergirl loves and adores, the Learning Palette. It is fun. It can be done independently and it covers so many topics in both Math and Language Arts. When she has finally mastered all of the first grade material, I am going to have to search for sales on the second grade packages!
A Few More Favorites: Rather than try to group them by category I just present a few more of our favorite review crew products from 2014!
I love Math-U-See so I was sure I would love Spelling You See. I was not wrong. I was afraid that it would be too much for Supergirl. We’ve had to take many long breaks from using it but she’s making far more progress than I thought she could and that means this has to be one of my favorite crew products for 2014!
I will confess that I was not interested in Roman Roads or the Old Western Culture program they offer when I saw it on the upcoming vendor list. I really did not want to have to write a review. I showed the website to Turtlegirl and let her check out the sample lessons and she begged for it. What could I say? This was her favorite crew product. And you know what? It ranks up there as one of my favorites too and I am so grateful that we had the opportunity to review it!
The last favorite that I share today is the My Student Logbook. In my review I stated that I would be voting for it as The Product I Didn’t Know I Needed. This simple tool has helped us so much. She keeps track of chores and school and little things like “put on glasses” since she often forgets to wear her glasses, which seems to contribute to her headaches. I will have to purchase one of these for her next year!
So that’s it for now! I reviewed so many wonderful and amazing products but these stand out as some of our favorites. Next week I’ll share about the official winners!
What do “Closet Urge,” “Snow Napkin,” “Cheese Man,” “Danger Photo,” “Rainbow Future,” and “Fish Socks” have in common? These were all items that my family, posing as snake oil salesmen, attempted to sell to customers such as Pregnant Lady, Prom Date, or Runaway, among others, while playing the game Snake Oil.
This hilarious game does not take long to learn, has no small parts to keep track of or lose, and encourages creativity. Snake Oil is produced by Out of the Box Games and is intended for 3-10 players ages 10+.
Setting up the game is easy! There are two types of cards: Customer Cards and Word Cards. Customer Cards are blue on one side and green on the other. Each side lists a character such as Rock Star, Priest, or Dictator. These cards go in the center. The Word Cards are divided into 4 decks in the plastic tray. These are placed around the Customer Cards, in easy reach of the players. These Word Cards have nouns: goggles, belt, closet, toilet, knife, cup, socks, pony, pillow, freedom and many many more. There are 28 Customer Cards, which give you 56 different Customers. There are 336 Word Cards. Each player draws a hand of 6 Word Cards. Select a player to be the Customer for the first round and you are ready to play!
The Customer draws a Customer Card, makes a selection from either the blue or green side and becomes the persona on the card. In other words, I set aside my own personality and try to think like a Grave Robber, Spy, or Couch Potato. What products would make life easier if I was a Cheerleader or an Alien?
The other players select two cards from their hand to combine to create a product. Now the fun begins. Players become Snake Oil Salesmen, think of used car salesmen or door-to-door solicitors, and try to pitch their product. If the Customer chooses your product, you get to collect the Customer Card! The game ends when each player has been the Customer once. The player with the most Customer Cards wins! A full game takes 20-30 minutes to complete.
My husband remarked, “the more you play the game, the better your ability to pitch things.” He really seemed to love the game. He has a twisted sense of humor and this came through with his sales pitches. He pitched the now infamous “Cloud Belt” perfect for a Prom Date who wants to be light on her feet while she dances with her partner. He also said “using that used car salesman type of pitch is what makes the game so much fun!” It was a bit scary to see how much fun he had with that!
I do want to add a note about ages. The game says ages 10+ but some of the Word Cards and Customer Cards may be either too mature or too complex for the under 12 crowd. I would list the ages as 12+ but in a family setting it may work with 10+. We only played with ages 15 to adult.
The game includes suggestions for variations and we came up with our own. Instead of taking one customer card, you take two. We made it a rule that you had to use one blue and one green to combine. Some of our crazy combos included Soldier Santa, Fashion Model Prison Guard, and Alien Teenager. This gave us more variety and allowed us to get even more creative with our product pitches.
We have all decided this game will be played on Thanksgiving. We plan to play the suggested variation “The More the Merrier” which allows each player to have more than six cards. The directions suggest “eight or even ten cards per hand instead of the standard six.” More Word Cards mean more choices and that means more laughter and fun!
Visit the Schoolhouse Review Crew blog to read what others have to say about Out of the Box Games.
Marcy from Ben and Me has retired from hosting Blogging Through The Alphabet. She did five rounds. Five! So my friend Kristi from A Potter’s Hand Academy has taken up the baton and is now hosting a new round of Blogging Through The Alphabet. She started last week. I missed the memo. But that’s ok. I’ll just start now and I’ll start from the beginning with the Letter A. The first thing that came to mind was Advent or in the Orthodox world, Nativity Fast or little lent. Since my post for the letter B is about the books I want to read during Advent, I’m going to combine the two letters.
For many Orthodox Christians in the USA, the Nativity Fast will start on Saturday, November 15. Well, actually, for all Orthodox Christians the fast begins on November 15, but for some that is November 15 on the Revised Julian (New) Calendar, which currently follows the civil Gregorian calendar we are all familiar with. The other calendar is the Julian (Old) Calendar, which currently differs from the Gregorian calendar by 13 days. So when the secular wall calendar says November 28, that’s November 15 on the Old calendar. But I digress.
The word fasting tends to conjure up images of no food or very little food. But for Orthodox Christians food is only one part. In addition to fasting from food we are to abstain from or fast from sin. Well certainly we should abstain from sin all the time but during fasting seasons we put a greater emphasis on seeking to put to death the desires of the flesh and seek a deeper, more meaningful spiritual relationship with God. We can do this by praying more. Praying and fasting go hand in hand. For that reason many folks will withdraw from online forums or social media such as Facebook. Some will spend less time watching television.
When I read that I immediately thought of the TV show Criminal Minds, which Honeybear and I have been watching on Netflix streaming. We both think that giving up Criminal Minds during the Nativity Fast is in our best interest. I plan to abstain from most of our regular streaming. It has been our habit to watch more documentaries and educational-type programming during fasting seasons, rather than the sitcoms and dramas that we usually watch.
Mere Christianity by C.S. Lewis ~ I started this ages and ages ago. It’s time to finish it.
Christ the Eternal Tao by Hieromonk Damascene ~ I borrowed this from a friend but I haven’t started it yet.
The Man Who Was Thursday by GK Chesterton ~ The same friend who loaned me the above book is hosting an Advent Read Along. I have my copy on my Kindle ready to go and I am looking forward to reading it.
Evergreen by Susan May Warren ~ This one is a Christmas/winter novella, so I might wait until after Christmas and read it during the 12 Days. I first have to finish book 3 in the series, When I Fall in Love, which I am currently reading.
Considering how long Time and Chance and Christ the Eternal Tao are, my list may be overly ambitious for 40 days if I were trying to finish all of them.
In addition to reading the above books, I do also intend to make my daily Bible readings a priority and use the 40 days to make Bible reading a daily habit like making my bed or combing my hair.
I’m looking forward to Advent and reading more Books.
There are less than a handful of crew products that I have begged to review multiple times. IXL is one of the few. I love this program. Supergirl loves this program. I begged for the opportunity to review it for the third time. Yes, it’s even better now! IXL very generously provided a one year subscription for up to five students for both IXL Math and IXL Language Arts. So grab a cup of your favorite beverage and let’s chat about what’s new with IXL.
Students practice the skill and earn points towards mastery. Students must score 100 points to achieve mastery. They earn points for correct answers and lose points for incorrect answers. Questions within the skill for a specific grade will get harder and when you reach 91-93 points you enter the “Challenge Zone.” Supergirl always gets excited about the challenge zone. These questions are the hardest for the skill set.
When the student has mastered the skill, a special screen comes up showing them the medals they have won. In the Math section of the program, students earn visual rewards on a game board. These are highly motivating. Rewards are given for how long you’ve been practicing or for mastering a certain number of skills. At the PreK level the final reward was for achieving all the rewards! This game board reward system is not available for the LA portion of the website but Turtlegirl, Supergirl, and I all agree that adding one to the LA side would make it even more fun.
When I first reviewed IXL Math, it only went from PreK through Algebra. At the beginning of this current review period I got an email from IXL with the exciting news that IXL has added more upper level math programs including Algebra 2 Skills, Geometry Skills and Precalculus skills. I signed myself up as a student so I could play too!
Another added feature: Apps! I was so thrilled to find out that IXL Math for grades PreK-6th is available for 3rd generation and newer Kindle Fires! I have a Kindle Fire HDX 7” and the app is gorgeous! If you do not have a compatible device, no worries, the program works well in iPad and Kindle browsers too!
This program, as I stated above, is intended as a way to practice skills. It is not intended to be a stand-alone math program. However, I have found that for Supergirl I can use it as her primary math program. The program does have some instruction pop up when you get a problem wrong. For many students this is not enough to teach a new concept, but it works well as a reminder of the concept. Some students grasp math very easily, and at the younger levels some parents may be comfortable using this as their primary math program.
Both Tailorbear and Turtlegirl are using IXL strictly for practice, review and reinforcement of upper level math. Tailorbear needed extra practice with word problems. I love that IXL offers practice in word problems in Algebra! She needed help with specific types of word problems, like percent, and she was able to use IXL to practice and review for her Algebra test.
Turtlegirl is currently studying geometry. She is using IXL when she needs extra practice with geometry but she is also using it to keep algebra skills from slipping away. I think she’ll be using IXL to freshen up her algebra skills after finishing geometry and before moving on to algebra 2.
This aspect did not exist in 2012 when I first reviewed IXL. When we reviewed IXL again in 2013, they had added the LA portion. Though Supergirl functions most closely at a first grade level, she was ready for some of the skills at the 2nd grade level. We did quickly reach a point where the material was beyond her so we set it aside and just used the math portion. I was thrilled that she had made developmental progress and we were able to work through more of the 2nd grade LA skills this time. I was pleasantly surprised by how much more she could do this time.
Like the Math program, the Language Arts section is intended for practice to achieve mastery and is not intended as a full independent curriculum. I used it as a way to introduce concepts to Supergirl and then to practice those concepts.
In theory both Tailorbear and Turtlegirl are past the skills offered in IXL Language Arts, but Turtlegirl played around with some of the 8th grade skills to refresh her memory of grammar concepts such as when to use commas. In our experience even high school students may benefit from the practice of the 8th grade language skills.
Let’s chat a moment about the settings. In the Settings section you’ll find several neat little options. To find the settings you must be logged in to your parent account. Choose Profile & Settings and scroll down to Settings. Profiles are first and Settings is at the bottom.
You can choose to enable audio for upper grades (grades 2-5). The audio is always on for grades PreK through 1st. This audio option, though, only applies to the Math portion of the website. With the audio option students can click a sound icon and have the questions as well as the answers read aloud. This option is NEW and it is huge! Previously audio was only for grades PreK through 1st. I am not sure when this option was added, but it is new to me this review period and one that I mentioned wanting in both of my previous reviews. We are hoping this option will be added for the LA portion, but this is not as critical for us as Supergirl is not working independently in that subject the way she does in math.
This is an excellent tool for practice. I like how they offer virtual prizes to motivate the student. I wish they offered virtual prizes for the grammar/language part like they do for math. The grammar/language arts part was much more difficult than the math part. I recommend this as a way for students to practice concepts they have already mastered.
I like IXL. I like to play on it on the Kindle. I like math. I like learning about time. I love getting rewards like the medals and then I get to uncover a prize. I love to count. I also like doing the language arts with my mom at her computer. I wish they had prizes but I do like earning the medals.
Price: Pricing starts at just $9.95 per month for one child for one subject. Annual subscriptions for one child for one subject start at $79. Each additional child is just $2 a month or $20 per year. A month-to-month subscription for both Math and LA for 2 children would be $17.95 and for a yearly subscription, $149.
You can try IXL for free. Guest users will not have tracking or reports and there is a daily practice limit of 20 questions.
For more information visit the Schoolhouse Review Crew blog to read what others have to say about IXL. You can also read my IXL review from 2012 and my IXL review from 2013. We love this program!
All information is correct and accurate as of the date of this review. You can read my other Schoolhouse Review Crew Reviews to find more great products.
I’ve missed a few weeks so these random thoughts are not all from this past week.
1. This has been a rough week. A very dear older gentleman fell asleep in the Lord on Monday. This amazing man adopted my whole family. He was like a grandfather to all of us. He came to see all of BooBear’s recitals her freshman year of college. He gifted us with an amazing evening at the ballet a few Christmases ago. He was always smiling and always had a story to share. He lived a rich and full life and I am grateful that my family got to know him. Memory Eternal, John! You will be missed!
2. BooBear is in choir this year and we attended her first choir concert a couple of weeks ago. She also participated as a pianist and a vocalist in a special Evening of Schubert recital. She did not do a vocal solo but did sing as part of the group. This picture is from her choir concert. What a beautiful woman she has grown to be!
3. Last week she had another recital. This one was the keyboard students’ recital, but by keyboard they mean “instruments that use a keyboard.” All but one of the performers were pianists. The recital ended with the one organist performing. The organ, by the way, is incredible! A beautiful, beautiful piece of craftsmanship, and it sounds wonderful as well. It was custom made and the design is copyrighted, so photos were not allowed and I have none to share.
4. Last week was Halloween and we had friends over. We made (ok ok Honeybear made) homemade pizza. We tried a pizza combo that we had not done in years. BBQ Chicken. Use BBQ sauce in place of pizza sauce, topped with cubes of cooked chicken and then a combo of mozzarella cheese and cheddar cheese. It was a huge success. I think we’re going to be having BBQ Chicken pizza again this weekend. Maybe I’ll take pictures and do a recipe post.
5. I did not get pictures of my girls in costume. BooBear did not dress up. She did not trick or treat. She’s such a grown-up now. Turtlegirl was a waiter. She even used her daddy’s bowtie. Oh, how I wish I had a picture! She had a pretty white ruffled blouse with a black vest, black dress pants, and an apron. Tailorbear opted to do the rock star thing again. I had fun spray painting her hair with a pink dye. She also experimented with wild eye makeup and used the guitar from the Wii Band Hero as a prop. Supergirl has been watching Once Upon a Time with us and decided she had to be a blue fairy.
I am not a science-oriented person. I did not take a single science course in high school. I avoided biology completely. I did my best to avoid chemistry as well, choosing to meet my college science-with-a-lab requirement by taking geology classes. Dissecting a rock is much easier than dissecting a frog! As a home school mom I’ve learned to love science, but even though I’ve had two high school students complete biology in my home school, I still prefer that someone else teach it.
Tailorbear loves biology. The options I used with BooBear and Turtlegirl require more independent study and so are not as suitable for Tailorbear’s temperament. We were excited to see Fascinating Biology offered from Fascinating Education.
Dr. Margulies is the founder and creator of Fascinating Education. He is a neurologist, the author of three educational textbooks, and currently holds the title of clinical assistant professor at two universities. As a neurologist he understands the brain: how it works, how we learn, how attention is focused, and how motivation affects learning. Since Tailorbear struggles with focusing her attention, Dr. Margulies’ Biology program would seem tailor made for her.
Fascinating Biology uses an audio/visual approach to present the material. Students listen to a lecture while watching slides. It is designed to cover what you would find in a typical high school course. There are 19 Lessons and each lesson includes the audio lecture with video, a glossary tab, a script and a test. I love that each lesson has a transcript of the lecture that you can download. The average length of each presentation is about 45 minutes.
The tests can be taken online and automatically graded or the parent can use a special password to access printable tests along with answer keys and administer a paper test. The test questions are multiple choice. Once the student has completed the test, her score, along with the minimum passing score are revealed. Students have the option of retaking the test, or printing the test score.
If you would like more detailed information on how the program works, Dr. Margulies has created a detailed tutorial that demonstrates step by step how to log in to the course and use the tools. The tutorial is presented the same way as course lessons.
This program does not have a lab option for biology. I think including labs would take this closer to being a college prep level high school biology program. Also, this biology course is written from a biochemistry point of view and the suggested sequence for Fascinating Education curricula is Chemistry, Biology, and then Physics. Tailorbear has completed an integrated Chemistry and Physics course so I felt she had the prerequisite background in chemistry to start with biology. For students who want to take biology but have not had chemistry Fascinating Education does offer a subset of Chemistry for a $20 add on.
"I really like the format of the lectures. It gave me a visual and auditory way to learn the material. And it was very helpful. I really like the test format too. This gave me a way to understand what I missed. Overall I love this biology program! I really feel like I am learning science!"
Like Tailorbear, I like the format of the lectures. I also love that a transcript of each lecture is available so that the student can review the material. I do wish, however, that the program included labs. I’m also concerned about whether there is enough material, and whether it is presented in enough depth, for a full high school credit of Biology. This program is wonderful as a spine and I love that my daughter loves it. We’ll be adding in supplemental material and labs so that I can be sure she has covered enough to earn a full high school biology credit to put on her transcript.
NOTE: You can view a SAMPLE lesson and read the FAQ to get even more information about Fascinating Education and Fascinating Biology.
Visit the Schoolhouse Review Crew blog to read what others have to say about Fascinating Education.
October is winding down. Next Friday is the 31st!
1. The Schoolhouse Review Crew is taking applications for the 2015 crew year. Hurry though because the applications are scheduled to close on Monday!
2. As the 2014 Crew year winds down I realized that I only have a few more reviews to write. This is bittersweet. Bitter because it’s the end of the year and sweet because I’ve been on the crew long enough to really appreciate the break. (Yes, I’ve submitted my re-application for the 2015 year. I’ll be ready to tackle reviews again after the holidays, but I am looking forward to the break between crew runs!) My very last review of the 2014 year is a game from Out of the Box Games called Snake Oil. Warning: this game can be dangerous to your health; we have laughed so hard that it hurt!!! Here is a laughing Turtlegirl. I was laughing so hard I almost couldn’t get the picture! She doesn’t like the picture, but I think it truly captures how hard we have laughed. I have her permission to share it.
3. I’m challenging myself to NOT turn on the furnace for as long as I can. I’m striving to make it to at least November 1st. We have a gas fireplace and I have broken down and turned that on twice this week, but for the most part we’re pulling on socks and sweaters and using fleece throws. Oh, and drinking hot beverages. I’d rather be cold than hot, but sometimes the cold here just cuts through all the way to the bone. It’s a wet cold, which is so different than the dry cold I grew up with in MN. But I like bundling up and getting all cozy.
4. We’ve had a woodpecker visiting our backyard for several days. He’s beautiful, but drives the cats crazy. He likes to get up on the roof right by the back door. Sometimes he’ll fly from the roof to the tree. Turtlegirl got this great picture of him.
5. Tailorbear loves the rain. She also loves the wind so she is especially happy when she can go outside and experience both rain and wind. She doesn’t so much like having her picture taken but we got one anyway.
Couldn’t make it to one of our in-house GPS classes? No worries! Now you can join Touratech Adventure Expert and Touratech Rally Ride Coordinator Eric Archambault for this in-depth GPS skills class. – FULL TRANSCRIPT IS BELOW!
All right, this is gonna be Navigating With Tracks: Tips & Tricks. And this is based on a class we’ve been offering since 2016 at the Touratech USA showroom and at the Touratech Rallies. So we’re gonna break this video into a few parts, kind of break it into a little bit easier-to-digest segments, and then at the end we’re gonna have some GPS-unit-specific things that we’ll cover so you don’t have to watch four different GPS units when you only own one of them.
And with this, we’re gonna start with what tracks are, we’re gonna kind of go through a little bit of terminology, how to dial in a track from the Backcountry Discovery Routes, or if you get an email from Touratech, you’re going to one of the rallies, how do you get it from your computer into your GPS unit. And then how to get it to show on your GPS unit so you can successfully navigate with it.
And with that we’re gonna really focus on kind of the current and the older Garmin units. The Garmin Zumo 660 and 665, the Garmin Zumo 590 and 595, the Zumo 350, 395 and 396, as well as the BMW Nav 4, Nav 5 and Nav 6 units, which are all based on various generations of the Zumos we just mentioned. As well as the Garmin Montana 600 series. That would include the BMW Motorrad Adventure unit, which is just a repackaged Garmin Montana. And we’re gonna touch base a little bit on the Garmin GPSMAP 60 series. It was a 60, then a 62, 64, now the 66. And a lot of the other outdoor units, if they’re touchscreen, they’re probably gonna follow the Garmin Montana format.
The older push button style units with the smaller screens, those are probably going to follow the GPS Map 60 Series format. Some things might be slightly different, but Garmin doesn’t have too many different platforms, they package it differently for different applications. But those are gonna be there. If you’re using another unit that’s not a Garmin unit, be it a Trail Tech or a TomTom and even some of the cellphone apps, they can handle tracks. A lot of the things we’re going to discuss, especially in setting up your unit, your menus might be different but the functions and the topics we’re talking about do apply.
So there might be some differences but the core principles are gonna be universal. So you can get a lot from this class, even if you’re not using a Garmin but we’re really focusing on the Garmins because in North America, Garmin just has the biggest footprint in the automotive, motorcycle and recreational space. So that’s what all of us here at Touratech USA run. We use Montanas, we kind of run all of them and most of us lean toward the Montana for various reasons but we’re really familiar with any of them. So if you watch this and you have questions later, definitely give us a call, we’ll get you pointed in the right direction or get you talking to the right guy that’s familiar with the unit you are using.
And before we go too deep, I always like to start any class with introducing myself. My name is Eric Archambault, I’ve been with Touratech since 2012, I am the showroom manager at Touratech as well as the East Coast Rally coordinator so I spend a lot of time besides selling stuff and doing all the stuff you expect somebody at Touratech to do. I spend a lot of time riding both for Touratech and then for myself. I still am a huge enthusiast and love going out and doing the stuff. And then setting up the events, both the East Coast Rally and I normally help out a lot on the West Coast Rally doing all the pre running and for years I led rides. I do get to use a GPS in a lot of capacities.
So I really think I’m a bit of an expert there, at least very, very familiar. Expert is a big phrase but I’m very comfortable using these units and have used them in a lot of different aspects. Where if you’re creating a ride or just doing something fun for yourself or you’re setting something up for hundreds of people. The things you do might be different but the fundamentals will be the same. And well, I started at Touratech in 2012 and that’s when really my GPS use, I started using it a lot and really grew there.
I am rooted in using paper maps. I was in the United States Marine Corps and I did the land nav where you walk around the woods with a compass and a map, and I did pretty well with that. I understand map reading and that type of backcountry navigation. And I’ve also done timekeeping enduros, where you have a roll chart that you’re following, and you’ve gotta keep a certain pace and hit things at certain times, and if you’re too early, you’re in trouble; if you’re too late you get penalized as well. I still do some of those events every year. I do some European-style time regularity rallies where it’s the same sort of thing but on public roads, and there’s generally a rolling rule: if you get a speeding ticket, you’re automatically disqualified.
So if you make navigation errors, you can try to make it up by speeding, but you’re probably gonna get zapped and get disqualified anyway. So I enjoy a lot of the older types of navigation. I enjoy doing that [reconnoitering 00:05:12] type of stuff. But when it comes down to it these days, the GPS and navigating with tracks is a really fun way of navigating. It’s a lot easier, and it’s not in a competition format. With a road book rally, you’re doing math in your head. You’re figuring average speeds and time and distance, you’re looking out for turns, and you’re always kind of glued to your odometer. Navigating with a track, especially for something like a Backcountry Discovery Route, is a lot more fun because you’re just following a path. You don’t have to worry about whether this road goes somewhere or turns out to be no good; you’re just out there exploring and having fun. Somebody else did the legwork, and doing the legwork’s fun too.
But it’s great to go out there and just go chase a track that either you’ve written before or somebody told you, “Hey, this is awesome.” And you’re going and doing it, or something like the backcountry route where you have a great non-profit that’s out there producing these rides and keeping these riding areas open for you.
So that’s just a little bit about me. I always like to start any GPS-based class by talking about paper maps. It’s gone away a lot, but years ago especially there was always the old timer that would stand up and say, “I used paper maps and a compass, and if my compass stopped working that means the core of the Earth isn’t spinning anymore and we’ve got bigger problems.” And I still carry a compass with me and I still carry paper maps. It’s really not uncommon, especially if I’m out there trying to create a ride, be it helping the Backcountry Discovery Route folks or setting up one of the Touratech Rallies, that I’m out there running my GPS, I can see where I am, but there’s no replacing that three-foot piece of paper that is gonna show you that really big picture.
So I bounce between them, and especially when I’m finishing up and polishing up a rally, I normally have multiple paper maps on my desk, because they all show you something different. Be it a motorcycle-focused map, or just a USGS map, or maybe a regional map that’s just for recreational use. And you can get a lot of information from all those. And then there are your map reading skills: if you’re just getting into it and you haven’t really used paper maps, and you start running topographic maps on your GPS and learning to read the contour lines and the symbols, then you grab a paper map and you’re gonna find a lot of it’s the same.
So, they do go hand in hand. If I’m out there pre-running for an event, I have the paper maps, I’m running the GPS, I probably have a satellite transponder on me in case of emergencies, and I do have my little Rite in the Rain notebook because a lot of times it’s just quicker to write down a quick note. I’ll drop a waypoint and say, “Hey, two miles after waypoint 005, really rutted and muddy. Probably don’t run beginner riders down it; recheck before the event, just in case everything dries up.” So all this technology works together. Use all the tools you can get for the job, and a GPS is a great tool that’s made backcountry navigation a lot easier in recent years. But don’t leave the paper maps at home; they still have a place and they still give you a big picture that’s hard to replicate on a four or five inch GPS screen.
So, that’s just my piece on that, I just like to clear that up that a lot of times people have this mentality, even if they maybe have no experience but they hear people on both sides arguing, one’s better than the other, right? They both have their strengths and weaknesses and together they’re better than either of them alone. So there’s that.
We’re gonna go into terms of vocabulary. When it comes to GPS and backcountry navigation, there is a lot of terms, especially when you get into the electronic side with the GPSs, that sound very similar that are completely different things. So, I always like to start with getting everyone on the same sheet of music, on when I say track or route or waypoint, what I mean and then you’re not trying to make your GPS do a thing that it can’t do because you’re using the wrong term. And not that I’m always right, but in this class I’m gonna make sure I’m using the correct terms.
So, we’re gonna just dive in, it is only a couple of them and it’s gonna clear up a lot of clarification. It’s gonna get you kind of in the right mindset as well for when you’re dealing with a GPS and what it can do and what it actually is.
First term is gonna be waypoint. And we’ve got three images here. The top center one. That one’s going to be a screenshot off of a Garmin Montana. You have a little campsite that’s marked 006. That was just the waypoint number that was given. The red line is a track that’s been shown on the map. You have a contour line up in the right hand corner, and then down below you have a little bit of a blue track that is just a active track that happened to be on the GPS unit that I grabbed this screenshot from.
And the big thing with a waypoint, the bottom two pictures are gonna be from Garmin Basecamp which is gonna be your computer side software when you’re dealing with Garmin GPS units. You have the GPS itself and then you have your computer interface. It lets you manage and build a lot of stuff. And that’s gonna show you more details. And that bottom right hand one, it says position and it gives you a coordinate. It’s north 47 degrees, 19.132 minutes, west 118 degrees by 40.692 minutes.
And the big thing that I’d like to bring up on this, that position is fixed on the globe. If you think of the globe as just a white ball, you have your latitude and longitude points. Some are up down, some are left right. Those positions don’t care about what map you’re using. It is just a position on that sphere that is the Earth. And you obviously want to have the best map, the most detailed map for your area. But those positions don’t care about the map. In the case of this waypoint, regardless to what map you have, that waypoint’s gonna be shown on top of it.
So it’s not an address that well, if my map doesn’t show the streets, it’s not gonna show. It is showing the point on the Earth and it doesn’t care about the maps. You could have no maps, it’s always going to be shown as long as it’s not hidden. But by default, it’s gonna be shown, and you can see too, in the Garmin Basecamp views down there in the bottom, you can add notes and other information that you’d like. Or you can change the symbol, if it was a gas station, put a gas pump or something like that.
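Garmin’s default degrees-and-decimal-minutes display (like the N 47° 19.132′, W 118° 40.692′ position above) is easy to convert to plain decimal degrees, which many mapping tools and file formats expect. Here is a minimal, hypothetical sketch of that conversion; the function name is just for illustration and is not part of any Garmin software:

```python
def dm_to_decimal(degrees, minutes, hemisphere):
    """Convert degrees + decimal minutes to signed decimal degrees.

    South and West hemispheres become negative values, which is the
    usual convention in decimal-degree coordinates.
    """
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value

# The waypoint position from the Basecamp screenshot described above:
lat = dm_to_decimal(47, 19.132, "N")    # about 47.318867
lon = dm_to_decimal(118, 40.692, "W")   # -118.6782
```

Either way of writing the coordinate points at the same fixed spot on the globe; only the notation changes.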
But, big takeaway, a waypoint is just a fixed position on the globe. And waypoints have a lot of uses besides marking a campsite. I use them a lot if I’m gonna go ride dirt bikes and I’m running the GPS on the bike. When I park my truck and unload the bike, I mark a waypoint there. And then I will have that fixed point that I can look at on the GPS, and depending on my maps, I can navigate back to it either by saying, “Just take me there,” or I can kind of zoom out, see where I am, see the waypoint, and then just kind of follow trails until I find my way there.
And the same thing if you’re out there in the woods and you have a mishap. Either you break your bike or you break yourself or if your buddy breaks themself or their bike and you have to leave either a bike in the woods or leave a person in the woods, you can mark a waypoint. Maybe you don’t have a satellite transponder but you’re gonna go and get cell coverage and you’re gonna get first responders and you can tell them, “Hey, my buddy has a broken leg, he can’t walk. He seems okay, he’s not in shock. He’s at this location.” You’re gonna read off that location and any first responder is gonna know what to do with that information.
That format is universal, it doesn’t matter what GPS they have or anything like that. That format is universal. You could talk to somebody in another country, this is universal. So it’s really great that way where you can mark locations, I’ve even done it, I’ve been riding around the woods setting up a rally and I find this great camp spot and it’s just hidden out in the middle of nowhere. I’m seven ways from nowhere and I find it and I can mark the waypoint and go, “You know what? Next week,” and I’m gonna drive out here. I know it was hard to get out here, but I think I can find a way out and ride back out there the next week and camp out for the weekend and just have solitude for a couple of days.
So waypoints are a great way of marking a location that you’re gonna wanna use again. Be it finding your truck, finding your camp, finding your buddy that’s broken that you’re getting help for. It’s really a great tool.
So track is going to be the next one. And this is where we’re gonna get into that fuzzier terminology a little bit. Up top we have a picture, again a screenshot from a Garmin Montana, and the red line is a track. This is somewhere in central Pennsylvania where we do the East Coast Rally. I’m sure that was one of the rides we did one of the years, and there’s waypoints and stuff laid over it. But that red line is a track.
And down below is an example and it’s not of that track, so don’t geek out and say those position points aren’t in Pennsylvania, they’re not, they’re in Washington. But it was just a good example I had on hand. And it really highlights what a track is. And a track is a series of positions that are connected by a line. And again, the positions are sort of like a waypoint where you see the latitude and longitude coordinates but they’re not related to the map. They’re related to the world. They’re related to that sphere that we’re on.
It’s just a path you either intend to go, or somebody went before and now you intend to go. Or you have confidence in it because you mapped it out using satellite imagery or other maps on the computer. So the tough part with tracks is that only a very small part of the navigating world uses them. They’re used for backcountry travel, be it overlanding or backpacking, all that type of stuff, and especially for off-road motorcycle use. I think maritime and some aviation still use them, because if you’re on a boat or an airplane you’re not using routes either. And it’s a way to make paths that you can repeat and navigate with.
And that’s why when you call Garmin, you sometimes can have a hard time, even calling a lot of the companies that make GPSs, because so few people use this. Their tech support people are used to, “Well, my car GPS doesn’t know where the mall is, why doesn’t it know where the mall is?” For whatever reason, this is kind of a specialized navigation, and it’s great for what it is because it doesn’t care about your maps. And I could give it to you and you could have maps that are five years old, you could just have a different version of the map, it doesn’t matter. It’s gonna show you that same line.
We’ve just taken a Sharpie and drawn it out on the map, hitting all these position points on the globe. So that’s why we use these for Backcountry Discovery Routes and for Touratech Rallies. Because everyone’s gonna be running a little bit different map sets. Some of them might be older. You might be running a Topo map, or maybe a third party map or something. And you’re gonna see the same thing. You’re gonna follow the same line. And it’s not gonna give you any prompts, you’re just overlaying this track on top of your map that you intend to follow, and as long as you stay on that line, it’s gonna go where that line’s supposed to go, where it was intended to go.
And then getting a little bit more into tracks, the neat thing that the GPS units do, is you have your current track. And that is gonna be what we’re all used to seeing on the screen. You kind of get a pale blue line behind you and maybe if you cross over, you’ll see it again, but that’s where you’ve been. And a lot of times we call it a breadcrumb trail. It’s where you’ve been. And that’s gonna be your current track, and this is a screen grab again, off of a Garmin Montana.
And when your current track is full, whether you have it set to reset every night at midnight or after a certain number of points, or after a certain file size, it’s gonna become an archived track. On some of the Garmin Zumo units, instead of having current track and archived track, you’re gonna have current track and there’s gonna be a bunch of saved ones. And the format’s gonna be similar to the ones below the archived track here, where it’s a date and a time. In this case, those are saved tracks in the case of this Garmin Montana, and those were just written before and then dragged back into the unit as a saved track. Normally there’d be a name. You’d name it something that’d make sense, like that bottom one, 02 Stone Valley Loop, which was one of the Touratech Valley East rides a couple years ago, so it was given a name that would make sense.
But your GPS, unless you disable the feature or turn it off, it’s always gonna be recording where you go. So when you plug it into your computer, if you go and you just wander off and you find this really rad road, you can go back and look at it in Garmin Basecamp and go, “I’m gonna save that section because you know, I think I’m gonna go back there sometime and ride that again,” or “I’m gonna show my buddy because I was out in the mountains and all of a sudden the dirt road turned into this beautiful five miles just paved go-kart track in the middle of the woods.” And you’d be surprised how many times that happens, too.
And it’s really great when it happens and it’s really great when you can go, “You know what? I’m going back there next week and I’m just gonna rip that for 20 minutes.” So, that’s the rest of tracks: it’s also something that’s being saved behind you, which would be your current track or an archived track, and then if you bring them in, it’s gonna be the saved ones, because the GPS unit says, “Hey, you saved this and now you brought it in,” not, “We’re recording it,” or “We’re archiving it on the unit itself.” So that’s the difference between those. The screen will vary a little bit on some of the Zumos, but it’s the same effect.
Route and trip and then even sometimes adventure. Garmin likes to, especially on the Zumo units, update their terminology. They’re trying to keep it sounding a lot like a smartphone, and a lot of it’s trying to make it more comfortable for people, where people do complain if you’re using 20 year old terms. Those of us that have been using the GPS units, the [kind of 00:19:33] units, for 20 years kind of hate that they changed. The same thing has a new name and it’s still the same thing, but it’s six of one, half a dozen of the other, I guess.
So, a route or a trip or any of those things, that’s not a track. Pretty much if the unit’s trying to make it not a track, you just wanna say no, if you’re doing backcountry navigation. The GPS is going to take a handful of points along that track and show you how smart it is by using all of the usual auto routing features, be it avoiding tolls or highways. You want dirt roads, you don’t want dirt roads. You want twisty roads, and it’s gonna try to put as many of the things you want and as few of the things you don’t want along that path.
And these are great if you’re just saying, “Hey, I’m in Seattle, I wanna get to Spokane, I wanna do it the fastest way possible,” boom, or you’re leaving Seattle and you wanna hit a store somewhere along the way and you’re going to Idaho Falls. It’ll figure it out and you really don’t care about the path as long as you hit a couple of points. But if you’re doing backcountry navigation, routes and trips and any of that stuff, it’s not gonna work well because it’s gonna want to snap you to roads, it’s gonna look at what your routing is and generally you’re gonna default your routing to get you somewhere the fastest.
Because if you’re punching in an address, that’s what you want. So while these have a place, they don’t have a place in backcountry navigation, and that’s really the focus of this: doing Backcountry Discovery Routes, or going to the Touratech Rally and riding some rad rides. So I would just avoid all of this for that type of stuff. It just exists, and if you wanna go down that hole, explore it.
But if you’re trying to do off road stuff or backcountry stuff, routes and trips, and especially if you have a track that you know you wanna ride that path, if you make it a route or a trip, it’s almost guaranteed to make it something that you don’t want. To the point that at the Touratech Rally, people will come back and they wanna go ride a ride right outside where we hold the event. They’ll say, “Take this,” they’ll convert it to a trip, and they’ll ride the highway around the forest and get back to right where they started, because the GPS is like, “Why would you go down all these dirt roads? You started here, you finished here, you went that way. So we’ll take you that way and we’ll bring you back here.” And that’s what happens.
So, if you have a track and you want to ride a track, you need to show the track and follow the track. And we’ll get into how to do that a little bit more, but route and trip, which the GPS, especially the Zumos, will push you towards, are not applicable for backcountry travel, so just say no.
So now we’re gonna get into one of the things that is kind of the harder part, and one we get a lot of questions about: loading tracks into a GPS. And if you have a Garmin GPS, to load the track into it, you’re gonna need to have Garmin Basecamp on your computer. That is a free download from Garmin, and we have a screenshot of the software right here, of Garmin Basecamp. This is where you can manage your different maps, and you can save and archive data. You see on the left hand side of the screen that on my computer there are folders with events that I’ve done and different Backcountry Discovery Routes. And even last week and last month, things have been archived in there. So you can really save a lot of information and you can create tracks in here. It’s a really great piece of software.
A lot of people complain it’s a little bit old, but the nice thing is you learn it and it’s not gonna change next week, or in six months if you haven’t used it because it’s winter and you’re not doing motorcycle things, you’re getting ready for new trips and you fire it up and it has an update and nothing’s where you left it. I appreciate that this software has not changed much in a long time.
So the first thing we’re gonna do, and I’m gonna assume, if you don’t have Garmin Basecamp and you’re going to try to go through this, just pause this video, go to Garmin, download Garmin Basecamp, get it installed, and then you can follow right along with this. So we’re gonna start with the desktop view. And I’ve already downloaded the California Backcountry Discovery Route. It’s the third from the top on the right hand side, it’s a GPX file. That’s the way this type of GPS information is saved, and for Garmins it’s kind of a universal format.
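As a side note for the technically curious (you don’t need any of this to use Basecamp): a GPX file is just a plain XML text file, which is part of why it travels so well between devices and programs. Here’s a minimal sketch, assuming the standard GPX 1.1 schema and using made-up coordinates, of reading such a file with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal GPX 1.1 document: one track ("trk") containing one segment
# ("trkseg") of track points ("trkpt"), each pinned to a latitude and
# longitude on the globe. The coordinates below are invented for
# illustration only.
gpx_text = """<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="example" xmlns="http://www.topografix.com/GPX/1/1">
  <trk>
    <name>Example Track</name>
    <trkseg>
      <trkpt lat="47.6062" lon="-122.3321"/>
      <trkpt lat="47.6100" lon="-122.3400"/>
    </trkseg>
  </trk>
</gpx>"""

root = ET.fromstring(gpx_text)
ns = {"gpx": "http://www.topografix.com/GPX/1/1"}

# Pull out the track name and each point's coordinates.
name = root.find("gpx:trk/gpx:name", ns).text
points = [(float(p.get("lat")), float(p.get("lon")))
          for p in root.findall(".//gpx:trkpt", ns)]

print(name)    # Example Track
print(points)  # [(47.6062, -122.3321), (47.61, -122.34)]
```

Basecamp, the BDR downloads, and the GPS units themselves are all reading and writing variations of this same structure, which is why the same track displays the same way no matter whose maps you’re running.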
And so I have that on my desktop. If you download it, sometimes it’ll go to your downloads folder, just know where you put it. And in some cases, some computers still will download it as a zip file, and if you’re gonna try to bring it into Basecamp, you need to make sure you unzip that file. And Basecamp won’t tell you that you need to unzip it, it’ll bring in the zip file that doesn’t work and then that’s frustrating. So if you download it and it’s in a zip file, unzip it before you try to bring it in.
In this case, I just put it right on my desktop so it’d be easy for this example. And then we’ll go back into Basecamp here, and we’re gonna go to file. Before we actually even go to file: on my computer, on the left hand toolbar, in the middle, you have my computer, and that’s everything that’s saved on there. And I have my collection selected, and that’s kind of the top menu where you can save things. That’s gonna be your broadest one. And if I was clicked into another one of those sub-menus and I imported something, it’d go there.
Then you can see that when you go to file and import into, it says my collection. So if you forgot to do that, you’ll get that visual cue that, “Well, I don’t think I wanna put the California BDR into my Washington Backcountry Discovery Route because that wouldn’t make sense.” So just make sure you’re at my collection when you go to import. And a lot of times, even if I’m doing more generic stuff, I kind of like to go just to my collection and then I put it where I want it. Maybe I’m just not super confident with computers sometimes, so I don’t mind the extra step just to know that it’s not gonna go into the ether or be lost. I’d rather go to my computer and then drag it if I need to drag it somewhere. But a lot of times, if there’s a Backcountry Discovery Route or the Touratech Rally, you’re gonna want to have it in its own little page or folder anyways.
So from there, you’re gonna go down, import into my collection. And here we’ve kind of scrolled down, we got into where we found where the file was on my desktop. And I am using a Mac here, on a PC, it’s very, very similar. I bounce between the two platforms and I can’t tell you the difference. So I know there are differences but it’s gonna be very similar. So we select the file and then we’re gonna hit import. And then from there you can see now in my collection, I have the California Backcountry Discovery Route and with the BDRs, it is nice that they date the files. So you know when you downloaded it, and if they do an update, they change the date. So if you have one, and I think I have some saved that are many, many years old.
You want to navigate with the most current one. But it’s a nice thing they’ve started doing, where they put the date in there. And below my computer on the left hand side, you can see where our California Backcountry Discovery Route is highlighted. Down below, you see there’s a list of waypoints and things, and we’ll zoom in a little bit here. And yeah, again, you’ll see all that and you can start to see that there’s fuel stops, and then you can even see some of the tracks. So everything that’s in that tab on my computer, in your collection there, you can see all the contents of it, the waypoints and then the tracks as well.
And then from here, you’re gonna want to import it into your GPS unit itself, because having it on your computer is great, but your computer doesn’t have a GPS and you can’t ride with your computer on your handlebars very well anyways. So, actually, we’ll back up a sec here. You can see that I have my Garmin plugged in. Up at the very top is my Garmin devices, and you can see I have a Garmin Montana 600 hooked up. And that “no name” is the micro SD card that’s in the unit itself.
And when you go to import it, yeah, there’s also some maps, but when you go to import it, it’ll give you the option to select if you wanna put it in the micro SD card or into the internal memory of the GPS. So we’ll go back to importing it in there. You have to be highlighted on the California BDR, and you go to transfer, and it’s gonna say “Send California BDR Sandberg 2018 to device.” And it’s nice that whenever you’re in these transfer modes, it’s not just saying, “Do you wanna send this file,” it’s giving you the file name. So you get that reminder and just make sure, “Yep, that’s what I wanna do.” I’ve definitely caught myself where I clicked on something else or fat fingered it a little bit and go, “No, no, no,” back out and click the right thing.
So you select that and it’s going to give you this thing here, it’s gonna ask you, “Okay, select which device,” and if I had multiple GPSs hooked up to my computer it would list all of them. When I go down, it gives me the two tabs, the Montana and no name. And these tracks for the Backcountry Discovery Routes, where it’s just tracks and waypoints, they’re pretty small files. So you could put it in either one; if you don’t have an SD card, just put it in the internal drive. It’s down to personal preference, a lot of times I use the internal memory. But if you’re gonna do a trip and do a whole bunch of them, maybe you’d put them all in the card so you knew they were in the card.
When you go to find them on the GPS unit, it looks at it all the same. It doesn’t differentiate between the card and the GPS unit’s internal memory. And below that too, select what you’d like to send. And tracks and waypoints, if you’re talking Touratech Rally or Backcountry Discovery Routes, that’s all you’re gonna have. If you were using routes and you wanted to transfer a route, you’d wanna click that and by default, I believe that is normally checked. I normally uncheck it and I just don’t use routes at all for anything.
I always use tracks or waypoints. So select the applicable ones. The bottom two, the custom map and imagery, that’s something they’ve added recently. Again, those come with a default setting; I’ve never needed to do that, and I can’t really speak to whether it’s gonna cause problems. I’d say if you downloaded the Backcountry Discovery Route or the Touratech Rally or the KTM Rally, something like that, you just need to bring tracks and waypoints over, so just have those selected. Then you hit send, and it’ll give you a little spinny wheel for a second. And then it’s not a bad idea to go into the unit itself. Where I put it in the unit’s internal memory, if you click on that in my devices, you can see below it says Montana 600, the unit number, and you have this list of tracks, and higher up there are the waypoints, so you can verify what you put in the unit is actually there.
And it’s a really good practice. Rarely are there problems, but just from loading hundreds of GPSs at the Touratech Rally, when people bring them and I need to load them, I normally need to check that before somebody goes out for a ride and they’re in a rush. And if you’re gonna get on an airplane and go somewhere and you just loaded it real quick, I’d even fire up the GPS and look and make sure that the tracks are there.
So rarely are there problems, but it’s just good practice to double check, especially if you won’t be able to fix it in a couple of minutes later. So, after I import something, I normally like to just look; I normally just do it here in Garmin Basecamp and check that it’s there. So now you have the tracks loaded in your GPS. In the next video we’re gonna cover how to actually dial in your unit for it, and then we’ll have videos specific for all the common GPSs on how to show them on your GPS itself.
So that’s the first of a few videos here on Navigating With Tracks Tips & Tricks. So we’re all talking the same language with the terminology, we’ve got the tracks from the internet into your computer, into your GPS. And then the next one, we’re gonna start dialing in how to display the tracks most effectively on your GPS and then we will have the unit specific ones on actually how to show it and change color and things like that on your unit as well. So hopefully this was good, hope you stay tuned for the rest of them. Of course if you ever have any questions, give us a call at Touratech USA, or shoot us an email. All of us play with this stuff, we all ride and go out and make tracks and follow tracks and get out there and do this stuff. So we’ll see you soon. | 2019-04-24T04:16:33Z | http://blog.touratech-usa.com/2019/01/15/touratech-gps-classes-tips-tricks-part-1/ |
At the end of the Civil War, widow Nancy McCary Sumrall entertained a marriage offer tendered by widower Moses Holyfield. Nancy was 28 and the mother of four young sons. Moses, although inconsistent in reporting his age, had probably entered his seventies. His children from his first marriage were all grown and on their own. Like most of his Piney Woods neighbors, Moses was not a wealthy man. Still, he owned 450 acres of land, of which 80 were cleared for cultivation. He had not been a slave owner, so he suffered no financial losses due to emancipation. Nancy’s husband had died while serving in the Confederate Army in 1862. We do not know how long Nancy deliberated over the matter, nor do we know whether her practical considerations and emotional sentiments were in harmony or in conflict. What is known is that in due course Nancy accepted Moses Holyfield’s proposal and became his bride.
Marriages between men of advanced years and women several decades their junior did not begin in the wake of the Civil War. Throughout the nineteenth century women bore the burden of frequent pregnancies that often began in their teens and—if their health, endurance, and luck proved sufficient—might continue for another three decades. If any of these attributes failed them, their death usually necessitated the search for a new wife; preferably one still comfortably within the range of childbearing years. Second and third marriages resulted from a pragmatic understanding of the workload required to maintain a household in a subsistence level economy. The death of a wife left children without a mother in an era when children attended privately operated schools only sporadically, if at all. In addition to child care, women performed an array of essential functions: cooking and cleaning, making of clothing and numerous household items such as soap and candles, and cultivating vegetable gardens. If there were no daughters old enough to assume these duties, the absence of a wife would be keenly felt. This was true even in the higher realms of Piney Woods society. Slave owner Isaac Anderson was among the wealthiest men in Jones County when his wife Teresia Powell Anderson died in 1850. After a decent interval, the widower Anderson set about courting Sarah Rebecca Deason, the daughter of a local merchant with whom he was well acquainted. Two years later, the sixty-six-year-old Isaac had successfully won the hand of twenty-three-year-old Sarah Rebecca.
The toll the Civil War exacted upon the male population of the South had a discernible, if not necessarily radical, impact on the institution of marriage. In 1870 Jones County contained 449 white females between the ages of 20 and 40, compared to only 332 males. And within this reduced pool of men, it can be assumed that some portion had lost limbs or otherwise been seriously impaired by the war. Despite these obstacles, Piney Woods women, whether single or widowed, could and did marry local men during the Reconstruction era. But in order to do so, many had to revise their concepts about what constituted a suitable domestic partner.
Nancy McCary was born in Alabama in 1837. Her parents, Tandy and Cloah McCary, were both natives of South Carolina. The birth states for their children indicate that around 1843 the McCary family moved across the state line to Wayne County, Mississippi. Nancy became the bride of Elisha Sumrall in 1852 when she was 15 and he was 21. The location of the couple over the next decade is unknown. But later records reveal that Nancy gave birth to at least four sons: Benjamin (1854), Theodore (1856), James (1858), and Jefferson (1861). The question remains as to whether the Jacob Sumrall (1852) who later married Martha Rushing Walters was the eldest son of Elisha and Nancy (see part two of Jones County Widows).
Like many other men having a family to support, Elisha did not join in the first wave of Confederate volunteers in the spring of 1861. On March 26, 1862, however, he enlisted in Company I of the 36th Alabama Volunteers and was dispatched to Mt. Vernon Arsenal, outside of Mobile. There his military service came to an abrupt end on June 4 when he died, probably of a camp disease, a scant two months and 10 days after his enlistment. On October 17, 1862 Nancy filed papers to obtain his back pay. A Confederate paymaster computed the amount due as $50.66. The request made its ponderous way through the war time bureaucracy until, on November 28, 1863, approval was granted by the Comptroller’s Office. Nancy signed a receipt for the payment on January 15, 1864. During the interval while she and her children waited, Confederate currency had suffered an inflation rate exceeding 700%, rendering her settlement essentially worthless.
Sometime after receiving her token payment, Nancy moved to Jones County. She may well have sought to remove herself and her young sons from harm’s way. The Mobile and Ohio Railroad, which passed through Wayne County, held strategic value for both armies. Jones County was devoid of railroads and had a sizable community of Sumrall in-laws, making it an attractive haven. In her new surroundings Nancy made the acquaintance of Moses Holyfield. He had been born in South Carolina, probably circa 1796, and moved his family to Jones County in the 1830s. Based on the 1840 and 1850 censuses, Moses and his wife Milly had seven sons and one daughter. By 1860, the only child remaining in the household was a grown son named Mark, age 33.
Although Moses did not own slaves, evidence indicates he felt strongly about the secessionist cause. On May 4, 1861 he enlisted in the 8th Regiment, Mississippi Volunteers at Ellisville and traveled 57 miles to the rendezvous point at Enterprise. Upon ascertaining that Moses was 65 years old, the officers doubtlessly saluted his determination and vigor, but sent him home.
Milly Holyfield, who was approximately the same age as her husband, died towards the end of the war. This left Moses facing his final years with a sizable farm and an empty house. If the growing number of young widows around him did not fill Moses with delight—since each widow suggested the role attrition was playing in determining the final outcome of the war—at least it made him aware that his prospects for another marriage had been greatly enhanced. What may well have encouraged him to initiate a courtship of Nancy was not just her youth, but the prospect of welcoming her four boys into his household.
As mentioned previously, Moses Holyfield had carved out a modest yeoman’s existence. In 1870 he possessed 80 acres of crop land, with another 100 acres in pasture and 270 acres of woodlands. His livestock holdings were small for the region: six cows, seven sheep, and 10 pigs. The previous year the farm had produced 100 bushels of corn, 75 bushels of sweet potatoes, and a cash crop of two bales of cotton. His farm clearly stood to benefit from the additional labor of four young stepsons. The census of 1870 captures the transformation taking place within the Holyfield household. Moses gave his age as 75 while Nancy stated she was 32. Her sons ranged in age from nine to 16. With them was 14 year old Richard Holyfield, a young relative of Moses, working as a farm laborer. In addition, Moses and Nancy had started a new family, consisting of son William, three, and a six month old daughter named Mary. For Moses it could truly be said that life had begun, again, at 70.
Nancy must have understood when she agreed to the marriage that it would not be a long term relationship. Moses died in the mid-1870s and Nancy again found herself a widow, having added three small children to the household (another son, Charles, had been born in 1874). But, owing to her second marriage, her circumstances were more secure. The interlude with Moses had provided time for her sons to reach manhood. Although sons James and Jefferson remained in Nancy’s household in 1880, they were leaving their teens. Their older brother Benjamin, married and a father, lived next door. Having regained some security in her life, for perhaps half a decade Nancy remained single. When she did marry again, it was in the fall of 1883 to Carney Slay Sumrall, a man who had lost his wife four months earlier.
The Sumralls were among the early settlers in south Mississippi. Patriarch Thomas Sumrall was born in South Carolina in 1740 and died in Marion County in 1821. He was the great-grandfather of Elisha Sumrall, Nancy’s first husband. (This line descended from son Levi Sumrall and his son Jacob Sumrall, who was Elisha’s father.) He was also the great-grandfather of Carney Slay Sumrall. (This line descended from son Moses Sumrall and his son Howell Sumrall, who was Carney’s father.) Thus Carney was a second cousin of Nancy’s first husband. There may have been a closer connection linking the couple: some genealogies give the maiden name of Carney Sumrall’s wife as Catherine (‘Kitty’) McCary. This matches the name of Nancy’s older sister on the 1850 census.
Carney Slay Sumrall, named after a Wayne County Baptist minister, was born in 1830. He was a Confederate veteran who had enlisted in Company E (the Shubuta Guards) of the 37th Regiment Mississippi Volunteers on March 8, 1862 at age 32. Unlike his cousin Elisha, Carney seemed able to cope with camp life, suffering only one recorded bout of illness. Although records are sketchy, they suggest he took part in the siege of Vicksburg and was paroled. He is documented as having surrendered with his unit at Citronelle, Alabama on May 11, 1865. He returned to farming in Jasper County where, in 1870, he was enumerated with his wife and a daughter named Mary. By 1880 he had moved to the small Jones County community of Pinelville, where he and Catherine scratched out a meager existence in a childless household. Catherine died in May of 1883 and soon thereafter the new widower must have begun calling on Nancy Holyfield.
Carney Sumrall appears to have ranked below the widow Holyfield in terms of economic status. He reported the value of his 1879 farm production as $95, paltry even by contemporary Jones County standards. But Nancy may have reached a point where she could afford to let sentiment play a larger role in her decisions. On September 17, 1883 Carney Sumrall and Nancy McCary Sumrall Holyfield applied for a marriage license and solemnized their vows six days later. At the time Nancy was 46 and Carney 53. She was leaving her childbearing years behind and may well have looked forward to a long marriage. If so, it was an unfulfilled wish. Just six years later, on December 12, 1889, Carney Sumrall applied for another marriage license—this time to Elizabeth Hinton Coats. The absence of any divorce proceedings in the surviving court records indicates Nancy had died. Although some genealogies list her as dying in November of 1902 and being interred in Wayne County, they have apparently confused her with another Nancy Sumrall, born in 1847, who was the wife of Enoch S. Sumrall.
In wedding Elizabeth Coats, Carney had once again chosen a Civil War widow. Born in 1838, Elizabeth Hinton had been the wife of Thomas N. Coats. He, like other married men facing conscription, enlisted on May 12, 1862 and was mustered into Company F of the 7th Battalion Mississippi Infantry. He also participated in the siege at Vicksburg and, following its surrender, was paroled. A muster roll in the Mississippi Archives indicates Thomas N. Coats went absent without leave from January 3 until April 10, 1864, during which time Elizabeth became pregnant with their third child. Five days before Col. Lowry led troops into Jones County to deal with the deserters, he rejoined his unit. Thomas was subsequently captured on July 4, 1864 at the battle of Kennesaw Mountain, near Atlanta. From there he was shipped north to Camp Douglas, Illinois where he died of pleurisy on February 9, 1865—three days after his fellow Jones Countian George Warren Walters had died in the same camp. (see part two of Jones County Widows). Perhaps unwilling to loosen her standards regarding potential suitors, Elizabeth remained a widow and reared her three children. Twenty-four years elapsed between the death of her husband and her acceptance of Carney Sumrall’s proposal.
Carney and Elizabeth were last enumerated on the 1900 census. Elizabeth died in July of 1902 and was buried in the Union Line cemetery near Soso. In May of 1907 Carney was admitted to Beauvoir, the former gulf coast residence of Jefferson Davis that had been converted into a Confederate retirement home. But he later discharged himself and returned to Jones County, where he died in 1909. His grave is beside that of wife Catherine in the old section of Hickory Grove cemetery in Laurel. The author has been unable to locate the grave sites of Moses Holyfield and Nancy McCary Sumrall Holyfield Sumrall. It is known that Nancy’s sons by Elisha Sumrall continued to reside in Jones County until their deaths in the 1920s and 30s.
Hayes Cottage, Beauvoir Soldiers Homes, Biloxi, MS, where C. S. Sumrall once resided.
Nancy McCary Sumrall and Elizabeth Hinton Coats demonstrate how two Piney Woods women, eventually fated to marry a common husband, reacted to their status as Civil War widows. When given an early opportunity to re-marry, albeit to an elderly man, Nancy accepted the offer as a practical partnership necessary to sustain her family through difficult times. We can surmise that Elizabeth was less inclined to make such compromises, with the result that she retained her widow’s status for two dozen years after the war. Whether accepting or rejecting prospective mates found among the reduced pool of post-war men, however, both women coped with the circumstances life had presented them.
I just wanted to let you know how much I’ve enjoyed reading your historical accounts. You have proved to be a talented writer! A skill that I very much appreciate. (Side note here:) I have no direct ancestors who settled into MS, but do have ancestors of this era who lived in Eastern Kentucky. (Carter Co KY) Some years ago, I concluded that the majority of marriages from this era took place as a means of mutual survival. You validated my conclusions in your last paragraph. But unlike my plain writing style, you clearly demonstrate your talent for penning great prose. Vikki also belongs in this top league. At best, I’m merely an appreciative reader.
Ed, I do Thank you!
You are welcome, Vikky. The posts in this series deal with a group of women who, like Nancy, were mostly illiterate. Thus they left us no records of their thoughts. Indeed, they typically only left faint documented traces from which I’m attempting to reconstruct a portion of their lives. I continue to hope one of them dictated some letters to relatives in, say, Texas that might come to light. In any case, we can’t say that people whose marriage bonds were based on pragmatic needs did not develop sincere affection for one another that may have been stronger than the romantic attractions of our current age.
Mary, age 8, Ala.
I am a descendant of Benjamin McCary (s/o Tandy) and wife Margaret Summerall through their daughter, Martha McCary, who married Daniel Cicero Williams in Wayne Co., MS circa 1879/80. See census of 1880 Wayne Co., MS. Do you have more information as to why the family was broken up just after the Civil War? Was Benjamin killed in the war? Martha went to live with her grandparents, Jacob Summerall, I believe.
Margaret was the daughter of Jacob Sumerall and Mary Friday of Choctaw Co., Alabama, and then Wayne County, MS. She married Benjamin McCary (son of Tandy McCary) in the early 1850s.
It sure seems those Sumralls were related by blood or marriage to a lot of the Jones County people – I wonder if maybe the Holifield/Holyfield family was also somehow related.
I had earlier written asking you when Laurel was established, as I found it curious that Jacob Sumrall’s daughter Martha Elizabeth told me that her dad’s family came to Texas from “near Laurel in Jones County”, yet it seems Laurel was “established” in 1882, while Jacob and his family came to Texas in the 1870’s. That could indicate that the Texas family maintained contact with some family back in Mississippi after Laurel was established. Or maybe the community had always been known as Laurel.
Thanks again for a fascinating history lesson.
Ed, your mention of the Mobile & Ohio Railroad which runs through Wayne County reminded me of this information I picked up somewhere, don’t remember where or when, about a Jacob Sumrall in Clarke County – don’t know whose line he belonged to, there seems to have been many Jacob Sumralls around back then. I wonder if you are familiar with this house and history?
Circa 1859 – Greek Revival vernacular, 175 County Road 253, north of Shubuta on Highway 45. The house was built by Jacob Sumrall, a railroad man living in the area after the Mobile & Ohio Railroad was established in 1855. His family is said to have lived in boxcars until this house was completed. The site became a community center and was known as “Sumrall Switch” because the train would stop here to offload groceries distributed from a store behind the house. The Sumralls had a brick kiln and ground their own feed. Their place is said to be the site of the first church and school in this vicinity.
Good story. I knew lots of Sumralls and Holifields (i instead of y) while growing up in Jones Co. As you know, there were a lot of Mauldins there also, mostly descendants of William Harrison Mauldin, who homesteaded the farm I grew up on in 1850. William raised 18 kids in the 2-room log cabin he built, one being my great-grandfather Lemuel Harrison Mauldin. But I have just discovered some shocking news for the Jones County Mauldins. I just took the Ancestry.com DNA test and found that William Harrison Mauldin and all of his hundreds (maybe thousands by now) of descendants should have the last name of “Smith.” A Smith broke into the male DNA line about 225 years ago, in Pendleton District, SC (Anderson, SC). It was either Harmon Smith or his father Christopher (Kitt) Smith. I am betting on Christopher because he was one cool dude and left a wide swath. But I am trying to piece the fragments together while maintaining provable lines. Interesting stuff.
Would you be willing to share your info about Mr. Smith? He may be my great great great grandfather. My grandmother was Leona May Mauldin Patrick 1909-1977.
1) Most histories do cite 1882 as the establishment date for Laurel. However, there were older settlements in the area, the most notable of which was Pinelville. Hence, as Laurel grew it was natural for people to say they or their parents had lived “near Laurel” (well before its founding) rather than referring to some crossroads community with which most people would not be familiar.
2) Both “Holifield” and “Holyfield” spellings were used in 19th century Jones County. I chose “Holyfield” for Moses because that is the spelling most frequently associated with him.
3) It is very likely that there were other connections between the Sumrall and McCary families. After all, this was the Piney Woods. For example, I found a land deed from 1871 in which Moses Holyfield sold a 60 acre parcel near Pinelville to “Mrs. Kitty Sumrall” for $100. This is a further suggestion of a connection between Kitty and Nancy, who I suspect were sisters–my assumption being that Kitty was the Catherine McCary, age 16, on the 1850 census report Tim cited.
It is interesting the land was sold to Kitty rather than to her husband, Carney. This property (near Pinelville) seems to be where the couple resided in 1880. Carney later homesteaded 160 acres near Soso. His property there adjoined that of widow Elizabeth Hinton Coats, whom he married in 1889.
I’ll check on any Civil War records for Benjamin McCary.
Some follow-up information about BENJAMIN MCCARY, brother of Nancy McCary Sumrall (et al): it appears this was another case of a brother-sister in one family marrying a sister-brother in another family. Benjamin married Margaret Sumrall while his younger sister Nancy married Margaret’s older brother Elisha. Both marriages seem to have taken place circa 1852.
I’ve found no evidence (yet) that this Benjamin McCary died as a result of service during the Civil War. Benjamin and wife Margaret were last documented on the 1860 census in Choctaw county, AL with their five children. Some Ancestry Public Trees associate him with the service records of a B.H. McCary who served in the 29 AL Infantry, Co H. But a much better claim can be made that this soldier was Benjamin H. McCary of Bibb County, AL. He can be found there on the 1860 census in the household of his parents, Martin and America McCary. He was captured in Nashville, survived imprisonment at Camp Douglas, IL, and then returned to AL where he died in 1922.
TANDY C. MCCARY was born 1 Jul 1854. After the 1860 household census, he next appeared on the 1880 Wayne County census. He remained in Wayne County until his death, per his tombstone inscription, on 21 Dec 1928.
COLUMBIA ELIZABETH MCCARY was born 1 Dec 1855. She also re-emerged on the 1880 census for Wayne County. But other records show she had married Neil A. Kelly in Washington County, AL in 1873. She re-married to Stephen Lee Murphy in 1879. The couple moved to Jones County. Columbia died, age 93, in Bay Springs on 7 Dec 1948.
MARTHA MCCARY is the only child I’ve found on the 1870 census, living in the household of her Sumrall grandparents (Jacob and Mary). She had been born, per her death certificate, on 26 Jul 1859. She married Daniel C. Williams in Wayne County in Jul 1879. The couple moved to Forrest County where Martha died on 27 Aug 1921.
I would appreciate any info you have on Columbia McCary. She is my grandmother Ouida’s mother. Thank you!
I get dizzy and confused with all the Elishas and Jacobs in so many different generations and so many degrees of relation, but thought someone could make use of this and determine just where this Jacob Sumrall fits in.
In addition to the three children of Benjamin McCary listed above that grew to adulthood, I have information that their daughter Chloe, aged 8 in the 1860 Census of Choctaw County, AL, appears later in Jones County, MS as the wife of Joshua O. Holifield (imagine that). There are a lot of folks on Ancestry.com that give her name as Chloe Salina McCary and claim that she was born 30 June 1852 in France, but that just doesn’t stack up with the Jones County, MS, census records. The date may be right, but she was clearly born in Alabama. Joshua and Chloe’s daughter Mary Catherine married Gillis Pinkney (“Pink”) Temples and they removed to Franklin Parish, LA, where they eventually became great-great-grandparents to country music star Tim McGraw. Now isn’t that interesting?
Very interesting, indeed, Michael Hurdle! Thanks for sharing that information.
I enjoyed ALL the data. I am the granddaughter of Poline Sumrall, whose male relatives were Jeff and Willie Sumrall, buried at Clark Cemetery, Jones County, MS, off lower Myrick Rd.
My grandmother Poline is buried at the Myrick cemetery, Laurel, MS. She married George Brown Morgan. Poline’s sisters were Sally, Minnie, and Susan Sumrall. Many relatives of hers were Dunagins and Clarks (not sure how those are spelled).
SUMRALLS BURIED AT CLARK CEMETERY, JONES COUNTY, MS.
All these people were born in the 1800’s.
Would you have any information on Thomas and Lavinia Collins Williams in your databases? Thomas Williams was born 1838 in MS. He was on the 1840 Clarke or Wayne County Census with his father, Sampson Williams, as the head of the household. He was on the 1850 Lauderdale County Census with his mother Elizabeth Williams as the head of household. He was on the 1860 Clarke County Census as 22 yrs old, with Lavina Williams, 18 (born 1842), and Levise Colling (I believe it should have been Collins) as the head of the household. They appear to be married, living with her mother. They had two children, and the children are on the 1900 Beat 4, Jasper County Census. O. Williams is the head of the household and I believe the O is for Oliver. Oliver is married to Annie Hosey, and Oliver’s sister M.L. (Mary) Williams is living with them. Thomas Williams is believed to have died in the Civil War as a Confederate. His brother John Wesley Williams and nephew Allen P. Williams were in the 1 MS Cav Reserves, Co H, but Broadfoot Publishing could not find anything on Thomas Williams. That’s about all I have, but I would like to learn more and wondered if Lavinia Collins is connected to Sarah Collins, Jasper Collins, or the others. I’d also like to learn more about Thomas Williams’ service in the Confederacy. I have no reason to believe he was a Southern Unionist but don’t rule anything out. Oliver Williams is buried in the Union Seminary Cemetery, Moss, Jasper County, MS, and his headstone is in Find A Grave.
If any of this information matches up with something you have I would appreciate any information or advice you can give me. Thanks!
It’s nice to hear from you after so long a time! In response to the interesting info you provided, I checked my Collins files for any mention or listing of a Lavina, but found none. The geographic connections are certainly close enough to indicate a relationship, and there may indeed be one within some of the Collins branches that I have not researched extensively.
Perhaps another reader can add additional information that will shed light on Lavina Collins Williams and her Williams descendants.
There is an 1850 census record for Joseph Collins and wife Lavina (sic) Collins in Wayne County. Among their 5 children is a daughter also named Lavina, age 8. This would seem to match the “Levise Colling” and daughter “Levina Williams” whom you located on the 1860 census. On the 1850 census Joseph Collins reported being born in North Carolina ca 1790. He apparently moved to Mississippi sometime prior to 1830, since a match can be found in Wayne County on the 1830 and 1840 censuses.
Wayne county tax records could probably help clarify when Joseph arrived in Mississippi. In 1830, there were 3 Wayne County households headed by Collins males: Jacob, Joseph, and Robert. Whether there was any kinship connection among them is unknown to me.
After the death of Thomas Williams, his widow Lavinia re-married to Jeremiah Gregory (Find-A-Grave listed a “Jeromier Gregory”). She died in 1884. Both she and her second husband are buried in the Mt. Pleasant Cemetery, Stringer, Jasper County. One of the Find-A-Grave memorials cites Lavinia Collins’ mother’s maiden name as Lavinia Hetherington.
It would appear that if Joseph Collins was related to the Jones County Collins clan, it was a distant connection.
My initial check of the Fold3 military records did not find a match for this Thomas Williams, but I’ll do some more digging.
Thanks for helping Jan out, Ed!
I have researched Moses Holifield for many years because it is said that his first wife was Millie Rivers, daughter of Mark Rivers and Annie Parker. The RIVERS lived in Chesterfield,County, SC. In 1820 Moses Holifield is listed in the Census for Anson County, NC just across the state line from Chesterfield.
1820 U.S. Census, Anson County, North Carolina, Population Schedule, Huntley, Anson County, North Carolina, page 33, Line 22, Household of Moses Holyfield.
Moses and Millie were married sometime around 1815-1817; in 1830 they are listed in the Chesterfield Census.
1830 U. S. Census, Chesterfield County, South Carolina, Population Schedule, Chesterfield County, South Carolina, page 244, Line 19, Household of Moses Holifield.
They moved to Jones County, Mississippi before 1840. Millie Rivers Holifield died between 1860 and 1867.
If anyone has any additional info on this couple; please contact me.
I need some help with these Sumerall/Sumrall lines, if anyone can help. I’m the daughter of Roz Morgan Newell who posted on here in 2010 about our Poline Sumrall Morgan b. 1890, Jones County MS. We have always known her as Mary Pauline Morgan who married George Brown Morgan of Myrick Community, Jones County, MS, but I have seen her listed on a census as Poline rather than Pauline. Anyway, we have tried for years to find our Pauline’s family and have gotten nowhere. Just last night I found on a census where a James Sumrall b. 1860 and approximately 70 years old was residing with Pauline and George and their son John, my maternal grandfather. Now, Mother has thought perhaps Pauline’s father’s name was Jeff. Interestingly, this James Sumrall has a brother named Jefferson. Mother has a copy of a report card for Pauline signed by J. Sumrall – could be James or Jefferson, but based on the census, I’m “assuming” James Sumrall was my great-great grandfather. The problem is, I can find children listed for other members of the family but haven’t found a list of any of the children of James I. Sumrall and his wife Mary Alice (last name unknown but “may” be Carter?). In fact, I found mention that there are no records for his descendants. I can’t help but wonder why there is so much information on all the others and nothing on his. But there is no Pauline/Poline listed as a child of Jefferson Sumrall, and since James was living with Pauline in his old age, it seems reasonable he was her father. It would mean so very much for us to finally be able to identify our branch of the Jones County Sumrall lines and know who our kin are!!! We do know from the marriages of Pauline’s sisters that coming forward their allied lines include Stringfellow, Stribling, Lomax, but it’s the family connections that these sisters came from that we so desperately seek. Pauline’s sisters were Sally, Minnie Corine m. Luther Ernest Stringfellow, and Susan.
If there were other children, I’m not aware of it. ANY help with this branch of the Sumralls would be greatly appreciated!!
The 1900 census records for Jones County show Pauline Sumrall as the 11 year-old daughter of James and Alice Sumrall. Indeed, that census listed 3 sons of Nancy McCary and Elisha Sumrall (Jefferson, James, and Benjamin) in adjoining households along with their wives and children. Since all census records from 1890 were destroyed in a fire, this is the best single source for grandchildren of Elisha Sumrall.
The children enumerated along with James and wife Alice were: son William (b Jul 1881); daughter Vilindy (b Nov 1887); dau Pauline (b Feb 1889); dau Sallie (b Sep 1891); dau Minnie (b Feb 1895); son Thomas (b Dec 1897); dau Bernice (b May 1900). Alice reported having given birth to 9 children of whom 7 were still living. This coincides with the fact that 7 children were enumerated. The 2 deceased children would explain the gap between the birth of William in 1881 and Vilindy in 1887.
The 1910 census showed children Sallie, Minnie, Thomas, and ‘Tina’ (aka Bernice) still living in the household with their parents.
Oh, Ed, I can’t thank you enough! I got tears when I read this. This nails it down completely and ties us in to all the wonderful history you have shared on the family! I’m SO happy to finally know who we came from on this branch of our family that has eluded us all these many years! Again, thank you so much! You just made my day and I can’t wait to share this with my mother and cousins!
Debbie: Although the genealogy posted with the grave stone indicates this to be Nancy McCary (Sumrall > Holyfield > Sumrall), I believe this is the grave of another Nancy Sumrall — the wife of Enoch S. Sumrall. As noted in the article, records show that the twice widowed woman born Nancy McCary married Carney Sumrall on 17 Sep 1883. There is no indication of Nancy and Carney divorcing, yet Carney married widow Elizabeth Hinton Coats on 12 Dec 1889. From this I surmise that Nancy McCary (et al) died some time prior to 1889.
Both Enoch and Nancy Sumrall are listed on the 1900 census in Wayne County. Meanwhile, Carney Sumrall and second wife Elizabeth are found on the same census in Jones County (Ancestry transcribed the couple as “Casey Sumerall” and “Elizabeth Sumerall”).
Oh well, another case of misleading info on the internet. I thought surely I had found where she’s buried! Thanks for clarifying!
Ed, The Baptist preacher who Carney Slay Sumrall was named after was Rev William Carney Slay (1802-1863), who is a 1st cousin 5x removed. I believe William Carney’s parents were Nathan N and Nancy or Martha Sumrall who I believe was the daughter of Thomas (1740-1821) and Ann Thomas Sumrall (I don’t have it nailed down yet). Thomas and Ann are my 5th great grandparents. William Carney’s first wife was Belinda McDuffie with whom he had at least 7 children. After Belinda’s death in 1850 he married Elizabeth Shoemake(r) with whom he had at least another 5 children. William Carney and Elizabeth Shoemake(r) Slay are buried in the Mount Zion Baptist Church Cemetery in Wayne County. I have not been able to connect Elizabeth to my Shoemake line but believe if there is a connection it goes back to TN, SC or VA.
Chuck: Thanks for the additional information on the Sumerall and related families.
To Ed Payne and other Sumrall researchers: some updated information I just found again yesterday.
In May 1961 I made my first (and, until recently, last) foray into genealogy. I visited the National Archives in DC and they pulled up the microfilm for Jones County, MS, 1860 and 1870. Since I was only interested in the Sumrall name and variations, I copied only information containing those names.
The 1860 census record from the archives was headed: “Census of Jones County, Mississippi, July 25, 1860 – Post Office Ellisville” and listed 8 families: Greenberry Sumerall, Thomas V. Sumerall, Henry Sumrall, John Sumrall, Thomas Sumerall, Henry Sumrall, John Sumeral, and H. L. Sumerall. No Jacob or Elisha Sumrall.
It’s possible Martha’s children by her first husband were also listed, but since I was only concerned at the time with the Sumrall name, I wouldn’t have written down the information if they were named Walters, as I didn’t know the connection.
“I never knew any of the Sumralls, only my father and brothers, just heard my father tell very little and as you figured his father was killed during the war.
BORN May 11, 1873. Then Martha Walters died and Jacob married LUCY JANE WILLIAMS in 1876, and MARTHA ELIZABETH SUMRALL was BORN 10 years later, FEBRUARY 20, 1886. Timothy Sumrall’s great-grandfather ELY THEODORE and LENORA ROUNTREE were married in Blanco County, Texas – I think it was 1892.
The rest of number 8 of the 1870 census are all brothers of JACOB SUMRALL. He and Lucy Jane visited Laurel in 1910. JACOB had an uncle THOMAS F. SUMRALL there at this time (perhaps this is the Thomas V. at number 4 of 1870 and 2 of 1860).
Sumrall, Jacob 18 farmer Miss.
Holifield, Moses 75 farmer S.C.
Nancy 32 keeping house Ala.
I think this establishes pretty conclusively that Jacob was the brother of the 4 Sumrall boys listed in the Ancestry census as living with Moses and Nancy Sumrall Holifield. In the space of about a month Jacob’s brothers were listed in two different locations – maybe they were visiting one or helping out on the farm?
Does anyone have an idea of how to find the earlier June 21, 1870 census, why there were two records and how to correct or add the national archives record to the Ancestry file? I would like to correct and add this information for future researchers as it contains more accurate information related to this Jacob Sumrall.
Thank you so much, Tim, for sharing all this great information from your files.
Tim, thank you for this additional information. I was able to fill in some gaps and able to add your line to my tree. We’re 4th cousins. Not all of us are able to travel and search archives, so it’s wonderful when someone who has shares the data and gives us documentation. Thank you for all your efforts on the Sumrall lines and for sharing with us.
Like Vikki, I want to thank you for this information which — as you noted — provides credible evidence that Jacob Sumrall was the elder brother of Benjamin, Theodore, James, and Jefferson Sumrall. As regards census dating: census takers were given an official enumeration date. Although the actual canvassing of a county took place days or weeks after this date, the census taker was supposed to collect information based on the household composition as it was on the official census date. For the 1870 census, this date was June 1. I’ve found several instances where memories failed and the same persons were recorded twice because they had relocated between the official census date and the date when the census taker arrived.
If you are interested in the written instructions that census takers were expected to follow, the link below provides the instructions for the 1870 census.
Have you found any clues that would narrow the time frame in which Moses and Nancy were married after the War? Was it immediately, in late 1865, or is “just after the war” all that is known?
The marriage records for Jones County available at the Mississippi Archives begin in 1882. My description of the marriage of Moses Holyfield and Nancy McCary Sumrall as occurring “just after the war” is meant in relative terms. The 1870 census listed the first child of Moses and Nancy, William Martin, as age 2. In practical terms, this meant that on the enumeration date of 29 July 1870 his age fell somewhere between 2 years and 2 years 11 months. From this I deduced the marriage most likely took place circa 1865-1866.
Thank you so much for the incredible information on Nancy McCary Sumrall Holifield Sumrall. It put so many pieces of a confusing puzzle together. I’m wondering if you have any information from all of your research to locate accurate records on Benjamin R. Sumrall, son of Elisha and Nancy Sumrall? Specifically a death date and burial location.
I have visited the Archives in Jackson and found nothing on Benjamin R. Sumrall. Benjamin R Sumrall and Mary Ann Holifield Sumrall are last seen in the 1920 census. Mary Ann died in 1924 but no records for Benjamin R. were found.
Can you offer any suggestions to locate Jones County obituary records for 1920 through 1924? (Mary Ann is shown as a widow on her death certificate in 1924. An obituary is a reach since I couldn’t find the record at the Archives.)
Any assistance (or research suggestions) you could provide is so appreciated! This will not only help our family find a missing puzzle piece but it will help many others as well.
Thanks for your comments. I’m not sure I can be much help on Benjamin R. Sumrall. Based on the 1920 census and the death certificate you found for his wife Mary A. (Holifield) Sumrall, it would appear Benjamin died between 1920 and 1924. Odd that you did not find a death certificate for him during that period, but it’s been known to happen. I saw that the contemporary with whom he is sometimes confused, Benjamin Franklin Sumrall, died in 1923.
I tend to go with birth years listed on the earliest censuses since, in those days, people got more haphazard about their age as time went on. Still, I note that by the 1920 census, Benjamin reported an age consistent with the year of birth given in 1870 (= 1854).
You mention that the death certificates of Mary Sumrall and son Ellis Buford Sumrall cite their burial at “res Myrick.” This suggests to me that they were buried on family land — and it is reasonable to suppose Benjamin would have been interred there as well. The question is whether there are any records of a Sumrall family plot in the Myrick area. I assume you checked the 3 volumes of cemetery transcriptions at the Archives (as I recall the volumes are divided into North, Middle, and South Jones County). The markers may be long gone, but you could try to find the location of Benjamin’s residence and farm in the 1920s.
Thank you so much for your speedy reply! Land records and another visit to the Archives is the plan. I’m impressed at your incredible knowledge and that you are so willing to help others with research. I’m reassured with your information that I’m at least on the right path. Thank you doesn’t seem adequate but THANK YOU!!
I had my DNA test done earlier this year and I have a match with a Sumrall descendant who only has her line traced back to Jessie Marion Sumrall and wife Clara Jane Brownlee. Most searches I’ve done to try to find the connection since we’re DNA related have no parents found for Jessie, except one site that said he was likely the son of Benjamin, son of Elisha and Nancy, and his wife Mary. I can’t find a listing of children for Benjamin and Mary Ann (Holifield) Sumrall. Do you know if they had a son named Jessie Marion b. 1880?
The records I have do indeed have a son Jessie born in 1880 to Benjamin R and Mary Ann Sumrall. There is a lot of confusion because of another Benjamin R Sumrall. I feel fairly confident in the records I have found that includes census records. I’ve also found actual headstones for a lot of the family. I’d be happy to share the details I have if you would like. Send a message directly to DeLaney.busby@yahoo.com.
One of the challenging (or exasperating) things about genealogy is dealing with two people with the same name, born around the same time, and living in the same general locale. Two men named “Benjamin Sumrall” can be found on the Jones County censuses of 1870-1920. One seems to have been the son of Harmon Levi Sumrall and Bethany Shotts. He was born ca 1852 and his full name was Benjamin Franklin Sumrall. He married Sabra Jane Collins. The other Benjamin Sumrall (middle initial possibly “R”) is found on the 1870 census in the household of Moses Holifield and wife Nancy (McCary Sumrall) Holifield along with 3 brothers. So the signs point to this Benjamin Sumrall being the son of Elisha Sumrall.
On his World War I draft registration card, Jesse Sumrall gave his full name as Jesse Marion Sumrall, whereas some genealogies cite his middle name as “Milton.” He listed his date of birth as 14 November 1881 (3 years older than on the 1900 census). Jesse married Clara Octavia Brownlee and sometime after the 1940 census moved his family to Mobile, AL, where he died on 18 July 1980. He is buried in Oakland Grove Baptist Cemetery in Laurel, MS. Obtaining a copy of his Alabama Death Certificate might provide further evidence of the connection.
As for the two Benjamin Sumrall(s), both seem to have died in the 1920s. I believe the one who married Sabra is the one who died 11 September 1923 and is buried in Springhill Cemetery, Laurel, MS. Thus far I have been unable to find a death certificate for either man. As I indicated to Kathy, it seems likely that the Benjamin Sumrall who was the son of Elisha and Nancy is buried with his wife in a lost family plot.
Ed, the detail and clarity you provide is, once again, amazing! Thank you for your knowledge and help! Separating the two Benjamin Sumralls and acknowledging that there were indeed two individuals is so helpful! Thank you! Thank you!
Ed, thank you so much for the detailed info on the two Benjamins and confirming son Jesse. You are always so thorough and always provide very useful information. Again, thank you so much.
One more question, Ed. Is Mary Ann’s father Joshua the same as Joshua married to Chloe McCary? If not, how does Joshua connect? Thanks.
One last set of comments about the two men named Benjamin Sumrall who both lived in Jones County ca 1850-1924. If you have any further questions, or would like an image file of the death certificates I describe below, request my email from Vikki Bynum.
A trip to the Mississippi Archives clarified that there is a death certificate for Benjamin FRANKLIN Sumrall. As suspected, he is the one who died on 11 September 1923. His parents were recorded as “H. Levy Sumrall” (Harmon Levi Sumrall) and “Bethany Shotts.” A further bit of evidence turned up in the Jones County cemetery transcriptions which have this Benjamin buried beside wife Sabra Jane Collins Sumrall, who died 5 March 1912. Both were interred at Springhill Baptist Church cemetery.
The death certificate that remains elusive is one for Benjamin R. Sumrall, the son of Elisha and Nancy McCary Sumrall. He was enumerated on the 1920 census, but his wife’s death certificate from 12 October 1924 listed her as a widow. So Benjamin R. Sumrall apparently died between February 1920 (when the census enumeration was concluded) and October 1924.
Kathy had told me that the wife of Benjamin R. Sumrall died in 1924. The death date for “Mary Ann Sumrall” was 12 October 1924. Her parents were listed as “Josh Holifield” and “Mary Craft.” Sadly, at least six Ancestry public trees cite Find-A-Grave to confuse her with Mary E. Anderson Sumrall, wife of Henry E. Sumrall, who died in 1943 – this despite accurate information on the Find-A-Grave page.
The Joshua Holifield who married Chloe McCary was not the father of Mary Ann. Per the 1900 census for Joshua and Chloe, he was born in Apr 1844 in Mississippi. The father of Mary Ann was born ca 1818 in South Carolina. The exact nature of the relationship of Moses, Joshua 1, and Joshua 2 is unknown to me.
Thank you again and again. We went to the archives and got the death certificate copies while there, but I appreciate your research more than you know. We have done some additional research, including locating deed records that provided the location of the original Sumrall property in Myrick. I’m a bit hesitant to contact the current owners, but I intend to ask if they have seen any evidence of grave markers on the land. It all appears to be wooded, so it’s entirely possible that markers wouldn’t have even been noticed. In addition, there are a couple of unidentified adult graves at the Clarke Cemetery in Myrick. Since there are several Sumrall descendants buried there, it is worth checking to see if any records are available. I was there this week and hope to contact the cemetery caretakers to see if other records might be available. With your vast knowledge and experience, if you have any other suggestions I would be happy to try. We too have seen so many records that confuse the two Benjamins. I feel if I could at least locate a burial site it would help. I realize that there are times that these mysteries aren’t solved, but I want to make every effort to try.
Anybody have any information on the parents and further ancestors/relatives of Alex Sumrall (1880-1943), married to Alice McCree Sumrall (1881-1946)? They are buried at Quitman Cemetery in Clarke County, MS.
Some sleuthing on Ancestry shows an “Alex Robert Sumrall” registered for the WWI draft in Sept 1918. He gave his age as 37, his year of birth as 1881, and his nearest relative as “Alice Sumrall.” Going back from there, the 1910 census of Clarke County shows “Robert A. Sumrall” age 28 with wife Alice and daughter Elsie age 1. The 1900 census of Jasper County has “Alexandre Sumral” age 18 in the household of John H. Sumrall, age 45, and wife Malisa, age 44.
The 1890 census records were destroyed in a fire, but in 1880 a John Sumerall and wife Lissa were enumerated in Jasper County — with ages that match the 1900 couple. Ancestry transcribed their surname as “Lemerall” but users have submitted corrections. Find-A-Grave has an entry for “J.H. Sumrall” as buried in Rose Hill Cemetery, Jasper County — however, there is no photograph. If the year of his death (1926) is correct, there should be a MS death certificate. Further Ancestry links suggest John H. Sumrall was a son of John and Elisabeth Sumrall, found on the 1860 census in Jasper County, ages 28 and 31 respectively.
Hope this proves a useful trail.
Thanks Ed and now I’ve found some more interesting twists. I’ve got a baptismal certificate from St. Michael’s Church listing the parents of Alex Robert Sumrall as John Sumrall and Sara Ann Rogers. John Howard Sumrall and Sara Ann Rogers Sumrall are buried at Old Phalti Cemetery around Paulding, Jasper County, Mississippi. The birth and death dates of John Howard Sumrall and the J.H. Sumrall listed at Rose Hill are VERY close. I can’t find any trace of any Lissa or Malissa outside the census data (other than John H. Sumrall had an aunt near his age with the middle name of Malissa, but she got married and had children), so this is quite a mystery still. I just need to see if that gravestone in Rose Hill is really there, as we don’t have a picture yet, and then figure out who Lissa/Malissa is.
I note that the memorial for J.H. Sumrall at Rose Hill has been removed from Find-A-Grave and saw on the same site the actual photograph of the tombstone of “John Howard Sumrall” at Old Phalti. If you wish to contact me further, request my email from moderator Vikki Bynum.
You can let me know here if you’d like Ed’s email info, ssumrall.
We are so happy to have participated in the RC Aircraft workshop. Thank you to all the AerotriX team members for providing this type of practical knowledge.
Due to the schedule postponement, there was some lack of interest in the lecture. But it turned out to be a very interesting and good experience. I LOVE IT DUDE!!!
The CFD lecture was explained clearly, in a way that every student could understand, and I guess this is more than sufficient for us.
Lecture was very helpful for my CFD course and I came to know how exactly CFD software works.
Lecturers are among the best. Workshop was very good. Due to the last-minute postponement, we had to face some problems. But the workshop is the best.
Interesting theory sessions with practical application problems were a good combination for the intellectual mind. The PPT was excellent and professional and we had a good hands-on experience. Felt like an engineer and proper sequential steps directed by the faculties were mind blowing.
I liked each and every part of the lecture. The way cross-questioning was done to clear our doubts and queries by taking the concept of bird flying was fantastic.
The best part was that we were allowed to construct our own ornithopter. This practical knowledge helps in understanding.
Very excellent lecture with good examples of various types of birds, and I really liked designing the wings and propellers (shafts) of the birds in the AerotriX workshop.
Lecture was good and it was an interactive session which helped our imaginative skills grow.
Great Lecture. Mind Blowing. We wish to see you at our college.
It's awesome to see things fly at economic prices. The workshop is so interesting.
The lecture was too good and Aditya sir gave us a huge knowledge about the aeroplane and its main parts. The workshop is also very nice and gave us a lot of practical knowledge to construct an RC aircraft and all the teachers are too frank with us and cleared all the doubts!
It is useful to our career. Now we know how the aircraft flies.
Your aircraft design methods are easy to understand. I feel better.
Aditya sir has taught an excellent lecture to us and we learnt a lot as we are first year students. Excellent cooperation by the faculties, they even helped us when we made mistakes. According to me, in aerotrix workshop I was much affected by temperament and assistance of the lecturers and liked the way they taught us. Thank you aerotrix.
The workshop was really a great thing, which made us know more about designing.
The lecture was excellent in which we came to know about the basics and needs of aircraft. We cleared all our doubts with the help of Aditya sir’s full guidance and we made RC aircraft with the help of Aditya sir team and overall it was an awesome experience to be a part of AerotriX workshop.
Brilliant class taken by IIT Kanpur graduate. Beautiful videos are shown. The workshop is very useful.
I learnt many things and I also made an aeroplane with my knowledge. It's all because of you. It was the best experience for me. Thanks to your team, sir.
Lecture was very good and was in an easy language. Use of videos and photos was really beneficial for us and we learnt a lot about aircraft that are controlled by remote. Thank you aerotrix, your cooperation in the workshop was great!
I like the workshop. I don't know how these 2 days went by without making us bored.
Explanation given during lecture was great, it cleared all our doubts. I love to be a part of any other workshops organized by AerotriX.
I am lucky I had this nice lecture and these lecturers. I was so interested. I hadn't enjoyed a workshop like this earlier.
The workshop was inspiring, informative and enthusiastic. The lecture was very helpful and also the AerotriX team has updated our minds with info about the aircraft.
Excellent teaching and assistance in each and every step. It helps us learn so many things which are useful to us in future life.
An excellent lecture about the basics of aerodynamics, and it was really great spending our time with you in the workshop. And I wish I could be a coordinator for the AerotriX team.
This is a wonderful workshop. We would like to have more of this type.
The lecture in the AerotriX workshop was marvelous and I want it to come again, because we learnt the theory as well as the fabrication regarding the aeroplane.
Excellent and Enjoyable. We are thankful to you for giving so much attention on us for minute steps also. Please give information about construction of different RC model categories.
In a very short time, you have given all the fundamental things from which we easily made an RC Aircraft. In the workshop, I have learnt and enjoyed a lot with your team.
The faculty proved that he was a very good lecturer. He explained in a very simple manner. It was really helpful for us. Now we all are able to make an aircraft.
I felt like a dream come true. Though the workshop was led by youngsters, it became very easy to grasp. I would like to say my sincere thanks to all the team members of Aerotrix for helping us.
An excellent lecture about the basics of aerodynamics and aeronautics! It was nice and great spending our time with you in the workshop. Thank you AerotriX.
It was a clear and perfect lecture. The workshop was very interesting and adventurous. It gave a new experience of making a new aircraft by our own ideas and efforts. I wish these workshops were conducted often, so that every student would have a clear idea of their practical work.
Informative and Interesting. We heard many theory classes before but we did not have any practical idea. Through this workshop we are able to imagine our aircraft and its flight.
The lecture was very interactive and excellent. It gave us a clear concept about ornithopters. Thank you aerotrix.
I like this workshop. I never attended a nice workshop like this.
The workshop was good and we got new experience as we made an ornithopter for the first time, and it was THE BEST. It was so good that I can do it at my home also, and I felt very great to be a part of this workshop. It would be a great opportunity for my friends if you organize a workshop in our college!
The workshop was really interesting. The faculties were very helpful. It was an excellent workshop.
Outstanding lecture.. we enjoyed and gained a lot of knowledge and the workshop was fantastic and interesting. Frankly, our dreams came true because of participating in RC aircraft workshop. Thank you aerotrix!
Its good and innovative. I got an idea about aircraft manufacturing techniques and the tricks behind it. We participated with lot of interest. It was practical and was very good.
The lecture was awesome... Aditya knows the excellent stuff and, as promised on the website, the basics of aeromodelling were taught well even for a non-technical student like me.
The workshop was really excellent and made us know many new things about aircraft.
The lecture was good. In a short span of time, we got acquainted with all the necessary information needed in designing an aircraft. Thanks to this workshop, it was a nice start for me in the field of aeronautics.
It was very useful to us and the way of communication of trainers was very good.
The lecture was awesome and I got chance to learn more about aerodynamics and THE BEST thing that I liked in the workshop is that the TEAM of aerotrix are cooperative and they helped us a lot.
Way of conveying the points in lecture were very nice.
It was very good and interactive session. Special thanks to asish and his team for providing a memorable experience during the workshop.
I was so much interested in the lecture as I heard lot of things which I didn't get from my courses. It was an inspiration for me.
I have seen many aircraft and I did not think I would get a chance to make one. Wow! I completed my own aircraft. The advice from the lecturers helped a lot in my designing. It was an amazing experience.
The lecture was very informative and knowledgeable, thanks to the team as the work at workshop was very exciting but at the same time we had to work hard, and help was always available from AerotriX team.
This is the first workshop I attended, in my second year. I have never attended a lecture like this; we made a model of an aircraft on our own, and it's all because of you. Thank you. This workshop is really encouraging for engineering students who are interested in the design area. I have learned a lot, and it really made me gather a lot of information.
The lecture was so good that it aroused my interest in it and i would wait for another event like this and also workshop helped us to develop innovating thinking.
It was really great. The presentation and the core specifications of each and every part of aircraft was very well explained. Workshop was awesome. This is the best workshop for the beginners having interest in RC aircraft design. Had a great knowledge and experience.
Really very educative, learned a lot! Very different from normal boring lectures, increased interest in aeronautics!! The workshop was very nice and gave us the practical knowledge!
The lecture was really knowledgeable; it was a complete package of theory and practical knowledge as well. The workshop was a great experience with more knowledge. The team was really very helpful.
Excellent teaching and assisting us in each & every step in the workshop. It helped us learn so many things which are useful in future life.
Lecture was awesome. Startup Village is a nice place to conduct a workshop. This is a nice place to learn new things.
WE ENJOYED VERY MUCH FOR THE PAST THREE DAYS SIR. IN THIS PROGRAM WE WERE ABLE TO KNOW MANY THINGS ABOUT AIRCRAFT. BUT WE FEEL IT IS SO SHORT A PERIOD. WE NEED ANOTHER PROGRAM SIR. IT'S VERY USEFUL TO US SIR. OUR FLIGHT HAS CRASHED DUE TO MORE WEIGHT, WE FEEL SORRY FOR THAT. BUT WE ARE SURE OUR NEXT CRAFT WILL DEFINITELY FLY, DUE TO YOU ONLY SIR. THANK YOU & WE MISS YOUR TEAM. OUR TEAM NUMBER IS 23.
I am from EXCEL COLLEGE, sir. Your class is very useful for us, sir... we are happy, sir. Our flight is flying very well, sir. Thanks to your team, sir. Our TEAM NUMBER IS 22.
I want to purchase an RC Aircraft from your workshop; can you please send me details of your aircraft. I wish you all the best.
I hope you send me your item details soon.
Lecture was amazing and I got an opportunity to learn all the concepts of aerodynamics. It pulled me into "AERO".
Amazing, maintained a high standard and up to the satisfaction of every child. The workshop had excellent lecture and liked the way they managed things!
The lecture was truly amazing and good info, got to learn new things, new concepts with practical applications! And the workshop was mind blowing. I really liked the way we were guided and your teaching helped us obtain the desired result! Thank you AerotriX.
The lecture was very good in regard to giving the knowledge of aerodynamics and aerofoil shape; it also provided the idea of pressure difference in flying objects. All concepts were cleared. A little amount of hard work gave us a good boomerang. This was a new experience for me, making a boomerang.
Bindaas workshop... attractive lecture... gained lots of important things.
Very interesting, passionate and supreme quality lecture. The workshop was very nice and it was filled with lots of interesting things.
It was excellent... Gained a lot of knowledge.
Lecture was really superb and knowledge increasing. It was good to have some base about the parts of an RC plane and how it works. I'm feeling really well about my first experience of designing a plane. Lecturers were all supportive, well-behaved and highly intelligent. Nice one! AerotriX taught us the lesson of 'Unity is Strength'. Thanks AerotriX!
Best Workshop I had attended ever :) Even belonging to non-mechanical branch, I didn't feel that I was out of topic. Very good!!
Lecture was fantastic. Appreciate your effort. The workshop was very useful for me 'cos I am interested in the branch of Aeronautical Engg.
The lecture was fantastic. They told us a hell lot of new things and excellently made us understand the concepts.
Lecture is very knowledgeable for us. It helps us remember Aerotrix knowledge and helpful for future. Really good experience. It was amazing to make the Boomerang.
Its very nice to be a part of it. I have learnt a lot from the lecture. To make a boomerang and designing it on our own is very nice.
Amazing and very interesting workshop. Lecture was very good but becomes excellent if it is carried for some more time.
Amazing and informative lecture. Very simple and supportive procedures explained very clearly.
Lecture was very good. I am thankful to you for giving me opportunity to attend the lecture. Workshop was innovative and helpful to develop as an engineer.
Informative and useful lecture. The workshop helped me to know how the rc aircraft is designed and manufactured.
Lecture was informative, interesting and interactive. All doubts were satisfactorily cleared. Workshop was fun and enjoyable.
Concise and to the point lecture. The workshop had systematic steps. Everything was detailed and patiently explained.
It was a good, short and interesting lecture with fun and lot of questions. Workshop was a lot of work but full of fun.
It was a nice and interesting lecture for upcoming engineers.
Lecture was very good and understandable. The workshop was interesting and I felt like I was in a competition!
The lecture was very good in regard to giving knowledge about aerodynamics. All misconceptions were cleared. This was a new experience for me.
Marvellous! I learned a lot. No confusions left. Very interesting and entertaining workshop.
We were really very happy to have you as a sponsor and to conduct an air show on our campus. The show was fabulous. We can tell that by seeing the crowd that came there in spite of the rain. People really enjoyed the show and we look forward to organizing a workshop by you guys in the coming year's Technozion. Thanks for being here; you guys made our day.
Great initiative from highly talented and passionate young engineers!
A smart piece of teamwork & practice where accuracy and patience join. Nice explanation of concepts and truly adventurous event.
The topic I was interested in was made well understood. Practical experience in aircraft manufacturing and designing was incredible.
The workshop was an overall good experience and gave us a lot of knowledge about aircraft design. It was a truly wonderful experience.
The lecture presented before us was informative and it wasn't boring at all. The workshop was held in an excellent way: step by step, well-planned and well-managed.
Aerotrix gives a chance to develop one's skills and interests in aeromodelling at a larger level, and it connects all people having similar passions and interests. Such a collective effort would prove worthwhile and spread awareness about aerospace. Good work guys.
The lecture was good. The visual presentation of all sorts of major activities helped in understanding the concept. The workshop was a great experience. Building our own aircraft is what we dreamt of and it had come true with aerotrix. Thanks AerotriX!
Aerotrix is the best place for all the aero lovers to hone their skills and provides them opportunities just in one click, which were really unavailable for many students in India earlier. Good work guys... keep it up.
The lecture was interesting and interactive. The information given was helpful and I can use it for my future projects. The workshop was exciting, educational and a good way to learn teamwork, coordination and how to actually make a working plane.
The lecture was impeccable and we really had a lot of fun by enjoying the new technology. The workshop was excellent and the respected members guided us well to make the plane.
Good work... looking forward to some exciting events from you. All the best!
It is a great opportunity to listen to lectures like this. Excellent workshop. We enjoyed a lot and we had a great experience. Thank you for your cooperation.
Very good work and awesome... surely it would be beneficial to all students.
Good work!! Hope this would be beneficial to many students and spread awareness about aero-related stuff..
Lecture was very much interesting, unlike the lectures I have seen so far. It was very much informative and not even once did I feel bored. The workshop was creative and I enjoyed it at the same time. Fun and activity packed session.
The lecture had clear presentation of principles in a simple and understandable manner. The workshop was very informative and it is an interesting experience.
Aerotrix.com, provides very good opportunity to learn engineering basics and apply in competitive environment. Keep up your good work!
It was a wonderful class, the like of which I have never attended before. It was a pleasure for our minds to attend the class, and our team work made us build a wonderful flight. We were very eager to construct the flight.
The lecture from the aerotrix team was excellent and the lecturers interacted with us like friends. The instructions given by the lecturers to make the aircraft are very good and they made us develop the aircraft in a group. thank you for that.
It was good. Came to know about areas which are normally not discussed. It was fun and useful. Really had a great time during the workshop. It was well delivered and properly executed.
The lecture was good. Workshop was awesome. The workshop actually dealt with the making of the model. So it was worth attending. Thanks.
Nothing is more wonderful and excellent than making a RC Aircraft by our own hands and making it fly high in the sky.
The lecture was just excellent. The workshop was fully based on practical skills. The team work helped us develop unity and we came to know how to work in groups.
Lecturers of 'AerotriX' are very brilliant and having complete knowledge about aircraft production and explanation. Workshop of 'AerotriX' is really exciting and enjoyable. Thank you for visiting my college.
The workshop was excellent, with a clear lecture which made us understand the concepts. The modelling and fabrication session was awesome and went till late night, 10:30 pm. They were very patient and guided us to design our aircraft.
Thanks to The Aerotrix for a wonderful workshop.
I thank AEROTRIX for helping me to know more about the 'aero' world. Designing and fabricating a model of aircraft in 2 days of workshop was unbelievable but AEROTRIX MADE IT!
It was well worth it and something new to us, and the workshop was amazing; it familiarized us with new stuff and WOKE UP some of the creativity in us.
I had a great time with the lecture, understanding every bit they said, and during the workshop the staff of aerotrix helped us a lot to make things by ourselves and cheered us up to do it with perfection and ease. It was too good!
Good. The workshop was excellent and the guidelines were really useful. Now I am confident enough that I can make my own model. Thank you AerotriX.
It was good and very valuable for us to learn interesting facts about aeromodeling. The workshop process is excellent, but we should take special care during fabrication because it is very sensitive. The operation of an aircraft, i.e. operating a radio-controlled aircraft, should be learnt by every individual.
The workshop at Gitam University was amazing.
I personally got a great idea of ornithopter construction and fabrication. Especially the workshop on the second day was really outstanding.
Lecture was good and made clear to us what exactly an Ornithopter is... and we are now passionate about MAVs. Thank you aerotrix!
Just one word 'awesome'. I liked these kind of hands-on workshop and would like to attend as many as possible. The fabrication part was really good and I am waiting for next workshop.
It was very good. Basic concepts and fundamentals were thoroughly cleared. Design and fabrication were unique and I got to learn a lot of things.
The lecture was very good. The lecturer has vast knowledge and taught us lots of useful things. The workshop was excellent. It was a great experience for all of us; thanks to the college and Aerotrix for providing this kind of workshop. From the aeronautics Engg. side, it was a great event; we got sufficient knowledge about how to make a plane and how a plane flies.
Lecture was great and informative. It was a nice experience to make an object which can fly.
The lecture was interesting and I learnt a lot about the flight of planes. The workshop was very good, as we practically worked with our hands on the model.
Knowledgeable and interesting. learning experience was good and innovative.
Very interesting and fabulous. All the teaching faculty are very interactive and helpful. Looking forward to the next workshop.
The lecture of Mr. Chronister was really good and fabrication done by aerotrix was also very good.
The lecture was very clear and audible. It helped us to get the clear understanding of concepts. The workshop helped me to get ideas about Aerospace.
Truly useful, and the concept was clearly explained. It was an amazing practical which was made easy, and we were satisfied to make our own ornithopter. Everything was perfect, and this should be encouraged among youngsters, to motivate them and teach them something new.
Lecture was nice; he spoke clearly and the concept was explained perfectly. The workshop was a benefit for learning about the ornithopter and how to make it. I would like to be part of the workshop again.
Lecture is very good; it explained the topic point by point clearly. The workshop helped us improve our project development skills. I am happy to be a part of this workshop because it helped us grow our knowledge about our branch's basic projects.
When will your next schedule come? All are waiting, but college will start in a few days.
Sorry, I am writing late. But, fascinated by planes, I went to Aerotrix and they taught me how to make my own RC.
The lecture was informative and gave the necessary background to work on the model.
- Varun Kashyap, participated in the RC aircraft design workshop @ NHCE, Sir M. Visveswaraya Institute of Technology.
The workshop was excellent. Right from the design session to fabrication and flying, it was informative. We literally learnt 'aerospace practically'!!
The lecture was really good; I understood many concepts of aircraft and flight parameters.
The workshop was interesting; being a mechanical student I enjoyed it a lot, and moreover learning new things was a plus point.
All the members of the workshop were really helpful and the interaction was good enough to make the fabrication concepts clear to us.
I have learnt many new technical things; today I feel like I am an engineer.
Awesome lecture, came to know many interesting facts about air vehicles. The workshop was interactive and very productive; I enjoyed it the most.
The lecture was nice and it enhanced our knowledge about aerodynamics and aircraft.
The workshop was enjoyable and we learnt many tricks and facts to make things fly.
The lecture was very informative; all the basics of the IC engine were covered and were easy to understand. The workshop was very useful in gaining practical knowledge of the IC engine.
It was nice; I was fully surprised and satisfied with the lecture. It was really helpful for my knowledge.
The workshop was fully related to creative things, which was enjoyed by every participant and also enhanced our practical knowledge.
This was absolutely the most marvelous workshop I ever did; I enjoyed every moment of it.
Awesome, I can't forget this time; more satisfying than earning money.
I found the lecture given by Mr. Asish P. very easy, and it could be understood by any student of any branch of engineering. Aerotrix team members helped us prepare our own glider aircraft. That was a great experience and learning with AEROTRIX.
Awesome lecture; all the doubts were explained clearly in the best manner. The fabrication part was fantastic; everything was told briefly and we got great help from their side.
The lecture was fantastic and created new ideas in our minds about an aircraft.
I really enjoyed the workshop; it was an unforgettable workshop and helped me increase my knowledge.
The lecture was really interesting; we got the maximum idea about aircraft from the lecture. The workshop was also very, very interesting and we enjoyed it to the core. It was a thrilling experience for us, and we got excellent and memorable experiences from it.
The lecture was nice; we got to know about the basics of an aircraft.
The workshop was excellent and gave us practical knowledge about an aircraft.
The lecture was very nice; it was new for me, but it created an interest in aerospace.
The mechanism of aircraft and the importance of each control of an aircraft were taught well.
The lecture was very interesting and knowledgeable; the workshop was one of my best experiences after entering the technical field.
It's been the most fantastic workshop I have ever attended. It gives tremendous knowledge about the different aspects of flying and aeronautics.
The workshop was awesome and the faculty was very friendly. Students were able to gain a lot of practical knowledge.
Wish That AEROTRIX visit our college soon.
The workshop was very useful and enlightening for interested students. The idea of first-hand assembly and disassembly of the engine is practically very awesome.
Thanks for your wonderful gatherings.
The lecture was good and the concept was properly explained.
And the workshop was interesting.
The lecture was satisfactory. It is the first time that I attended any sort of workshop and I enjoyed it very much.
The lecture was fabulous. The workshop session was enjoyable. The step-by-step approach, understanding and working with friends left fabulous imprints.
The lecture was very good; I learned many things about the glider and the concept behind the lift of a glider. It was a very good experience fabricating a glider.
The lecture was good and delivered well; the workshop was conducted nicely.
Nice concept regarding practical glider motion; I learnt about how gliders fly. The workshop gave me a chance to deal practically with the concept, which was very nice.
The workshop was very perfect; I learned about the basics of an aircraft.
The lecture was very subjective from the aeronautical engineering point of view; the workshop was perfect according to my expectations. It encouraged me very much to work in the aerospace sector in the future. I plan to attend all workshops conducted by Aerotrix.
It was very useful; we gained practical knowledge from this workshop and it also gave us a good foundation for future studies.
I liked the workshop very much; it was very useful for me. I got to learn the basics of an aircraft and its designing. I thank Aerotrix for giving me such a wonderful opportunity.
The lecture was excellent and innovative; the workshop was good and it increased creativity in us. We got a chance to express our own creativity in the workshop.
The lecture was good and workshop was excellent.
I would really like to make a remote control aircraft,which would really be more interesting.
The lecture was very good and easy to understand. Students gain practical knowledge in these kinds of workshops.
Was good ,knowledgeable. Would like to have such sessions in our camps, was manageable and quite innovative.
The lecture was very good and very informative. The workshop was really good as things were made by students only.
The lecture was very informative. It was presented well and was very well put across. The hands-on approach was a good experience, although I wish there was scope for innovation.
The lecture was good and we got good knowledge about the aircrafts and its related terminologies. The workshop was fun and interesting. We really learned a lot about the importance of aircraft design.
The lecture was interesting with a good presentation and the videos were interesting too. It is a very good way to make the students design their own planes.
The lecture was very educative. Helped in understanding the basics of aerodynamics and flight. The workshop was very interactive and well-organised. It helped gain hands on experience. Good facilities and components provided.
Nothing more than what was taught in the workshop is required to build an aircraft. Fully hands-on, with special care for every bit of work by the mentors. Thanks... thanks a lot! To my knowledge nothing else is required for beginners.
A very amazing and very enthusiastic workshop by Team Aerotrix. Each and every participant of the workshop was enthralled by the RC airshow and the new concept of Ornithopters imparted to them, and gave the feedback that this was the best workshop they had attended till date. Therefore I, as part of the organizing group and as part of Team Soaring Eagles, would like to thank Team Aerotrix for coming to our college, and we hope that in future also we organize workshops on RC Flying in our college with Soaring Eagles in association with Aerotrix. Once again, thanks a lot for the amazing workshop.
Lecture was informative and novel. Nobody's ever given a workshop on the topic before. Workshop was fun and involving. Good hands-on experience and allows application of theoretical knowledge in practical life.
It was a better lecture compared to our teachers' lectures. We could understand it in his first explanation itself. This was the best workshop I have ever seen. I have learnt many practical things and had fun.
Lecture was a very interesting and entertaining one. I had only theoretical knowledge about bird flight, but this workshop provided me with practical knowledge by making an ornithopter of our own, which I liked very much.
The lecture was very informative and it was described in a very clear manner, so that all of us got it very clearly. It was really a very helpful workshop for all of us. We got an innovative idea to make ornithopters of our own.
The lecture was really good and informative. The workshop was the best part of the program and was really fun. We got first-hand knowledge on the design of wings, aerofoils, structure etc.
It was good. We had a visual look of the concept apart from what we had studied in text books. We got more knowledge than what we got in our class room.
Workshop was very worthy; we learnt a lot, with good examples used for the basic design of an aircraft.
It was good, informative and a great value addition. The practicals were great.
It was quite helpful. We were able to get new ideas about the RC plane. It was a really awesome experience; the teachers were very helpful. The atmosphere in which we were working was very good.
This workshop is good & very useful to me. Now I am clear with the basics.
"Tell me and I will not understand; show me and I may understand; involve me and I will understand." This quote sums up what we have done in the workshop. We basically learnt how an aircraft is made, and it was very useful.
The lecture was superb, clear and very informative. Modelling an aircraft and making it is a mind-blowing experience.
I learned a lot about aircraft modelling; the workshop was a great time for me to learn the basics of aircraft design.
It helped me to understand the basic concepts of aerodynamics and aircraft design. The workshop gave us practical knowledge of designing an aircraft.
Really impressive, got to know many concepts about aircraft & how they really fly. It was wonderful to construct a plane for the very first time with my friends. Had lots of fun & learned many things.
The Aerotrix lecture helped me to make my concepts clear and gave me the opportunity to know many things related to aircraft. I learnt how to make the aircraft design and its working principle.
A very helpful workshop which helped us in understanding the basic details and wonderful world of aeronautical science.
Frankly speaking, this was the first lecture that I attended in IIT-G. I found the lecture impressive, and it didn't lack any charm.
It's good for young brains.
Sir, I am from IIT Kanpur and have heard a lot about your lectures before workshops. I was not able to attend your workshop in Techkriti'11 but now want to attend one. Please tell me where I can attend one nearest and earliest!!
Hello to the 'AEROTRIX GANG', we are proud to say that now we all know how to make an aircraft out of thermocol and set the motors in it, and it will fly in the sky very successfully. But we want another training at our 'EXCEL COLLEGE OF ENGINEERING AND TECHNOLOGY' like that one, though we need at least 5 days to learn to make an aircraft, in a new style of aircraft model. BYE TO AEROTRIX FRIENDS.
Being a 1st year aerospace student, it was a very exciting experience, which helped me to know more about my field. Everything taught was very clear. The lecture was very engaging.
The lecture was really outstanding. It was delivered in an understandable way, and the lecturer was friendly. I loved the workshop very much & thank AEROTRIX for organizing such a wonderful & useful workshop.
The lectures are good and helpful. They are friendly and show responsibility to each and every student. The workshop is different and good. The model making is a new and good one. The coordinators are helpful in making the difficult parts, which is good.
The workshop conducted was awesome and was not only study oriented but also I enjoyed it a lot. I have even decided to do my MINI PROJECT on Radio controlled aircraft after getting trained in this workshop.
This is a very good move for us, and it feels very good to attend this type of lecture. The lecture was very, very good and full of knowledge. This workshop is a great move for engineers; because of it my view has totally changed. Now I can do anything. I really appreciate it.
The lecture was good. Work was what we did. We really worked unlike other workshops, where we used to listen to the lecture & SLEEP; we forgot sleeping and kept on working.
I think the lecture was good; even students who have never studied aircraft could understand it. The workshop was fully satisfying.
The Aerotrix team, having immense knowledge and at the same time great experience, really helped us to gain knowledge from their lectures. The core details, specifications and the logic behind every concept were properly explained. It was really a benchmark for RC aircraft design. Having my interest in RC aircraft, this workshop completely helped my interest take flight, and gaining great experience, I really learnt a lot. Thanks a lot.
The lecture conducted was appreciated. The team comprises two members & both of them are talented and knowledgeable people. The workshop conducted by the AEROTRIX team was excellent.
The lecture was very nice, and they answered every question we asked. The workshop is very interesting, & making the model of the ornithopter was good.
It was very informative and helped us to understand more about the basics of bird flight. After doing this workshop I'm in a position to handle major projects in my technical education.
The lecture was very useful and informative as well as innovative. It will be useful to young engineers like us. The RC workshop was a very innovative & exciting experience for me and my team. It improved my understanding of aircraft.
Lecture was simple to understand and easy to apply in the workshop. Well planned & nicely conducted.
It's really interesting and unforgettable. We enjoyed and experienced each & every second of the workshop. It's highly useful.
The lecture was very awesome. Workshop was very nice I enjoyed doing it.
It's very useful; we learnt a lot from that lecture. We learned to calculate the chord, span of the wing etc. The workshop is very useful. They taught us very clearly.
It was very useful; I think it will help me in the future. The workshop was very good. I gained knowledge from it.
A good lecture; it increased our knowledge, and our interest in Aerotrix developed through it. Though time consuming, it was good to be a part of the workshop. It gives us a lot of knowledge. It's totally practical, and new things are invented through it.
The lecture gave very impressive information in the best way. We learned many things from this workshop. Students are showing very much interest in doing this workshop.
A very interactive lecture. Very informative; each and every doubt was clarified. The entire team was very helpful, guiding us through the construction of the model.
THE FANTA FANTABULOUSLY FANTASTIC AEROTRIX TEAM! It is the best place for all the aero lovers to hone their skills, and provides them opportunities just one click away, which were really unavailable to many students in India earlier. Good work guys... keep it up.
This is my first workshop about aeronautics and I’m very happy to know a lot about aircraft design in my first workshop itself. Your practical approach to the subject is really good. Very interactive, kindled our brains to finish the design in time and really helped me in the making of the aircraft design. Really good.
That was really nice. We got to know very new things from a highly qualified lecturer. It was great to spend time and work with the whole team.
You people are really doing a good job of conducting workshops and helping us brighten up in different aspects.
It is very nice & useful for us. Now we know something about aircraft; thanks for the good lecture. We learned how to make the aircraft, and that's very useful for my future.
A fun and enriching experience. Highly interactive and informative lectures. A hardworking and friendly team.
The best part is that these people never get fed up of students asking for the same thing again and again, but rather clear the doubts with patience, which is true proof of their dedication and love for their work.
Thank you all for coming. Had an amazing time.
The lecture was very essential, as we came across many facts about aerodynamics. The workshop was conducted in a very good manner and was very interesting. The beauty of the workshop is that participants come from different states. I thank you for giving me this opportunity of making an aircraft model. I learnt that there is no achievement without hard work.
- S. Sathesh Kumar, Government College of Technology, 3rd year Mechanical, Coimbatore.
It was an awesome lecture by the faculty, and it cleared a lot of our concepts. Being in the 1st year, we were not expected to know this much, but this lecture helped me in knowing about my field.
The lecture was very interesting, the presentation made difficult topics simpler. The session was also interactive. Workshop conducted in a very helpful manner. All the requirements & necessities were very well managed.
The lecture was great; it shared a lot of practical knowledge about the particular subject related to RC aircraft design. The workshop was awesome, I can't explain it in words. I want to take part in more such workshops.
It's really good experience. The workshop was really worthwhile and skillful.
The lecture was very good, and the way of delivering it was fluent & helpful. The way of making the model is very interesting, and the way they taught us was nice.
The lecture was nice, which included all the basic concepts of flying. Workshop was different from the previous workshops, the AEROTRIX people were helpful and co - operative.
Really exciting. Had to work hard. Good concepts, which will be helpful. Had to stay till late night, but worth the time for the knowledge we gained.
In the workshop I got the clear idea about the plane as well as the fundamentals of an aircraft, forces acting on it & configurations.
The lecture was good; every concept was taught from the basics. It's easy to understand for students of other branches also. The workshop kit, materials, organizing, everything was well & good.
Lecture was good and interesting. It’s easy to understand for all departments. Workshop was good & interesting.
Hello sir, we feel that this workshop is very useful to us; we would like another workshop at our college.
The lecture was really good, and we wish they would help more students to excel in these types of events, which is wonderful. The workshop taught us some tactics to change the way of doing some work or a job. I really feel good to be a part of this workshop. Aerotrix Rockzzz.
The lecture was very conceptual and interesting. It was our first exposure to aircraft design. The workshop was interesting, and it made us work hard for 2 days to build an aircraft; it was a good technical experience for us.
The lecture was a quite interactive session and was very good. It is really interesting and enriching. This workshop is quite effective in bringing out designing talent and is focus oriented.
It was really interesting and wonderful experience. Got some real practical knowledge.
It was a good opportunity to learn the physics behind the flying of birds and the ornithopter itself. It was a good "practical" experience where theory was well applied in practice.
The best lecture I've attended in the recent past. Exciting workshop.
It is a simply superb and interesting workshop.
It is really a matter of honour to organize an event as such at BITS Pilani, Goa. This develops innovative thinking and is a platform for a creative approach.
For the official language of China, Taiwan and Singapore, also known as Mandarin, see Standard Chinese. For other languages spoken in China, see Languages of China.
Unless otherwise specified, Chinese in this article is written in simplified Chinese/traditional Chinese; Pinyin order. If the simplified and traditional characters are the same, they are written only once.
"Han language" redirects here. For the Athabaskan language, see Hän language.
Chinese (simplified Chinese: 汉语; traditional Chinese: 漢語; pinyin: Hànyǔ; literally: 'Han language'; or especially though not exclusively for written Chinese: 中文; Zhōngwén; 'Chinese writing') is a group of related, but in many cases not mutually intelligible, language varieties, forming the Sinitic branch of the Sino-Tibetan language family. Chinese is spoken by the ethnic Chinese majority and many minority ethnic groups in China. About 1.2 billion people (around 16% of the world's population) speak some form of Chinese as their first language.
The varieties of Chinese are usually described by native speakers as dialects of a single Chinese language, but linguists note that they are as diverse as a language family. The internal diversity of Chinese has been likened to that of the Romance languages, but may be even more varied. There are between 7 and 13 main regional groups of Chinese (depending on classification scheme), of which the most spoken by far is Mandarin (about 960 million, e.g. Southwestern Mandarin), followed by Wu (80 million, e.g. Shanghainese), Min (70 million, e.g. Southern Min), Yue (60 million, e.g. Cantonese), etc. Most of these groups are mutually unintelligible, and even dialect groups within Min Chinese may not be mutually intelligible. Some, however, like Xiang and certain Southwest Mandarin dialects, may share common terms and a certain degree of intelligibility. All varieties of Chinese are tonal and analytic.
Standard Chinese (Pǔtōnghuà/Guóyǔ/Huáyǔ) is a standardized form of spoken Chinese based on the Beijing dialect of Mandarin. It is the official language of China and Taiwan, as well as one of the four official languages of Singapore. It is one of the six official languages of the United Nations. The written form of the standard language (中文; Zhōngwén), based on the logograms known as Chinese characters (汉字/漢字; Hànzì), is shared by literate speakers of otherwise unintelligible dialects.
The earliest Chinese written records are Shang dynasty-era oracle inscriptions, which can be traced back to 1250 BCE. The phonetic categories of Archaic Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern dynasties period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language (Guanhua) based on the Nanjing dialect of Lower Yangtze Mandarin. Standard Chinese was adopted in the 1930s, and is now the official language of both the People's Republic of China and the Republic of China on Taiwan.
Most linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan and many other languages spoken in the Himalayas and the Southeast Asian Massif. Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact. In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach, and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated.
The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard.
The earliest examples of Chinese are divinatory inscriptions on oracle bones from around 1250 BCE in the late Shang dynasty. Old Chinese was the language of the Western Zhou period (1046–771 BCE), recorded in inscriptions on bronze artifacts, the Classic of Poetry and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacked inflection, indicating grammatical relationships using word order and grammatical particles.
Middle Chinese was the language used during Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th through 10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics.
The relationship between spoken and written Chinese is rather complex. Its spoken varieties have evolved at different rates, while written Chinese itself has changed much less. Classical Chinese literature began in the Spring and Autumn period.
After the fall of the Northern Song dynasty, and during the reign of the Jin (Jurchen) and Yuan (Mongol) dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The Zhongyuan Yinyun (1324) was a dictionary that codified the rhyming conventions of new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects.
Up to the early 20th century, most of the people in China spoke only their local variety. As a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guānhuà (官话/官話, literally "language of officials"). For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court.
In the 1930s a standard national language Guóyǔ (国语/國語 "national language") was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic founded in 1949 retained this standard, calling it pǔtōnghuà (普通话/普通話 "common speech"). The national language is now used in education, the media, and formal situations in both Mainland China and Taiwan. In Hong Kong and Macau, because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life remains the local Cantonese, although the standard language has become very influential and is being taught in schools.
The Chinese language has spread to neighbouring countries through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later Korea, Japan, and Vietnam developed strong central governments modeled on Chinese institutions, with Literary Chinese as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese.
Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean.
Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines.
Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the Hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex Chữ nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters (Kanji) and kana. Korean is written exclusively with Hangul in North Korea, and supplementary Chinese characters (Hanja) are increasingly rarely used in South Korea. Vietnamese is written with a Latin-based alphabet.
Examples of loan words in English include "tea", from Hokkien (Min Nan) tê (茶), "dim sum", from Cantonese dim2 sam1 and "kumquat", from Cantonese gam1gwat1 (金橘).
Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbors. For instance, Wuzhou is about 120 miles (190 km) upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, 60 miles (95 km) southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighboring counties or even villages may be mutually unintelligible.
Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America spoke the Taishan dialect, from a small coastal area southwest of Guangzhou.
Newer classifications, such as that of the Language Atlas of China, distinguish three further groups:
Jin, previously included in Mandarin.
Huizhou, previously included in Wu.
Pinghua, previously included in Yue.
Some varieties remain unclassified, including Danzhou dialect (spoken in Danzhou, on Hainan Island), Waxianghua (spoken in western Hunan) and Shaozhou Tuhua (spoken in northern Guangdong).
Standard Chinese, often called Mandarin, is the official standard language of China and Taiwan, and one of the four official languages of Singapore (where it is called "Huáyŭ" 华语 or simply Chinese). Standard Chinese is based on the Beijing dialect, the dialect of Mandarin as spoken in Beijing. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools.
In mainland China and Taiwan, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai might speak Shanghainese; and, if he or she grew up elsewhere, then he or she is also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese also speak Minnan, Hakka, or an Austronesian language. A Taiwanese may commonly mix pronunciations, phrases, and words from Mandarin and other Taiwanese languages, and this mixture is considered normal in daily or informal speech.
The official Chinese designation for the major branches of Chinese is fāngyán (方言, literally "regional speech"), whereas the more closely related varieties within these are called dìdiǎn fāngyán (地点方言/地點方言 "local speech"). Conventional English-language usage in Chinese linguistics is to use dialect for the speech of a particular place (regardless of status) and dialect group for a regional grouping such as Mandarin or Wu. Because varieties from different groups are not mutually intelligible, some scholars prefer to describe Wu and others as separate languages. Jerry Norman called this practice misleading, pointing out that Wu, which itself contains many mutually unintelligible varieties, could not be properly called a single language under the same criterion, and that the same is true for each of the other groups.
Mutual intelligibility is considered by some linguists to be the main criterion for determining whether varieties are separate languages or dialects of a single language, although others do not regard it as decisive, particularly when cultural factors interfere as they do with Chinese. As Campbell (2008) explains, linguists often ignore mutual intelligibility when varieties share intelligibility with a central variety (i.e. prestige variety, such as Standard Mandarin), as the issue requires some careful handling when mutual intelligibility is inconsistent with language identity. John DeFrancis argues that it is inappropriate to refer to Mandarin, Wu and so on as "dialects" because the mutual unintelligibility between them is too great. On the other hand, he also objects to considering them as separate languages, as it incorrectly implies a set of disruptive "religious, economic, political, and other differences" between speakers that exist, for example, between French Catholics and English Protestants in Canada, but not between speakers of Cantonese and Mandarin in China, owing to China's near-uninterrupted history of centralized government.
Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed: ISO 639-3 follows Ethnologue in assigning individual language codes to the 13 main subdivisions, while Chinese as a whole is classified as a 'macrolanguage'. Other options include vernacular, lect, regionalect, topolect, and variety.
Most Chinese people consider the spoken varieties as one single language because speakers share a common culture and history, as well as a shared national identity and a common written form. To Chinese nationalists, the idea of Chinese as a language family may suggest that the Chinese identity is much more fragmented and disunified than it actually is and as such is often looked upon as culturally and politically provocative. Additionally, in Taiwan it is closely associated with Taiwanese independence, some of whose supporters promote the local Taiwanese Hokkien variety.
The phonological structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant+glide; zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants /m/ and /ŋ/ can stand alone as their own syllable.
In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals /m/, /n/, /ŋ/, the retroflex approximant /ɻ/, and voiceless stops /p/, /t/, /k/, or /ʔ/. Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only /n/, /ŋ/ and /ɻ/.
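The syllable template described above (optional onset, vowel nucleus, optional coda) can be sketched as a toy decomposition function. This is purely illustrative: the onset and coda inventories below are simplified assumptions keyed to toneless pinyin romanization of Standard Chinese, not a complete phonology.

```python
# Toy sketch: split a toneless pinyin syllable into onset + nucleus + coda.
# The inventories are simplified assumptions, not an exhaustive phonology.
ONSETS = ["zh", "ch", "sh",  # digraph onsets must be tried before "z"/"c"/"s"
          "b", "p", "m", "f", "d", "t", "n", "l",
          "g", "k", "h", "j", "q", "x", "r", "z", "c", "s"]
CODAS = ["ng", "n", "r"]     # Standard Chinese allows only these codas

def split_syllable(syllable):
    """Return (onset, nucleus, coda); onset or coda may be empty."""
    onset = ""
    for o in ONSETS:                      # longest onsets listed first
        if syllable.startswith(o):
            onset = o
            break
    rest = syllable[len(onset):]
    coda = ""
    for c in CODAS:                       # "ng" checked before "n"
        if rest.endswith(c) and len(rest) > len(c):  # keep a nonempty nucleus
            coda = c
            break
    nucleus = rest[:len(rest) - len(coda)] if coda else rest
    return onset, nucleus, coda

print(split_syllable("ma"))     # ('m', 'a', '')
print(split_syllable("zhang"))  # ('zh', 'a', 'ng')
print(split_syllable("er"))     # ('', 'e', 'r')
```

A real syllabifier would also need to validate finals and handle glides; this sketch only mirrors the onset/nucleus/coda template described in the text.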
All varieties of spoken Chinese use tones to distinguish words. A few dialects of north China may have as few as three tones, while some dialects in south China have up to 6 or 12 tones, depending on how one counts. One exception to this is Shanghainese, which has reduced the set of tones to a two-toned pitch accent system much like modern Japanese.
The four main tones of Standard Mandarin, illustrated with the syllable ma:

Character   Pinyin   Tone contour         Meaning
妈/媽        mā       high level           "mother"
麻           má       high rising          "hemp"
马/馬        mǎ       low falling-rising   "horse"
骂/罵        mà       high falling         "scold"
The tones of Cantonese, illustrated with the syllables si and sik:

Character   Romanization   Tone contour               Meaning
诗/詩        si1            high level, high falling   "poem"
史           si2            high rising                "history"
弒           si3            mid level                  "to assassinate"
时/時        si4            low falling                "time"
市           si5            low rising                 "market"
是           si6            low level                  "yes"
色           sik1           high level (stopped)       "color"
锡/錫        sik3           mid level (stopped)        "tin"
食           sik6           low level (stopped)        "to eat"
Chinese is often described as a "monosyllabic" language. However, this is only partially correct. It is largely accurate when describing Classical Chinese and Middle Chinese; in Classical Chinese, for example, perhaps 90% of words correspond to a single syllable and a single character. In the modern varieties, it is usually the case that a morpheme (unit of meaning) is a single syllable; in contrast, English has plenty of multi-syllable morphemes, both bound and free, such as "seven", "elephant", "para-" and "-able".
This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as shí (tone 2): 十 "ten"; 实/實 "real, actual"; 识/識 "know (a person), recognize"; 石 "stone"; 时/時 "time"; 食 "food, eat". These were all pronounced differently in Early Middle Chinese; in William H. Baxter's transcription they were dzyip, zyit, syik, dzyek, dzyi and zyik respectively. They are still pronounced differently in today's Cantonese; in Jyutping they are sap9, sat9, sik7, sek9, si4, sik9. In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is; Yuen Ren Chao's modern poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced shi. As such, most of these words have been replaced (in speech, if not in writing) with a longer, less-ambiguous compound. Only the first one, 十 "ten", normally appears as such when spoken; the rest are normally replaced with, respectively, shíjì 实际/實際 (lit. "actual-connection"); rènshi 认识/認識 (lit. "recognize-know"); shítou 石头/石頭 (lit. "stone-head"); shíjiān 时间/時間 (lit. "time-interval"); shíwù 食物 (lit. "food-thing"). In each case, the homophone was disambiguated by adding another morpheme, typically either a synonym or a generic word of some sort (for example, "head", "thing"), the purpose of which is simply to indicate which of the possible meanings of the other, homophonic syllable should be selected.
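Using only the readings quoted above, a small script makes the collapse concrete: six readings that remain distinct in Cantonese have merged into a single Mandarin reading. The data is taken verbatim from the paragraph; the script itself is just set arithmetic.

```python
# Readings quoted in the text: (character, gloss, Mandarin pinyin, Cantonese).
words = [
    ("十", "ten",       "shí", "sap9"),
    ("实", "real",      "shí", "sat9"),
    ("识", "recognize", "shí", "sik7"),
    ("石", "stone",     "shí", "sek9"),
    ("时", "time",      "shí", "si4"),
    ("食", "eat",       "shí", "sik9"),
]

# Collect the distinct readings in each variety.
mandarin_readings = {m for _, _, m, _ in words}
cantonese_readings = {c for _, _, _, c in words}

print(len(mandarin_readings))   # 1 — total merger in Mandarin
print(len(cantonese_readings))  # 6 — all still distinct in Cantonese
```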
However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic. For example, shí 石 alone, not shítou 石头/石頭, appears in compounds meaning "stone-", for example, shígāo 石膏 "plaster" (lit. "stone cream"), shíhuī 石灰 "lime" (lit. "stone dust"), shíkū 石窟 "grotto" (lit. "stone cave"), shíyīng 石英 "quartz" (lit. "stone flower"), shíyóu 石油 "petroleum" (lit. "stone oil").
Most modern varieties of Chinese have the tendency to form new words through disyllabic, trisyllabic and tetrasyllabic compounds. In some cases, monosyllabic words have become disyllabic without compounding, as in kūlong 窟窿 from kǒng 孔; this is especially common in Jin.
Chinese morphology is strictly bound to a set number of syllables with a fairly rigid construction. Although many of these single-syllable morphemes (zì, 字) can stand alone as individual words, they more often than not form multi-syllabic compounds, known as cí (词/詞), which more closely resemble the traditional Western notion of a word. A Chinese cí ("word") can consist of more than one character-morpheme, usually two, but there can be three or more.
hànbǎobāo, hànbǎo 汉堡包/漢堡包, 汉堡/漢堡 – "hamburger"
wǒ 我 – "I, me"
rén 人 – "people, human, mankind"
dìqiú 地球 – "The Earth"
All varieties of spoken Chinese make heavy use of grammatical particles to indicate aspect and mood. In Mandarin Chinese, this involves the use of particles like le 了 (perfective), hái 还/還 ("still"), yǐjīng 已经/已經 ("already"), and so on.
Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages like Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping.
Although the grammars of the spoken varieties share many traits, they do possess differences.
The entire Chinese character corpus since antiquity comprises well over 20,000 characters, of which only roughly 10,000 are now commonly in use. However, Chinese characters should not be confused with Chinese words. Because most Chinese words are made up of two or more characters, there are many more Chinese words than characters. A more accurate equivalent for a Chinese character is the morpheme, as characters represent the smallest grammatical units with individual meanings in the Chinese language.
Estimates of the total number of Chinese words and lexicalized phrases vary greatly. The Hanyu Da Zidian, a compendium of Chinese characters, includes 54,678 head entries for characters, including bone oracle versions. The Zhonghua Zihai (1994) contains 85,568 head entries for character definitions, and is the largest reference work based purely on characters and their literary variants. The CC-CEDICT project (2010) contains 97,404 contemporary entries including idioms, technology terms and names of political figures, businesses and products. The 2009 version of the Webster's Digital Chinese Dictionary (WDCD), based on CC-CEDICT, contains over 84,000 entries.
The most comprehensive pure linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999 revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry definitions under 19,485 Chinese characters, including proper names, phrases and common zoological, geographical, sociological, scientific and technical terms.
The 7th (2016) edition of Xiandai Hanyu Cidian, an authoritative one-volume dictionary on modern standard Chinese language as used in mainland China, has 13,000 head characters and defines 70,000 words.
Like any other language, Chinese has absorbed a sizable number of loanwords from other cultures. Most Chinese words are formed out of native Chinese morphemes, including words describing imported objects and ideas. However, direct phonetic borrowing of foreign words has gone on since ancient times.
Some early Indo-European loanwords in Chinese have been proposed, notably 蜜 mì "honey", 狮/獅 shī "lion," and perhaps also 马/馬 mǎ "horse", 猪/豬 zhū "pig", 犬 quǎn "dog", and 鹅/鵝 é "goose".[f] Ancient words borrowed from along the Silk Road since Old Chinese include 葡萄 pútáo "grape", 石榴 shíliu/shíliú "pomegranate" and 狮子/獅子 shīzi "lion". Some words were borrowed from Buddhist scriptures, including 佛 Fó "Buddha" and 菩萨/菩薩 Púsà "bodhisattva." Other words came from nomadic peoples to the north, such as 胡同 hútòng "hutong". Words borrowed from the peoples along the Silk Road, such as 葡萄 "grape," generally have Persian etymologies. Buddhist terminology is generally derived from Sanskrit or Pāli, the liturgical languages of North India. Words borrowed from the nomadic tribes of the Gobi, Mongolian or northeast regions generally have Altaic etymologies, such as 琵琶 pípá, the Chinese lute, or 酪 lào/luò "cheese" or "yoghurt", but from exactly which source is not always clear.
Modern neologisms are primarily translated into Chinese in one of three ways: free translation (calque, or by meaning), phonetic translation (by sound), or a combination of the two. Today, it is much more common to use existing Chinese morphemes to coin new words in order to represent imported concepts, such as technical expressions and international scientific vocabulary. Any Latin or Greek etymologies are dropped and converted into the corresponding Chinese characters (for example, anti- typically becomes "反", literally opposite), making them more comprehensible for Chinese but introducing more difficulties in understanding foreign texts. For example, the word telephone was loaned phonetically as 德律风/德律風 (Shanghainese: télífon [təlɪfoŋ], Mandarin: délǜfēng) during the 1920s and widely used in Shanghai, but later 电话/電話 diànhuà (lit. "electric speech"), built out of native Chinese morphemes, became prevalent (電話 is in fact from the Japanese 電話 denwa; see below for more Japanese loans). Other examples include 电视/電視 diànshì (lit. "electric vision") for television, 电脑/電腦 diànnǎo (lit. "electric brain") for computer; 手机/手機 shǒujī (lit. "hand machine") for mobile phone, 蓝牙/藍牙 lányá (lit. "blue tooth") for Bluetooth, and 网志/網誌 wǎngzhì (lit. "internet logbook") for blog in Hong Kong and Macau Cantonese. Occasionally half-transliteration, half-translation compromises are accepted, such as 汉堡包/漢堡包 hànbǎobāo (漢堡 hànbǎo "Hamburg" + 包 bāo "bun") for "hamburger". Sometimes translations are designed so that they sound like the original while incorporating Chinese morphemes (phono-semantic matching), such as 拖拉机/拖拉機 tuōlājī "tractor" (lit. "dragging-pulling machine"), or 马利奥/馬利奧 Mǎlì'ào for the video game character Mario. This is often done for commercial purposes, for example 奔腾/奔騰 bēnténg (lit. "dashing-leaping") for Pentium and 赛百味/賽百味 Sàibǎiwèi (lit. "better-than hundred tastes") for Subway restaurants.
Foreign words, mainly proper nouns, continue to enter the Chinese language by transcription according to their pronunciations. This is done by employing Chinese characters with similar pronunciations. For example, "Israel" becomes 以色列 Yǐsèliè, "Paris" becomes 巴黎 Bālí. A rather small number of direct transliterations have survived as common words, including 沙发/沙發 shāfā "sofa", 马达/馬達 mǎdá "motor", 幽默 yōumò "humor", 逻辑/邏輯 luóji/luójí "logic", 时髦/時髦 shímáo "smart, fashionable", and 歇斯底里 xiēsīdǐlǐ "hysterics". The bulk of these words were originally coined in the Shanghai dialect during the early 20th century and were later loaned into Mandarin, hence their Mandarin pronunciations may differ considerably from their English sources. For example, 沙发/沙發 "sofa" and 马达/馬達 "motor" in Shanghainese sound more like their English counterparts. Cantonese differs from Mandarin with some transliterations, such as 梳化 so1 faa3*2 "sofa" and 摩打 mo1 daa2 "motor".
Western foreign words representing Western concepts have influenced Chinese since the 20th century through transcription. From French came 芭蕾 bālěi "ballet" and 香槟 xiāngbīn "champagne"; from Italian, 咖啡 kāfēi "caffè". English influence is particularly pronounced. From early 20th-century Shanghainese, many English words were borrowed, such as 高尔夫/高爾夫 gāoěrfū "golf" and the above-mentioned 沙发/沙發 shāfā "sofa". Later, American soft influence gave rise to 迪斯科 dísikē/dísīkē "disco", 可乐/可樂 kělè "cola", and 迷你 mínǐ "mini [skirt]". Contemporary colloquial Cantonese has distinct loanwords from English, such as 卡通 kaa1 tung1 "cartoon", 基佬 gei1 lou2 "gay people", 的士 dik1 si6*2 "taxi", and 巴士 baa1 si6*2 "bus". With the rising popularity of the Internet, there is a current vogue in China for coining English transliterations, for example, 粉丝/粉絲 fěnsī "fans", 黑客 hēikè "hacker" (lit. "black guest"), and 博客 bókè "blog". In Taiwan, some of these transliterations are different, such as 駭客 hàikè for "hacker" and 部落格 bùluògé for "blog" (lit. "interconnected tribes").
Another result of the English influence on Chinese is the appearance in Modern Chinese texts of so-called 字母词/字母詞 zìmǔcí (lit. "lettered words") spelled with letters from the English alphabet. This has appeared in magazines, newspapers, on web sites, and on TV: 三G手机/三G手機 "3rd generation cell phones" (三 sān "three" + G "generation" + 手机/手機 shǒujī "mobile phones"), IT界 "IT circles" (IT "information technology" + 界 jiè "industry"), HSK (Hànyǔ Shuǐpíng Kǎoshì, 汉语水平考试/漢語水平考試), GB (Guóbiāo, 国标/國標), CIF价/CIF價 (CIF "Cost, Insurance, Freight" + 价/價 jià "price"), e家庭 "e-home" (e "electronic" + 家庭 jiātíng "home"), W时代/W時代 "wireless era" (W "wireless" + 时代/時代 shídài "era"), TV族 "TV watchers" (TV "television" + 族 zú "social group; clan"), 后PC时代/後PC時代 "post-PC era" (后/後 hòu "after/post-" + PC "personal computer" + 时代/時代), and so on.
Since the 20th century, another source of words has been Japanese using existing kanji (Chinese characters used in Japanese). Japanese re-molded European concepts and inventions into wasei-kango (和製漢語, lit. "Japanese-made Chinese"), and many of these words have been re-loaned into modern Chinese. Other terms were coined by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in classical Chinese literature. For example, jīngjì (经济/經濟; 経済 keizai in Japanese), which in the original Chinese meant "the workings of the state", was narrowed to "economy" in Japanese; this narrowed definition was then re-imported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese words: indeed, there is some dispute over some of these terms as to whether the Japanese or Chinese coined them first. As a result of this loaning, Chinese, Korean, Japanese, and Vietnamese share a corpus of linguistic terms describing modern terminology, paralleling the similar corpus of terms built from Greco-Latin and shared among European languages.
The Chinese orthography centers on Chinese characters, which are written within imaginary square blocks, traditionally arranged in vertical columns, read from top to bottom down a column, and right to left across columns. Chinese characters denote morphemes independent of phonetic change. Thus the character 一 ("one") is uttered yī in Standard Chinese, yat1 in Cantonese and it in Hokkien (a form of Min). Vocabularies from different major Chinese variants have diverged, and colloquial nonstandard written Chinese often makes use of unique "dialectal characters", such as 冇 and 係 for Cantonese and Hakka, which are considered archaic or unused in standard written Chinese.
Written colloquial Cantonese has become quite popular in online chat rooms and instant messaging amongst Hong-Kongers and Cantonese-speakers elsewhere. It is considered highly informal, and does not extend to many formal occasions.
The Chinese had no uniform phonetic transcription system until the mid-20th century, although enunciation patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit and Pali, were the first to attempt to describe the sounds and enunciation patterns of Chinese in a foreign language. After the 15th century, the efforts of Jesuits and Western court missionaries resulted in some rudimentary Latin transcription systems, based on the Nanjing Mandarin dialect.
In Hunan, women in certain areas write their local language in Nü Shu, a syllabary derived from Chinese characters. The Dungan language, considered by many a dialect of Mandarin, is nowadays written in Cyrillic, and was previously written in the Arabic script. The Dungan people are primarily Muslim and live mainly in Kazakhstan, Kyrgyzstan, and Russia; some of the related Hui people also speak the language and live mainly in China.
Each Chinese character represents a monosyllabic Chinese word or morpheme. In 100 CE, the famed Han dynasty scholar Xu Shen classified characters into six categories, namely pictographs, simple ideographs, compound ideographs, phonetic loans, phonetic compounds and derivative characters. Of these, only 4% were categorized as pictographs, including many of the simplest characters, such as rén 人 (human), rì 日 (sun), shān 山 (mountain; hill), shuǐ 水 (water). Between 80% and 90% were classified as phonetic compounds such as chōng 沖 (pour), combining a phonetic component zhōng 中 (middle) with a semantic radical 氵 (water). Almost all characters created since have been made using this format. The 18th-century Kangxi Dictionary recognized 214 radicals.
Modern characters are styled after the regular script. Various other written styles are also used in Chinese calligraphy, including seal script, cursive script and clerical script. Calligraphy artists can write in traditional and simplified characters, but they tend to use traditional characters for traditional art.
There are currently two systems for Chinese characters. The traditional system, used in Hong Kong, Taiwan, Macau and Chinese speaking communities (except Singapore and Malaysia) outside mainland China, takes its form from standardized character forms dating back to the late Han dynasty. The Simplified Chinese character system, introduced by the People's Republic of China in 1954 to promote mass literacy, simplifies most complex traditional glyphs to fewer strokes, many to common cursive shorthand variants.
Singapore, which has a large Chinese community, was the second nation to officially adopt simplified characters; they have also become the de facto standard for younger ethnic Chinese in Malaysia. The Internet provides the platform to practice reading these alternative systems, be it traditional or simplified.
A well-educated Chinese reader today recognizes approximately 4,000 to 6,000 characters; approximately 3,000 characters are required to read a Mainland newspaper. The PRC government defines literacy amongst workers as a knowledge of 2,000 characters, though this would be only functional literacy. School-children typically learn around 2,000 characters whereas scholars may memorize up to 10,000. A large unabridged dictionary, like the Kangxi Dictionary, contains over 40,000 characters, including obscure, variant, rare, and archaic characters; fewer than a quarter of these characters are now commonly used.
"National language" (國語/国语; Guóyǔ) written in Traditional and Simplified Chinese characters, followed by various romanizations.
Romanization is the process of transcribing a language into the Latin script. There are many systems of romanization for the Chinese varieties, due to the lack of a native phonetic transcription until modern times. Chinese is first known to have been written in Latin characters by Western Christian missionaries in the 16th century.
Today the most common romanization standard for Standard Chinese is Hanyu Pinyin, often known simply as pinyin, introduced in 1956 by the People's Republic of China, and later adopted by Singapore and Taiwan. Pinyin is almost universally employed now for teaching standard spoken Chinese in schools and universities across America, Australia and Europe. Chinese parents also use Pinyin to teach their children the sounds and tones of new words. In school books that teach Chinese, the Pinyin romanization is often shown below a picture of the thing the word represents, with the Chinese character alongside.
The second-most common romanization system, the Wade–Giles, was invented by Thomas Wade in 1859 and modified by Herbert Giles in 1892. As this system approximates the phonology of Mandarin Chinese into English consonants and vowels, i.e. it is an Anglicization, it may be particularly helpful for beginner Chinese speakers of an English-speaking background. Wade–Giles was found in academic use in the United States, particularly before the 1980s, and until 2009 was widely used in Taiwan.
When used within European texts, the tone transcriptions in both pinyin and Wade–Giles are often left out for simplicity; Wade–Giles' extensive use of apostrophes is also usually omitted. Thus, most Western readers will be much more familiar with Beijing than they will be with Běijīng (pinyin), and with Taipei than T'ai²-pei³ (Wade–Giles). This simplification presents as homophones syllables that are in fact distinct, and therefore exaggerates the number of homophones almost by a factor of four.
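The collapse described here can be illustrated with a short Python sketch (an illustrative aside, not part of any romanization standard): decomposing pinyin text with Unicode normalization and dropping the combining marks conflates the four tonal variants of a syllable into one toneless spelling.

```python
import unicodedata

def strip_tones(pinyin: str) -> str:
    """Remove tone diacritics from a pinyin string by decomposing
    each character (NFD) and discarding combining marks."""
    decomposed = unicodedata.normalize("NFD", pinyin)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

# Four phonemically distinct Mandarin syllables collapse to one form:
toneless = {strip_tones(s) for s in ["shī", "shí", "shǐ", "shì"]}
# toneless == {"shi"}

# The familiar toneless spelling of a place name:
print(strip_tones("Běijīng"))  # prints "Beijing"
```

Since Mandarin has four tones (plus a neutral tone), each toneless spelling can stand for up to four or five distinct syllables, which is the "factor of four" mentioned above.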
Other systems of romanization for Chinese include Gwoyeu Romatzyh, the French EFEO, the Yale system (invented during WWII for U.S. troops), as well as separate systems for Cantonese, Min Nan, Hakka, and other Chinese varieties.
Chinese varieties have been phonetically transcribed into many other writing systems over the centuries. The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of premodern forms of Chinese.
There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius system.
Yang Lingfu, former curator of the National Museum of China, giving Chinese language instruction at the Civil Affairs Staging Area in 1945.
With the growing importance and influence of China's economy globally, Mandarin instruction is gaining popularity in schools in the United States, and has become an increasingly popular subject of study amongst the young in the Western world, as in the UK.
In 1991 there were 2,000 foreign learners taking China's official Chinese Proficiency Test (also known as HSK, comparable to the English Cambridge Certificate), while in 2005, the number of candidates had risen sharply to 117,660. By 2010, 750,000 people had taken the Chinese Proficiency Test. By 2017, 6.5 million candidates had taken the Chinese Proficiency Test of various kinds.
According to the Modern Language Association, there were 550 elementary, junior high and senior high schools providing Chinese programs in the United States in 2015, which represented a 100% increase in two years. At the same time, enrollment in Chinese language classes at college level had an increase of 51% from 2002 to 2015. On the other hand, the American Council on the Teaching of Foreign Languages also had figures suggesting that 30,000 – 50,000 students were studying Chinese in 2015.
In 2016, more than half a million Chinese students pursued post-secondary education overseas, whereas 400,000 international students came to China for higher education. Tsinghua University hosted 35,000 students from 116 countries in the same year.
With the increase in demand for Chinese as a second language, there are 330 institutions teaching Chinese language globally according to the Chinese Ministry of Education. The establishment of Confucius Institutes, which are the public institutions affiliated with the Ministry of Education of China, aims at promoting Chinese language and culture as well as supporting Chinese teaching overseas. There were more than 480 Confucius Institutes worldwide as of 2014.
^ No specific variety of Chinese is official in Hong Kong and Macau. Residents predominantly speak Cantonese and use traditional Chinese characters, the de facto regional standard. Standard Mandarin and simplified Chinese characters as the national standard are also used in some official and educational settings. The HK SAR Government promotes 两文三語 [Bi-literacy (Chinese, English) and Tri-lingualism (Cantonese, Mandarin, English)], while the Macau SAR Government promotes 三文四語 [Tri-literacy (Chinese, Portuguese, English) and Quad-lingualism (Cantonese, Mandarin, Portuguese, English)], especially in public education.
David Crystal, The Cambridge Encyclopedia of Language (Cambridge: Cambridge University Press, 1987), p. 312. "The mutual unintelligibility of the varieties is the main ground for referring to them as separate languages."
Charles N. Li, Sandra A. Thompson. Mandarin Chinese: A Functional Reference Grammar (1989), p. 2. "The Chinese language family is genetically classified as an independent branch of the Sino-Tibetan language family."
Norman (1988), p. 1. "[...] the modern Chinese dialects are really more like a family of languages [...]"
DeFrancis (1984), p. 56. "To call Chinese a single language composed of dialects with varying degrees of difference is to mislead by minimizing disparities that according to Chao are as great as those between English and Dutch. To call Chinese a family of languages is to suggest extralinguistic differences that in fact do not exist and to overlook the unique linguistic situation that exists in China."
^ a b DeFrancis (1984), p. 42 counts Chinese as having 1,277 tonal syllables, and about 398 to 418 if tones are disregarded; he cites Jespersen, Otto (1928) Monosyllabism in English; London, p. 15 for a count of over 8000 syllables for English.
^ A word pronounced in a wrong tone or inaccurate tone sounds as puzzling as if one said bud in English, meaning 'not good' or 'the thing one sleeps in.'
^ A distinction is made between 他 as "he" and 她 as "she" in writing, but this is a 20th-century introduction, and both characters are pronounced in exactly the same way.
^ Encyclopædia Britannica s.v. "Chinese languages": "Old Chinese vocabulary already contained many words not generally occurring in the other Sino-Tibetan languages. The words for 'honey' and 'lion', and probably also 'horse', 'dog', and 'goose', are connected with Indo-European and were acquired through trade and early contacts. (The nearest known Indo-European languages were Tocharian and Sogdian, a middle Iranian language.) A number of words have Austroasiatic cognates and point to early contacts with the ancestral language of Muong–Vietnamese and Mon–Khmer."; Jan Ulenbrook, Einige Übereinstimmungen zwischen dem Chinesischen und dem Indogermanischen (1967) proposes 57 items; see also Tsung-tung Chang, 1988 Indo-European Vocabulary in Old Chinese.
^ a b Chinese Academy of Social Sciences (2012), p. 3.
^ Mair (1991), pp. 10, 21.
^ Norman (1988), pp. 12–13.
^ Handel (2008), pp. 422, 434–436.
^ Handel (2008), p. 426.
^ Handel (2008), p. 431.
^ Norman (1988), pp. 183–185.
^ Schuessler (2007), p. 1.
^ Baxter (1992), pp. 2–3.
^ Norman (1988), pp. 42–45.
^ Baxter (1992), p. 177.
^ Baxter (1992), pp. 181–183.
^ Schuessler (2007), p. 12.
^ Baxter (1992), pp. 14–15.
^ Ramsey (1987), p. 125.
^ Norman (1988), pp. 34–42.
^ Norman (1988), p. 24.
^ Norman (1988), p. 48.
^ Norman (1988), pp. 48–49.
^ Norman (1988), pp. 49–51.
^ Norman (1988), pp. 133, 247.
^ Norman (1988), p. 136.
^ Coblin (2000), pp. 549–550.
^ Coblin (2000), pp. 540–541.
^ Ramsey (1987), pp. 3–15.
^ Norman (1988), p. 133.
^ Zhang & Yang (2004).
^ Sohn & Lee (2003), p. 23.
^ Miller (1967), pp. 29–30.
^ Kornicki (2011), pp. 75–77.
^ Kornicki (2011), p. 67.
^ Miyake (2004), pp. 98–99.
^ Shibatani (1990), pp. 120–121.
^ Sohn (2001), p. 89.
^ Shibatani (1990), p. 146.
^ Wilkinson (2000), p. 43.
^ Shibatani (1990), p. 143.
^ a b Wurm et al. (1987).
^ a b c Norman (2003), p. 72.
^ Norman (1988), pp. 189–190.
^ Ramsey (1987), p. 23.
^ Norman (1988), p. 188.
^ Norman (1988), p. 191.
^ Ramsey (1987), p. 98.
^ Norman (1988), p. 181.
^ Kurpaska (2010), pp. 53–55.
^ Kurpaska (2010), pp. 55–56.
^ Kurpaska (2010), pp. 72–73.
^ Klöter, Henning (2004). "Language Policy in the KMT and DPP eras". China Perspectives. 56. ISSN 1996-4617. Retrieved 30 May 2015.
^ Kuo, Yun-Hsuan (2005). New dialect formation : the case of Taiwanese Mandarin (PhD). University of Essex. Retrieved 26 June 2015.
^ a b DeFrancis (1984), p. 57.
^ Thomason (1988), pp. 27–28.
^ Mair (1991), p. 17.
^ DeFrancis (1984), p. 54.
^ Romaine (2000), pp. 13, 23.
^ Wardaugh & Fuller (2014), pp. 28–32.
^ Liang (2014), pp. 11–14.
^ Hymes (1971), p. 64.
^ Thomason (1988), p. 27.
^ Campbell (2008), p. 637.
^ DeFrancis (1984), pp. 55–57.
^ Lewis, Simons & Fennig (2015).
^ Haugen (1966), p. 927.
^ Mair (1991), p. 7.
^ Hudson (1996), p. 22.
^ Baxter (1992), p. 7–8.
^ Norman (1988), p. 52.
^ Chao (1948), p. 24.
^ Matthews & Yip (1994), pp. 20–22.
^ Terrell, Peter, ed. (2005). Langenscheidt Pocket Chinese Dictionary. Berlin and Munich: Langenscheidt KG. ISBN 978-1-58573-057-5.
^ Norman (1988), p. 10.
^ Kane (2006), p. 161.
^ Zimmermann, Basile (2010). "Redesigning Culture: Chinese Characters in Alphabet-Encoded Networks". Design and Culture. 2 (1).
^ "How hard is it to learn Chinese?". BBC News. January 17, 2006. Retrieved April 28, 2010.
^ (in Chinese) "汉语水平考试中心:2005年外国考生总人数近12万",Gov.cn Xinhua News Agency, January 16, 2006.
^ a b "Chinese as a second language growing in popularity". CGTN America. 2015-03-03. Retrieved 2017-07-29.
^ "China is third most popular destination for international students". CGTN America. 2017-03-18. Retrieved 2017-07-29.
Campbell, Lyle (2008), "[Untitled review of Ethnologue, 15th edition]", Language, 84 (3): 636–641, doi:10.1353/lan.0.0054.
Chappell, Hilary (2008), "Variation in the grammaticalization of complementizers from verba dicendi in Sinitic languages", Linguistic Typology, 12 (1): 45–98, doi:10.1515/lity.2008.032.
Chinese Academy of Social Sciences (2012), Zhōngguó yǔyán dìtú jí (dì 2 bǎn): Hànyǔ fāngyán juǎn 中国语言地图集(第2版):汉语方言卷 [Language Atlas of China (2nd edition): Chinese dialect volume], Beijing: The Commercial Press, ISBN 978-7-100-07054-6.
Coblin, W. South (2000), "A brief history of Mandarin", Journal of the American Oriental Society, 120 (4): 537–552, doi:10.2307/606615, JSTOR 606615.
Handel, Zev (2008), "What is Sino-Tibetan? Snapshot of a Field and a Language Family in Flux", Language and Linguistics Compass, 2 (3): 422–441, doi:10.1111/j.1749-818X.2008.00061.x.
Hudson, R. A. (1996), Sociolinguistics (2nd ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-56514-1.
Hymes, Dell (1971), "Sociolinguistics and the ethnography of speaking", in Ardener, Edwin, Social Anthropology and Language, Routledge, pp. 47–92, ISBN 978-1-136-53941-1.
Kornicki, P.F. (2011), "A transnational approach to East Asian book history", in Chakravorty, Swapan; Gupta, Abhijit, New Word Order: Transnational Themes in Book History, Worldview Publications, pp. 65–79, ISBN 978-81-920651-1-3.
Lewis, M. Paul; Simons, Gary F.; Fennig, Charles D., eds. (2015), Ethnologue: Languages of the World (Eighteenth ed.), Dallas, Texas: SIL International.
Liang, Sihua (2014), Language Attitudes and Identities in Multilingual China: A Linguistic Ethnography, Springer International Publishing, ISBN 978-3-319-12619-7.
Mair, Victor H. (1991), "What Is a Chinese "Dialect/Topolect"? Reflections on Some Key Sino-English Linguistic terms" (PDF), Sino-Platonic Papers, 29: 1–31.
Matthews, Stephen; Yip, Virginia (1994), Cantonese: A Comprehensive Grammar, Routledge, ISBN 978-0-415-08945-6.
Norman, Jerry (2003), "The Chinese dialects: phonology", in Thurgood, Graham; LaPolla, Randy J., The Sino-Tibetan languages, Routledge, pp. 72–83, ISBN 978-0-7007-1129-1.
Sohn, Ho-Min; Lee, Peter H. (2003), "Language, forms, prosody, and themes", in Lee, Peter H., A History of Korean Literature, Cambridge University Press, pp. 15–51, ISBN 978-0-521-82858-1.
Thomason, Sarah Grey (1988), "Languages of the World", in Paulston, Christina Bratt, International Handbook of Bilingualism and Bilingual Education, Westport, CT: Greenwood, pp. 17–45, ISBN 978-0-313-24484-1.
Wardaugh, Ronald; Fuller, Janet (2014), An Introduction to Sociolinguistics, John Wiley & Sons, ISBN 978-1-118-73229-8.
Zhang, Bennan; Yang, Robin R. (2004), "Putonghua education and language policy in postcolonial Hong Kong", in Zhou, Minglang, Language policy in the People's Republic of China: Theory and practice since 1949, Kluwer Academic Publishers, pp. 143–161, ISBN 978-1-4020-8038-8.
R. L. G. "Language borrowing Why so little Chinese in English?" The Economist. 6 June 2013.
Right Course/Wrong Course: I find myself in an odd position. I’m not particularly interested in female ordination, though if it were offered to me, I’d not turn down the blessing. And yet, here I am, offering my two cents to OW.
First let me say that I can imagine a day when some form of priesthood will be granted in mortality to LDS women. In the late 1990s, I became one of those dreaded history hobbyists, though I’ve not kept up my private study with any rigor. Regardless, back then I was stunned to learn that Joseph Smith spoke openly of female ordination. For instance, Emma Smith and her counselors were “ordained” to the Relief Society presidency (See Joseph Smith Papers, beginning bottom of pg 5). Of course, the term “ordain” in regards to women and LDS history has been parsed ad nauseam. A good, old-fashioned Mormon revelation–a real Thus-Saith-the-Lord–would be settling. However, even if female ordination became available, I doubt the practice would exchange a single male face in the organizational flow chart with that of a female. After all, Joseph Smith did not place any women in leadership positions outside the women’s Relief Society. Some may assume, in this day and age, female ordination would inherently mean broader female leadership, but history doesn’t bear that out. At least, that’s how I see it.
Communicating only “to the leaders of the church and to the Lord” is a mistake. In fact, at this point, it is redundant. You’ve made yourselves clear and visible to the Church hierarchy, and you’ve prayed. I submit the focus of your communication needs to be on the people sitting beside you in the pews on Sunday. They don’t stand with you and, if they don’t stand with you, how can you assert that they, particularly the daughters of God, are ready? By taking the public action that is planned for the Priesthood Session of General Conference on April 5th, 2014, you will increase the membership’s distrust of you and leave them even less ready for female ordination than they are now. In other words, OW, you are about to shoot yourselves in the foot.
Stay with me as I review a little 20th century history to support my point.
Member Readiness May Matter more than Personal Worthiness: I remember reading that Apostle Spencer W. Kimball sensed that the priesthood would/should be extended to all worthy males long before the 1978 revelation. As an apostle, he felt moved to bring up the subject in meetings with the Twelve, but there was either no interest or agreement. Once Kimball became the president of the church, he moved among the Saints, quietly telling male members of “African descent” to prepare themselves, though no promises were made. Approximately three years after he became president, he and the apostles received a revelation that literally changed the face of the priesthood. Ordain Women hopes for a similarly sensitive church leader and a similar revelation.
Some cynically point out the revelation granting all worthy males the priesthood came only after years of political and social pressure on the Church … and they are correct in that, even if they are not correct that that pressure caused the revelation. If the church didn’t give in to earlier pressures, I have no reason to think any specific pressure in June of 1978 would have been sufficient to effect a change without the kind of spiritual experience that earns the title of revelation.
Interestingly, today’s discussions about blacks and the priesthood tend to revolve around the church hierarchy–the Brethren–and not the membership. But we forget that, in 1950, if a group of black Mormon men had stood outside a priesthood session of General Conference, asking for admittance, they would have been turned away. In fact, it’s very likely they would have been condemned by mid-century American Mormons as uppity, as not understanding either their place (their divine nature) or God’s will. Both are things OW members are accused of. I think it’s a fairly safe assertion to make that, in 1950, the Mormon population in the U.S. was not ready to accept the extension of the priesthood to black males. And so it didn’t come. Something had to change.
Enter the Civil Rights movements, brought straight into American homes with the advent of television. Images of bigotry softened hearts all across the U.S., including the hearts of the Saints, and taught all of America that Christ-like love must overpower socially-entrenched racism. Without this change, the 1978 revelation, late as so many argue it was, may never have happened. The membership of the church had to be more than okay with the change: I believe it had to crave it before the revelation came.
To put it simply, the negative feelings Latter-day Saints hold for secular feminist organizations are bleeding over into their feelings about today’s LDS feminism. We see this manifest in rampant assumptions that LDS feminism is angry, demanding, power-hungry, anti-family, anti-God, anti-church and, of course, anti-men. Ordain Women has a difficult uphill battle, one which external (secular) feminist pressure is more likely to hurt than help. This is a very different model than the one preceding the revelation granting ordination to all worthy males. The broader membership is just beginning to consider the way we address female modesty. Female ordination is a long way from that. If OW continues to focus primarily on communicating their readiness and humility to the leadership and the Lord, but does not focus on preparing the membership, the revelation they seek will not come.
Far be it from me to compare the struggles of African Americans, whether outside or inside the LDS Church, to the current situation of LDS women. To do so would be an injustice. But the restoration of the priesthood blessings to worthy males regardless of race has built-in parallels to the current quest of Ordain Women. And so I must visit the comparison as respectfully as I can.
Genesis Group founder Darius Gray has spoken often about his early efforts to make the church more inclusive. (Read the last question/answer of this interview with Mormon Artist.) One thing that is absent from the way the Genesis Group went about its efforts is a visible protest presence, even one as mild as standing outside the doors of the Priesthood session. I suspect Gray and other Genesis leaders understood such action would inflame existing prejudice, rather than help their cause or lead to an ecclesiastical answer to their prayers. They intuited that their quest was as much a public relations outreach extended toward the general membership as a matter of prayer and inspiration from the Lord to the First Presidency.
Let’s not forget that leaders of the Church of Jesus Christ of Latter-day Saints (affectionately called the Brethren) are also part of the membership and of the Mormon culture. No one hires them from outside. They grew up with the same cultural code as Mormons without high position. If they are particularly aged, they were taught a gender-differentiated worldview. If, eventually, there is to be female ordination, the path to it will likely mirror the path to ordination for black male members. In other words, the general membership must have their hearts softened toward it. Not because the leaders of the Church will bow to the desires of the membership, but because the membership is the soil our Heavenly Father uses to grow truth. The soil must be prepared while the Lord is whispering inspiration to his sustained leaders.
Trending: Consider the current climate regarding female ordination. In October 2013, the Pew Research Center published a poll showing that 90% of LDS women and 84% of LDS men don’t think women should be ordained. I’ve seen more recent polling that suggests the number of LDS women opposed to female ordination remains the same, but that more men are warming to the idea, with male opposition possibly dropping to around 50%. Either way, a substantial majority of the membership rejects female ordination. The rhetoric I hear coming from the general membership of the church is often hostile toward feminism in general, and Ordain Women specifically. Just as the church membership once accused black LDS men (and there weren’t many) who dared aspire to the priesthood of not understanding or submitting to God’s will, so they accuse Ordain Women of not understanding the divine nature of women. Or of men, I suppose. The similarities are real. If a revelation instating female ordination is to come, OW needs someone like Spencer W. Kimball, a man who is sensitive to the issue, a man open to inspiration, and a man whom the Lord will, in time, move up the patriarchal ladder. As Kimball was being prepared by Heavenly Father, a work among the members was also happening. And a similar work will be needed among present-day Mormons before any revelation could occur, assuming it will occur.
If you still don’t think it matters to the institutional Church that the majority of the membership is opposed to female ordination, consider how much import the Church’s official response to the planned action places on the need for wide consensus among, at least, LDS women. Paragraph two of the March 17th statement to OW from the Public Affairs Office reads: “Women in the church, by a very large majority, do not share your advocacy for priesthood ordination for women and consider that position to be extreme.” With this line, the Church has officially acknowledged that nothing will happen until the membership desires it. In essence, the statement outlines the preparatory work OW must accomplish: Ready the soil (ready the culture) or the seed will not take root. Individual supporters of Ordain Women may feel ready for female ordination, but most active, faithful LDS women aren’t giving it serious consideration.
The David and Goliath Paradigm: After the Church’s PR department issued its statement, I came across one LDS feminist’s comparison of the Ordain Women/LDS Church paradigm to that of David and Goliath. Though not a proponent of female ordination, she felt that the church’s statement was combative and that issuing it was a public relations faux pas for the Church. She reasoned that the little guy always wins the hearts of the people.
Perhaps in the non-LDS world, Ordain Women gains sympathy because the statement establishes a David and Goliath paradigm between OW (David) and the Church (Goliath), but the opposite will be true in the LDS world. Mainstream Mormons are going to root for their Church which, unlike Goliath, they view as good and true.
As I reflect on the David and Goliath paradigm, it strikes me as something OW should be careful to avoid. Consider the Biblical story: A young upstart picks up a stone and, using his sling, kills Goliath, the giant. After this success, David chops off the giant’s head and eventually becomes the leader of the people. The David and Goliath analogy suggests that Ordain Women wants to destroy the Church and assume leadership. This paradigm feeds the fears of the mainstream LDS. Maybe the PR department was calculating enough to intentionally establish this paradigm, but I’ll give the good people who serve there the benefit of the doubt. Regardless, OW now has to live with the fact that the paradigm is publicly established. The Church, as host of the event we call the Priesthood Session, has asked OW to refrain from their planned public action. If OW does not stay away, it will appear to the membership that, like David, they have picked up a stone. It may not be their intent to be combative, but combativeness will be the perception of mainstream Latter-day Saints, regardless of the smiles worn by OW supporters. Martin Luther King urged his supporters to remain non-violent. The trouble is, in Mormondom, disharmony and dissension are esteemed darn near as detrimental to spiritual growth as violence is to peaceful reform. The televised image of OW not walking toward the doors of the Priesthood session, but assembling elsewhere, will communicate a peaceful and positive message about them to the membership.
The only stones Ordain Women should be picking up are those which clear the field for planting. I urge them to stay away from the doors of the Priesthood Session this weekend. As I see it, this is the best action OW can take at this moment in history. It will begin to soften the hearts of the general membership toward them and, by extension, their cause. OW has a choice: They either move toward the doors of the Conference Center or toward the embrace of mainstream Mormons. At the end of the day, they will not find themselves inside either, but at least one of these directions will bring them closer to a Church readied for what they desire.
I caution them to choose prayerfully and wisely what they will be communicating this weekend. And to whom.
“He that is without sin among you, let him first cast a stone at her…” John 8:7.
I don’t participate in OW, but have been a long-time feminist, and am also a lifetime member. In my attempts to raise feminist issues amongst my friends at Church, I have never had so many people willing and interested in discussing these issues as in the last 6 months or so. I agree that if there is to be a revelation extending a priesthood role to women, that the Lord will wait until the membership are ready, and until there is a prophet sensitive to these issues that is ready to wrestle with the Lord on the matter. However, I feel the communication happening now is readying the field. And I largely credit it to the high profile of OW at the moment. My theory is that there will always be people who will vilify the most extreme expression of a viewpoint. Until recently, I felt like that was me with my feminist ways, and I never raised the question of female ordination. Now I no longer feel like most people consider me extreme, and I’ve had the most positive reception I’ve ever had, even from some fairly conservative Mormons.
The issue of Mormons not being receptive to feminism is probably generational I think. The era of feminism that ridiculed stay at home moms and that kind of thing happened in the 70’s, and while older generations certainly remember it, that type of thing really doesn’t appear to have much of a following in modern feminism. The people growing up now, and even who are young adults now could easily be ready in my view, for the kind of revelation you’re talking about. And from my own experience, OW is part of what’s helping that along.
Britti00, do not misread me. I agree that OW has done a fantastic job of raising awareness and bringing the topic of female ordination to the forefront. If you read my post carefully, you will see that I’m not suggesting they abandon demonstrations–or public actions–in a general way, only that they stay away from the doors this particular day out of respect… and to ingratiate themselves a little with those who continue to consider them troublemakers. Demonstrate elsewhere–not in the Free Speech Zone obviously–and I’m sure the cameras will follow and the conversation will continue. This is a small concession and not a concession of their mission or values. I’m not sure they’d lose a thing by reinventing this action, not that I think they will.
I agree that older generations of Mormons (to include the Brethren) may be less inclined to embrace feminism than younger members, but I wouldn’t bet the farm on it. I see Mormons of many ages being open to LDS feminism, but I’m not seeing them open up to female ordination.
This past weekend, I attended my local book club meeting, which is made up mostly of younger women. They most certainly are interested in discussions about gender issues in the church, but universally they are not interested in female ordination. They think OW is wasting its time and causing contention where there needn’t be any. They expressed a desire to see LDS feminism focus on “real problems” and affirmed their belief that the priesthood is simply something God wants men to have and not women. In other words, OW is not winning their hearts. I may be wrong, but I seemed to sense something near resentment on their part toward OW for not focusing on issues that *can* be changed (remembering, again, that they don’t think female ordination is a change that’s in the pipeline, or that it should be).
I’m simply saying that, based on my experience with women like these (women who are faithful, young, and devout), OW could get a lot of traction out of redirecting this particular public action. Not stopping public actions, not shutting up and going home. Just a redirection that supports their claim that they are supportive members. Such a redirection will impress a lot of faithful Mormons who have questions about OW and its motives.
I (perhaps naively) have more faith in God and the Church than that God bases revelations on the misogyny and/or pride of His children.
Also of note (and probably worthy of clarification): the linked Pew article was published in 2013. BUT it is based on a survey conducted in 2011. I think the way you framed that study (probably unintentionally) gives the impression that the Pew survey is October 2013 data when it is not.
I don’t think I said anywhere that God bases His revelations on the misogyny, or lack thereof, of His followers. I said that the soil isn’t yet ready for a revelation and advised that the focus be more on that than on changing the leaders, who will only change the official position once the Lord so says.
Thanks for the info on the poll data. I didn’t realize it was 2011 numbers since I saw the 2013 date. My bad. I was expecting to see the numbers I saw in a recent Salt Lake Trib article, which put the numbers at something like 90% of women and about 50% of men against female ordination. My searches failed to find that poll so I went with the one I had. I agree that the numbers should be updated and will look again for the more recent numbers.
But the newer numbers won’t really change my point that the membership still is not interested. That will need to change and taking off-putting actions just doesn’t seem wise to me when they could still demonstrate off-site. The cameras will still follow.
Zack, I’m still not finding that poll, but I went back into the post and gave it a nod. I just wish I had a link … If anyone has it, please let me know.
I’ve read those Salt Lake Tribune articles. They are referencing this poll.
My article cited the stats you are referencing. They are from research done by David Campbell and Robert Putnam for the book American Grace: How Religion Divides and Unites Us (published in 2010). My guess is that Pew asked the questions differently because that was too big a difference to reflect a sea change, IMO.
Angela, THANK YOU! Today you are my hero. I was starting to think I was going crazy. I knew I’d read these numbers.
Thank you for this well-thought out post. I love that people are talking about these issues. I think my views align well with yours. I appreciate OW for the work they are doing in bringing a public awareness to “Mormon Feminism”. I also think they (for the most part) are not being confrontational, angry, or annoying. They are helping people think outside of their boxes. We need that. We need to see what others see. We need to feel what others feel. We need to listen and not judge! I wish them luck this weekend, but I also wish them the savviness they may need to deal wisely with the situation they are in now.
I hope that as we try and share our feelings with the others “in our pew” that they will listen and not judge. I think it is actually interesting that while OW and fMH are pushing for more respect and recognition and fairness and equality for women, some of their biggest antagonists are ‘stay at home mommies’, many of whom have benefited and will benefit immensely from the changes made because of feminists’ willingness to be vocal.
Rebekah, I love the discussion too. And I agree with your observations about today’s LDS feminism not being confrontational in the way so many accuse. There is, of course, a level of confrontation in asking for entrance to the Priesthood Session when they know they will be turned away, but they handled that last year with class and dignity. I applaud that. The video of that experience (which I link to) is quite moving.
Regardless of what they decide to do–to stick with the planned action (which I expect) or alter it in some way (which I recommend)–I’m confident the OW leadership will find a way to continue to navigate through it all. Thanks Rebekah.
You’re wrong and I hate you.
Also, that poll is completely arbitrary. It doesn’t matter what “the church” thinks or what members think. Since when did a vote on whether women should be ordained have any bearing on church policy and doctrine? Isn’t that your whole argument against OW? That we need to stop petitioning because what we think has nothing to do with God’s will?
Yeah. So stop it. You’re just contradicting yourself and you look dumb.
Oh, you let me worry about whether or not I look dumb.
I said nothing about votes or even popular opinion. I said that the soil isn’t ready for the revelation OW wants. I used the polling to demonstrate that point, not to suggest that God will grant a revelation when the numbers are better.
Furthermore, I am not arguing against Ordain Women and I certainly have not asked them to stop petitioning the Lord or church leaders, only to focus elsewhere because both the Lord and the leaders are aware of their desire. They should focus more on messaging to the general membership.
I admire Ordain Women. In fact, I’m suggesting a path that I genuinely think will improve their odds of success.
I’d also point out, Keli, that your vitriolic tone toward me is hardly going to help your cause. Unfortunately, you are shoring up the notion that LDS feminists are angry and accusatory.
Lisa, I really appreciate what you are saying here and even as a member of OW, I’ve asked myself the same questions. In the end, I realized that the PR department severely misrepresented OW and I feel like that was purposeful. I think the best thing we can do at this point is demonstrate how reverent we will be so that we are able to demonstrate that we are not, truly are not, a disruption but conduct ourselves with love, faith, and propriety.
Amy, thank you. I trust OW will handle themselves with as much dignity as they did in October. I prefer to be careful about assigning nefarious motives when I really haven’t any evidence (…unless we are talking about a politician. 🙂). I don’t know. I’m sure the PR dept has smart people working there, people who calculate the effect of their wording. It’s troubling. Unfortunately, as I wrote, OW is stuck with the paradigm they established. Hang tough and stay prayerful.
Honest question- Don’t you think the best thing would have been to respectfully honor the request made to you by the Church? You say you feel that you were misrepresented. So wouldn’t you want to remove all doubt and show that you DO sustain the leaders and respect the Church enough to accept what they asked of you? Wouldn’t ignoring their request reveal something about your group that would give an air of legitimacy to the things you feel are misrepresentations?
Amy, you said “I realized that the PR department severely misrepresented OW and I feel like that was purposeful.” I’ve heard a number of OW supporters echo the same thought. I expect that pretty much everything put out by a PR employee of any organization is “purposeful,” that’s kind of their job after all. However, I don’t accept that this is a smear job that we’ve gotten so accustomed to in the world we live in. Rather than lying about OW, I see the statements put out by the Church as an honest representation of the Church’s perception of OW’s actions and statements. The fact that most rank and file members agree with the Church’s expressed perception (no surprise) seems to be OW’s biggest problem, and I think that OW would have been better served to have followed Lisa’s advice above, which was excellent.
Regardless of the root cause, OW does appear to many (or most) to be in opposition to the Church. The media loves that narrative, so they push it. The Church clearly believes this to be the case. So, when I hear Kate Kelly claim that OW is “in the Church,” and more ambitiously state that OW “is the Church,” it rings really hollow. Most members don’t accept that faithful members do what OW did, regardless of how reverent or respectful OW tried to be. OW’s continuation of the action after being asked to not do it was not viewed as reverent or respectful by the rank and file.
Another point in this discussion is leaving it to the Lord to determine what that priesthood would look like: would it be an ordination into the Melchizedek Priesthood exactly like men, or an ordination into a Relief Society priesthood? So far, OW is demanding only the men’s form of the Melchizedek Priesthood, and not a women’s version. This also establishes a demand that smacks of NOW.
Darius Gray and the Genesis Group had historical hope: Elijah Abel, etc., gave them a direction to go in. Emma Smith’s ordination to the RS presidency would perhaps also be a historical pattern. However, one then has to accept that there may be a difference in priesthoods. I’m not sure OW is interested in receiving what the Lord would offer.
That’s essentially what I see, or what I wonder about regarding the future of female ordination. What will it look like? This is a tough one because, should a revelation come and should it not restore a priesthood that looks like the male version, our faith practice will remain in conflict with the broader American culture. What then? But I have to say, this is a very interesting time to experience life as a Latter-day Saint. Thanks for chiming in.
I don’t have anything to add, just couldn’t stand that to be the last comment. Thank you, Lisa, for your well-written, cool-headed thoughts. I was most alarmed a year ago that we (church population at large, even self-proclaimed “feminists” such as self) were unable to have a calm–or any–discussion on the subject, and am happy to see that it has changed. Hooray for progress.
This is a very interesting, pragmatic read on the situation.
Awesome post. Thank you for weaving your way through the crowd of voices of those for, against, confused by, and curious about Ordain Women and for describing that path very carefully. I also enjoyed reading your exchange with justagirl on your other post.
Kudos to you for being able to see where the common criticisms are empty and unthought, but, for all that, also being able to see what actual constructive criticisms can be made. When there are unthoughtful comments or criticisms on either side, it’s important (though daring) to be able to work through them for the sake of understanding, progress, and truth.
Whether or not I agree or like all of what you said, I whole-heartedly respect your intellectual and sensitive work. Thanks again.
Thanks Karen. I try to limit my brain-dead days to Sundays, since (forgive me) sometimes meetings can be a tad dry. And sometimes it’s better if I just sleep through Sunday School so I’m not tempted to chime in. In all seriousness, I appreciate your generous words.
I’m wondering if you guys are being so nice to make up for that one person calling me dumb and professing her hatred for me. If that’s the case, I think I should have someone call me dumb every day.
I think you are right. It is an interesting conundrum OW has set up for itself. However, OW has a few other flaws that I think feed into this so called “debate” without even realizing it.
1) OW has created a false dichotomy with regards to Priesthood. They are demanding it for all of God’s children; but I have always wondered what it is that they think the “Priesthood” is. When the members of OW, or its founding and leading group, think of or envision the “Priesthood,” what is it they see? Do they see the Bishop? Do they see the pageantry of priesthood ordinations and blessings? Or do they see baptisms?
If they see the Bishops, then all they see is the leadership – which women in the church already have. In no other Christian church that I can think of can a woman lead any portion of the church without an ordination into a “priesthood.” And yet, women in the LDS Church have been able to hold positions of leadership from the beginning (hence Emma’s “Ordination,” which was not an ordination into a priesthood, but a consecration for a calling; it still happens today, but we use a more refined nomenclature). Today, a woman can lead a local, stake, or general group of women in several different positions – Relief Society, Young Women’s, and Primary. They can serve as mission presidents along with their spouses (just as men serve as mission presidents with their spouses). So, women can lead in very meaningful ways. To say otherwise, is a false statement.
If they see the pageantry of priesthood ordinations and blessings, then they are simply mistaking the facts. Women already hold a magnificent power in themselves, one that does not need rigorous rules, or events or even interviews. Women enter Young Women’s and Relief Society without the need of having another person put their hands on her head. The power and authority to enter those groups is inherent. Why demean that with what the men go through? Men must be interviewed and set apart by those with the authority to do so. Men have gates and obstacles to their power and authority. Does OW really just want the pageantry of it? To strip the women of what they have already and make them earn it again? If that is the case, then what does that mean for the generations of women who served before OW? The many powerful and wonderful women who did such amazing things without being “ordained” to a priesthood? Did it mean nothing? I certainly hope not.
Or do they see laurels and priests conducting baptisms? Blessing the bread and water? Deacons and Mia Maids passing the sacrament? What about at the temple? Do they see women putting their hands through the veil? Using the sealing powers? In all of their rhetoric, OW will have a hard time finding scriptural/historical support for such use of the priesthood power, let alone doctrinal support.
Regardless, OW has created this false dichotomy, by establishing an “Us v. Them” argument. At the risk of making a strawman here, it appears to me that OW argues “Men have all the priesthood, and women are left with nothing.” This is terribly untrue. Even if it is stated as, “Men have more priesthood, and women are left with less,” it is still untrue. As is seen in the temple, men and women officiate in the Melchizedek Priesthood together. That is the purpose of the Priesthood – it is not meant to be held by one person, but by two. Just as the powers of procreation are not meant to be held by one person, but by two. To deem otherwise robs all parties of the amazing symmetry of God’s plan.
So, OW has shot itself in the foot. But I do not think it is only over the issue of protesting (repeatedly) outside of the General Priesthood Session. It has taken upon itself and perpetuated a dialogue that requires people to state that they are either in line with the leadership or against it; they either accept God’s chosen council, or oppose it, waiting for the “righteous” leaders to come along and change the corrupt system. Their outreach to the 1978 extension of the priesthood propagates a narrative that is in its core incorrect, and then capitalizes on it to gain persuasion and power over the people. They appear to have asked a question of the leaders, but deny the leadership’s response, and denounce it as not of God; presupposing that they can receive revelation for the Church above God’s anointed.
I have often asked people, where in the scriptures do we find the tactics of OW being used? The tactic of challenging the church to move, and refusing to comply with the Lord’s Prophet? In the Book of Mormon, these tactics were referred to as cunning, flattery, and pride. I ask, “In seminary, did you ever ask yourself, how could anybody follow these people?” Sherem, Nehor, Korihor, Amalickiah. I think if there is a lesson to learn from history, it is not that the Church once “ordained” women; it is that the Lord is in charge. Yes, we can bring our questions to the Church; but when the leadership answers, we accept it.
Tolsti, thank you for your well-constructed comment. You’ve done a very nice job of summing up the way many LDS see this issue, and the points you raise need to be a part of the discussion. Your question about just how OW envisions a priesthood is a good one. I suspect there may be many answers since all supporters are individuals.
Like in politics, I think people’s minds work on different tracks. I actually plan a post in the near future about how two people with very different ways of looking at a problem, ways that seem in opposition to one another (and that are perceived that way by one another), can, in reality, be heading in the same direction. I’m pretty sure the OW supporters don’t see themselves in opposition to the church or the Lord. God knows their heart. From what I can observe, hear and understand, they are well-meaning. I think it’s important all of us make the effort to learn to read one another’s map of the world.
Check back next week for The Parable of Convict Lake. It won’t be the post I plan about political discourse in the church (which has application to this discussion), but will address how it is people with such different views can be counted among the believers. I’d love to hear your feedback on that upcoming post.
Again, thank you for chiming in.
Something that puzzles me is this: The author writes beautifully about how Spencer W. Kimball was primed and ready for the revelation on blacks and the temple/priesthood.
Thomas S. Monson spent five years as the bishop to 70-some-odd widows and 50 years telling the Church stories about exercising the power of the priesthood to minister to faithful women who do not have the priesthood in their home simply because their respective husbands died. He has been primed and prepared to receive this revelation. And he has primed and prepared the Church to receive and accept it. Monson has spent virtually his entire mortal ministry speaking passionately on the subject and laying what could very logically be considered a groundwork for a revelation about women and the priesthood. A revelation about ordaining women would be a beautiful and poignant conclusion to his ministry.
How wonderful would it be for a man who has spent his entire life speaking about using his priesthood to bless widows to stand up and declare that the sweet widows and millions of other women of this Church should no longer have access to the priesthood through overburdened bishops, but through ordination?
And that may well be what happens. Thank you, Zack. Beautiful–and faithful–point. The thing about Kimball is that we know he felt the ban was prime for divine change, but it still took him 3 years to receive the revelation after he became prophet. Why? Maybe it took three years of his influence to move the 12. Or maybe he needed those three years for something else. Maybe things will change sooner than I and a lot of other people anticipate. If that be the Lord’s will, I’m all in. Trust me, I have no personal need to be right. In fact, being wrong can be as much a badge of honor as being right if, in error, humility presides.
This article in general is pretty well-balanced. Without going into my personal thoughts, I must point out that the statement above reveals a fundamental problem with much of this debate. The women themselves receiving the priesthood would in fact not relieve these “overburdened bishops”. This establishes an error in thought expressed by OW as well: “we want to receive the blessings that the priesthood brings”. These statements don’t express the true purpose of the priesthood, which is serving others and blessing others. I cannot lay my hands on my own head, nor does the priesthood fortify my ability to do for myself. The priesthood’s true purpose is service.
It is at this point I could express that I believe motherhood to be a form of priesthood. I could express what most members already have about women creating life and raising children. But that really isn’t the point to those in this movement. I can only say that I hope we ALL have faith to accept revelation, whichever direction it goes. I am ready to accept what the Lord reveals, and I hope that those in the OW movement are ready to accept what is revealed even if it is not what they have fought for. We must all battle the prideful thought of “knowing better”. We should all pray for the faith to follow whatever His revelation may be.
It is true that, for the sake of convenience in sentence structure, my comment conflates the blessings received through priesthood blessings and the blessings of priesthood ordination. I consider both types of blessings to be tremendously important. Church leaders certainly speak at length not only about the blessings received by those receiving priesthood ordinances and blessings, but the blessings received by those performing them.
I do not think it is correct to characterize any type of hunger for equal access to all types of blessings of the priesthood as an “error in thought.” The world needs more priesthood in it. Every ward I have ever been in needed more priesthood, and every one of them was a revelation away from potentially doubling its roster of active, worthy, and faithful priesthood holders. Those 85 widows (per Wikipedia I undershot the number in my prior comment) in Monson’s ward were certainly blessed by his faithful ministry. How much greater could their blessings have been if, through ordination, they had been able to minister to one another? Not only would this have eased the burden on their wonderful bishop, it would have greatly increased each of those widows’ access to priesthood blessings in times of sickness and despair, blessed them as they righteously exercised their priesthood, and advanced the work of the Lord.
I want to post this and I hope this does not come off as extremely confrontational. Here goes.
Motherhood does not equal the Priesthood. I like to think of it in terms of math. The correct parallel for this is: Motherhood=Fatherhood. Might seem like a very simple point, but I believe it is extremely important.
When we try to equate motherhood with the priesthood, many also draw illogical conclusions, such as: women are awesome because they can have kids and, since they should stay at home with them, that is their main job, while the man’s main job is to provide for said children AND run the church. That’s not true. Both women and men have a responsibility to pitch in and help “run” the church. Parenting should be a team effort. It should be an equal comparison. My “job” is to take care of the kids during the time my husband is gone with his “job” that provides for our kids. When he gets home, we jointly care for and teach our kids.
We degrade women by creating the attitude that motherhood is some sort of substitute we got because we aren’t eligible for the priesthood. It makes me want to weep that women are somehow so subconsciously unaware of how important motherhood is that they must try to make themselves feel like it is either part of the priesthood or better than the priesthood, etc. Motherhood is a divine calling. We have been told it is near to the angels; why do we feel the need to make excuses for it? I understand in the busyness of kissing boo-boos and bath time and chores and the endless diapers and spit-up or teenage attitudes… that it may not *feel* angelic, but it IS.
All of these ideas are just myths and we need to stop thinking like that.
Motherhood is not a consolation prize.
Rebekah, you have expressed very well the sentiments of a growing number of women. LDS women. The gender issues we have to deal with as a people don’t solely affect women. Thank you.
Motherhood is not a consolation prize. To say that suggests that you do not view it with the glory that is inherent to it. I am not married and do not have children. But I am here to prepare for my eternal mission, which will be different than that of a son of God. We have different elements of the mission and we are here to prepare.
Motherhood IS the prize. Even if we do not have that experience on this earth- we are here to prepare for it for eternity. If we do not value it as the most sacred of all responsibilities then we will value something else as greater and we will not be prepared for eternity.
Fatherhood is the prize for the sons of God. They are here to prepare for THEIR eternal experience. The Church and the family is structured in such a way to prepare them for eternal duties. The offices of the priesthood give them opportunity to learn what it means to become a father forever. We do not need that experience. Because WE are preparing to be mothers forever. And that will not be the same experience. Therefore the preparation is different.
So, you would rather these women have “had” something so they didn’t need anything and the Bishop wouldn’t have needed to visit them and then the sacred, beautiful experiences of coming together wouldn’t have happened?
There is GREAT purpose in us needing each other. President Monson’s experiences with his beloved widows shows this. We are meant to need each other. If we fulfilled all our own needs- we’d never become an eternal family.
Zack- In your mind it would have been better for the women to have no need of their Bishop and therefore just stay by themselves and for all those beautiful sacred experiences of pure love to never have taken place? It is BEAUTIFUL that they needed their Bishop. And beautiful that their Bishop knew that and served in the true manner of the priesthood.
We are meant to need each other. What some call burdens are really opportunities for loving and sacred relationships to happen. Miracles. If we all had all things and didn’t need each other- we wouldn’t establish a forever family. That is the essence of family. We need each other because we all bring different gifts and roles to play.
I think you have done an admirable job of defining many of the issues surrounding OW. I disagree with a couple of your points, however. Early in the article you state that the women will feel bad about leaving the church they love. I don’t think that many of these women love the church, as they do not support the church hierarchy, and to be a good Mormon you have to believe that these men are called by and inspired by God. OW is attacking that principle. A few days ago, I read the letter sent by OW to the First Presidency requesting being able to attend Priesthood Meeting. I know that some of those women have already left the church, so obviously they don’t love it. I agree with you that history is not on the movement’s side in achieving success, and I think the result for them will be disassociation from the church either by their own choice or by church disciplinary action. Thank you for your discussion.
When I listened to the fMH podcast with Kate Kelly, one of things she did a very nice job of explaining was that the women in the movement who have left the church, often left the church because they felt marginalized. We can look at that as "not loving the church," I suppose. But what I hear in that is those women saying, "We didn't feel the church loved us." Humans are complicated. I thank God every day I'm not required to judge people's hearts. I do believe most of the women involved do love their religion.
One thing I especially hope–indeed, pray for–is that the disassociation you forecast does not happen.
Thank you for this post. I appreciate your ideas and as a member or OW, I appreciate you respectful tone even more. OW has started many discussions about Mormon women and I am glad about that. Your thoughts are intriguing and I’m glad you’ve added this voice to the conversation.
The majority of the OW sisters who have reached out to me have been particularly kind and respectful in return. Of course, I’ve read accusations that I am condescending in this post. >Heavy sigh< I'm working on mastering my literary skills, let's just say that. Suzette, I don't know if you'll be there on Saturday evening at the Conference Center, but please know that I truly do respect you and hope the best for you. God speed.
I very much appreciate the tone you take in this article although I disagree with some portions. It must be a very complex, difficult, and delicate thing for a person or group to disagree with current Church policy and still believe the Church is divinely led. I appreciate the way you are trying to unite, appreciate, foster, and reconcile rather than the contrary. Like the quiet masses you describe, I have much more empathy and respect for peaceful diplomacy than for dogmatic conflagration.
Thank you Anon. For me, it’s impossible to feel disrespect for a group who seems to be earnestly trying to navigate difficult waters.
Lisa, I think OW would have been well served to have followed your very excellent advice. Their insistence on moving forward despite being asked not to certainly does more to solidify their standing as an outside group (at least in the eyes of the Church and the general membership). I would not have the serious questions I now have for and about the group if they had said, “you know what? We’ve been asked not to come to Temple Square, we’ll do something else instead.” They certainly wouldn’t have as many people talking about them, but the voices as a whole would have been more positive.
OW has stated that one of the reasons for their demonstration was to, well, demonstrate, that the membership of the group is ready and willing to take on the priesthood. I find it ironic that they chose to demonstrate their readiness for ordination through an act of civil disobedience.
Just want to make sure you have read this on LDS.org, which describes how the Church officially reframed the priesthood “revelation” as the correction of a cultural mistake: https://www.lds.org/topics/race-and-the-priesthood (also read the new heading on Official Declaration 2). We can thank Darius Gray and Margaret Young for the research behind the change and the Church’s admission of the mistake.
As for OW, I am sympathetic to their issues on a personal level, men and women leaders make many mistakes and have their own way of doing things that could be done better; but I have received my own personal revelation that becoming a priesthood holder is in no way necessary to my salvation.
Thank you for your words. I have long thought, too, that the reason the Church did not ordain blacks to the priesthood until 1978 was because society was not ready. As you said the Civil Rights movement had to happen. All of your other points were quite good as well. Thank you so much!
Whether the hold-up was membership readiness or the readiness of church hierarchy, it’s sad and disturbing to think that white readiness caused the delay and the pain black members faced. Thanks for chiming in.
Copyright: © 2008 Lutz et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Support was provided by NCCAM U01AT002114-01A1, Fyssen foundation to AL and NIMH P50-MH069315 to RJD, and by gifts from Adrianne and Edwin Cook-Ryder, Bryant Wangard and Ralph Robinson, Keith and Arlene Bronstein and the John W. Kluge Foundation. No funders or sponsors participated in the design or conduct of the study, or in the analysis, and interpretation of the data, or in the preparation, review, or approval of the manuscript.
Many contemplative traditions speak of loving-kindness as the wish of happiness for others, and of compassion as the wish to relieve others' suffering. In many traditions, these qualities are cultivated through specific meditation practices designed to prime behaviors compatible with these wishes in response to actual interpersonal encounters. Despite the potential social and clinical importance of these affective processes, the possibility that they can be trained in a manner comparable to attentional or sensory-motor skills has not yet been investigated with neuroimaging techniques, even though recent electrophysiological data support this hypothesis.
To cultivate these affective qualities, practitioners in a number of traditions have developed meditative practices, which are thought to be essential to counteract self-centered tendencies. Techniques include concentration exercises that train attention, behavioral training such as the practice of generosity, cognitive strategies including reflection on the fleeting nature of the self, and empathic strategies such as shifting perspectives from self-oriented to other-oriented, or the visualization of the suffering of others. Traditionally such mental training comprises years of scholastic study and meditative practice. The long-term goal of meditators undergoing such training is to weaken egocentric traits so that altruistic behaviors might arise more frequently and spontaneously. The purpose of this study is to examine the brain circuitry engaged by the generation of a state of compassion (short for “compassion and loving-kindness meditation state”) in long-term Buddhist meditators and novice meditators.
Here, “expert” meditators have more than 10,000 hours of practice in Buddhist meditation and are perceived in their communities as embodying qualities of compassion (see Methods). Experts were compared with age- and gender-matched “novices” who were interested in learning to meditate, but had no prior experience except in the week prior to the scanning session, in which they were given meditation instructions for the same practice performed by the experts. The meditative practice studied here involves the generation of a state in which an “unconditional feeling of loving-kindness and compassion pervades the whole mind as a way of being, with no other consideration, or discursive thoughts” (for details see Meditation Instruction). According to the tradition, as a result of this practice, feelings and actions for the benefit of others arise more readily when relevant situations arise. Our main hypothesis was thus that the concern for others cultivated during this meditation would enhance the affective responses to emotional human vocalizations, in particular to negative ones, and that this affective response would be modulated by the degree of meditation training. Here we broadly refer to empathy as the capacity to understand and share another person's experience. Recent fMRI and PET studies have demonstrated that observing or imagining another person's emotional state activates parts of the neuronal network involved in processing that same state in oneself, whether it is disgust, pain, or social emotion. These data are consistent with perception-action models of empathy in which observing and imagining another person in a particular state is thought to activate a similar state in the observer.
Brain function was interrogated using a block and event-related paradigm during periods of mental practice alone, and in response to emotional human vocalizations (positive, neutral, or negative sounds from a normalized database). The block and event-related effects were modeled as independent factors in the analysis. To test our main hypothesis, we focus here only on the event-related data, which allow the study of the modulation of responses to emotional stimuli by this voluntarily induced state. The voxel-wise analysis of the emotional sounds (event-related design) was performed using a 2×2×3 factorial design with the first factor representing “Group” (15 experts vs. 15 novices), the second factor “State” (compassion vs. rest), and the third factor “Valence” (negative, neutral, or positive emotional sounds). We predicted that participants would feel more moved by the emotional sounds during compassion meditation than when at rest. Thus, the brain regions underlying emotions and feelings (insula, anterior cingulate cortex (ACC), and possibly somatosensory areas) would be more activated in response to emotional sounds during compassion meditation than during the resting state. As this meditation is said to enhance loving-kindness when the joy of others is perceived or compassion when the suffering of others is perceived, this effect was predicted to be stronger for the negative sounds (sounds of a distressed woman) and positive sounds (a baby laughing) than for neutral sounds (background noise in a restaurant). As this state is practiced to foster altruistic behaviors, the predicted three-way interaction (Group by State by Valence) should be driven by a stronger empathic response to negative than to positive sounds during meditation than rest, and a modulation of this effect by expertise.
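The predicted three-way interaction amounts to a difference-of-differences contrast on cell means: the (emotional minus neutral) response, compared between meditation and rest, compared between groups. The sketch below is purely illustrative; the cell means are hypothetical numbers, not the study's data, and the actual analysis was a voxel-wise ANOVA on BOLD responses.

```python
# Hypothetical cell means of a BOLD response, indexed by (group, state),
# with one value per valence. Groups: expert/novice; states: compassion/rest.
means = {
    ("expert", "compassion"): {"neg": 0.9, "neu": 0.2, "pos": 0.7},
    ("expert", "rest"):       {"neg": 0.3, "neu": 0.2, "pos": 0.3},
    ("novice", "compassion"): {"neg": 0.4, "neu": 0.2, "pos": 0.35},
    ("novice", "rest"):       {"neg": 0.3, "neu": 0.2, "pos": 0.3},
}

def emotional_vs_neutral(group, state):
    """Contrast of emotional (negative + positive) vs. neutral sounds."""
    cell = means[(group, state)]
    return (cell["neg"] + cell["pos"]) / 2 - cell["neu"]

def state_modulation(group):
    """How much meditation (vs. rest) boosts the emotional contrast."""
    return (emotional_vs_neutral(group, "compassion")
            - emotional_vs_neutral(group, "rest"))

# The Group-by-State-by-Valence interaction contrast:
# the meditation-related boost should be larger for experts.
interaction = state_modulation("expert") - state_modulation("novice")
print(interaction > 0)
```

With these (made-up) cell means the contrast is positive, mirroring the paper's prediction that expertise amplifies the meditation-related enhancement of responses to emotional vs. neutral sounds.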
In this study we did not include a behavioral task because practitioners reported that a task would disrupt their ongoing meditation, but verbally self-reported intensities of the meditation were collected after each block, allowing us to identify good vs. poor blocks of meditation (see protocol). To further confirm our general prediction, we examined the interaction between the verbally reported quality of meditation (good vs. poor) and Group as factors. We predicted that insula and ACC would be more activated in response to emotional sounds during good vs. poor blocks of compassion, as verbally reported. Finally, we measured pupil diameter to obtain an independent index of autonomic arousal (eyes open and loosely fixated on a fixation point in both rest and meditation blocks) to determine if there were group differences in autonomic arousal during the task. To prevent any possible group differences in autonomic arousal from influencing MR signal changes, we regressed out the effect of pupil dilation from BOLD responses in the empathic circuitry, removing the contribution of variations in emotional arousal from empathic responses.
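Regressing a covariate out of a signal (here, pupil dilation out of the BOLD response) can be sketched as ordinary least-squares residualization. This is a minimal illustration with hypothetical numbers, not the authors' actual pipeline.

```python
def residualize(y, x):
    """Remove the linear contribution of covariate x from signal y.

    Fits y = a + b*x by closed-form ordinary least squares and returns
    the residuals y - (a + b*x), i.e. the part of y not explained by x.
    """
    n = len(y)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx               # slope
    a = mean_y - b * mean_x     # intercept
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical example: BOLD responses partly driven by pupil dilation
pupil = [0.1, 0.2, 0.3, 0.4, 0.5]
bold = [1.1, 1.3, 1.2, 1.5, 1.4]
resid = residualize(bold, pupil)

# The residuals are orthogonal to the (centered) covariate, so any
# remaining group/state effect cannot be explained by arousal alone.
centered = [p - sum(pupil) / len(pupil) for p in pupil]
print(abs(sum(r * c for r, c in zip(resid, centered))) < 1e-9)  # prints True
```

Testing for state and group effects on such residuals, rather than on the raw signal, is the logic behind the ANCOVA reported in the Results.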
As predicted, there was a Group-by-State-by-Valence interaction in several regions critical for empathy (insula cortex, somatosensory cortex (SII); Fig. 1.A, Table 1). The interaction was a function of experts showing a larger increase than the novices during meditation vs. rest in response to emotional (positive and negative) vocalizations vs. neutral vocalizations (Figs. 1.B–C, Table 1). The activation in insula cortex during compassion was a function of the intensity of the meditation as verbally reported, which was stronger during the good vs. the poor blocks of meditation across the two groups (Figs. 1.D–E, Table 1). Since there was no difference between states in response to the neutral sounds in the clusters from Figure 1 (Table 1), following our prediction we ran a follow-up 2×2×2 ANOVA using only negative and positive sounds in a voxel-wise analysis. There was only one cluster showing a Group by State by Valence interaction, which was located in the right insula (3667 voxels, corrected p<0.05, Fig. 2.A). The effect was produced by stronger activity in the responses to negative vs. positive sounds during meditation vs. rest for the experts compared to the novices (Fig. 2.B, t = 2.1, df = 28, P<0.05, paired t test). The activity in this cluster was also stronger during the good vs. poor blocks of meditation (main effect for verbal report, F(1,20) = 6.8, P<0.05, ANOVA). The voxel-wise 2×2 repeated-measures ANOVA with group and verbal report (poor vs. good) as factors confirmed these findings. There was a main effect for good vs. poor blocks in the right insula (Figs. 2.C–D) and ACC (Table 2). Together these results support our main hypothesis that the brain regions underlying emotions and feelings are modulated in response to emotional sounds as a function of the state of compassion, the valence of the emotional sounds, and the degree of expertise.
Figure 1. State by Group by Valence interaction: A.
(AI) and (Ins.) stand for anterior insula and insula, respectively (z = 12 and z = 19, 15 experts and 15 novices, color codes: orange, p<5×10^-2, yellow, p<2×10^-2). B, C. Impulse response from rest to compassion in response to emotional sounds in AI (B) and Ins. (C). D–E. Responses in AI (D) and Ins. (E) during poor and good blocks of compassion, as verbally reported, for 12 experts (red) and 10 novices (blue).
A. Voxel-wise analysis of the Group by State by Valence (negative versus positive sounds) interaction in insula (Ins.) (z = 2, corrected, color codes: orange, p<5×10^-2, yellow, p<2×10^-2, 15 experts (red) and 15 novices (blue)). B. Average response in Ins. from rest to compassion for experts (red) and novices (blue) for negative and positive sounds. C–D. Voxel-wise analysis of BOLD response to emotional sounds during poor vs. good blocks of compassion, as verbally reported. C. Main effect for verbal report in insula (Ins.) (z = 13, corrected, color codes: orange, p<10^-3, yellow, p<5×10^-4, 12 experts and 10 novices). D. Average response in (Ins.) for experts (red) and novices (blue).
Table 1. Group by State by Valence interaction.
Table 2. Activation during poor vs. good blocks of meditation, as verbally reported.
In addition, we explored the other effects from our main 2×2×3 factorial design. There was no main effect of group. The main effect for state showed stronger activation during meditation than rest in limbic regions (AI, ACC) and in a circuitry previously linked with “mentation” about the mental states of others (temporal lobes, pSTS, TPJ, medial prefrontal cortex (mPFC), and the posterior cingulate cortex (PCC)/precuneus (Prc.)) (Table 3). The pattern exhibited stronger activity in the right hemisphere than in the left hemisphere (Table 4).
Table 3. Main effect for State.
There were no Group-by-Valence or State-by-Valence interactions. Voxel-wise analysis of Group-by-State interactions showed experts to have considerably stronger activation in components of the posterior part of this network (right TPJ, right pSTS, Prc./PCC) (Figs. 3.C–D), in the right inferior frontal gyrus (IFG), bilateral amygdalae (Figs. 3.A–B), and in two motor regions (pre-central gyri and post. medial frontal cortex, BA6) (Table 5). The magnitude of the Group-by-State interaction was driven by the BOLD response of experts, who showed a negative average impulse response to the sounds at rest (Figs. 4.A and 4.E) but a positive response during meditation (Figs. 4.B and 4.F) in right TPJ, right IFG, Prc./PCC, and mPFC. Novices and experts showed similar positive activation in the auditory cortex during both rest and meditation, indicating, as expected, sensory correlates of the auditory sounds (Figs. 4.E–H). These group differences were also highlighted in patterns of asymmetric BOLD response in the TPJ, where experts showed a strong right-sided activation bias while novices showed virtually no activation difference to the emotional sounds in this region during meditation vs. rest (Fig. 3.E, Table 6).
Figure 3. State by Group Interaction: A.
(Amyg.) stands for amygdala (y = −5, color codes: orange, p<2×10^-3, yellow, p<5×10^-4). B. Impulse response in (Amyg.) for 15 experts (red) and for 15 novices (blue) during rest (dashed line) and compassion (solid line). C–D. Same as A–B in TPJ; y = −61. E. Side by state effect and side by state by group effect in TPJ on the average impulse response between meditation and rest; experts are in red, novices in blue.
Figure 4. Directionality of the brain activation.
Areas showing a negative (dark blue, p<0.01, blue, p<0.005) or positive (orange, p<0.01, yellow, p<0.005) impulse response on average across 10 seconds in response to all emotional sounds for the 15 novices and 15 experts at z = 31 compared to baseline (figs. A–D) and z = 13 (figs. E–H) (voxel-by-voxel paired t test compared to 0, corrected at p<0.01).
Table 5. Group by State interaction.
Finally, pupil diameter increased in response to all sounds in meditation vs. rest (9 control and 7 expert participants, main effect for state, ANOVA, F(1,15) = 5.2, p<0.05), and the increase was stronger for experts than novices (Group-by-State interaction, ANOVA, F(1,15) = 5.2, p<0.05). The pupil diameter increase during meditation vs. rest positively correlated with the larger increase in anterior insula (AI) in response to all sounds in meditation vs. rest (r = 0.54, p<0.05). In this cluster, there was a State-by-Group interaction, as well as a State effect (stronger for experts than novices, ANOVA, F(1,28) = 11.3, P<0.005), that was preserved even when the variation in the pupil signal was covaried out from the BOLD signal (ANCOVA, F(1,27) = 20.2, P<0.0005 for the State effect and F(1,27) = 5.1, P<0.05 for the State-by-Group interaction).
Prior neuroimaging studies of empathy have shown that by observing another's emotional state, part of the neural circuitry underlying the same state becomes active in oneself, whether it is disgust, pain, or social emotions. Such findings are consistent with the perception-action model of empathy. Recently, researchers have begun to investigate whether these empathy processes can be modulated by the implicit context of the empathic experience. We extended this contextual approach by showing that regions previously associated with empathic processes were modulated by voluntary regulation of one's emotional responses via the generation of compassion.
All participants exhibited stronger neural responses to all emotional sounds in the AI and ACC during compassion meditation than when at rest (Table 3), and experts exhibited stronger responses than novices to negative than to positive emotional sounds in somatosensory regions (SII, post-central gyrus) during compassion meditation than when at rest (see Fig. 1.A–B, Table 1). Those regions in which stronger activity was measured are also known to participate in affect and feelings. Furthermore, the amplitude of the activity in several of these regions, in particular the insula cortex, was associated: with the degree to which participants perceived that they had successfully entered into the meditative state (Figs. 1.D–E and 2.C–D, Table 2); with expertise in compassion meditation (Figs. 1.A–C, 2.A–B, Tables 1 and 5); and with the relevancy of the emotional sounds during the compassion meditation (stronger response to the voice of a distressed person than to that of a laughing baby, or to background noise from a crowd; Figs. 1.A–C, 2.A–B, Table 1). The peaks of activation found in the main effect of state (compassion vs. rest) in the AI (x = 37, y = 15, z = 1, Table 3) and ACC (one at (x = 9, y = 6, z = 42) and one at (x = 5, y = 24, z = 37), Table 3) overlap with regions previously found to be activated during empathy for others' suffering: (x = 39, y = 12, z = 3) for AI, and (x = −9, y = 6, z = 42) and (x = 0, y = 24, z = 33) for ACC. A similar interaction effect was found in the somatosensory cortex, reflecting greater activation when adopting the first-person vs. third-person perspective, and even more so in an emotional vs. neutral context. These findings suggest that cultivating the intent to be compassionate and kind can enhance empathic responses to social stimuli.
The functional group difference found in the insula is consistent with the larger cortical thickness previously reported in this region among meditators than among controls, in a group of meditators trained in a tradition that usually contains a compassion meditation component. The group difference in BOLD signal is also consistent with the group difference in amplitude of gamma-band (25–50 Hz) oscillations in EEG data recorded from the same group of long-term meditators during the same meditation.
We found greater activation during compassion than when at rest in a circuit commonly recruited during the reading of others' mental states (TPJ, pSTS, mPFC, PCC/Prc.; Figs. 3–4, Tables 3 and 5) in response to sounds. This pattern was strongly modulated by expertise, in particular in the PCC/Prc. and right pSTS/TPJ (Figs. 3.C–D, Table 5). Many of these regions were lateralized to the right (Table 4), more strongly for experts than for novices, particularly in the right TPJ (Fig. 3.E, Table 4). The right lateralization of pSTS is in accordance with previous work on social cognition. Of particular interest to our study, the link between expertise in compassion and the activation in the right pSTS is consistent with the finding that pSTS activation predicts self-reported altruism. The activation peak in pSTS in this study was part of the cluster illustrated in Fig. 3.C (x = 46, y = −64, z = 23) and of the cluster from the main effect of state (x = 41, y = −48, z = 45, Table 3). Our finding of greater activation in the right pSTS/TPJ among experts suggests that the meditative practice of compassion may enhance emotion sharing, as well as perspective taking.
In addition to the right pSTS/TPJ, the brain circuitry showing an interaction between expertise and meditation also encompassed the right IFG (Table 5). The TPJ and IFG together compose a circuitry classically viewed as an attentional system specialized to detect behaviorally relevant stimuli, in particular when the stimuli are salient or unexpected. A similar increase of activation in the amygdalae, linked to the appraisal of emotional stimuli (Figs. 3.A–B, Table 5), further supports this view. The greater increase in activation of this circuitry in experts than in novices suggests that experts might be more primed to detect salient events, such as the suffering of others, during this voluntarily induced state. Even if attention might have influenced the processing of emotional stimuli and thus increased emotional arousal, the fact that the activation in the insula was still present when we regressed out changes in pupil diameter induced by the sounds supports a role for the insula not only in emotional arousal, but also in empathic processes.
Most of the areas included in the “mentation network” also overlap with the proposed “default mode” or “resting state” networks (typically mPFC, rostral ACC, PCC, Prc., and posterior lateral cortices). A wide range of tasks have been found to produce a relative decrease in BOLD signal in this network in comparison to a passive resting state, implying that this network is also active during the resting state. Given recent interest in this network, it is worth noting the experts' ability to generate states that can selectively produce BOLD deactivation (rest, Figs. 4.A, 4.E) and activation (meditation, Figs. 4.B and 4.F) in precuneus and TPJ in response to sounds, suggesting that these regions were more active during rest than meditation prior to the presentation of the sounds. Future studies investigating in more detail the phenomenology of these states might shed new light on the functionality of this circuitry.
Because novices and experts differ in many respects other than simply the extent of meditative training (such as culture of origin and first language), longitudinal research that follows individuals over time in response to compassion training will be needed to further substantiate our findings. It will also be essential to assess the impact of such emotional training on behavioral tasks involving altruism and, more generally, emotional reactivity and regulation. The long-term question is whether repeated practice in such techniques could result in enduring changes in affective and social style. The fact that large and systematic changes in brain function were observed in response to auditory emotional stimuli presented during the meditative practice of compassion, and the fact that robust differences were observed between experts and novices, suggests that the next steps, to evaluate the behavioral impact of this training and to longitudinally assess its effects, are warranted.
Participants included 16 long-term Buddhist meditators, whom we classified as experts (ages 29 to 64 years; mean = 45.0 years, SD = 12.7, for the 15 experts used in these analyses), and 16 healthy volunteers (ages 36 to 56 years; mean = 47.1 years, SD = 8.8, for the 15 novices used in these analyses); the groups did not differ in age (t test, p = 0.55). Two participants were not included in the analysis due to excessive motion (see Data Analysis). All participants were right-handed, except for one ambidextrous expert, as assessed by the Edinburgh Handedness Inventory, and all but 4 were male (2 experts and 2 age-matched novices). Buddhist meditators recognized as experts (9 of Asian origin, 7 of European origin) were contacted by Dr. Ricard, an interpreter for the Dalai Lama who is a Western Buddhist monk with scientific training and 35 years of meditative training in Nepal. Experts had previously completed from 10,000 to 50,000 hours of meditative training in a variety of practices, including compassion meditation, in similar Tibetan traditions (Nyingmapa and Kagyupa). The length of their training was estimated from their daily practice and time spent in meditative retreats, with ten hours of meditation per day of retreat taken as an average. Control participants were recruited via advertisements in local newspapers and consisted of members of the UW-Madison community. The advertisement specifically recruited participants who had an interest in meditation but no prior meditative training. One week before the actual fMRI scan session, novices were given written instructions, prepared by Dr. Ricard, on how to perform the meditative practices; they then practiced this compassion meditation and two other meditations for one hour a day for a week (20 minutes per meditation). Written informed consent was obtained prior to scanning, in accordance with procedures and protocols approved by the UW-Madison Institutional Review Board.
A proficient Tibetan-speaking translator gave detailed procedural instructions and read the consent form to non-English-speaking participants.
The state of loving-kindness and compassion is described as an “unconditional readiness and availability to help living beings”. This practice does not require concentration on particular objects, memories or images, although in other meditations that are also part of their long-term training, meditators focus on particular persons or groups of beings. Because “benevolence and compassion pervades the mind as a way of being”, this state is called “pure compassion” or “non-referential compassion” (dmigs med snying rje in Tibetan). As described in Dr. Ricard's instructions for novices: “During the training session, the subject will think about someone he cares about, such as his parents, sibling or beloved, and will let his mind be invaded by a feeling of altruistic love (wishing well-being) or of compassion (wishing freedom from suffering) toward these persons. After some training the subject will generate such feeling toward all beings and without thinking specifically about someone. While in the scanner, the subject will try to generate this state of loving kindness and compassion.” The Resting state (Tib. “sem lung ma bstan”–literally: neutral (lung ma ten) mind (sem)) was a non-meditative state without specific cognitive content and with a lack of awareness or clarity of the mind. Novices' instructions were the following: “Neutral here means that your emotional state is neither pleasant nor unpleasant and that you remain relaxed. Try to be in the most ordinary state without being engaged in an active mental state.” Novices' ability to follow the instruction was assessed orally prior to the data collection.
Before the MRI scanning session, participants had a simulation session during which they viewed an abbreviated version of the experimental paradigm while lying in a mock MRI scanner (including head coil and digitized scanner sounds). This simulation session served to acclimate participants to the fMRI environment. We used a block design, alternating ∼3 min of the meditation state (4 cycles) with ∼1.6 min of a resting, neutral state (5 cycles), twice on separate days. The average time per session was 643 seconds of meditation and 550 seconds of neutral state (264 seconds and 190 seconds, respectively, for expert participant 2). A total of 25 2-second auditory sounds from the International Affective Digitized Sounds (IADS) for each valence (positive, neutral and negative) were randomly presented across these two sessions. These sounds were presented every 6–10 seconds after the first 40 seconds of the meditative blocks and after 15 seconds of the resting blocks. To have a comparison condition for the statistical analysis of the event-related data, null trials (silent events) were randomly interspersed between the auditory stimuli. Participants were instructed to maintain their practice during the presentation of the sounds. During the meditation and neutral states, eyes remained open and directed toward a fixation point on a black screen. In this study we did not include a behavioral task because practitioners reported that a task would disrupt their ongoing meditation.
We did, however, collect self-report information about the quality of the blocks of meditation from all participants. After each scan run, participants were asked to verbally report the meditative intensity of each block on a scale from 1 to 9. Some participants, not comfortable using the number scale to quantify or qualify their meditative states, simply identified the two blocks, from among the four recorded, that were the best and the worst of the day. Using these quantitative and/or qualitative reports, we included in our analyses of good vs. poor blocks of meditation only the blocks identified as best and worst, or the two of the four blocks with the highest and lowest ratings on the 9-point scale. Two scans were run on two separate days (1 day apart for experts, in general less than 1 week apart for novices) due to the length of the scan run. Standard data collection and analysis processing procedures were followed and are described in SI Methods.
MR images were collected with a GE Signa 3.0 Tesla scanner equipped with a high-speed, whole-body gradient and a whole-head transmit-receive quadrature birdcage headcoil. Whole-brain anatomical images were acquired at the end of each session using an axial 3D T1-weighted inversion-recovery fast gradient echo (or IR-prepped fast gradient echo) sequence. The field of view (FOV) was 240×240 mm with a 256×256 matrix. The slice thickness was 1–1.2 mm, with 0.9 by 0.9 mm in-plane dimensions. Functional data were collected using whole-brain EPI (TR = 2000, TE = 30 ms). For functional images, sagittal acquisition was used to obtain 30 interleaved 4 mm slices with a gap of 1 mm between slices. The resulting voxel size was 3.75 by 3.75 by 5 mm (FOV = 240 mm, matrix = 64×64).
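The stated voxel geometry can be verified with simple arithmetic: in-plane size is the field of view divided by the matrix size, and the through-plane dimension is slice thickness plus inter-slice gap. This is a generic illustrative sketch, not part of the acquisition software:

```python
def inplane_res(fov_mm, matrix):
    # in-plane voxel size (mm) = field of view / matrix size
    return fov_mm / matrix

def through_plane(slice_mm, gap_mm):
    # effective through-plane sampling (mm) = slice thickness + gap
    return slice_mm + gap_mm

# A 240 mm FOV over a 64x64 matrix gives the stated 3.75 mm in-plane size;
# 4 mm slices with a 1 mm gap give the stated 5 mm through-plane dimension.
```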
To ensure a high signal-to-noise ratio in areas prone to susceptibility artifacts, field inhomogeneities were lessened during data collection using high-order shim coils that applied small correction gradients. In addition, acquisition of a 3D map of the magnetic field provided a complementary strategy to further reduce distortion (these data were not acquired for the first three experts). Based on these field maps, echo planar imaging (EPI) data were unwarped so that accurate alignment to anatomical images could be made. During the fMRI session, head movement was restricted using a vacuum pillow (Vac Fix System, S&S Par Scientific). A Silent Vision system (Avotec, Inc., Jensen Beach, FL) displayed the fixation point for the concentrative task. Eye movements, fixations and pupil diameter were continuously recorded during the fMRI scan using an iView system (sampling rate, 60 Hz) with a remote eye-tracking device (SensoMotoric Instruments, 2001). We collected pupil data from 13 control and 10 expert participants.
Analysis techniques were similar to those described previously by our lab. Briefly, data processing was implemented via AFNI (Analysis of Functional NeuroImages) version 2.51 software. Data processing steps included image reconstruction in conjunction with smoothing in Fourier space via a Fermi filter, correction for differences in slice timing, 6-parameter rigid-body motion correction, and removal of skull and ghost artifacts. The motion estimates over the course of the scan for translation (inferior-superior, right-left, and anterior-posterior) and rotation (yaw, pitch, roll) were charted. Time points with more than 0.5 mm of motion, as well as time points in which head motion correlated with the presentation of the block (which could lead to spurious activations mistaken for brain activation), were removed from the analysis. Due to excessive head motion, 1 expert and 1 novice were omitted from the group analysis. Two of the experts could not complete the second session, and one of the two sessions was omitted for 5 of the experts and 6 of the novices, due to excessive head motion. One session of 1 novice was omitted due to sleepiness. Between the two subject groups, participants with only one session of data were matched (7 experts and 7 novices).
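The motion-scrubbing rule described above (dropping time points with more than 0.5 mm of motion) can be sketched as follows. The function name and inputs are illustrative, not the actual AFNI pipeline used in the study:

```python
def scrub_timepoints(displacements_mm, threshold_mm=0.5):
    """Return indices of time points to keep.

    displacements_mm: per-volume head displacement estimates (mm),
    one value per acquired volume. Volumes exceeding the threshold
    are excluded from further analysis.
    """
    return [i for i, d in enumerate(displacements_mm) if d <= threshold_mm]
```

For example, a run with displacements `[0.1, 0.6, 0.3]` mm would keep volumes 0 and 2 and discard volume 1.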
The time series of meditative blocks and neutral blocks were modeled with a least-squares general linear model (GLM) fit that modeled the block effect, event-related sound responses and motion parameters in six directions. For the event-related sound responses, a 6-parameter sine function basis set was used to model the shape of the hemodynamic response in a 20 second window. The average of the estimated event-related response between 2 seconds and 12 seconds was converted to percentage signal change using the mean overall baseline and spatially smoothed using a 6 mm Gaussian filter. The resultant percentage signal change maps were transformed into the standardized Talairach space via identification of anatomical landmarks on the high-resolution anatomical image.
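The conversion to percent signal change described above can be sketched as follows, assuming a 2-second TR so that samples 1 through 6 of the estimated response span the 2–12 second window. This is an illustrative reimplementation, not the study's analysis code:

```python
def percent_signal_change(response, baseline_mean, tr=2.0,
                          t_start=2.0, t_end=12.0):
    """Average the estimated event-related response between t_start
    and t_end seconds post-onset and express it as a percentage of
    the mean baseline signal.
    """
    i0 = int(t_start / tr)   # first sample inside the window
    i1 = int(t_end / tr)     # last sample inside the window
    window = response[i0:i1 + 1]
    avg = sum(window) / len(window)
    return 100.0 * avg / baseline_mean
```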
The main analysis of the emotional sounds (event-related design) was performed using a 2×2×3 factorial design (voxelwise 3-way ANOVA) with State (resting and meditation states) and Valence (negative, neutral and positive) as within-subject factors and with Group as a between-subjects factor (Matlab package for AFNI, C. Gang). Monte Carlo simulations were run to correct for multiple comparisons and achieve an overall corrected mapwise p = 0.05. For the State effect, Group by State effect, State by Valence interaction and Group by State by Valence interaction, the minimum cluster sizes were, respectively, 323, 1030, 1580 and 3580 contiguous voxels, with the data thresholded at uncorrected voxelwise p-values of p = 0.001, p = 0.01, p = 0.02 and p = 0.05, respectively. The data were then overlaid onto a high-resolution anatomical image. Complementary analyses were then run on the average percentage signal change in each of these clusters (Tables 1–5) using the same factorial analysis. Second, we tested for hemispheric differences in level of activation. In order to create a symmetrical cluster of common size in both hemispheres, the larger cluster from one side of the brain was flipped to the other side, combined with the smaller one, and flipped back to the initial side. Finally, in the table describing the main effect for state (Table 3), we ran paired t-tests comparing responses during meditation and resting states within each group in each of these ROIs (paired two-tailed t-test, threshold p = 0.05). An a priori anatomical template was then used to further delineate overlapping areas. We chose to delineate the posterior vs. anterior temporal lobes at y = −25 mm. Paired ANOVAs were run on each ROI with laterality (right and left clusters) and group (experts versus novices) as factors.
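One reading of the symmetric-cluster procedure is a union of a binary cluster mask with its left-right mirror image, which yields an identical ROI footprint in both hemispheres. A toy sketch along a single spatial axis, with a hypothetical helper name not taken from the analysis package:

```python
def symmetric_mask(mask):
    """Union a 1-D binary mask with its mirror about the midline,
    producing a left-right symmetric region."""
    mirrored = mask[::-1]
    return [a or b for a, b in zip(mask, mirrored)]
```

In 3-D volume data the same idea would apply along the left-right axis of the image array.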
A complementary analysis of the emotional sounds was performed using only the positive and negative sounds (2×2×2 factorial design, minimum cluster size 3580 voxels for the Group by State by Valence interaction at an uncorrected voxelwise p-value of p = 0.05). An exploratory voxelwise analysis of the relationship between verbal report and BOLD signal during meditation was performed on the average response to positive and negative emotional sounds using a 2×2 factorial design with verbal report (poor vs. good blocks of meditation as verbally reported) as a within-subject factor and with group as a between-subjects factor (Matlab package for AFNI, C. Gang). We found that the minimum cluster sizes were 1030 contiguous voxels for the main effect of verbal report and 1580 contiguous voxels for the Group × Verbal report interaction, with the data thresholded at uncorrected voxelwise p-values of p = 0.01 and p = 0.02, respectively. Only 12 experts and 10 novices had sufficient verbally reported information to be included in this analysis. Finally, a regression was applied to the experts only, to examine any effect of the number of lifetime hours of training and of age on the percentage signal changes in these clusters. Neither factor had a significant effect.
Analysis was similar to Urry et al. (2006): the pupil dilation data were cleaned and processed using algorithms designed by Siegle, Granholm, and Steinhauer (2002, unpublished Matlab code) with Matlab software (MathWorks, Natick, MA) and adapted in our laboratory (L. L. Greischar, 2003, unpublished Matlab code). Blinks were identified and eliminated using local regression slopes and amplitude thresholds. Missing data points were then estimated using linear interpolation across artifacts shorter than 4 seconds in duration. Pupil diameter was aggregated into 1-s bins and autonormalized against the mean and global variance across the session. The pupil dilation responses following the emotional sounds were normalized across participants by subtracting the ongoing 1-second baseline preceding the stimulation. Irrelevant drifts in the pupil diameter data over the course of the scan session were removed by automatically rejecting trials that did not show the average phasic response to sounds. The group analysis was performed on the mean pupil diameter across the first 5 seconds following the end of the sound stimulus. Participants needed at least 6 trials in both the resting and meditation conditions to be included in the group analysis (9 controls and 7 experts met this criterion). An analysis of variance was first conducted on the pupil data alone (ANOVA, with State (resting and meditation states) as a within-subject factor and Group as a between-subjects factor). An analysis of covariance was then conducted between the pupil data and the BOLD responses to sounds in the clusters showing State or State by Group interactions (9 controls and 7 experts; ANCOVA with pupil diameter as a continuous factor and State (resting and meditation states) as a within-subject factor; insufficient power precluded treating Group as a factor).
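The blink-interpolation step described above (linear interpolation across artifacts shorter than 4 seconds, i.e. up to 240 samples at 60 Hz) might be sketched as follows. This is an illustrative reconstruction, not the unpublished Matlab code cited in the text:

```python
def interpolate_blinks(samples, max_gap):
    """Linearly interpolate across runs of missing samples (None)
    no longer than max_gap; longer gaps are left untouched, as are
    gaps at the start or end of the recording.
    """
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            gap = j - i
            # interpolate only interior gaps short enough to trust
            if 0 < i and j < len(out) and gap <= max_gap:
                left, right = out[i - 1], out[j]
                for k in range(gap):
                    out[i + k] = left + (right - left) * (k + 1) / (gap + 1)
            i = j
        else:
            i += 1
    return out
```

At a 60 Hz sampling rate, the 4-second limit in the text would correspond to `max_gap = 240`.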
We would like to acknowledge Dr. Matthieu Ricard for assistance with task design, participant recruitment and written meditation instructions; Dr. John Dunne for Tibetan translation and clarifications on Buddhist meditative techniques; and research assistants A. Shah, C. Lutz, A. Francis and S.P. Simhan for assistance in data collection and data processing. We thank Drs. Perrine Ruby and Tania Singer for helpful comments on an earlier draft of this manuscript. Thanks also to the Mind & Life Institute for help in securing funding and for the involvement of the experts.
Conceived and designed the experiments: AL RD TJ. Performed the experiments: AL. Analyzed the data: AL. Contributed reagents/materials/analysis tools: JB TJ. Wrote the paper: AL RD.
1. Brefczynski-Lewis JA, Lutz A, Schaefer HS, Levinson DB, Davidson RJ (2007) Neural correlates of attentional expertise in long-term meditation practitioners. Proc Natl Acad Sci U S A 104: 11483–11488.
2. Maguire EA, Gadian DG, Johnsrude IS, Good CD, Ashburner J, et al. (2000) Navigation-related structural change in the hippocampi of taxi drivers. Proc Natl Acad Sci U S A 97: 4398–4403.
3. Lutz A, Greischar LL, Rawlings NB, Ricard M, Davidson RJ (2004) Long-term meditators self-induce high-amplitude gamma synchrony during mental practice. Proceedings of the National Academy of Sciences of the United States of America 101: 16369–16373.
4. Gethin R (1998) The Foundations of Buddhism. Oxford: Oxford University Press.
5. Dalai Lama X (1995) The world of Tibetan Buddhism: An overview of its philosophy and practice. Boston: Wisdom Publisher.
6. Singer T, Seymour B, O'Doherty J, Kaube H, Dolan RJ, et al. (2004) Empathy for pain involves the affective but not sensory components of pain. Science 303: 1157–1162.
7. Ruby P, Decety J (2004) How would you feel versus how do you think she would feel? A neuroimaging study of perspective-taking with social emotions. J Cogn Neurosci 16: 988–999.
8. de Vignemont F, Singer T (2006) The empathic brain: how, when and why? Trends Cogn Sci 10: 435–441.
9. Sommerville JA, Decety J (2006) Weaving the fabric of social interaction: articulating developmental psychology and cognitive neuroscience in the domain of motor cognition. Psychon Bull Rev 13: 179–200.
10. Preston SD, de Waal FB (2002) Empathy: Its ultimate and proximate bases. Behav Brain Sci 25: 1–20; discussion 20–71.
11. Bradley MM, Lang PJ (2000) Affective reactions to acoustic stimuli. Psychophysiology 37: 204–215.
12. Dalgleish T (2004) The emotional brain. Nat Rev Neurosci 5: 583–589.
13. Damasio A (2001) Fundamental feelings. Nature 413: 781.
14. Damasio AR (2000) The Feeling of What Happens: Body and Emotion in the Making of Consciousness: Harcourt Brace.
15. Beatty J, Lucero-Wagoner B (2000) The pupillary system. In: Cacioppo JT, Tassinary LG, Berntson G, editors. Cambridge, UK: Cambridge University Press. pp. 142–162.
16. Singer T, Seymour B, O'Doherty JP, Stephan KE, Dolan RJ, et al. (2006) Empathic neural responses are modulated by the perceived fairness of others. Nature 439: 466–469.
17. Lazar SW, Kerr CE, Wasserman RH, Gray JR, Greve DN, et al. (2005) Meditation experience is associated with increased cortical thickness. Neuroreport 16: 1893–1897.
18. Saxe R (2006) Uniquely human social cognition. Curr Opin Neurobiol 16: 235–239.
19. Ruby P, Legrand D (2007) Neuroimaging the self? In: Haggard P, Rossetti Y, editors. Sensorimotor Foundations of Higher Cognition: Oxford University Press.
20. Tankersley D, Stowe CJ, Huettel SA (2007) Altruism is associated with an increased neural response to agency. Nat Neurosci 10: 150–151.
21. Corbetta M, Shulman GL (2002) Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci 3: 201–215.
22. Gusnard DA, Raichle ME (2001) Searching for a baseline: functional imaging and the resting human brain. Nat Rev Neurosci 2: 685–694.
23. Davidson RJ (2004) Well-being and affective style: neural substrates and biobehavioural correlates. Philos Trans R Soc Lond B Biol Sci 359: 1395–1411.
24. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9: 97–113.
25. Jaffer FA, Wen H, Jezzard P, Balaban RS, Wolff SD (1997) Centric ordering is superior to gradient moment nulling for motion artifact reduction in EPI. J Magn Reson Imaging 7: 1122–1131.
26. Jezzard P, Balaban RS (1995) Correction for geometric distortion in echo planar images from B0 field variations. Magn Reson Med 34: 65–73.
27. Urry HL, van Reekum CM, Johnstone T, Kalin NH, Thurow ME, et al. (2006) Amygdala and ventromedial prefrontal cortex are inversely coupled during regulation of negative affect and predict the diurnal pattern of cortisol secretion among older adults. J Neurosci 26: 4415–4425.
28. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29: 162–173.
10) Eli Givens Palo Alto (CCS) 10.79/21.54/Nate Esparza 62'5.5"/177'7"
9) Michael Gonzalez Lodi (SJS) 16'0/Erika Malaspina Pacific Collegiate (CCS) 13'2"
8) Darius Thomas St. Francis (CCS) 7'0"
4) Darius Carbin Mt. Pleasant (CCS) 47'4"/7'1"
3) Isaiah Holmes Oakmont (SJS) 23'9"/48'3.25"/7'0"
1) Elena Bruckner Valley Christian SJ (CCS) 53'5.5"/182'8"
Her official time at the University of Washington Invite was 15:09.31 just missing her outdoor PR of 15:08.61 from 2014. According to Dave Monti, Conley has now run the following times in the last two months.
Feel free to post your question in the comment section below. You can also ask questions that can be answered by frequent visitors to this site.
Today we chat with Mission high school senior runner, Salem Bouhassoun. He ran PRs of 1:59.27, 4:20.88 and 9:14.67 during his junior track and field season. This past cross country season, Salem had top 5 finishes at the Hoka One One Earlybird, De La Salle/Carondelet and Stanford Invitationals. Following his victory at the San Francisco Section meet (photo to the left courtesy of Thomas Benjamin), Salem finished in 19th place in the Division III race at the California State Cross Country meet. Salem's twin sister, Nour, was featured in an article titled "The Chilling Rise of Islamophobia in Our Schools," which you can read at this LINK and which details the hardships of Syrian refugees.
1) Where were you born and when did you come to United States? How much of an adjustment was it coming to a new country?
I was born in Syria, in a small city named Swieda. I lived there for 14 years and immigrated here the summer right before my freshman year in high school. The biggest adjustment I had to go through was learning English so I can make new friends, meet new people, and figure out what I want to do with my life. It felt hard in the first couple months because I missed my friends, relatives, and country. But that motivated me even more to learn English faster so I can create a new social life for myself. I am glad I was able to overcome that quickly and adapt to my new environment within a year.
2) What sports have you played aside from cross country and track and field? When and how did you start running?
I played a lot of soccer as a little kid at school during PE classes or in front of my house. Sometimes my friends and I would play with water bottles or soccer balls made out of socks, just because we used to play all the time and sometimes we couldn't find soccer balls available. In 7th grade I found my talent in swimming after I learned how to swim in one day. I became so good at it and had the dream of coming here to the U.S. to become a famous swimmer. But my parents couldn't spend money on me swimming and my school did not have a team, so I had to choose a different sport. As a freshman I played soccer and then wrestling. The soccer coach used to make us go to some of the XC races so we can get more fitness that will help us in soccer, and that's how I was introduced to XC. When I became a sophomore, I decided to focus on one sport, so I chose running to be active throughout the whole year since there are two seasons with cross country and track and field.
3) What were some of your highlights from your sophomore seasons in XC and TF?
It was very motivational for me to place 6th at the all-city finals and make it to the State meet in cross country, even though it was my first year and I had only been running for about 2 months at that time. That was my biggest highlight in the sport, and my teammates were so excited for me that they threw me in a mud puddle after the race. That proved to me that I have a lot of potential in the sport.
4) What about from your junior seasons in both sports?
As a junior, I saw a tremendous improvement after I learned that off-season mileage makes a big difference in your season performance. I improved by 2 minutes and 18 seconds in XC, and went from placing 186th as a sophomore to 45th at state finals. In track, I was even fitter, and I was much more experienced when it came to racing. I also improved by a minute and 4 seconds when I went from running 10:18 as a sophomore to running 9:14 as a junior. I was also the city section champion in the 1600m and the 3200m, which I consider a big accomplishment. I was very happy that I also got the chance to go to some very competitive meets and race at state finals and against fast guys throughout the whole season.
5) What do you feel was your breakout race that put you on the running map in Northern California?
I think my breakout race was at the Arcadia invitational when I ran an unexpected 9:15. My previous personal record before that race was only a 9:29 so I was not really expecting to shave off that many seconds two weeks later.
6) How has your training changed from your sophomore year to now?
My sophomore year, I was only running about 20 miles a week with one workout, and sometimes only training 3 days a week. I used to come into my season out of shape. I also still did wrestling in between XC and Track during that year. But now, I average about 60 miles a week and I am always fit and ready early in the season from the miles I run in the off season.
7) Who coaches you and how has he helped you develop in the runner you are today?
My school never had a history with running. I was the only serious runner on the team as a junior, with only a couple other runners who used to show up once a week for fun. I met Coach Octaviano Romero through my old teammate when I was a sophomore. He works for a sales company that is an hour away from the city but is really into running and loves to coach. After his nephew graduated, he noticed my talent and decided to stay with me to coach me. He is working right now as a volunteer coach for our school, coaching me and one other sophomore named Mateo who has recently been stepping up.
8) What does a typical training week look like for you? Any double runs? Longest run? Typical pace per mile for most of your "easier" runs?
Saturday: a race or another workout.
Sunday: Long run that ranges between 12-15 miles at a recovery pace.
Total miles would be about 55-65 miles.
9) Favorite XC course? Favorite XC meet? Favorite XC workout? Favorite long run? Favorite opponent(s)? Favorite track event? Favorite track invite? Favorite track workout? Favorite free time activity?
My favorite Cross Country course is the Stanford golf course. I love the grass there and the course is very flat and nice. Favorite XC meet is the State finals, just because I always have fun going down to Fresno to see my friends from other schools and race the day after, although I really hate the Woodward Park course. My favorite XC workout is long hill repeats; I feel that I was supposed to do more of those the past season. My favorite long run is running around Lake Merced then through Golden Gate Park. There are some nice trails and good trees to run by, and I always meet a lot of runners on my way. My favorite Track workout is 400 repeats, and we would usually do 18-20 of those at target 3200m race pace. My main event is the 3200m. My favorite track invitational is Dan Gabor. I love the feeling I get there and the weather, plus it is the first invitational of the year so I'm always excited for it. My favorite opponent is Eduardo Herrera (Madera South), even though I don't race him a lot or know him that well, just because of how respectful and humble he is. I love my first opponent and section competitor Luis Aragon. He's very chill and our competition has made us great friends. Although I am looking forward to getting to know more elite runners that I will be running against more often this season, like Sean Kurdy, Michael Vernau, Cooper Teare, and others. During my free time I love to swim. It brings a lot of memories to me. I also love bowling and I am really good at yoga.
10) I believe you are in the midst of making your college decision. How much will running be a factor? What else do you feel is important for you when it comes to choosing your next school?
Running is going to be a big but not the only factor in terms of choosing my future school. I believe that I still have a lot of potential left in me and I want to make sure that I am picking the right program that will help me develop as a runner and help me achieve my future goals such as running and representing my country Syria in the Olympics. At the same time, I am looking at other factors like the academics, location, class size, cost and diversity. In the end, running will be a second priority for me as I want to focus on getting a degree at a medical school after excelling in a kinesiology or biology major during my 4-5 years at an undergraduate school. Therefore, I am looking at the best combination between all of these factors.
11) What are you most looking forward to this upcoming Track & Field season? What are some of your goals?
I am looking forward to staying healthy and injury free throughout the upcoming Track and Field season. Other than that, I think it would be nice and achievable for me to break the San Francisco section 3200m and 1600m records before I graduate, and maybe run a sub 9:00 for the 3200m and place top 10 at state finals if possible.
I am really thankful for my Coach Octaviano Romero who got me to be where I am today. I also want to thank you for reaching out to me. I love reading about other dedicated runners on this wonderful website and learn about their stories.
Other inductees include legendary Mira Loma/Del Campo coach Bob King, Sacramento's first state champion, JFK's Clifton West, ultra-marathoner Jim Howard and race director Greg Soderlund.
For more information on all the inductees, go to the link below which includes more info about the awards presentation on Saturday, February 27th.
He vaulted 17'6" last year as a sophomore. Definitely a do not try this at home.
Long Jump-Isaiah Holmes Oakmont (SJS) 23'9"
Triple Jump-Isaiah Holmes Oakmont (SJS) 48'3.25"
High Jump-Darius Carbin Mt. Pleasant (CCS) 7'1"
Pole Vault-Michael Gonzalez Lodi (SJS) 16'0"
Discus-Nate Esparza Amador Valley (NCS) 177'7"
Triple Jump-Kali Hatcher St. Mary's Berkeley (NCS) 39'0.5"
High Jump-Olga Baryshnikov Prospect (CCS), Julie Zweng Scotts Valley (CCS), Kaylee Shoemaker Corning (NS) 5'6"
Pole Vault-Erika Malaspina Pacific Collegiate (CCS) 13'2"
Shot Put-Elena Bruckner Valley Christian SJ (CCS) 53'5.5"
Discus-Elena Bruckner Valley Christian SJ (CCS) 182'8"
He sadly passed away at the terribly young age of 24. He inspired many runners during his racing career and still inspires people to this day.
Who inspires you to practice and compete at your best?
Best result overall for both meets?
Today we chat with Menlo School XC, soccer and TF coach Jorge Chen. During his time at Menlo School, Jorge has accumulated many individual and team league and section championships. Over the past few seasons, Jorge has coached double CCS track and field champion Maddy Price, Footlocker finalist Lizzy Lacy and the NorCal sophomore runner of the year, Robert Miranda.
1) What sports did you play before and during high school? What are some of your proudest achievements in sports during your pre and high school days?
--- I played soccer, and ran track and XC in HS along with playing tons of sports just for fun! Proudest moments were in Freshman XC when we won Artichoke Invite as a Team, Coach Dooley was so proud of us youngens...Also winning CCS Top 8 in the 4x400M when we were seeded last (32nd) going into the meet, we defeated #1 seed Riordan that night...And coming in 2nd at Stanford Invitational in the 4x100M.
2) Who were the coaches that had the biggest impact on you as an athlete? What did you learn from them that you use to this day?
3) Highlights from your four year college experience at Stanford?
--- Not many athletic highlights here unfortunately since I ran 2 years of Track and due to injuries switched over to Crew my last year at Stanford. Our boat actually came in 2nd in Pac-10s for Lightweight 4+s. But I've built many long-lasting relationships with my teammates that I still treasure today.
4) How did you get your start in coaching? What was your first coaching experience and what did you learn from that first season?
5) How long have you coached at Menlo? What sports have you coached at the school and what is the most sports teams you have coached in one school year?
--- I've been at Menlo for 14 years now and I absolutely love the kids, parents, and my colleagues. I've made Menlo my home and really give my all to them and try to teach my athletes to be great ambassadors to their sport as well as citizens of the community. I coach HS XC, Track and Girls Varsity Soccer along with Middle School XC, Track, Boys & Girls Soccer. One school year I coached 12 sports teams which was crazy...but I took on the challenge and it was a blast! Now I feel a little too old to do that again.
6) Who are some of your more outstanding athletes that you have coached at Menlo and what were some of their achievements?
--- There are too many to name since many athletes who didn't go on to run Track or XC went on to do great things in other sports but my main outstanding HS Track & XC athletes were the Parkers (Sam 800M league record holder & Max just a beast), Maddy Price (400M/200M CCS back to back double champ), Lizzie Lacy (D4 CCS Champ & Footlocker Nationals 10th place), and currently Sophomore Robert Miranda (XC & Track distance runner pictured above following his state meet race). I still keep in touch with them and I learn so much about coaching from my athletes as well.
7) Who are your coaching mentors that you lean on for advice during the different seasons you coach?
--- You for Track & XC, as well as great coaches like Rob Collins (SLV), Ken Wilner (SHP), and many others. Also, my own staff of phenomenal coaches Sean Weeks (pictured above), Tricia Lord, Tina Lount, & Donoson FitzGerald, along with many of my Menlo colleagues (Buffie Ward and the Great Bill Shine, to name a few) who always give me great coaching advice and pointers to become a better coach. I truly believe that the head coach of a program is only as good as his/her staff.
8) Aside from the training plan that they follow, what would you say are other important factors that are equally as important for athletes?
--- I believe in athletes staying healthy by listening to their bodies, and as a coach, in planning workouts that fit the athletes and not the other way around. But I am actually very happy that you asked this question since I believe that NUTRITION is the Key to Success in my athletes, especially female runners; and as caring adults, coaches and mentors, we should truly encourage our athletes to not only eat healthy but to eat enough to fuel their bodies! And another thing is to make practice fun by changing things up!
9) How do you feel you have changed as a coach from when you started to now? What do you feel like you do now that has really helped your athletes?
--- When I look back at myself 14 years ago, I just laugh at myself. I used to take wins and losses very seriously and personally, but now I truly try to not only teach my athletes the sport and to enjoy it, but the most important thing I try to teach my kids is LIFE LESSONS. If I can change one kid's life through sports, I wouldn't exchange that for any trophy. I believe that my athletes really trust me that I care about them. I spend many hours doing research in running and try to teach my athletes what I've learned with a touch of fun added to it and they really digest it well with good results.
10) What do you feel is the most important part of your job when it comes to dealing with high school students?
--- Again, I believe my main job is to teach HS students LIFE LESSONS in order to be great & caring people in this world. To actually make a difference in this world, no matter how cheesy it may sound... :) To be a good teammate since Track & XC truly is a team sport at Menlo, and that there is no shortcut to GOOD OL' PLAIN HARD WORK!
11) What would be your advice for a young coach that is just starting out and what can he or she do to be effective coaches for their teams?
--- To not allow your youth or inexperience hinder your coaching abilities. To just truly care about the ATHLETE and connect with them first, then everything else will be added. As coaches, we are here to help the kids to learn and enjoy their journey. And don't be afraid to ask for help or advice at any time. We are all in this together to help the future athletes of this country. And one main advice that I live by as a coach is: It's all about the KIDS, not us coaches; as soon as we think it's about us, that's when we become bad coaches.
---Thank you so much for allowing me to share my passion on your blog Coach! We have so much talent around this area and I hope we can all work together to help develop these amazing kids. And Thank you Coach for contributing so much to NorCal HS XC & Track through this site! Good luck to Everyone & as my old HS Coach Don Dooley would say: GO RUN 1!
The at-large marks have been corrected and updated due to several errors for different events.
Today we chat with Skyline senior runner, Zachary Katzman. This past season, Katzman had one of the finest seasons by an Oakland Section athlete in recent memory. He won the Ed Sias Large School race in 10:07.8. He finished in 4th place at the Stanford Invitational Division III race in 15:46.5. His official high school season concluded at the California state meet where he finished 15th in the uber competitive Division I race (15:25.0). For his efforts throughout the season, Katzman was named 1st team all NorCal. Katzman's PRs during the 2015 Track & Field season included a 4:24.95 mile and 9:20.20 3200m.
1) Looking back at this past XC season, what would you say was your best race? What are some of your proudest accomplishments?
This past season was only my second high school cross country season - I ran club track / XC from 6th through 10th grade. By the beginning of the year, I had finally adjusted to the new training and racing styles, and this showed early on when I won the Ed Sias Invitational. Later on, a comfortable 16:00 solo at the Oakland Athletic League City Championships was a great tune-up for the next couple of weeks. However, I think my best race was at the state meet, where I set an Oakland Section record and improved upon my 2014 place (151st) by 136 spots. That performance was quite encouraging for the upcoming track season.
2) What was your training like during the summer? Did you do anything differently than previous summers? Typical weekly mileage? Workouts?
During the summer, we mostly did base training. My mileage was around 55-65 miles/week, and most of our workouts were geared towards endurance development. This was similar to summer training in 2014; however, I had trained differently with my club team - more workouts, less mileage runs.
3) Any bumps along the way during your season? Any challenging races that made you even more determined for the next race?
I hurt my IT band in mid-September, so I had to cut back on training during the week of Stanford Invite. Though it made the race challenging, the most difficult part was hardly running at all during practice. Luckily, I recovered quickly, so the time I took off was well worth it.
4) What does a typical training week look like for you? Longest run? Typical pace for most of your runs? Training partner(s) or are most of your runs solo? Morning runs? Weight work?
*Controlled and easy runs are mileage but at slightly different paces.
My longest runs are 14 miles. I usually run 6:00-6:20 pace for mileage. Most of my runs are solo, but we recently started a new system in which a faster pace group runs easy on the next fastest pace group’s controlled run. We do this 1-2 times a week, and it’s been great to run with my teammates more often. I run in the morning primarily for enjoyment. We do several strength circuits and core work each week but no weights.
5) From your own experience, what do you feel like has really worked for you training wise? What changes have you made as you got more experienced as a runner?
Training with high intensity - not necessarily greater volume - seems to work for me, but I think this depends on the person.
After alternating between track and football in middle school, I got more serious about running once I started high school (still running club). At that point, I started running year-round and got more competitive in racing, rather than doing it just for fun. Since then, I've only made minor changes, although transitioning into high school track / XC was challenging. I think that for most athletes, maintaining the same training style is more effective than routinely making big changes.
6) Who are the people that have been the most influential in your success? How?
Coach Willie White, my club coach, played an instrumental role in my development not only as an athlete, but as a person. Over the five years we worked together, he helped me find my passion for running and, once I was ready, pushed me to improve and compete at my best. Coach White continues to be a mentor and never stops supporting me.
Coach Seán Kohles and Coach Javier Alvarado have been role models for me ever since I joined Skyline’s team. They are willing to offer advice and support regarding everything that's important to me. They’ve also been a huge help in the college search, application, and recruiting processes.
7) Why has running been so important to you? What have you learned from being a runner?
I love running. It’s become a part of my identity and has helped me develop many non-athletic assets, traits, and characteristics that I value greatly.
8) Favorite XC course? Favorite XC invitational? Favorite XC workout? Favorite opponent(s)? Favorite long run? Favorite TF invitational? Favorite TF event? Favorite TF workout? Favorite free-time activity?
- XC Course: It’s a tie between the Joaquin Miller Park and Skyline HS courses. Joaquin Miller is extremely hilly, but it’s the traditional home of the Oakland Section Finals. Being a part of that history - and not worrying about time - is a blast. The Skyline course is also hilly but a lot faster, making it a good place for quick solo times.
- XC Invite: Ed Sias Invitational. I love the atmosphere of the meet, and it’s a great season opener. Plus, it’s fun to run a 2-mile for cross country.
- Opponents: Anyone who presents a challenge. I live for a competitive race.
- Long Run: Bayview Trail. Skyline is right next to a number of beautiful city and regional parks, and we’re lucky to have the opportunity to run through them nearly every day.
- Free-time Activities: Outside of running, I’m in Skyline’s jazz, concert, and marching bands. I also recently earned my Eagle Scout rank. In my free time, I enjoy playing music, camping, backpacking, and most outdoor activities.
9) Have you decided yet where you will attend college next year? If not, how much of a factor will running be considered when it comes to choosing your next school?
I haven’t decided on a college yet. Running will be a big factor in my college choice, but academics and other factors are also very important.
10) Looking ahead to the track and field season, what are some of the invitationals you are really looking forward to and what are some of your goals that you would like to share?
I’m looking forward to the Dublin Distance Fiesta. I hope to qualify for and run in the Arcadia night meet. A solid performance at the state meet is another big goal of mine.
11) What is your advice for a young talented runner with aspirations of being a section/state champion in the future?
Enjoy the entire process. Mastering any skill is all about repetition. One massive workout or race doesn’t translate to long-term success - it’s finding what works for you and committing to it day after day that leads to results. And success is much more satisfying if you are able to appreciate the journey.
What you do outside of training is often more important than you might think. Managing these factors (recovery, injury prevention, sleep, eating, stress, etc.) effectively goes a long way.
Trust what your coach(es) have you do, but don’t be afraid to ask why.
Special thanks to my family for their support. A big thank you to my teammates and coaches for embracing me as a junior and newcomer to the team and to Coach White and East Oakland Track Group for their tremendous impact on my life. Thanks as well to my fellow team captains and anyone else I may not have mentioned.
I’d like to give a shout-out to the Oakland Tech girls team and Johanna Ross for setting the state meet team time record and individual record, respectively, and to the Tech boys team and Nick Kleiber for their school record performances. I would also like to recognize the entire Oakland Section for not only accepting, but thriving on, the recent changes we’ve made to improve our section.
Thank you for the interview and congratulations on your team’s success this past season!
Greg Wright has reasons to leap over an award he recently received.
Over the weekend at the United Canvas & Sling (UCS) National Pole Vault Summit in Reno, Nev., the Lodi High track and field coach received the Coach of the Year Award.
Last weekend’s 25th annual gathering, sponsored by the equipment manufacturer, featured the best high school pole vaulters from the United States and Canada.
To read the rest of this article, go to this LINK.
From now on, comments will need to be approved by me before they are posted on this site. When you post a comment, I will get your comment by email and I will approve it unless it's deemed to be offensive in nature. I think this is best considering many of you post anonymously and we have had a lot of comments this past fall that went a different direction from the original intent of this site.
I apologize if anybody was offended in any way during this past season.
Today we chat with John F. Kennedy (Fremont) senior, David Frisbie (photo to the left thanks to Erik Boal of DyeStatCal). This past cross country season, Frisbie won the NCS Division III race breaking the 15 minute barrier and running 14:58.2 on the Hayward HS course. That time ties him for the 5th fastest time run on that course in NCS Division III action with former Campolindo runner, Thomas Joyce. Frisbie followed that race with a 5th place finish at the California state meet with a 15:20.9 finishing time. For his outstanding end of the season, Frisbie made the 1st team all-NorCal team.
I probably think that my best race was either NCS or state. Both were great races for me. My proudest accomplishment definitely has to be my NCS win. I finally broke 15 minutes on the Hayward course and added an NCS championship under my belt.
Training was a little hard to do mainly because I was training by myself during all preseason and to be completely honest the motivation just wasn't there until I started running with some guys during the regular season. The only thing I did differently was watch my mileage more. Even though my mileage was relatively low all season, I averaged between 50 - 53 miles per week. Same workouts as last season. A lot of mile repeats, 800s and 1200s. A few tempo runs but not a lot, which probably wasn't good.
Early on in the regular season was pretty rocky for me. It was mostly because I didn't have a lot of base training during the summer. I had a few challenging races. Probably the most difficult one had to be the Stanford Invitational. It was probably one of my worst races but every race after that wasn't so bad. I was determined not to have any races like that and I knew I had to get back on track.
I always ran 5 - 6 days a week. Only 5 days if I really needed it, maybe 2 or 3 times throughout the season. My weekly long runs ranged from 13 to 15 miles averaging 6:30 to 7:00 pace. For interval work, 4:50 to 5:00 pace. Most of my mileage runs were solo. Interval workouts were me and a few of my school teammates but mostly guys from other schools and other sections (CCS). Morning runs every Monday and sometimes Fridays before school. Not so much with the weights but a lot of core.
It was mostly getting more sleep and watching what I ate but I wasn't too strict although I did watch my protein intake. Overall, no really big changes.
The people who have been the most influential on me are definitely my parents but not just them. My current coaches and past coaches definitely played a role in my success. Al McGaughey, Willie Harmatz, Lee Webb, Mike Dudley, and Jerry Craft. They all took me under their wings and helped me believe in myself, had my back no matter what, been there for me on and off the track, helped me with academics, home life, and of course running. I can't thank my coaches and my parents enough. You guys don't just play a huge role in my success but also my life.
Running is so important to me because I love it so much. Going into high school there was no way I thought I would be getting college offers. Running is now taking me places. What I have learned is to always stay humble. To me, it doesn't matter how many wins you have or how many records you've set, you have to remain humble. That's what my coaches and parents taught me and it's something I believe.
My favorite XC course would have to be Hayward HS course. Not too many hills and not too flat. Just perfect to me.
Favorite XC invitational would definitely have to be Mt. SAC. Mainly because I love traveling and it's such a huge meet with great competition.
My favorite XC workout is definitely mile repeats on the grass. Lace up the spikes and feeling that lactic acid in your legs. Feels bad but good at the same time. I honestly don't know why.
I don't have a favorite opponent.
Every long run is a great run. As long as you're out there getting the miles in, it's good for me.
Favorite TF invite would have to be Arcadia. Great meet and great competition. Racing against the nations top runners.
Definitely 1600. Without a doubt.
Favorite work out would have to be 400 repeats. Again, I don't know why but that's my favorite workout.
I love to hang out with my girlfriend and go on Twitter and Instagram. If you know me, that's what I do a lot. Lol.
I haven't made my decision yet but running in college will play a huge factor.
I would definitely like to go back to Stanford and Arcadia invite because those are big meets and I didn't do too well last year. So pretty much just go back and do better than what I did last time. I want to make state in the 1600 and then qualify to the state final.
Coach Lee Webb at Logan HS always says "you have to believe to achieve" and that's exactly what I would say.
Today we chat with Lowell HS senior, Kristen Leung (Thomas Benjamin photo to the left). She had a breakout junior Track and Field season highlighted by several outstanding anchor efforts on her school's distance medley relay teams as well as individual times of 4:55.35 in the 1600m and 10:53.11 in the 3200m. This past cross country season, Leung had a terrific season culminated by a 5th place finish in the Division I race at the California state cross country meet and a huge personal record of 17:35.9 on the Woodward Park course.
1) What sports have you participated in competitively aside from XC and TF? When did you get your start in running?
I played basketball and ran the 400m for track in middle school for fun and wasn't very good at either sport. I only had an inkling that I might like long distance running because when I ran the mile for PE, I was that crazy kid who talked during the run and ran extra laps because of the high I got from it. However, I really didn't realize that a person could run more than a couple miles until I discovered running in my freshman year of cross country.
2) What do you remember about your freshman experiences in both sports? Highlights?
Oh, I had a blast my freshman year. Everything was so new and exciting. It was the zaniest idea to me that I could run the same distances that I thought one could only travel by car and explore the beautiful nooks and crannies of San Francisco by running.
In retrospect, I think I was the purest runner, running for my love of running, back then before I was exposed to competition. Even though I ran by myself in most of my races, I was innately motivated to push myself to my PRs, and did so consistently. I miss that ability to run at my best without competition, with myself as my biggest competitor.
3) Did you do anything differently before your sophomore cross country season over the summer?
Before my sophomore cross country season, I ran every so often but didn't really know what to do as a freshman. My team doesn't hold off season practices, and my cross country coach is different from my track coach so there's no one telling us what to do in the summer. It wasn't until the past two years that I've realized the importance of the offseason, trained consistently over it, and organized team practices as a co-captain.
4) Where in your high school career do you feel like you made your biggest jump in terms of improvement as a runner? What do you feel led to your improvement?
My biggest improvements have been in the past cross country and track seasons, where I jumped from running a 19:37 at Woodward Park to a 17:35, and a 5:11 1600m to a 4:55. As aforementioned, I didn't realize how big of an impact offseason training could make on my season performances until my junior year. When I did, it led to my successes in those two seasons. My increase in mileage from 25-35 to 40-45 miles per week contributed to my improvements in the past cross country season too.
5) What does a typical training week look like for you? Morning runs? Strength work? Typical long run length? Pace of most of your runs?
We do team core workouts at least once a week, and I try to fit in a few strengthening exercises after every other run.
6) You have different coaches during your cross country and track and field seasons? Is that something you are used to now or is there a transition period every season?
Having different coaches for track and cross country is not really something I have had to get used to, as it's just something that has always been.
I actually like the situation. Having different coaches offers variety in training philosophies and attitude. One of the coaches is more focused on competition, while the other is more focused on offering his runners enriching life experiences.
The only real downsides are the offseason where we have no official coach, and that the two coaches don't particularly like each other because of a feud they had a number of years ago...it's like having divorced parents.
7) You finished in 5th place at this year's CA state XC Division I race running 17:35.9. During your first three state meet appearances, you finished 127th, 122nd and 115th respectively. What was your goal heading into this state meet and why do you feel you had such a dramatic improvement?
Well, for starters, I've never really performed at my best at state meet for various reasons. It's at the end of the season, my team tapered too much/too early, I was injured, ate too much, etc. But, as all the cross country t-shirts tell us, there are no XCuses. I buckled down this season and stopped making them.
*Tapering smarter- I did some funky end of the season stuff in the past... 200m "easy" workouts, nearly a month of easy running... it clearly didn't work.
As to what my goal heading into state was, I was unsure of what to expect. We used to estimate our state meet times would be a minute faster than our home course times (the SF section course was extremely difficult), but our course changed this year so I didn't have much to go off of. I hadn't faced competition during the season. No runner from my section has ever placed better than 10th. Even running blogs like yours said I was a wildcard, and, honestly, I didn't know what my full potential was either.
However, the uncertainty of my abilities freed me from external pressure to perform that runners often get psyched out by and I lightly decided to myself to shoot for my Mt. SAC time (17:36), because I felt I could, and/or place in the top 10. I ended up exceeding my expectations and doing both.
8) Favorite XC course? Favorite XC invitational? Favorite XC workout? Favorite long run? Favorite track event? Favorite track invitational? Favorite track workout? Favorite Lowell tradition? Favorite free time activity?
Favorite XC course: It was my team's first time going down to it this year, and we heard so many scare stories about it, but I ended up loving the course. It felt like it was made for cross country with the challenging, fun switchbacks and hills in just the right places during the race. Loved the cute posters from Brooks throughout the race too. The best races are the ones that make it fun.
Favorite XC invitational: Lowell has only gone once while I've been here, but our team had the best time ever. We flew up to Portland, got to explore the city, karaoke, stay at a fancy hotel...we made so many good memories there. To top it off, the Nike people treated us well, with a tour of the Nike campus, a dinner with all the competing teams, and a Q&A session with Jordan Hasay. We didn't even mind the pouring rain, mud, and hay bales we had to jump over on the horse track the race was on.
Favorite XC workout: Our assistant coach introduced a new workout this year where you have to run 800m or 400m repeats, getting the same time or better each repeat. If you are off your time more than once, you end the workout. The real challenge- no looking at your watch!
Favorite long run: Any long run with the team. The biggest goofballs I've ever met are on the team, and we always end up having these hilariously nonsensical conversations during runs and discovering beautiful parts of San Francisco. The other team captain brings along speakers and we all bellow Taylor Swift songs.
9) Do you have any good (humorous) Andy Leong stories that you can share with my audience?
10) Have you decided what college you will attend next year? If not, how much will running be a factor in your decision?
Nope, not committed yet. Before the past year, I couldn't imagine being a collegiate runner, but life takes us to unexpected places and now I can't imagine NOT running in college. However, I'm looking for a balance, and don't want training to be so overwhelming that it hinders my academics, and vice versa.
11) Looking ahead to the track and field season, what races are you most looking forward to and any goals that you want to share?
In the past, I've trained mostly for middle distance because my team had a strong DMR (4th fastest time in the nation last year!) and 4x800m, but with my successful XC season, I'm hoping to focus on longer distances more and run the 3200m at the Dublin Distance Fiesta to (hopefully) qualify for the Arcadia Invitational 3200m.
In my best event, the 1600m, I'm aiming to sub 4:50, which isn't too far off from my current PR of 4:55.
It definitely takes a hard-working, determined runner to be successful in cross country, but one has to pay respect to the huge amount of luck that is involved in any success. I am blessed to have such a supportive circle of coaches, family, teammates, and friends who inspire and encourage my running. I wouldn't be where I am today without them.
Thank you for reaching out to me for this interview and for all the work you put into this blog! I love reading about fellow runners and updates in world of NorCal XC, and it's an honor to be a part of it.
Alameda Mayor Trish Herrera Spencer visited #SJND today to make a special proclamation for one of our student athletes.
For his accomplishments and success this cross country season, Mayor Spencer proclaimed January 15 as Cooper Teare Day in Alameda! A huge congrats to Cooper '17 and his family.
Boys
Long Jump-Isaiah Holmes Oakmont (SJS) 23'9", Anthony Cable Rodriguez (SJS) 23'5.5"
Triple Jump-Isaiah Holmes Oakmont (SJS) 48'3.25", Darius Carbin Mt. Pleasant (CCS) 47'4"
High Jump-Darius Carbin Mt. Pleasant (CCS) 7'1", Isaiah Holmes Oakmont (SJS) 7'0", Darius Thomas St. Francis (CCS) 7'0"
Pole Vault-Michael Gonzalez Lodi (SJS) 16'0", Jacob Bowler Del Oro (SJS) 15'9"
Shot Put-Jonah Williams Folsom (SJS) 60'10.25", Nate Esparza Amador Valley (NCS) 60'6.25"
Discus-Nate Esparza Amador Valley (NCS) 177'7", Jake Kenney Wilcox (CCS) 167'7"
Girls
Long Jump-Katherine Jackson Rodriguez (SJS) 19'10.5"w, Caice Lanovaz Los Gatos 18'11.5"
High Jump-Julie Zweng Scotts Valley (CCS) 5'6", Kaylee Shoemaker Corning (NS) 5'6", Olga Baryshnikov Prospect (CCS) 5'6"
Shot Put-Elena Bruckner Valley Christian SJ (CCS) 53'5.5", Jasmine Pharms Stagg (SJS) 43'10"
Discus-Elena Bruckner Valley Christian SJ (CCS) 182'8", Hannah Chappell Oakdale (SJS) 151'2"
The meet will happen rain or shine.
Please have your student athletes fill out our online form found at http://www.dublincrosscountry.com/Dublin_Track/WT%26FCS.html. It will close at 9:00 PM on Friday, January 15.
ATHLETES THAT HAVE FILLED OUT THE ONLINE FORM: You will receive your bib number at the athlete gate upon your entrance into the stadium after paying your entrance fee of $10. Please let the workers at the gate know that you filled out the online form and they will give you your number there. You do not need to go to the registration area - you are ready to go!
ATHLETES THAT DID NOT FILL OUT THE ONLINE FORM: After you pay your $10 entrance fee at the athlete gate, you will be redirected to the registration tent to turn in your SIGN-UP SHEET (which is available at the above link) to get your bib number. This line can get long so if you do not plan on doing the online form, arrive early enough to wait in line and get your form processed well before the meet begins.
"The high school division only meet will begin at 2PM rain or shine this Saturday. The Meet Information, Individual Sign-Up Online Form, the Individual Sign-Up Paper Sheet, and the Relay Sign-Up Sheet are at http://www.dublincrosscountry.com/Dublin_Track/WT%26FCS.html."
Please list the athlete, coach or any other person associated with either cross country and/or track and field in the comment section below. If you have their email address, please send it to me at albertjcaruana@gmail.com.
Please use the email above if you would like me to interview an athlete or coach. Thank you.
In the comment section below, share what you want to accomplish in the upcoming Track & Field season. Be specific. You don't have to add your name.
The top 6 qualify to the state meet from the Southern Section unless you can qualify by surpassing the 2016 At-Large Time Standards. Feel free to comment below on athletes that I missed and should be added.
USATF Coaching Education is offering a Level 1 School in your region. For further information and to register, click here. Register by midnight (EST) on January 9th, 2016 for the pre-registration price.
Current World Record Holders Mike Powell and Kevin Young will be at the clinic as well as many other former World Record Holders. There may never be a gathering like this in the future! Olympic Year. Come support Track and Field.
The clinic is dedicated to Reynaldo Brown. Coach Jim Santos will be recognized for his longtime dedication to and achievement in the sport!
Covering High School Track & Cross-Country in the Redwood Empire.
For those of you that were faithful visitors to Jim Crowhurst's sites that covered Cross Country and Track & Field in the Redwood Empire (NBL, SCL and CMC), you are in luck. Jim has the site back up which you can find at the link below. The site includes tons of statistics dating back many decades and will be a great resource for those 3 leagues for this coming track and field season.
The site is no longer associated with the Santa Rosa Press Democrat so donations are welcome to help with the upkeep of the site. Check the link on the site for more information in regards to donations.
Please note that the Inland Empire meet held at Corona HS on February 6th has been moved up to February 23rd.
Window treatments add warmth and color to an empty room. They make even an empty room feel like home in an instant. You may already know that well-dressed windows have a huge impact on a room's aesthetic. However, not knowing the basics of window treatments will make the entire shopping and installation process a challenge. Fortunately, it doesn't have to be this way! We have the ultimate guide for you to turn your boring windows into stunning focal points.
When shopping for window treatments, consider first the mood that you want the room to have. For more formal spaces like your home office, there's velvet or heavy silk. More practical options include cotton sateen and silky rayon blends. To induce a casual feel, use crinkly crushed velvet or billowy linen. Cotton blends and pure cotton will work with all types of décor, and so do wool blends and seasonless wool.
You need to decide whether you want curtains that pop or curtains that blend with a room's décor. For blending, choose curtains in the same tone as the color of your wall but about a couple of shades darker. You can also opt for a subtle, non-dominant color. Adding a bold-colored window treatment is like hanging an exclamation point in the room. This will work well if you are looking to add a wow factor. Remember that in spaces where the sun will shine through unlined curtains, your window treatment's color will infuse the room. Pink can be cheery; blue, eerie.
A general rule of thumb: having a patterned bedding or furniture in a room will only give you the option of sticking to solid curtains. For solid-color bedding or furniture, consider patterned curtains. If what you want is a subtle hint of energy and style, then go for dots, paisley or any small neutral print. A graphic print will look daring and spectacular, but only if it relates to the entire room’s décor.
The installation of window treatments can be done either by you or by a handyman. If you hire professionals, expect to pay up to $80 an hour. We suggest that you rely on experts instead, because the installation of window treatments requires the use of a stepladder, handheld drills, a stud finder, a screwdriver, etc.
-High and wide – You can make a room look bigger and more luxurious if you hang drapes 12 inches above the window frame, or even all the way up to the ceiling. Extend the curtain rod up to 6 inches beyond the frame on both sides.
-Layering – Designer windows always sport two or more window treatments. Emulate designers because this trick will boost the style of the room and turn your window into the room’s focal point.
-Take them to the floor – Novice decorators purchase store-bought curtains. Although there’s nothing wrong with this, their mistake is not measuring the wall’s height. Your drapes must puddle slightly and kiss the floor to achieve a tailored look.
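If you like to plan before you drill, the measuring tips above boil down to a tiny calculation. The following is an illustrative sketch only; the function name and inputs are my own, and the 12-inch and 6-inch figures are taken from the tips above.

```python
# Hypothetical helper turning the hanging tips above into numbers:
# mount the rod about 12 in above the window frame (capped at the
# ceiling), extend it about 6 in past the frame on each side, and
# cut drapes floor-to-rod so the panels just kiss the floor.

def curtain_plan(frame_top_in, frame_width_in, ceiling_in):
    """Return (rod_height, rod_width, drape_length) in inches."""
    rod_height = min(frame_top_in + 12, ceiling_in)  # high, but not through the ceiling
    rod_width = frame_width_in + 2 * 6               # 6 in of overhang per side
    drape_length = rod_height                        # rod to floor, kissing the floor
    return rod_height, rod_width, drape_length

# A 36-in-wide window whose frame tops out at 84 in, under a 96-in ceiling:
print(curtain_plan(84, 36, 96))  # (96, 48, 96)
```

Nothing here replaces a tape measure; it just keeps the numbers straight before you buy panels.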
When you want to use someone else's property, you have to propose and negotiate an easement. This situation arises, for instance, when you want to cross someone's property to get to yours. You may also require an easement to run electric wires to your house across a neighbor's property. When you need an easement, draft a document that contains the full agreement terms, then negotiate and amend those terms together with the property owner. Make sure you cover all important problems that might arise during the period of the arrangement.
Write down the burdens and benefits of the easement. Each easement creates a burden for the property owner granting the access. By the same token, the easement delivers a benefit to the person wanting access. To be able to negotiate, you have to clearly state the benefits and burdens in your agreement document. California courts are far more likely to decide in your favor in a dispute if you have clearly stated why you want the easement and the inconvenience it might cause the property owner.
Name the parties. The person living on a property is not necessarily the owner. Identify the owner of the property, who has the authority to grant the easement. If the property has several owners, you will require the agreement of all of them. Moreover, if the home has a tenant, California law has provisions for tenant acceptance.
Describe the property. Use the legal description of the property on record in your California county or state property office. Indicate that the easement "runs with the land," meaning that you're asking for access which necessarily involves the property in question.
Write whether the easement is permanent or temporary. If the easement is temporary (for example, if you want to move heavy construction equipment for a one-time job), indicate when the easement will end.
State the extent of the easement. Indicate exactly how much space you're asking for, such as length and width. Attach a drawing that indicates the easement boundaries.
Negotiate reserved rights. The property owner might have uses for the easement area that occur concurrently with your usage. Write down all of the owner's rights that will remain in place even after the granting of the easement.
List payment amounts. In California, the property owner has the right to demand payment for an easement. The state licenses appraisers who value easements. If you and the property owner are agreeable, you aren't required to get an appraisal and may agree on any amount you see fit. During negotiations, point out any advantages the property owner can receive from the easement. For instance, if the easement helps you improve the value of your house, the neighboring owner's property might go up in value as well. This could reduce the fee for the easement.
Once you understand how and where to look, distinguishing a black gum tree (Nyssa sylvatica) from a live oak (Quercus spp.) becomes simple. Form, foliage, flowers and fruits all highlight the differences between these trees. Black gum simplifies the task once it drops its leaves for winter; a live oak, as its name implies, remains evergreen. Southern live oak (Quercus virginiana) illustrates the qualities that separate various types of live oaks from a black gum tree.
Black gum’s characteristic pyramidal shape remains intact during its existence. Narrow but conical when young, the tree gradually widens as it ages. In Mediterranean-climate landscapes, black gum generally grows 30 to 50 feet tall and 20 to 30 feet broad. It grows much larger in its native eastern U.S. habitat however still includes a pyramidal type. Southern live oak grows 50 feet tall in Mediterranean climates and taller in its native southern U.S. kingdom; its rounded, umbrellalike canopy spreads twice as broad as the tree grows tall. A broad southern live oak on the horizon won’t be mistaken for a black gum tree.
After a winter spent with bare branches, black gum produces shiny green foliage in spring. Each broadly oval leaf widens at the center before its smooth margins form a point. Even in mild climates, black gum treats onlookers to a fiery show of orange, red, purple and gold foliage before fall takes its leaves. The southern live oak holds its leaves year-round in all but the coldest regions of its growing range, and it drops only a portion of its leaves at a time. The narrow, oval, smooth-edged leaves have softly rounded tips and are shiny and dark green on top and white beneath.
Black gum and southern live oak grow best in full-sun and partially shady sites, and both tolerate highly acidic to highly alkaline soil pH. Hardy in U.S. Department of Agriculture plant hardiness zones 5 through 9, black gum withstands wide-ranging soil and site conditions. Compacted, drought-stricken urban plots and excessively moist, poorly drained sites equally suit the versatile black gum. The tree has moderate salinity tolerance in a coastal site. Hardy in USDA zones 7 through 10, southern live oak adapts to a range of conditions, but its optimal growth requires consistently moist to wet soil. Even so, southern live oak excels in warm, inland climates and low-desert gardens that provide sufficient moisture. Its salinity tolerance is good to medium on the coast and good at an inland site.
Some tree flowers have male and female parts within the same blossom, but black gum flowers are either male or female. With occasional exceptions, the male and female flowers occur on separate trees. After a male black gum tree's flowers pollinate a female black gum tree's inconspicuous spring blooms, the female tree bears fruits, which appear in clusters and are dark or blue-black, 1/2- to 1 1/2-inch, olivelike drupes in autumn and winter. Southern live oak also has separate male and female blooms, but they appear on the same tree. Those blooms are inconspicuous and appear in spring; they yield 1/2- to 1 1/2-inch fruits: brown acorns with spiny tips.
While black gum blazes with fall color and berrylike fruits, live oak species native to Mediterranean coastal climates stick with acorns and spiny, evergreen leaves. The coast live oak (Quercus agrifolia) is one of these species, and canyon live oak (Quercus chrysolepis) is another. Hardy in USDA zones 9 through 10, the coast live oak grows up to 70 feet tall, has a canopy that spreads wider and produces acorns measuring 1 1/2 to 3 inches. Every spring, it trades leathery old leaves for shiny new ones, all with sharp, spiny edges. The canyon live oak is hardy in USDA zones 8 through 10, grows 65 feet tall and broad, and has sharp-tipped, 1/2- to 1 1/2-inch acorns. Its leaves may have sharp spines or be smooth; one tree may have both spiny leaves and smooth leaves. Each leaf is gray-green on top and pale blue on its underside.
California king mattresses are several inches longer than conventional king-size mattresses and therefore require bedspreads with different dimensions. California king mattresses measure 72 by 84 inches, with the corresponding bedspread typically measuring 114 by 120 inches. Alternate sizes can be found, however. Selecting the proper size bedspread for your bed mainly comes down to the depth of your mattress, which varies by manufacturer and style.
Most mattresses, irrespective of length and width, have a depth ranging from 9 to 12 inches. Pillow-top mattresses with thick memory foam toppers may be anywhere from 16 inches to 22 inches deep, so bedspreads and comforters also need extra inches for proper coverage. Several Cal-king bedspreads offered at retail outlets now measure up to 120 by 125 inches. Measure the inches from the floor to the top of the mattress — and topper pad, if applicable — to get the most accurate indicator of which size bedspread to purchase.
If you choose a comforter or duvet instead of a bedspread, the suitable size for a California king mattress is considerably smaller. This is because comforters typically sit on top of the bed with only slight overhang. Cal-king comforters normally measure from 102 by 86 inches to 102 by 94 inches. The more filling in the comforter, the less overhang you get, so choose wisely.
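To see how the floor-to-top measurement translates into a bedspread size, here is a minimal sketch. The three-sided-drop formula is an assumption for illustration, not an industry standard, and retail spreads usually add extra length beyond it for tucking under the pillows.

```python
# Minimal sizing sketch (an assumption, not an industry formula):
# a bedspread covers the mattress top plus a "drop" on both sides
# and the foot, roughly equal to the floor-to-mattress-top height.

def min_bedspread_size(mattress_w_in, mattress_l_in, drop_in):
    """Minimum bedspread (width, length) in inches."""
    width = mattress_w_in + 2 * drop_in   # drop on the left and right sides
    length = mattress_l_in + drop_in      # drop at the foot only
    return width, length

# Cal-king mattress (72 x 84 in) sitting 21 in off the floor:
print(min_bedspread_size(72, 84, 21))  # (114, 105)
```

A 21-inch drop lands right on the 114-inch width of the typical Cal-king bedspread quoted above; the typical 120-inch length leaves room for pillow tuck.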
Damping-off disease dampens a gardener's mood quickly. You'll get a sinking feeling yourself when you see rows of seedlings keeling over. You can often forestall the issue — caused by fungi in the soil — by sowing your seeds on top of a sterile seed-starting mixture, covering them with milled sphagnum moss, sand or chicken grit, and watering them from the bottom. When it's too late for that, you might be able to save seedlings that haven't succumbed yet with the assistance of homemade fungicides.
Because cinnamon is a natural fungicide, it can stop or slow damping-off disease in your seedlings. Since powdered charcoal has also been recommended for this purpose, you might choose to concoct a double-strength fungi-fighting formula by combining 1 part cinnamon with 1 part charcoal. Sprinkle a light dusting of the powder over the surface of the soil or seed-starting mixture and leave it there until the seedlings are big enough for transplanting.
Hydrogen peroxide, the bubbly liquid you use to clean cuts and scrapes, also cleans the clocks of the fungi that cause damping-off disease. To apply it, mix 1 tablespoon of 3 percent hydrogen peroxide — the kind usually sold in drugstores — with 1 gallon of water. Spray the surface of the soil or seed-starting mixture with the solution, or set the seedling container in about 1 inch of the solution until it's drawn up through the drainage holes and lightly dampens the surface.
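If your spray bottle holds less than a gallon, the 1-tablespoon-per-gallon ratio scales linearly. A throwaway sketch (the function name is mine):

```python
# Scale the 1-tbsp-of-3%-peroxide-per-gallon mix above to any volume.

def peroxide_tbsp(water_gallons, tbsp_per_gallon=1.0):
    """Tablespoons of 3 percent hydrogen peroxide to add."""
    return tbsp_per_gallon * water_gallons

# A 1-quart spray bottle holds 0.25 gallon:
print(peroxide_tbsp(0.25))  # 0.25  (a quarter tablespoon, about 3/4 teaspoon)
```

The same linear scaling works for the garlic-water and tea mixes below if you want smaller batches.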
A cup of tea will have a bracing effect on your seedlings if you make it with a fungicidal herb or herbs. Because you want the tea to be more powerful than what you drink, use extra bags or steep the tea for a longer time than you normally would. For instance, steep three chamomile tea bags in 1 cup of boiling water for 20 minutes, or steep two bags in that cup for several hours. Nettle or clove teas also kill fungi. Again, either mist your soil or seed-starting mix with the tea, or bottom-water the seedlings' container with it.
Like vampires, fungi can be repelled with garlic. Chop or crush a clove of the herb, drop the resulting slivers into 1 gallon of water, and let them soak overnight. The next morning, strain the garlic pulp from the water and use the water to mist or bottom-water your seedling flat.
Most of us could use a little extra space — for an office, for entertaining or just for relaxing — but few of us have the funds or room to add on to our existing homes. However, there may be a blank slate nearby, just waiting to be reinvented: the garage.
These six garage conversions have gone above and beyond the average remodel. No longer in need of a parking spot, or tired of looking at the mess that had piled up, these homeowners took advantage of their empty or dilapidated garages. The resulting dream rooms gave these households the extra space they were searching for.
A brand-new work-from-home job meant that Suzanne Dingley's husband needed a new office. Rather than cramming into their house, the couple turned to their detached garage, which had become a dark and filthy dumping ground for junk. They gutted the space, exposed the rafters and pitched roof, and put in new floors and built-in storage.
The white and red colour palette evolved from the Ikea photograph of a London bus — a tribute to the couple’s British roots.
The pair replaced the existing garage doors with two sets of French doors and two new windows to let in natural light. The newly insulated ceiling and flooring control the internal temperature, but a window unit and space heater help out, too. "My husband is quite happy with his space, especially with his short commute across the yard," says Dingley.
This 1930s garage was not just worn out and beat up; its odd design and tiny garage door made it impossible for Rick Giudicessi to park his car inside. Rather than using it for storage, he turned it into a tiki bar with an attached patio where his family can entertain year-round. "When the weather ends our use of the patio and tiki bar area, we move inside to the heated area," he says.
Taking the garage down to the studs and designing an open ceiling turned the bland space into what the family now calls The Annex. Although Giudicessi did a lot of the work on the new space himself, all of the structural work required professional help.
New cabinetry, a bar top, a satellite TV and bar-stool seating make The Annex the ultimate sports bar, ideal for entertaining rain or shine.
Though Michael Brown used his attached garage, it became a quick solution for some extra space when his in-laws moved in. The house's original kitchen was too small to host two extra people, so Brown had the garage changed into a professional-grade kitchen, with a brand-new garage attached to the side. The remodel required help from architects and contractors, but the result was well worthwhile. "We have never regretted doing this, not even for a second," he says.
Megan Hirsch enjoys throwing outdoor parties, but her yard and main dining area were too small to contain the large groups she wished to host. The garage, which opens onto the house's backyard, had plenty of room to spare. Reserving part of the street-facing section of the garage for parking still left area for indoor-outdoor fun. A 14-foot viewing screen rolls down inside the back of the garage so that the family can host outdoor movie nights and Ohio State University football parties.
The brand-new black standing-seam metal roof contrasts with the fresh white siding, setting the garage apart from the brick main house. Since the garage is visible from the road, the Hirsches wanted something that would make an impact.
Fans, a disco ball, classic fixtures and a large dinner table set the ambiance for dinner parties inside. The Hirsches set up the hanging lantern on a pulley so that they can raise and lower it over the dinner table, lighting nighttime feasts.
Nancy Rice didn't need a place to park her car, so she took advantage of the chance to turn her garage into her dream space: a personal library.
Three active kids, a pool and consistently warm California weather called for an outdoor hangout area for this Long Beach family. Rather than building something new, Michelle Walton and her family worked with Royce Flom Construction to turn their garage into a combined pool house and storage space for their outdoor equipment.
“I’m from Ohio and grew up with basements. My husband is from California and states, ‘The garage is your California basement. No one parks within their own garage in California,'” Walton says.
New French doors create a pass-through from the pool to the house on the other side of the garage. Walton painted the floor with nonskid paint so the kids wouldn't slip and slide while coming in from the pool.
The white, casual, beachy vibe was a given for the family. “We love the shore, and we have a great deal of surfboards,” Walton says.
My backyard just turned 6 years old. For years, I kept buying new plants to fill in the gaps — even after I had no openings left. It got to the point where if I was near a nursery while running errands, I would poke my head in and nab a few things — especially during late summer and throughout the fall sale season. When I got home, I would slip my purchases into the backyard, nestle them among mature plants and hope that my wife never noticed. In fact, I knew she wouldn't care, but maybe deep down I cared. My addiction was costing me money, but it didn't have to.
Fall is my favorite season — crisp mornings and evenings, warm afternoons, bright blue skies, stunning sunsets and a backyard with a rainbow of fall blooms and foliage colors. When the leaves begin to drop, it is a lot easier to tell where any plant openings are and to plan what could still go in.
Fall is perhaps the best time for gardening — the cooler temps make things easier on you and the plants, and the warm soil lets roots get established so plants can take off even sooner next spring. But why buy plants when you can easily collect seeds and grow your own?
Just look at this bounty. Fall not only shows the structural bones of your backyard; seed heads add another layer of interest. These seed heads mean hundreds of free plants for you, apart from the fact that they are feeding birds and other creatures. But when can you gather the seeds?
My guideline for seed collecting over the course of late summer into fall is rather laissez-faire: When the seeds begin falling off or blowing away, they are prepared. (Then you really have to be on the ball, especially if it gets windy.) Here, old Liatris blooms are all puffed up, prepared for the seeds to be collected.
I walk around the backyard a few times every week with any temporary container that I can find, from glass to plastic to paper bags. The wider the container mouth, the better for seeds that take easily to the wind — you want to grab as many as you can once you start picking.
Sometimes it's a lot easier to cut off the tops of plants, like this ironweed, and drop the entire mess into a bag. The seed heads are so small, you'd be out there forever otherwise. Why not save picking out the seeds for a chilly winter day in front of a fireplace? You can even turn it into a date with your spouse or some kind of romantic game. Hey, you have to spice up seed cleaning somehow.
Grass seeds are frequently very easy to collect. Just run your hand up the stem, from bottom to top, cupping and collecting seeds as you go.
One major benefit of collecting wildflower seeds grown in your backyard is that you can trust them — if you don't use pesticides or chemicals, you know the seeds are organic.
You also know the mother plant — where it grew, what it likes, the fact that it thrives in your soil. Using locally sourced seeds is about as ecologically friendly an act as any you can perform in the backyard, and you can't get more locally sourced than outside the back door.
Coneflower seeds are not those spiky, pointy things. Rather, the seed is deep down in there: little rectangular tan bits half the size (or less) of your pinky's fingernail. To get at them, I have found that sacrificing my thumb works best — I push it across the flower head, getting poked and jabbed, causing the spikes to pop off and letting the seeds slip out.
Mountain mint and monarda seed heads make your hands smell great, but the seeds are very small and loose within the faded tubular blooms. I snip off entire clumps of seed heads and, while holding them within a container, crush them with my fingers or hands. This easily releases the very small seeds.
OK, so now you’ve got these seeds. Some have fallen onto the backyard bed and will resow, and you’ll be able to move them in spring or let them have free will and choose their own places.
Or you can winter-sow. Many seeds need cold, moist stratification, a period of several weeks or months of freezing, moist conditions. Here in the U.S. Central Plains we call this period winter.
I hope you kept your old nursery pots or got some from a neighbor who was throwing them away. Fill them about halfway with potting soil, or perhaps just your normal garden soil (clay, sand, whatever), and broadcast the seeds evenly across the surface. Let winter snow bury the seeds for you.
Come spring you’ll have dozens of seedlings in each container, ready to pot up or put in the backyard once they have rooted better within a couple weeks. Congratulations! You’ve become your own wildflower nursery.
Strip each seed from the chaff, which is frequently the feathery or crunchy piece connected to the seed. Let the seeds dry out, anywhere from a few days to a week. If you pick seeds when they're falling off the plant, they ought to be pretty dry. But if you pick them following rain or other wet weather, they will need several days or weeks to dry out inside — spreading them on a table or pan helps accelerate the drying. Store them in a paper bag, which provides good airflow (glass and plastic will encourage mold growth). I have found that school lunch bags, folded over twice and stapled, work great. I label them with the plant name and year collected. Store the bags in a cool, dark, dry place. That may be in a dry garage, an outbuilding, a storage bench outside or a cellar. The benefit of storing them outside is that you'll be cold-stratifying the seeds — some may also require moisture, but those that just need it cold will be ready to sow come spring or summer.
The first year, I winter-sowed seeds in 24 containers. Let me tell you, I was as giddy as a kid in a candy store the following spring. I had enough plants to tackle some problem areas in my beds, with plenty left over to gift or even sell.
Now I find I have a new addiction — amassing plastic pots and cluttering up my backyard every October and November. However, there's no need to hide this addiction, since the plants are free, and I know for certain they will thrive in my backyard.
The foundation of this space is decks and stone pavers surrounded by a lazy moat; fires, fountains and a dazzling LED lighting scheme create comfortable zones for lounging and entertaining.
The deck is made of ipe and contains many different spaces for relaxing out front. Two tall concrete structures provide privacy from the street — one features fire; the other, water. The fountain is 6 feet wide and 6 feet high; the fireplace is 8 feet wide and 6 feet high. The front yard foreshadows the contemporary renovation inside.
"Our neighborhood is seriously heritage, but strangely the house is getting all kinds of applause," Laan says.
Because they're located in a historic high-end shopping district with a lot of pedestrian traffic, people are constantly walking by and peeking into the intriguing space. "We have met more people since we renovated the patio than we have over the 23 years we've lived here," she says.
This terrace stone divides the space and highlights the entrance sequence.
The front terrace is good for big parties and intimate get-togethers; the seating area on the right is where a group of two to four tends to hang out. The chairs encircle a 3- by 3-foot concrete gas fire pit. LED lights give off a soft glow underneath.
A lazy-river moat surrounds the decks. Eight pumps filter and circulate the chlorinated water, keeping it clean. The moat also contains 50 LED lights. "It is very cool when it is lit up at night," Laan says.
The furniture's modern lines work well with the architecture, yet all the pieces are very comfortable and stand up to the elements.
This 8-foot-long gas fireplace keeps things toasty on chilly Ontario nights. The front yard has extended the family's living space and is a favorite place for friends new and old. "We are very casual entertainers … we don't mean to entertain, but we know so many people, and it is very tough to book patio space in town, so now I get text messages from my friends asking, 'Is the patio available?'" Laan says.
Big events tend to bring in guests as well, some invited and some who happen by. "During next week's jazz festival there will be approximately 50 people here, and usually some stragglers we don't know who feel entitled to visit … we're cool with that," she says.
Have you used your front yard imaginatively? Please show us what you did in the Comments!
If you're looking for a great houseplant to give your decor a tropical flair, then Strelitzia, commonly known as bird of paradise, won't disappoint. While it's often wrongly called a banana plant (it is a relative), you won't find any bananas growing; however, if you're lucky (or rather, if your plant is truly happy), after three to five years you might just discover some gorgeous blooms.
The two most common species in the Strelitzia genus, each of which may be purchased as a houseplant, look very similar, and it is a good idea to know which kind you're purchasing, as the end results will not be the same. Strelitzia reginae grows to a maximum height of 5 to 6 feet and blooms with the traditional orange bird of paradise flowers; it also has a dwarf version, whose leaves are comparatively small. Strelitzia nicolai can grow into a giant tree and blooms with dramatic cream and black bird of paradise flowers; it is less likely to be sold as a houseplant, although surely some confusion is always a possibility. Choose wisely. It's always a good idea to be an educated houseplant purchaser, especially when it comes to investing in something that you hope to enjoy for years to come.
Below you'll find examples in which bird of paradise was used in an assortment of settings, and I have added some tips that I hope will be helpful as you navigate the path to using houseplants to enhance your home.
The large green bird of paradise leaves are a great complement to the peacock blue walls in this eclectic San Francisco living room. I want to find some more green plants to balance the sea of intense color, especially behind the cream chair in the foreground, but this is a superb start.
The plant is a fabulous addition to this neutral Seattle living room. Its tropical flavor is an ideal match for the bamboo blinds, sisal area rug and neutral beachy decor, which suits the sea view beyond. One plant looks great, though a second one on the right side of the window would perfectly frame the view and really bring the outdoors in.
This full bird of paradise plant nicely balances the blooming orchid on the coffee table in this Portland, Oregon, home and works to offset the formal decor that can occasionally make a room look more like a hotel lobby than a home.
Plants are a great way to add life to a space, since not only are they alive and breathing, but they are also not perfect! A little imperfection in the shape of a plant that has a mind of its own can be a great way to make a home feel comfortable and lived in, especially if you'd rather not have a cluttered look. On the flip side, an unruly plant balances out a modest family-made mess, too.
Bird of paradise is a superb selection for this contemporary high-rise living area in Miami, since it connects the residents to the tropical surroundings far below and offsets the sterility of the cityscape. The pair of plants provides grounding symmetry in this open area, as well as adding vertical interest, which is always a significant element in any room.
In this Manhattan pied-à-terre, bird of paradise does a great job of providing a human touch against the intriguing though impersonal cityscape view. Some true green is a welcome touch of color in a sea of black upholstery, and in fact, I would really like to see much more green in that corner, by way of a chunkier pot and two birds of paradise planted together — there is quite a bit of blank wall area that could benefit from some large tropical leaves.
Here is a perfect example of a space that would profit greatly from a Strelitzia nicolai, as the ceiling height in this Seattle house warrants an extremely tall plant. The plant used here is well positioned, though, functioning as a visual anchoring point at the end of the curved couch, and our eye has a moment of rest before taking in the huge ocean view beyond.
Bird of paradise works well in this modern Philadelphia house and suits this corner well, bringing the outdoors in. In this scenario, however, the plant really could be taller, to fill the vertical space and to draw the eye up rather than down. Here I find myself looking at the bottoms of the chairs rather than at the garden beyond, and wondering whether the ceiling is really low or whether it only seems that way. Isn't it fascinating how one thing whose proportions aren't quite right can change everything?
Not to worry; there are a few ways to solve the issue of a plant that is lacking height while you're hoping and waiting for it to grow: a plant stand, a small low table, a stool of some type or maybe even a couple of cinder blocks, if your decor (and your partner?) can handle them. Just don't forget to fertilize so you can send those cinder blocks on their way sooner rather than later.
Choosing the right plant for your house isn't always simple, as there are lots of options, and one must always consider the requirements of the plant first. Add Strelitzia to your list of possibilities, and even if it never blooms indoors, you'll still appreciate its exquisite green leaves and the touch of paradise it brings into your house.
Cabinets, countertops and appliances generally steal the show in kitchens. However, the pantry door is no slouch and deserves a chance to stand out. A playful pantry door lets you step away from the standard package of cabinets and add a personal touch to your cooking space. Whether you decide to paint the door a bold colour or add chalkboard paint, frosted glass panels or barn door hardware, your pantry door can add whimsy to your kitchen.
Don’t be afraid to show off your personal style in the kitchen with a bright shade, such as the red on this salvaged pantry door. It will immediately make your pantry the star of your kitchen. Doesn't this one remind you of an old telephone booth, too?
Bringing vivid colours into the kitchen can be nerve-racking, because many kitchen materials are large investments. Using colour on your pantry door is a safe place to start: since it's just paint on a small surface, you can change it as often as you'd like. The turquoise on this vintage screened pantry door feels fresh but still ties in to the kitchen's reclaimed, vintage look.
The glass panes on this conventional barn door give the owner a glimpse inside. If you're clutter-prone, this probably isn't the ideal option for you; a frosted pane or solid door can conceal your messes from wandering eyes.
This modern kitchen mixes things up with a barn-style pantry door that contrasts with the more modern main cabinetry. I love the way the designer painted the sliding barn-style door with chalkboard paint and magnetized it, too.
Turning a plain old pantry door into a magnetized chalkboard door requires many, many coats of both kinds of paint. But if you're willing to put the work in, the final result will be stylish, playful and functional. You can add photos of the children or nearest and dearest, and keep a list of grocery items in plain sight.
Chalkboard paint helps this pantry door stand out against the oak cabinetry. Many pantry doors come with frosted glass panels; if you're not fond of the frosted glass, just cover it with chalkboard paint. However, you'll want to write your grocery list lightly, because of the glass underneath.
This frosted door looks great in this modern kitchen and also helps conceal any clutter. The door hardware mimics the refrigerator hardware, too.
These homeowners highlighted their pantry space with sliding frosted doors. This is a bold choice that requires some organization behind the doors, but the material certainly sets the pantry apart from the rest of the kitchen cabinetry.
This pantry door looks like the rest of the cabinetry in this kitchen, using a chalkboard to break up the monotony of all-white cabinets.
This subtle door actually opens, like a secret passageway. This smart technique avoids a door that cuts into the main kitchen area.
This narrow pantry door swings out to reveal an entire storage space behind the cabinetry.
Tell us: What does your dream pantry door look like?
The Ontario Association of ACT and FACT invites you to join us in Niagara Falls, Ontario for its biannual conference: Refresh Renew Refocus.
You will enjoy two and a half full days of keynote presentations as well as workshop and poster presentations.
Zsolt Bugarszki, PhD is a social worker and expert of social policy. He works as a lecturer at Tallinn University in Estonia and his main field is mental health and disability care.
He graduated in Budapest, Hungary where he had been working as the Director of Soteria Foundation for 12 years. Soteria is one of the first organizations that established community based mental health services in Hungary.
Between 2001 and 2013 he also worked as a lecturer at the ELTE University Faculty of Social Sciences. As a consultant expert for the Mental Health Initiative, Mr. Bugarszki has been working in different Eastern European and post-Soviet countries to promote community based mental health services. He works as an expert in the planning and implementation of mental health reforms in Armenia, Kazakhstan and Kyrgyzstan.
In 2013, Mr. Bugarszki relocated to Estonia and his interest turned toward ICT supported innovative solutions in the welfare system. Working at Tallinn University he started to co-operate with Finnish and Swedish Universities in different development projects and he became the co-founder of Estonian start-up Helpific.
Mr. Bugarszki's research interests are in community development, recovery oriented mental health services, and the participation and citizenship of vulnerable groups. He has conducted several research studies on the process of deinstitutionalization and community based services in different countries.
Assistive technology includes assistive, adaptive and rehabilitative devices for vulnerable people and also includes the process used in selecting, locating and using them.
Information and communication technology (ICT) is a term that stresses the role of the integration of telecommunications, computers, enterprise software, databases and audio-visual systems which enable users to access, store, transmit and generate information. There is an increasing interest to connect assistive technology and ICT based solutions with the welfare system.
The first instances appeared in medical administration and management, and assistive technology became relevant in elderly care and disability care, but nowadays we see a growing presence of technology in mental health care, too. Popular applications to promote mindfulness, robot pets to maintain social bonds, virtual reality based technology to treat fears and phobias, and shared welfare initiatives to increase peer-to-peer support are just a few examples of this emerging field.
In my workshop, I want to demonstrate the potential of assistive technology in mental health care, arguing for a conscious development strategy to embrace innovation.
Luis Lopez, MS - Are We Trauma Informed Care Practitioners?
Luis O. Lopez, MS, is the Coordinator for Fidelity and Best Practices at the ACT Institute at the New York State Psychiatric Institute. He is also a counsellor, trainer, consultant and coach.
Mr. Lopez has been involved in the implementation and application of evidence based practices since 2003. He has expertise in the areas of Ethics, Trauma Informed Care, Motivational Interviewing, Integrated Treatment, Family Psycho-Education, Cultural Competency, Stages of Change, Harm Reduction, WRAP, CBT, and Wellness Self-Management.
He has facilitated workshops at over 80 conferences nationally and in Canada. He has conducted consultations in Puerto Rico and the US Virgin Islands. He is also a member of the New Jersey Counseling Association.
Presentation: Are We Trauma Informed Care Practitioners?
Historically, the behavioural health system has not addressed issues related to trauma effectively and efficiently. The field has concentrated on pathologizing and labelling behaviours, developing treatment on the basis of "What is wrong with you?" rather than "What happened to you?"
This workshop will briefly review the principles and practices of the Trauma Informed Care approach. It will review how it impacts the work we do at the ACT Institute. It will additionally answer the question, "Are we practicing Trauma Informed Care?"
Olivier Jackson has worked for nearly a decade as a clinical nurse and team leader for an Assertive Community Team (ACT).
In 2011, he was recruited by the National Centre of Excellence in Mental Health (NCEMH) due to his experience in an ACT program, an Intensive Case Management team and an Early Psychosis Intervention team. The NCEMH is responsible for supporting best practices in the mental health care system, training new team members, and conducting fidelity reviews for each ACT team in the province of Quebec.
Recently, Mr. Jackson has trained "Housing First" teams in France and has presented the evolution of the ACT model in Hamburg in 2017.
8:45 - 10:30 a.m.
Participants will learn about the FACT model, new standards and guidelines, and referral form with the Co-ordinated Support Plan.
Sessions - 1 to 2:30 p.m.
In 2016, the ACT Institute developed a one-year curriculum to address the NYC Mayor's office SAFE ACT. This was a program that provided support to New Yorkers experiencing emotional and psychological challenges in the mental health system or on the streets.
As part of the NYC SAFE ACT, the NYS ACT Institute developed a one-year curriculum for substance use specialists (SUS) and team leaders, which includes a number of blended training approaches (training in vivo, online, and with modules).
After completing this project with NYC, the ACT Institute redesigned the curriculum and started working with 'SUS' and team leaders from across the state of NY. This workshop will review both initiatives from design to implementation.
With the culture of constant change, and the rapid move toward a Values-Based Reimbursement environment, who can keep up? This session will assist supervisors and staff with tools to manage this continually evolving world of behavioral healthcare provision.
As we assist those we serve with life changes, practitioners also need support to manage change within our agencies and among our staff. We hope you get the outcomes you want from the change, but also the support you need for the transition!
People with serious mental illness have at least a 15 year shorter life span than the general population. A portion of this mortality is attributable to poor access to good quality preventive medical care.
This workshop discusses the role ACT teams play in improving primary care for their population generally, and describes a quality improvement project within our team to reduce cardiovascular risk and promote early detection of breast and colon cancer in our population.
CMHA Kenora Branch embarked on a Quality Initiative two years ago with the intention of transitioning clients who were deemed appropriate out of the ACTT service to more appropriate care. The program's focus was to ensure that the fidelity of the ACT Team Model was adhered to as per the Ministry of Health and Long-Term Care ACTT Standards (2005).
They recruited the Excellence through Quality Improvement Project (EQIP) to provide QI coaching expertise to navigate them through the Model for Improvement, with the intention of making evidence-based program changes rather than reactionary ones.
At project inception, the expected outcome was to transition clients appropriately to a lower level of care as identified by the ACTT Transitional Readiness (ATR) tool. As the project progressed, it became evident from the data that stepdown services were not the critical area that required focus. While these transitions were streamlined and improved, the critical need was the delivery of service to higher acuity clients. More than a quarter (27%) of ACTT clients serviced exceeded the level of care mandated by the ACTT Standards. The data revealed that CMHA Kenora Branch had been consistently providing a level of care to clients above and beyond the mandate.
Sessions - 1 to 4 p.m.
Compared with the general population, schizophrenia patients have a life expectancy shorter by 25 to 28 years, and the mortality gap is still growing. A variety of factors, both genetic and environmental, such as smoking, a diet rich in lipids and simple carbohydrates, and a sedentary lifestyle, contribute to the increased risk of metabolic syndrome, diabetes, and cardiovascular disease.
In recent years, the primary focus has shifted from direct illness management to the physical and psychological well-being of schizophrenia patients. Our patients have a high rate of suicide as well as a very high incidence of co-occurring medical illnesses such as diabetes mellitus, dyslipidemia, essential hypertension, obesity, and osteoarthritis, such that they require significant medical attention as well. This requires clinicians who are highly trained, with combined expertise in both classic physical medicine and mental health, in order to keep these individuals healthy and improve their well-being.
New 2016 quality standards published by Health Quality Ontario on schizophrenia care for adults in hospitals stressed the importance of promoting smoking cessation, promoting physical activity and healthy eating, and screening for substance use. Such guidelines have a wider scope of application than hospitals and should be employed in outpatient settings as well to maximize benefits.
The important role of the team leader in the dissemination and implementation of high fidelity Assertive Community Treatment (ACT) has gone mostly unexplored. In this presentation, the author will share original qualitative research that addressed three aims: (1) to describe the ACT team leaders on high fidelity teams; (2) to understand their approach to leadership; and (3) to understand what roles they play in promoting high fidelity to ACT as an evidence-based practice (EBP).
Results included themes that ACT team leaders had notable attributes and a personal job match with the roles and responsibilities of an ACT team leader. Additionally, the team leaders performed many prominent functions, had a distinct communication style, paid deliberate attention to team members' well-being, and set a very intentional, positive work environment. Team leaders in the study played critical roles in the promotion of high fidelity ACT services and used ACT fidelity as a guide for service delivery to program participants.
It is hoped that this information on the specific activities of exceptional ACT team leaders will illuminate processes important to the implementation of the EBP of ACT in an effort to close the gap between EBP knowledge and actual service delivery. Further, it can serve as a guide for the training and technical assistance provided to new team leaders.
Target audience: Team leaders, agency leadership and mental health authorities who create policy and provide technical assistance for ACT.
Sessions - 3 to 4:30 p.m.
It is well documented that family / natural supports' involvement in care facilitates community integration and recovery for individuals with behavioural challenges. Yet engaging families / natural supports in care and care planning often presents challenges for providers, due to varying expectations for treatment, differing interpretations of recovery and wellness, challenging family dynamics, and limited inter-agency collaboration to manage family-related issues.
In this workshop, lessons learned from Assertive Community Treatment providers will be presented through vignettes, including strategies for understanding family systems, the family life cycle, inter-agency collaboration, and creating opportunities for collaboration among individuals with behavioural health concerns, their family / natural supports and providers. The workshop will provide ample opportunity for knowledge exchange among participants.
The HARM is a structured clinical judgment tool that guides the assessor(s) in formulating opinions regarding risk of violence. It combines both historical / static and dynamic factors to assess risk, as reflected in the literature.
The HARM captures three stages of the assessment: Past, Current, and Future. Each stage flows into the next, so that, in moving through the past and current stages to the future stage, the assessor can arrive at a prediction of a patient's or service user's risk of aggression and formulate risk management strategies.
Introducing the Electronic Hamilton Anatomy of Risk Management (eHARM), an Excel-based tool that has revolutionized the way HARM reports are completed.
The Community High Intensity Treatment Team (CHITT) provides services for individuals with serious, complex, and persistent mental illnesses living in the Kingston area. Over recent years, team members have become increasingly aware of unattended existential and spiritual dimensions of the client's care.
This workshop will present and examine findings from a pilot project in which CHITT recruited a Spiritual Health Practitioner (SHP) to serve as a full-time member of the team from September 2016 to March 2017. The intent of this project was to explore the following: how the client's spiritual health needs are addressed through the clinical practice of a SHP; the nature of the SHP's clinical and educative consultations with the team; the type of referrals made; and the impact of an SHP upon team functioning.
Findings of the project and a discussion of the generalist versus specialist role of the SHP will be presented, recommendations made, and possible models of care suggested. Workshop participants will be encouraged to engage in discussion, exploring opportunities for integrating an SHP into their own ACT teams.
Flexible Assertive Community Treatment (FACT) is a relatively new model of community mental health service in Ontario. The FACT model originated in the Netherlands; FACT teams have higher caseloads than ACTT and varying levels of service intensity (20% high intensity, 80% low intensity).
In 2015, The Community Mental Health Program at The Royal implemented a FACT team for persons with a Dual Diagnosis. The development of FACT DD was in response to a 2012 Provincial Review of Dual Diagnosis Services conducted in Ontario, which identified there were no community based treatment options to provide service to this population in the Champlain LHIN.
Although all ACT teams carry some Dual Diagnosis clients, we would like to share our experience of implementing and delivering ACT and FACT services to this very specialized population of Dual Diagnosis clients. In particular, we will be looking at differences in staff skill mix, the role of behavioral supports and interventions when working with this population, and how FACT and ACTT fit into the continuum of care for Dual Diagnosis clients.
Participants will understand and have a working knowledge of specialized ACTT and FACTT services. FACTT for Dual Diagnosis clients is a new model of care – the first in Canada.
Target audience: Clinicians, managers, or physicians with an interest in the FACT team model and services for Dual Diagnosis clients.
A thought-provoking journey through Europe and North America to ask one of the most fundamental philosophical questions of our time: should we be giving doctors the right in law to end the life of others by euthanasia or assisted suicide?
Understand the issues related to physician assisted suicide for persons with mental illness as the sole underlying condition.
Filmmaker, Kevin Dunn, uses powerful testimonies and expert opinion from both sides of the issue to uncover how these highly disputed laws affect society over time. This film is about the adoption of - and resistance to - a new cultural philosophy that may affect you at the most vulnerable time of your life. Visit FatalFlawsFilm.com for more information.
Sessions - 7 to 9 p.m.
This workshop will enhance leadership competencies regarding the best practice guideline to ensure that clients are receiving "the right care, at the right time, in the right place" (The Ministry of Health and Long-Term Care, 2015). It will also examine the current system issue of long wait times for ACT services in the Greater Toronto Area. The Cota / St. Joseph's Health Centre Stepped-Care Model of Service is an innovative solution to this growing issue in the ACT sector.
The dream of this initiative was to create an option for clients that would allow them to flow between the teams based on the level of care they need while using existing health care dollars. We will explore the creation of this model from both the service provider and client perspective.
Participants will identify the system need for client "flow" in ACT work. They will be shown how clients can move through this model with seamless transitions between teams and care providers through the "step-up, step-down" ability.
A "lessons learned" session: we will do a deep dive into the creation of the partnership. We will describe the process of two separate organizations (hospital and community agency) designing, negotiating and implementing a Stepped Care initiative. We will review the details of how the MOU was formed, how long it took, and the successes and challenges along the way. These lessons learned will be of interest to any participants looking to innovate on their own teams.
Through case based learning we will explore the client experience through the Stepped Care Model of Service. We will also examine the clinician perspective of working within this framework and review the change management approach that was used during the implementation phase.
Target audience: This workshop will be of particular interest to clinicians and managers working on mature ACT Teams - teams with long-term clients who are stable and could do well at a lower level of care.
Community Treatment Orders - Why the Controversy?
Community treatment orders (CTOs) are initiated primarily for patients with chronic psychotic illness who lack insight into their illness and the need for an adequate medication regimen. The vast majority of these patients have been diagnosed with schizophrenia.
For more than a decade, numerous articles have described efforts to assess the effectiveness of CTOs. The results have been conflicting, the implication being that the effectiveness of CTOs is unclear and that the issue is complex and controversial. However, as demonstrated in the data and commentary that follow, a valid analysis of the subject demonstrates that the issue is in fact not at all unclear and need not be controversial.
A chart review was performed of 50 patients of three assertive community treatment teams, all subject to a CTO. Thirty-four of the patients demonstrated a marked decrease in time in hospital since initiation of a CTO. For the remaining 16 patients, the lack of improvement in hospitalization time was demonstrated to have no correlation with their CTOs. CTOs can be of great clinical benefit, but only if properly employed and followed by effective treatment.
This one-hour workshop will encourage ACT team members, managers, administrators, and monitors (without a research / statistics background) to utilize quantitative data to improve outcomes.
Participants will gain an understanding of how to identify data trends related to the successes of ACT programs in regard to housing stability, community tenure, hospitalization and recidivism, employment, etc. This approach can be applied to individual teams as well as to multiple teams across a region.
Presenters from Georgia's Department of Behavioral Health and Developmental Disabilities (DBHDD) will share the challenges and successes experienced in building a state-wide data collection system and the ways in which that data has influenced policy and driven service delivery.
Sample data collection tools will be shared with participants.
With a growing focus on being more trauma informed, the need to formalize a response regarding care and support for our clinicians during a traumatic event is clear. This is especially relevant for those on ACT Teams, who provide community based services to populations often subject to experiences that increase their risk of exposure to adversity.
Has your team experienced multiple client deaths within a short period of time? A team member affected by witnessing a violent crime? A client who died by suicide? A team member who was a victim of violence in the context of work? This presentation will review the process that our programs implemented as a clinical response to help clinicians manage their own feelings about events of this nature and attend to their own wellness. We will answer the question: Who's watching the watchers?
This interactive and experiential presentation will explore and provide information on the challenges and unmet needs that peer workers on ACT Teams presently face. To meet these needs, innovative adaptations of evidence-based peer support programs and ideas for integrating these solutions into regular practice will be discussed.
This presentation will explore how "Pathways to Recovery", the "Wellness Recovery Action Plan", mindfulness, visualization meditation, yoga, stretching and recreational activities can be incorporated into sessions to meet client needs. The hope of this work is to provide inclusive and accessible services to a diverse group of ACT participants.
Target audience: This presentation is for front-line ACT Team workers and managers and anyone else interested in learning more about peer support on ACT Teams.
When we talk about peer support, we often talk about the many amazing consumer / survivor initiatives or the peer support organizations across the country. Peer support is not a treatment; it is a relationship that focuses on the individual and shared mutuality rather than diagnostic criteria. It provides the opportunity to feel safe, respected, valued, understood, and comfortable while receiving support.
So what about the peer support being done in a clinical setting? If there is a space for peer support in this setting what does this peer support look like, how do we maintain a recovery-focus and avoid peer drift? How do we work collaboratively within a clinical team to bring our invaluable peer lens to our role, while keeping the spirit of peer support, to shift the focus from treatment planning to recovery planning?
To address these challenges, RHD-LA launched a decentralized approach to decision-making and strategic planning that included close partnership with ACT psychiatrists, nurses, staff and clients. This presentation will describe this process and lessons learned in order to provide the audience with tools and practical examples that can be applied locally.
North Carolina (NC), USA has been under a US Department of Justice Settlement around housing and employment issues with the SPMI population. As a result, ACT has undergone the most transformation NC has seen with this service in decades.
This presentation will focus on the efforts of the state, in partnership with the University of NC Institute for Best Practice, to provide Motivational Interviewing (MI) training to ACT staff and improve MI proficiency throughout the state.
From introducing MI to participants to training specialists, we will discuss the various ways to "slice the pie" in terms of dissemination and sustainable efforts to incorporate MI within all facets of ACT, from peer specialist work to clinical supervision.
Target audience: ACT clinicians may appreciate the MI training content and resources. ACT team leaders may want to adopt some of the supervision and cross-training ideas. Clinical managers and agency leadership may like the timeline and structure in bringing MI to their agency. All participants can walk away with some MI expansion planning ideas.
Traditionally, we prefer full- or part-time employment as an outcome of vocational rehabilitation in mental health care. Employment provides clients with a stable job position, preferably connected with health insurance or pension-related benefits, access to the social security system, paid holidays and well-regulated working conditions. On the other hand, entrepreneurship and any kind of business activity are considered unstable, stressful adventures with no fixed income or benefits that do not suit vulnerable people.
In a rapidly changing economic landscape, we need to take into consideration that the preferred stable employment opportunities are vanishing. Digitalization, automation, robotics, and the advancement of artificial intelligence are challenging not only traditional blue-collar jobs but also white-collar ones. Entrepreneurial skills, creativity, and enormous flexibility are needed in the future labour market, and we need to embrace these new requirements in vocational rehabilitation, too.
I would like to argue that technology and new economic models can also be very enabling, opening new horizons for vulnerable people in a transforming labour market, and I will bring examples from successful initiatives.
Striving to Be the Best: What is Effective ACT Leadership?
Achieving effective leadership of an ACT Team is a multidimensional process, which requires continuous development of self-awareness, mastery of managerial skills, and deliberate development of clinical expertise. This workshop will examine specific elements of key areas within the leadership role and provide strategies for leadership development.
Compassion Focused Therapy for Psychosis is an intervention that aims to help individuals who hear voices decrease their distress and balance their emotions by developing compassion for themselves and for their voices.
Since 2017, Niagara Region Mental Health ACT Teams 1 and 2 have offered a group for clients called Coping with Compassion that is based on the principles of Compassion Focused Therapy (CFT). In the group, clients learn about the CFT model of emotions, practice strategies to increase soothing and connection to others, and develop their compassionate self.
In this workshop, we will introduce the principles of CFT for psychosis using some experiential activities and videos and describe our experiences delivering this therapy in a group-based format to ACTT clients. Participants will have the opportunity to learn about CFT for psychosis and discuss how this model may be used on an ACT team.
Measurable standards are critical in the assessment of evidence-based practices to prevent drift from the model as it is intended to be delivered. Historically, evaluation of Assertive Community Treatment (ACT) has relied on costly, cumbersome, and time-consuming fidelity visits. Beyond the several days of observation, interviews, and chart review, quality measures are infrequently considered in teams' day-to-day practice.
This workshop introduces a novel method of assessing fidelity of ACT teams by moving the assessment from an outside entity to a process of self-assessment conducted by the teams themselves. Using Qualtrics survey software, an online, user-friendly self-assessment of 49 items was created for ACT teams to measure fidelity. Alignment of all items with the New York State ACT Standards of Care was reviewed, and a three-item subscale assessing Clinical Transition Activities was added in consideration of the emerging focus on transition from ACT.
Upon completion of the ACT fidelity assessment submitted online, an automated response is made available to the team immediately. The feedback provides the ACT team with an overall score and individual item scores. In addition, the feedback is accompanied by suggestions for improvement, including training and suggested quality improvement projects.
This workshop will include lessons learned about the self-assessment process to provide a knowledge exchange among team leaders, and the surprising ways in which team leaders have implemented the use of the fidelity assessment in day-to-day practices.
This two-part workshop will review challenges and strategies for implementing a harm reduction culture in clinical work. The presenter will share their expertise in the areas of outreach and engagement.
The presenter will also review with participants a number of barriers and limitations in implementing this model, particularly on ACT teams. Finally, participants will brainstorm specific, practical strategies to start implementing ideas immediately.
"Manufacturing Recovery" introduces the concept of Six Sigma quality improvement to recovery-based practice. Although Six Sigma was originally created to eliminate manufacturing errors in the 1980s, it can have powerful results in the mental health sector for individuals, organizations, and communities.
The presentation is divided into three parts. The first is an informative overview of Six Sigma methodology and how it can go hand in hand with the recovery model by recognizing the role of the client as both a customer and stakeholder.
The second portion goes through a case study of a Six Sigma project conducted with an Assertive Community Treatment team to address documentation challenges, which demonstrates the radical and measurable improvements this process can yield.
The final portion of the workshop is a brief discussion of other Six Sigma Projects completed in community mental health as well as discussing potential projects suggested by the audience.
Target audience: anyone interested in initiating, leading, or participating in quality improvement initiatives at any level. This includes team leaders, managers, coordinators, and directors, as well as front-line staff and people with lived experience who participate in the services being offered. No previous knowledge of Six Sigma is required, and those with previous knowledge will develop new insights into its functionality.
The cost of substance abuse is significant and goes beyond the individual. In an ideal scenario, those affected, their immediate families, their communities, and professionals from a wide range of backgrounds partner to find the best approaches within the confines of current scientific advances and available financial resources.
Among the many avenues explored in this arena, acupuncture, drawn from an alternative, modern yet ancient tradition, lends us a quick and effective tool in the form of a simple five-point protocol called NADA. Easy to learn and use, it can bring an individual from the brink of despair to a state of relative balance within minutes.
Whether one is in the throes of the addictive substance, in withdrawal, or in unrelated emotional turmoil, skillfully placed needles can put one in a better position to stop, think, regroup, and rebalance in order to take the best next steps in a more desirable direction.
A short history of the protocol, research data as well as its possible applicability for an ACT environment will be discussed. Case studies (and demonstrations if applicable) will illustrate the presenter's experience with the intervention.
How does my positionality impact my relationship with my supervisees and the connection with our clients?
Use of self is very important in the work of leadership. Understanding your positionality helps you to understand the power dynamic of providing supervision to someone that may not have the same positionality as yourself.
You will leave this workshop with a better understanding of your positionality and be encouraged to help your staff recognize their own stance as it relates to connecting with the clients they serve. Your team will in turn become comfortable enough to help clients identify their own positionality and how it impacts their relationships with systems and providers.
Since 2005, the province of Québec has developed many ACT and ICM teams to provide services for people with severe mental illness. These teams have faced many challenges, one of which is learning to work well together. At that time, most teams worked alone, without drawing on the others to create a good continuum of care for the people in their services.
The ACT and ICM National Center for Excellence in Mental Health (NCEMH) advisors worked together to provide activities and tools that help ACT and ICM teams work well together. This session will present what was most helpful.
Social workers play a key role supporting family members and involving them in the care of their loved one when possible.
The group will share ideas and challenges as well as areas for further development to bring back to their teams.
MR. CARNEY: Hello, everybody. Thanks for being here. Let me begin with a couple of items here. First I have a readout of the President’s call with Governor Brewer of Arizona earlier today in which he expressed his concern for the citizens of Arizona who are dealing with multiple severe fires and sought to ensure that the governor has everything she needs. Beyond deploying liaisons to the Arizona Emergency Operations Center to help provide support to the state, FEMA has approved both of the requested Fire Management Assistance Grants to help respond, and the U.S. Forest Service has deployed more than 2,500 interagency firefighters to protect lives and property.
The President told the governor that the administration will continue to support Arizona throughout the response to these fires.
Second, I wanted to let you know that on June 13th, next Monday, the President will travel to Durham, North Carolina, to meet with the Jobs and Competitiveness Council at the corporate and U.S. manufacturing headquarters of Cree, a leading manufacturer of energy-efficient LED lighting.
The President will tour Cree’s manufacturing facilities, deliver remarks, and meet with the Jobs Council to discuss initiatives and policies to spur economic growth, promote job creation, and accelerate hiring across the nation.
And a final note here, the First Family will spend this coming weekend at Camp David. They will depart on Friday afternoon and return on Sunday.
Those are my announcements. Darlene.
Q Thank you. Jay, is the White House thinking about cutting the payroll taxes that businesses have to pay on wages to encourage hiring?
MR. CARNEY: Darlene, I saw that story. I think that, first of all, we need to step back and explain a few things, because there’s been some I think conflating here and a little misunderstanding.
As you know, we are not even six months into a year payroll tax cut for employees, which the President fought very hard for as part of the tax cut deal that he made with Congress in December. And he I believe made a reference to that in the press conference he gave with Chancellor Merkel the other day.
As for the story that you reference, obviously there are a lot of ideas that get bandied about, both within the administration and outside. This is an idea that’s been around for a long time, has been supported or has seen expressions of support from the business community, from conservative economists and others. But I don’t have any policy announcement to make.
Q Would you know if this issue has come up in the talks -- in the Biden talks?
MR. CARNEY: I don’t believe it has. And I think that, having said that, going -- we have made clear from the start that the talks being led by the Vice President about our need to reduce our deficit will allow everything to be on the table. But the starting point for this -- for our side has always been the President’s proposal for deficit reduction, $4 trillion over 10 to 12 years, a balanced approach that looks at all the drivers and takes into consideration all the drivers of our deficits and long-term debt, including obviously non-defense discretionary spending, defense spending, entitlements, and tax expenditures. And obviously the other partners in those negotiations have brought other ideas to the table.
Q Separately, you talked about the trip Monday to North Carolina. Can you say anything about the substance of what the President is going to be doing in Puerto Rico the day after?
MR. CARNEY: I don’t have anything on that for you now. I’m sure we’ll have more to say about further travel down the road.
Q -- and a final deal will have to be something significantly less?
MR. CARNEY: The President put on the table a proposal for $4 trillion in deficit reduction over 10 to 12 years. So I’m not -- that’s what we brought to these talks. And we remain optimistic that common ground can be found, that enough common ground can be found in these negotiations for a significant deficit-reduction package. I’m not going to negotiate the size of that from here. We’ve tried pretty hard to allow those talks and the content of those talks and the progress those talks have been making, any specifics about it, to be kept within the room.
So we are committed to serious deficit reduction in a balanced way that doesn’t -- at a time when we are continuing to emerge from the worst recession since the Great Depression, it is very important that we do nothing to set us back in terms of economic growth and job creation. Obviously that’s the paramount concern of the President and, I think, generally speaking, of everyone in that room, that deficit reduction and getting our long-term debt under control is not an esoteric exercise. It is not -- these are not goals that are somehow worthy unto themselves, because the point here is to do the right things by our economy so that we create the kind of confidence that allows for more job creation and more growth.
So that’s our approach. And we are also quite confident that Congress will vote to raise the debt ceiling. As I’ve said before, this is not a matter of spending. This is a matter of the United States meeting its obligations, not defaulting on its obligations. And I think it’s clear, it’s been made clear to everyone involved in this discussion that there really is no alternative to raising the debt ceiling; that will be done. We are pursuing both within the same time frame, and we believe both -- well, one will definitely, has to, must happen -- raising the debt ceiling -- and we are confident that an agreement can be reached on deficit reduction within the same time frame.
Q Okay. The Libya contact group meeting has offered up some strong words of support for the Transitional National Council of the rebels, the opposition group. Is the U.S. any closer to granting diplomatic recognition to the Transitional Council?
MR. CARNEY: Well, we have worked quite closely with -- as other partners have -- with the Council. We believe that they have very positive intentions. They have said very important things about support for democratic reforms.
We support efforts at the international level through the United Nations to make Qaddafi regime assets available to the Council to help fund it. And obviously you didn’t raise this, but we also welcome the bipartisan legislation by Senators Johnson and Shelby that would allow us to provide critical humanitarian relief to the Libyan people by redistributing Qaddafi assets.
So in terms of recognition, that’s -- our dealings with the Council are ongoing. But I have no announcements to make about that.
Q The President said the other day that a “big chunk” of the objectives in Afghanistan had been achieved. That’s a quote -- “big chunk.” Can you explain what the big chunk is?
MR. CARNEY: Well, I think he explained. We have eliminated Osama bin Laden, who was the leader of al Qaeda, a leader of the organization that attacked the United States. And we have had significant success more broadly against al Qaeda central in the Afghanistan-Pakistan region. And we have -- which is the principal goal of the President’s Afghanistan policy. And we have halted the momentum the Taliban had prior to the implementation of the President’s strategy.
So we have made significant progress towards achieving the goals that the President laid out, which is disrupt, dismantle, ultimately defeat al Qaeda, and stabilize Afghanistan so that Afghans can take over security of their own country and prevent Afghanistan from becoming a haven for al Qaeda and its adherents in the future.
So I think that’s a pretty detailed explanation of what he said.
Q The Kerry report, the Senate Foreign Relations Committee report that you said that the administration did or the White House did not agree with all of its conclusions, one of the conclusions was that stabilization projects in Afghanistan do not always lead to stabilization. Do you disagree with that conclusion?
MR. CARNEY: Well, I think that’s a pretty broad statement. If you’re asking do we agree that every program pursued in our civilian assistance efforts since the conflict began in late 2001 has worked, we certainly share the assessment that that has not been the case. But we think that the policy we have in place now and that we’ve been implementing for the past year and a half is having success, making progress. We have no illusions about how difficult it is and remains in Afghanistan. The challenges that Afghanistan faces as a country remain significant. But we believe it is in our national security interest to pursue and implement the policy that the President laid out in December of 2009.
Q I guess there just seems to be a contradiction between every report I’ve read on progress on Afghanistan, including the White House’s own AfPak review, the declassified version that you shared, and the idea that a big chunk of the objectives have been achieved in Afghanistan. You list a number of counterterrorism objectives that I’m not going to dispute, but in terms of counterinsurgency, which has been a key part of the mission, those appear to be flailing hopelessly.
MR. CARNEY: You’re quoting our report. And obviously we stand by that and agree with that.
But that’s a long -- but to say that we haven’t reached all of our goals does not mean we haven’t made progress towards them, because we believe we have.
Q All right, and lastly, could you explain why Michael Leiter is departing and what the ramifications are?
MR. CARNEY: Well, I would point out that he has been in that position through two administrations for four years, which is a significant period of time. He’s offered tremendous service to both administrations and to his country. And I think we put out a statement about his departure and we appreciate his service very much.
Q So it was entirely his own decision?
Q Jay, the President’s meeting today with President Bongo, how did he get the meeting? Did he do the asking or was he invited by the White House?
MR. CARNEY: I’d have to check with you, maybe get -- I can get back to you on that. I think we discussed this yesterday about why this meeting is important to have for the President.
Q And I know you did talk about it yesterday, but just -- I mean, considering the context of this, a very foreign nation, this is a family -- I mean, you’re familiar with their background -- accused of corruption and using oil riches to finance a very lavish lifestyle. Considering that, why is the President comfortable meeting with him?
MR. CARNEY: Look, I think that it’s a little naïve to believe that the President of the United States should not meet with leaders who don’t meet all the standards that we would have for perfect governance, okay? This is an important relationship. Gabon has made some very significant and courageous votes in the United Nations in support of objectives that the United States has, including dealing with Iran and Libya and Côte d’Ivoire, including with issues that have to do with human rights.
President Bongo has made a number of reforms in Gabon, and Gabon is playing an increasingly important role, as I said, in the -- as a regional and global leader. This year’s human rights report tracks improvements in Gabon, and we will continue to push, as an administration and the President himself, for further progress on these issues.
So I just think, given the role that Gabon has played, given the fact that it’s now holding the presidency of the United Nations Security Council, it is in the U.S. interest for the President to have this meeting.
MR. CARNEY: Because that’s the President’s office.
Q I mean, it gives the impression we don’t get to ask a question and get it on the record and get it on TV.
MR. CARNEY: You know that this is -- how we do this differs depending on a visitor. This is not the first time we’ve had a visitor with stills only. Sometimes we have a full pool. Sometimes we have statements. Sometimes we have questions. So there’s no mystery here or objective here besides the tightness of the President’s schedule.
Q And if I can ask also just a health care question. A new poll out today, CNN/Opinion Research Corporation poll that shows most Americans continue to oppose the individual mandate -- 54 percent of them, as well as 57 percent of independents. How do you effectively implement a program that so many Americans are opposed to?
MR. CARNEY: Well, look, there’s no question, as we said before, that there was a lot of -- many, many millions of dollars spent to vilify the health care reform, the Affordable Care Act. And we are continuing to implement it. We believe that the benefits of the Affordable Care Act are already being felt in terms of protecting people with preexisting conditions and allowing them to maintain their insurance, allowing younger people to stay on their parents’ insurance up to the age of 26. And we are proceeding with implementation. We believe, as outside analysts believe, that it will expand significantly insurance in this country, access to insurance, and will also reduce costs significantly. So we’re proceeding at pace.
Q When do you think the numbers will change then in your favor? Do you think they will?
MR. CARNEY: Well, I think the issue is about implementing a reform that will expand insurance -- expand our private insurance system, first of all. The individual responsibility provision in this is necessary in order to allow those with preexisting conditions to stay and keep insurance. And we believe that the benefits are significant and that as the implementation proceeds, more and more Americans will recognize that and -- so that’s why we continue to implement it.
Q Hi, Jay. Thank you. The front page of The New York Times today had probably the most thorough analysis yet of Saudi Arabia basically pouring money into the economy to keep people just content enough -- fat and happy enough that they’re not out in the streets protesting. Does the President object to a country that has enough money basically using money to keep people from protesting in a country that has very little in the way of political rights, especially for women? Is he content that they’re pouring money into it instead of doing something about the lack of political rights?
MR. CARNEY: The President has made clear his support for political reform throughout the region, in every country. And he makes that clear in his discussions with leaders of Saudi Arabia as well as leaders of other countries in the region.
Q It’s the front page of The New York Times, Jay. I mean, I’m sure you read it. A fatwa was issued saying that street protests are contrary to Islam.
MR. CARNEY: -- and that violence should never be -- and we condemn the use of violence against civilian protesters.
Q So if they use money to avoid those protests, then the denial of political rights is okay?
MR. CARNEY: I’m not even sure what your question is. The people of the countries in the region express themselves by going to the streets. And if they feel passionately about their demand for greater freedom and greater rights, they will demonstrate accordingly.
Q And if you can buy them off from doing that, that’s okay?
MR. CARNEY: I’m not -- Chip, I don’t know what you want me to say here.
Q I don’t want you to -- I want you to answer the question.
MR. CARNEY: I think that we support -- we support the human rights of everyone in the region. We support the political liberalization and reform of every country in the region where it needs to take place, and we’ve made that clear.
Q On the budget negotiations, does -- and forgive me if you went over this yesterday, but does the recent -- does the economic slowdown, at least as indicated by the May figures, does that give -- does that strengthen the position of the White House negotiators in arguing for some spending and against the kind of deep budget cuts that the Republicans want?
MR. CARNEY: Well, first -- I’ll make two points on that. The President supports significant deficit reduction, $4 trillion over 10 to 12 years. So there is no doubt about the need to do that. So, point one.
Q -- made clear they want more than the Democrats in the room.
MR. CARNEY: Well, they want more spending cuts -- if you’re asking about -- well, I mean, again, setting aside what’s going on in the room, because we haven’t commented about any specifics in the negotiations, obviously the Republican proposal has deeper spending cuts in order to pay for significant tax cuts. The overall deficit reduction is the same, because we don’t believe that you need to change Medicare, end it as we know it, in order -- that you can reform entitlements in a way that protects our commitment to seniors and protects the Medicare program, because we don’t need to fund expansive tax cuts for wealthy Americans, that that is not the time to do it, because that is -- that contributes to our deficit. One of the reasons why we have a deficit as significant as we do is because of the un-funded, unpaid-for tax cuts of the previous administration.
So to go to your other point, I think that we have always taken the approach -- and I have made it clear here, the President has made it clear, and others have -- that the actions we take to reduce the deficit, if we do it in the right way, will be positive for economic growth. It will create confidence that we have our fiscal house in order. But we must not do anything that would arrest the positive growth that we’ve seen, arrest the growth in job creation that we’ve seen. And we must also do it in a way that allows us to continue to invest in areas that will create the kind of economic foundation that we need to compete in the 21st century -- in education, infrastructure, innovation.
So, I mean, that’s a comprehensive way of answering your question that we believe that we have to take a balanced approach and that we need to continue to invest in areas that allow us to grow and compete in the mid-term and long term.
Q Last question. Before the Vice President goes up for a meeting like this, does he sit down and get instructions from the President? And do they communicate by phone or indirectly during -- while the meeting is going on, during breaks? Is the President that involved?
Q -- does he talk to him right before and during the meetings?
MR. CARNEY: He talks to him regularly. I’m not sure of any conversations they’ve had during the meeting. I can imagine that might happen, but I’m not aware of any that have in previous meetings. But as you know well, the President and the Vice President are in meetings together all the time. They have their private lunches every week. They consult regularly. And then we have meetings here involving the Vice President and the economic team specifically on our budget and fiscal issues.
So he is speaking regularly with the Vice President about the progress of the talks, regularly with other members of his economic team who are involved in the talks, and his engagement is consistent and constant.
MR. CARNEY: I don’t remember the President suggesting it was a blip, Wendell.
Q -- interview with one of the Detroit stations.
MR. CARNEY: Well, look, first of all, I think we’ve made clear and the President has made clear that we do not look at this -- and I don’t remember you guys asking us if the -- if we -- if happy days were here again after we had three straight months of highly significant private sector job growth that totaled three quarters of a million jobs in three months.
And you didn’t hear us claiming that, because we take a long view around here. We believe the overall trend has been very positive -- 15 straight months private sector job growth; 2.1 million jobs -- more than double the net job creation the previous 10 years, I believe.
So obviously we don’t have a crystal ball here, but if you look at outside economic analysts, most still believe we will have steady economic growth in the second half of this year. We certainly -- we share that general belief. And we are working every day to ensure that it is true by pursuing the kinds of policies that we’ve talked about, not just the ones that have been implemented, including the payroll tax cut and others, but the President’s insistence on making permanent, for example, the research and development tax credit, making permanent and expanding that tax credit to create more jobs, and other measures he has taken to continue the kind of economic growth that we’ve seen.
Q On your long-term view, TIME Magazine has an article entitled, “Don’t Hold Your Breath,” suggesting that the recession may have caused a structural change in the economy that will significantly affect a generation looking for jobs right now. How confident are you that that’s not the case?
MR. CARNEY: Well, again, we have in plan -- we have in place an economic -- series of economic measures that we believe have directly brought about the emergence from the worst recession since the Great Depression. Remember the state we were in, this country was in economically when the President took the oath of office, and an economic plan that is built with a long view towards being highly competitive towards growth and job creation in the 21st century.
That’s why the President continues to push very hard for clean energy investment, because those kinds of jobs -- I mean, clean energy investment is so key for two reasons. One, this is an industry of the future that will create high-paying jobs in the United States that will stay here. And it will allow us to be more energy independent in the future, and insulate us from some of the kinds of energy price shocks that we’ve been experiencing in the last several months. So this is a very long view towards American competitiveness in the 21st century.
The President is going to North Carolina on Monday to visit a facility that creates energy-efficient LED lighting in the United States that is one of those areas of the economy that we believe will help fuel economic growth in this country and create jobs. And that’s part of his economic vision.
Q On a separate issue, the drone strike in Yemen, where does Yemen fit into this country’s counterterrorism effort?
MR. CARNEY: It’s a very important part of our counterterrorism effort. We’ve made clear -- we have not in any way been secretive about it, al Qaeda in the Arabian Peninsula is an important threat to the United States and to our interests around the world. And we have worked very closely with the Yemeni government to go after al Qaeda, because of the threat that it represents. It’s a very -- it’s a serious issue.
Q Jay, you say your long-term view is positive of the economy. Wendell cited some recent statistics that would sort of cast a cloud over that view. You say that many economists forecast growth for the coming quarters and the coming foreseeable future. But they’re also downgrading or reducing their forecasts for growth in GDP. And you say that ideas are being bandied about as well, including a payroll tax cut for employers, and other ideas are being bandied about. Don’t you risk appearing too sanguine about the economy? Are there arrows left in your policy quiver?
MR. CARNEY: Look, we are not remotely sanguine about the economy. This recession caused 8 million jobs to be lost. We have emerged -- we have created more than 2.1 million. But we are still in a hole that we need to continue to dig out of, caused by this recession. That’s why this President wakes up every morning thinking about what he can do to continue economic growth and expand -- and to continue job creation in this country.
But I don’t want to leave the impression at all that we’re sanguine. I was citing outside statistics that suggest that growth will continue this year and that job creation will continue. But we take nothing for granted. That’s why the President is pursuing deficit-reduction budget talks so aggressively; why he says it’s important to reach compromise and find common ground to get that done, because that can be a very positive development if we reach an agreement for our economy if it’s done right; and why he has proposed other measures, including some of the ones that I talked about, in terms of creating clean energy jobs by making the research and development tax credit permanent and expanding it; providing tax credits for electric vehicles; increasing exports through our free trade agreements with Panama, South Korea, and Colombia that will support tens of thousands of jobs in this country; and then, obviously, his infrastructure plan that he supports, improving our roads, rails, and runways, which is again a twofer, kind of like investing in clean energy, right? That you -- if you invest in infrastructure, that in and of itself is a job creator in the near term. But creating the kind of infrastructure foundation we need for the 21st century allows us to be more competitive globally. So these -- he is not resting at all, and we are not the least bit sanguine.
Q Okay. Change of subject -- Iraq. The President has said in a number of speeches, including political speeches and fundraisers, to much applause, that he has led a policy that will lead to the withdrawal of American troops from Iraq this year. Today his nominee to be the Secretary of Defense says that he has, “every confidence that Iraq will in fact ask for troops to stay there.” Does this perhaps cast doubt on the President’s plan and the advocacy of the Iraqi government to take care of its own citizens without the help of American combat troops?
MR. CARNEY: Well, first, let me clarify what the President has said in the past. The President said he would end the war responsibly in Iraq and he would end our combat mission. That happened last August. We ended our combat mission. Since the President came into office we have withdrawn over 100,000 U.S. troops from Iraq.
And in accordance with the status of forces agreements, the SOFA that we have with the Iraqi government, signed by the previous administration, we will withdraw all of our troops from Iraq by the end of this year. We are on track to do that.
We have also said -- because we want an enduring relationship, and we will have an enduring relationship with Iraq and commitment to Iraq -- that if the Iraqi government wants to -- it comes to us with a proposal for some sort of continued presence of some amount of American troops in Iraq, we would certainly consider it. But that has not happened, and we are on track to withdraw our forces by the end of the year.
Obviously if they come to us -- and we consult with the Iraqi government all the time -- we will consider that. But we have made no decision to maintain U.S. forces in Iraq.
MR. CARNEY: We have always said that we -- of course, we would consider -- I mean, it depends what the conversations are like and what the proposals are like. But let’s be clear, the combat mission ended in Iraq last summer. We have not been in the lead in combat, in any kind of combat role for even longer than that in Iraq. And we have withdrawn over 100,000 U.S. forces since the President came into office, and I think he’s been enormously clear about that.
We have also been clear that we need to and we will continue a significant relationship with Iraq as a partner in the region, and that will largely be civilian. We have a large presence, civilian presence there already. And we I think announced opening some consulates there. We have an important relationship with Iraq.
As for our military presence, which, again, is drawing down significantly and has drawn down significantly, we have said now for a long time that we would consider a request by the Iraqi government for some sort of continued presence beyond the agreed departure date of December 31st.
MR. CARNEY: Well, I didn’t say that was likely at all. I said that we would consider it.
Q Well, the Secretary did, or really the Director did, soon-to-be Secretary, said he had every confidence that that request was coming.
MR. CARNEY: Well, we’ll see. And we’ll see what the request is and obviously we’ll consider it if it comes. But as of now, we have not received that request and we are fulfilling our obligations according to the agreement signed with the Iraqi government to withdraw our forces by the end of the year.
Q Earlier this week The Washington Post ran a profile of Treasury Secretary Tim Geithner and focused on his interactions with the President and the President’s political team. And I’m wondering if you found that that article was accurate.
MR. CARNEY: Besides having great respect and admiration and fondness for our Treasury Secretary, I’m not sure -- you’d have to be more specific.
Q Okay. It’s a good read.
MR. CARNEY: I read it, but what do you mean -- what are you asking exactly?
Q Specifically, it depicts the Treasury Secretary as being much more aggressive on deficit cutting in terms of a timetable than the President’s political team, and it depicts the President as always deferring to the political team over his economic team.
MR. CARNEY: Having clarified, which I appreciate, I will respond this way. One, the internal recommendations that are made to the President are not things that I will discuss here. Two, the President sets the policy of this administration. Three, the President is extremely committed to deficit reduction, and I think that has been proven again and again, most recently by the continuing resolution negotiation that avoided -- averted a government shutdown earlier this year with Republicans, that cut the deficit significantly in this fiscal year; by the Affordable Care Act, which is a long-term deficit reducer of substantial size; and by his -- the plan that he outlined at George Washington University which commits him to $4 trillion in deficit reduction over 10 to 12 years.
So I don’t think the President could be any more clear about his commitment to responsible deficit reduction as part of an overall economic approach that grows the economy and creates jobs.
Q And second, just a clarification. When you talk about the President’s long-term vision for the economy and industries that in the future will create jobs, I mean, what kind of message does that take to the kids getting out of college today that can’t find any jobs anywhere? I mean, do they wait -- are they supposed to be patient and wait for this?
MR. CARNEY: No. No, we are doing everything we can today to help this economy grow, to spur private sector investment and hiring. I’m sure you’re aware of the fact that the President has cut taxes for small businesses 17 times since he took office. Why? Because private -- because small businesses are the engine of economic growth in this country and they’re the place where most Americans will find work as small businesses expand.
So the fact that he has a long vision about where this country needs to go economically and how we should position ourselves for the 21st century I think is a commendable thing. He is also very focused on the short term, as he was when he came into office and encountered the largest deficits we’ve ever seen and the worst recession we’ve seen since the 1930s, where he took immediate action to deal with an immediate problem.
Q Jay, if Gabon was not president of the U.N. Security Council, would the President be meeting with President Bongo today?
MR. CARNEY: Look, that’s a hypothetical. I don’t know. The fact is I think I’ve made clear that we understand concerns about corruption. We believe that President Bongo has made a number of steps towards positive reforms in that country, which we support. The human rights report tracks improvements in Gabon, which we also think are a good thing. And they have been a strong partner on human rights issues at the U.N. Security Council and Human Rights Council, particularly on Côte d’Ivoire and Libya.
So for those reasons, including the one that you mentioned, this meeting was worth having.
MR. CARNEY: No, no, I guess what I meant was that there is -- I probably reversed my language there -- is that at different meetings -- it’s usually driven by how much time the President has in his schedule. And when it’s extremely tight, we have -- we do less. And when we have more time, we do more. But he’s meeting in the Oval Office. There’s going to be a photograph. There’s a pool spray. I mean, in that sense, there is no mystery here about this meeting.
Q Who gets to pick the fourth on the foursome, on the golf? Is that Boehner or the President?
MR. CARNEY: Are you available?
Q I’m a lousy golfer, no.
Q I mean, protocol question.
MR. CARNEY: We could probably have a lottery. I believe that if the Vice President is playing with the President, then the Speaker will pick the fourth.
Q And then today the report is due, the Jeff Zients report on reorganization. Has the President seen the report?
MR. CARNEY: I don’t have any updates on that. Obviously there are -- there’s a lot of work happening in that arena, but I don’t have any updates in terms of the President’s decision-making process on that.
Q Thanks, Jay. I know that no decision has been made on the troop drawdown from Afghanistan yet, but the President and today Panetta said it will be significant. Gates just said it will be modest. Where is the overlap between those two descriptions?
MR. CARNEY: This is the President’s decision. And I think it’s important, again, to step back and look at the fact that the President has been leading monthly meetings on his AfPak policy in the Situation Room with all his national security principals, commanders out in the field, since December of 2009.
So this is not -- while he will announce soon, as he said, his decision about the pace and slope -- size, pace, slope of the drawdown that begins in July 2011, as prescribed in his policy, this is not something he’s suddenly turning to now. He has been engaged and focused on this for -- on a regular basis and has meetings -- monthly meetings that he chairs in the Situation Room; weekly meetings with Secretary Gates and Secretary Clinton; regular meetings with obviously other national security principals and national security staff.
So he has not made a decision. When he does, he will announce it.
Q But if the principals are going to talk about it in what seem to be flatly contradictory terms, what are the American people to take away from that?
MR. CARNEY: I think the American people will take away from the President’s decision the fact that he says what he will do and he does what he says he’ll do. And that is, back in December of 2009, as part of the policy and approach and strategy that he laid out for Afghanistan, he said that we would, after ramping up our troop presence in Afghanistan, adding another 30,000 troops, in addition to the troops he had added before, that we would begin a transition and we would begin to draw down our troops. And that the size and pace of the drawdown would depend on conditions on the ground. We are now at that point. But this is not -- this is a milestone in the implementation of a policy that’s been in place since December of 2009. And I think what Americans will take away from the announcement is the policy is on track; the President is doing what he said he would do. He has very clear objectives in Afghanistan and in the AfPak region, which is to disrupt, dismantle, and ultimately defeat al Qaeda; to halt the Taliban’s momentum in order to stabilize Afghanistan so that it does not again become a haven for al Qaeda. And those -- there has been great progress towards those objectives, but this is the policy.
Remember that, as articulated by NATO in Lisbon, that the transition to Afghan lead continues through 2014, at which point the Afghans will have full lead over security. That transition process, therefore, will take place over a fair amount of time.
Q Thanks, Jay. Senator Kerry since Osama bin Laden’s killing has said a couple of things about the policy -- right after his death, $10 billion a month is entirely too much to be spending on this. Yesterday, the report came out very critical of the aid part of it. And he used the opportunity of Ambassador Crocker’s hearing yesterday to again say the whole policy strategy, in the terms we’re using, needs to be rethought. What would you say to that?
MR. CARNEY: Well, I didn’t -- I have not read the transcript or focused in detail on what Senator Kerry said yesterday in the Crocker hearing.
The President obviously is very mindful of how we use our resources and setting priorities for how we use our resources. The President made very clear when he announced his strategy in Afghanistan in December of 2009 what the objectives were, what they were not, and his timeframe for surging troops and beginning to draw down troops, and the benchmarks for -- that needed to be met as we transferred security lead over to the Afghans bit by bit.
The fact is that we believe we are making progress. This is obviously a tough situation. Afghanistan is -- faces a lot of challenges as a country. But as the President said the other day, we have achieved some important goals, including eliminating Osama bin Laden, including the success we’ve had against al Qaeda in the region, including halting the Taliban’s momentum.
And so he will, when he announces the decision he makes in terms of the drawdown, I’m sure he will also put it in the context of the implementation of the strategy he put in place in December of 2009.
Helene -- no? Yes, sir.
Q Thank you, Jay. Many of the Republican House members who met with the President here on June the 1st, came out with glowing comments about the meeting. And there were two questions I had about it, based on my conversations with the lawmakers. One, did the President, in his exchange with Congressman Ryan, offer the administration’s own plan on Medicare reform? And does it have a plan?
MR. CARNEY: He did not offer a plan on Medicare reform in that meeting, no. I think we’ve made clear what the President’s plan on Medicare reform is. It’s part of his proposal for his 10- to 12-year budget -- deficit-reduction plan of $4 trillion. So we have -- there are reforms to entitlements, including Medicare, in the Affordable Care Act. There are more reforms that strengthen and improve Medicare in a proposal he’s put forward for his long-term deficit-reduction plan.
MR. CARNEY: I am not going to comment on exchanges or word-for-word quotations. I think the President has articulated a position on that in the past. I think it remains. But I don’t have a comment on that exchange.
Q So he favors medical liability reform?
MR. CARNEY: Well, I think he has laid out what he favors in that arena in the past.
MR. CARNEY: I’m going to jump around, Keith, if I could. I’ve been calling on you a lot lately and I’ll get back to you.
Q Jay, real quickly. Briefly, on the deficit talks today, is there a way to quantify whether they’ve reached a halfway point, or how many trillion dollars out of the four that the President has proposed over 10 to 12 years? Has the President given a quantitative assessment of how the talks are going?
MR. CARNEY: Well, I think he’s given a very detailed assessment of how the talks are going. And I’m sure that numbers are discussed -- I know that numbers are discussed in the assessments that he’s given, but they won’t be discussed from here by me.
Q And does the Vice President bring in some of those balancing elements that the President talks about, where it’s not actually deficit cutting, but spending that he feels is productive? Are there spending items on the table as well that the President is introducing, or revenue?
MR. CARNEY: Absolutely. We have made clear that everything is on the table, that we -- our approach, as outlined by the President in his speech at George Washington University, includes spending cuts to non-defense discretionary spending. It includes savings in defense spending. It includes savings out of the tax code, as well as savings out of entitlements. It also includes -- I mean, again, deficit reduction is not a grand goal in and of itself. It serves a purpose here, which is to get our priorities in order and our fiscal house in order precisely so we can do the things we need to do to ensure that the economy grows and to ensure that we create jobs so that every American who’s looking for a job can find one.
And that includes, as part of setting your priorities, cutting where you can, spending where you must; it includes investing in things like education, innovation, and infrastructure so that we can build the platform we need to build so that we can compete in the 21st century and win the future.
Q You said it with such conviction.
Q Does the U.S. government have any information to corroborate the claims by Syrian refugees in Turkey that Iranian troops have been actively firing on Syrian protests?
MR. CARNEY: I don’t have anything I can say from here about that.
Q The President’s message on the economy has pretty consistently been that recovery takes time; there will be bumps along the road. Do you expect him to retool this message, given the fact that unemployment is at 9.1 percent?
MR. CARNEY: Well, unemployment has been high since he took office. And we have worked aggressively to create jobs and bring down unemployment. I don’t think -- the message, as you described it -- that we are still emerging from the worst recession since the Great Depression, that there will be days with bad economic data and there will be days with surprisingly good economic data, as there has been recently.
So I don’t think -- I think the message has been consistent and will continue to be. So I don’t -- no, I don’t expect a change.
Q Also, does he have any reaction to al Qaeda’s number-two coming out yesterday with a tape eulogizing the death of Osama bin Laden?
MR. CARNEY: I haven’t discussed that with him. But I think it is a reminder that while we successfully eliminated Osama bin Laden, that the al Qaeda threat continues and we take that very seriously, and obviously remain highly vigilant in our efforts to combat terrorism and terrorists around the globe.
MR. CARNEY: I don’t have anything in terms of legal action. I don’t -- I’m not aware of that at all. I would simply say that it’s -- our approach is clear. The President made it clear when he gave his speech at the border not that long ago about the need for comprehensive immigration reform. And we continue to pursue that and believe that a bipartisan consensus can be found to achieve that comprehensive reform.
Q Jay, yes, yesterday at the community college, the President’s remarks, there wasn’t anything in there about Joining Forces, and I haven’t heard him say much about that. He’s been -- the Skills for America’s Future has been really prominent, but Joining Forces has kind of fallen a little bit.
I didn’t know if there -- if those two initiatives are separated somewhat or if the White House views them as one initiative. How is that being viewed, especially in light of the drawdown?
MR. CARNEY: I think we pursue both initiatives. I think he was highlighting specifically the program that allows bringing together manufacturers and those who are getting training through community colleges yesterday, so I think we’re pursuing both and think both are very important.
Q Is there going to be a focus on the modestly significant number of people coming back in July?
MR. CARNEY: Look, we have -- as we discussed earlier, we have troops returning from Iraq regularly and have. We obviously will begin to have troops returning from Afghanistan. Troops rotate out all the time, even when we’re maintaining overall troop levels in different arenas, so this is an important focus of this administration.
We are now in our tenth year of these endeavors, these wars overseas, and an enormous number of Americans have served overseas and as veterans deserve our support. And this administration is very focused and very committed to that.
MR. CARNEY: Yes, sir. Last one.
Q One of the leads, the AP leads coming out of the UAE is that Secretary Clinton said Qaddafi associates, according to this, are reaching out to negotiate a possible transition for Qaddafi. Is there, in fact, something going on now? We’ve heard a lot from Secretary Clinton before on these aspects, and some from you as well. What are we to make of this?
MR. CARNEY: What I think you should make of it is that it is more evidence of the fact that the policies that we have pursued unilaterally, the policies we have pursued with our partners are having the desired effect, which is putting the squeeze on the Qaddafi regime; making it clear to those around him that their days are numbered, that they are making a very fateful choice if they stay with Qaddafi because they will not control Libya in the future.
And we’ve seen a number of defections, and I believe -- I’m not trying to hint at any news here, but I’m confident there’ll be more because the United States and our partners remain committed to the policies that we’ve been implementing now for a number of months.
Amerge is indicated for the acute treatment of migraine with or without aura in adults.
Use only if a clear diagnosis of migraine has been established. If a patient has no response to the first migraine attack treated with Amerge, reconsider the diagnosis of migraine before Amerge is administered to treat any subsequent attacks.
Amerge is not indicated for the prevention of migraine attacks.
Safety and effectiveness of Amerge have not been established for cluster headache.
The recommended dose of Amerge is 1 mg or 2.5 mg.
If the migraine returns or if the patient has only partial response, the dose may be repeated once after 4 hours, for a maximum dose of 5 mg in a 24-hour period.
The safety of treating an average of more than 4 migraine attacks in a 30‑day period has not been established.
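The dosing rules above reduce to simple arithmetic: an initial dose of 1 mg or 2.5 mg, at most one repeat dose no sooner than 4 hours later, and no more than 5 mg in any 24-hour period. As an illustration only (this is a hypothetical sketch of the labeled limits, not clinical software; the function name and data layout are invented for the example), the checks can be written as:

```python
from datetime import datetime, timedelta

MAX_MG_PER_24H = 5.0      # labeled maximum total dose in a 24-hour period
MIN_REPEAT_GAP_H = 4      # a dose may be repeated once, no sooner than 4 hours later
MAX_DOSES_PER_ATTACK = 2  # initial dose plus at most one repeat

def may_repeat_dose(prior_doses, now, next_dose_mg):
    """prior_doses: list of (datetime, mg) pairs already taken for this attack.
    Returns True only if all three labeled limits would still be respected."""
    if len(prior_doses) >= MAX_DOSES_PER_ATTACK:
        return False
    window_start = now - timedelta(hours=24)
    mg_in_window = sum(mg for t, mg in prior_doses if t > window_start)
    if mg_in_window + next_dose_mg > MAX_MG_PER_24H:
        return False
    if prior_doses and now - prior_doses[-1][0] < timedelta(hours=MIN_REPEAT_GAP_H):
        return False
    return True
```

For example, a 2.5-mg repeat 4 hours after an initial 2.5-mg dose passes all three checks (total 5 mg), while a repeat at 3 hours, or any third dose for the same attack, does not.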
Amerge is contraindicated in patients with severe renal impairment (creatinine clearance: <15 mL/min) because of decreased clearance of the drug [see Contraindications (4), Use in Specific Populations (8.6), Clinical Pharmacology (12.3)].
In patients with mild to moderate renal impairment, the maximum daily dose should not exceed 2.5 mg over a 24‑hour period and a 1-mg starting dose is recommended [see Use in Specific Populations (8.6), Clinical Pharmacology (12.3)].
Amerge is contraindicated in patients with severe hepatic impairment (Child-Pugh Grade C) because of decreased clearance [see Contraindications (4), Use in Specific Populations (8.7), Clinical Pharmacology (12.3)].
In patients with mild or moderate hepatic impairment (Child-Pugh Grade A or B), the maximum daily dose should not exceed 2.5 mg over a 24-hour period and a 1-mg starting dose is recommended [see Use in Specific Populations (8.7), Clinical Pharmacology (12.3)].
Amerge is contraindicated in patients with ischemic or vasospastic CAD. There have been rare reports of serious cardiac adverse reactions, including acute myocardial infarction, occurring within a few hours following administration of Amerge. Some of these reactions occurred in patients without known CAD. Amerge may cause coronary artery vasospasm (Prinzmetal’s angina) even in patients without a history of CAD.
Perform a cardiovascular evaluation in triptan-naive patients who have multiple cardiovascular risk factors (e.g., increased age, diabetes, hypertension, smoking, obesity, strong family history of CAD) prior to receiving Amerge. If there is evidence of CAD or coronary artery vasospasm, Amerge is contraindicated. For patients with multiple cardiovascular risk factors who have a negative cardiovascular evaluation, consider administering the first dose of Amerge in a medically supervised setting and performing an electrocardiogram (ECG) immediately following administration of Amerge. For such patients, consider periodic cardiovascular evaluation in intermittent long-term users of Amerge.
Life-threatening disturbances of cardiac rhythm, including ventricular tachycardia and ventricular fibrillation leading to death, have been reported within a few hours following the administration of 5-HT1 agonists. Discontinue Amerge if these disturbances occur. Amerge is contraindicated in patients with Wolff-Parkinson-White syndrome or arrhythmias associated with other cardiac accessory conduction pathway disorders.
Sensations of tightness, pain, and pressure in the chest, throat, neck, and jaw commonly occur after treatment with Amerge and are usually non-cardiac in origin. However, perform a cardiac evaluation if these patients are at high cardiac risk. 5-HT1 agonists, including Amerge, are contraindicated in patients with CAD and those with Prinzmetal’s variant angina.
Cerebral hemorrhage, subarachnoid hemorrhage, and stroke have occurred in patients treated with 5-HT1 agonists, and some have resulted in fatalities. In a number of cases, it appears possible that the cerebrovascular events were primary, the 5-HT1 agonist having been administered in the incorrect belief that the symptoms experienced were a consequence of migraine when they were not. Also, patients with migraine may be at increased risk of certain cerebrovascular events (e.g., stroke, hemorrhage, TIA). Discontinue Amerge if a cerebrovascular event occurs.
Before treating headaches in patients not previously diagnosed as migraineurs, and in migraineurs who present with symptoms atypical for migraine, exclude other potentially serious neurological conditions. Amerge is contraindicated in patients with a history of stroke or TIA.
Amerge may cause non-coronary vasospastic reactions, such as peripheral vascular ischemia, gastrointestinal vascular ischemia and infarction (presenting with abdominal pain and bloody diarrhea), splenic infarction, and Raynaud’s syndrome. In patients who experience symptoms or signs suggestive of non-coronary vasospasm reaction following the use of any 5-HT1 agonist, rule out a vasospastic reaction before receiving additional doses of Amerge.
Reports of transient and permanent blindness and significant partial vision loss have been reported with the use of 5-HT1 agonists. Since visual disorders may be part of a migraine attack, a causal relationship between these events and the use of 5-HT1 agonists has not been clearly established.
Serotonin syndrome may occur with Amerge, particularly during coadministration with selective serotonin reuptake inhibitors (SSRIs), serotonin norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), and monoamine oxidase (MAO) inhibitors [see Drug Interactions (7.3)]. Serotonin syndrome symptoms may include mental status changes (e.g., agitation, hallucinations, coma), autonomic instability (e.g., tachycardia, labile blood pressure, hyperthermia), neuromuscular aberrations (e.g., hyperreflexia, incoordination), and/or gastrointestinal symptoms (e.g., nausea, vomiting, diarrhea). The onset of symptoms usually occurs within minutes to hours of receiving a new or a greater dose of a serotonergic medication. Discontinue Amerge if serotonin syndrome is suspected.
There have been reports of anaphylaxis and hypersensitivity reactions, including angioedema, in patients receiving Amerge. Such reactions can be life threatening or fatal. In general, anaphylactic reactions to drugs are more likely to occur in individuals with a history of sensitivity to multiple allergens. Amerge is contraindicated in patients with a history of hypersensitivity reaction to Amerge.
In a long-term open-label trial where patients were allowed to treat multiple migraine attacks for up to 1 year, 15 patients (3.6%) discontinued treatment due to adverse reactions.
In controlled clinical trials, the most common adverse reactions were paresthesias, dizziness, drowsiness, malaise/fatigue, and throat/neck symptoms, which occurred at a rate of at least 2% and at least 2 times the placebo rate.
Table 1 lists the adverse reactions that occurred in 5 placebo-controlled clinical trials of approximately 1,752 exposures to placebo and Amerge in adult patients with migraine. Only reactions that occurred at a frequency of 2% or more in groups treated with Amerge 2.5 mg and that occurred at a frequency greater than the placebo group in the 5 pooled trials are included in Table 1.
The incidence of adverse reactions in controlled clinical trials was not affected by age or weight of the patients, duration of headache prior to treatment, presence of aura, use of prophylactic medications, or tobacco use. There were insufficient data to assess the impact of race on the incidence of adverse reactions.
Ergot-containing drugs have been reported to cause prolonged vasospastic reactions. Because these effects may be additive, use of ergotamine-containing or ergot-type medications (like dihydroergotamine or methysergide) and Amerge within 24 hours of each other is contraindicated.
Concomitant use of other 5-HT1B/1D agonists (including triptans) within 24 hours of treatment with Amerge is contraindicated because the risk of vasospastic reactions may be additive.
Cases of serotonin syndrome have been reported during coadministration of triptans and SSRIs, SNRIs, TCAs, and MAO inhibitors [see Warnings and Precautions (5.7)].
There are no adequate data on the developmental risk associated with use of Amerge in pregnant women. Data from a prospective pregnancy exposure registry and epidemiological studies of pregnant women have documented outcomes in women exposed to naratriptan during pregnancy; however, due to small sample sizes, no definitive conclusions can be drawn regarding the risk of birth defects following exposure to naratriptan [see Data]. In animal studies, naratriptan produced developmental toxicity (including embryolethality and fetal abnormalities) when administered to pregnant rats and rabbits. The lowest doses producing evidence of developmental toxicity in animals were associated with plasma exposures 2.5 (rabbit) to 11 (rat) times that in humans at the maximum recommended daily dose (MRDD) [see Data].
In the U.S. general population, the estimated background risk of major birth defects and of miscarriage in clinically recognized pregnancies is 2% to 4% and 15% to 20%, respectively. The reported rate of major birth defects among deliveries to women with migraine ranged from 2.2% to 2.9% and of miscarriage was 17%, which were similar to rates reported in women without migraine.
Several studies have suggested that women with migraine may be at increased risk of preeclampsia during pregnancy.
Human Data: The numbers of exposed pregnancy outcomes accumulated during the Sumatriptan/Naratriptan/Treximet® (sumatriptan and naproxen sodium) Pregnancy Registry, a population-based international prospective study that collected data from October 1997 to September 2012, and smaller observational studies, were insufficient to define a level of risk for naratriptan in pregnant women. The Registry documented outcomes of 57 infants and fetuses exposed to naratriptan during pregnancy (52 exposed during the first trimester and 5 exposed during the second trimester). The occurrence of major birth defects (excluding fetal deaths and induced abortions without reported defects and all spontaneous pregnancy losses) during first-trimester exposure to naratriptan was 2.2% (1/46 [95% CI: 0.1% to 13.0%]) and during any trimester of exposure was 2.0% (1/51 [95% CI: 0.1% to 11.8%]). Seven infants were exposed to both naratriptan and sumatriptan in utero, and one of these infants with first-trimester exposure was born with a major birth defect (ventricular septal defect). The sample size in this study had 80% power to detect at least a 3.8- to 4.6-fold increase in the rate of major malformations.
In a study using data from the Swedish Medical Birth Register, women who used triptans or ergots during pregnancy were compared with women who did not. Of the 22 births with first-trimester exposure to naratriptan, one infant was born with a malformation (congenital deformity of the hand).
Animal Data: When naratriptan was administered to pregnant rats during the period of organogenesis at doses of 10, 60, or 340 mg/kg/day, there was a dose-related increase in embryonic death; incidences of fetal structural variations (incomplete/irregular ossification of skull bones, sternebrae, ribs) were increased at all doses. The maternal plasma exposures (AUC) at these doses were approximately 11, 70, and 470 times the exposure in humans at the MRDD. The high dose was maternally toxic, as evidenced by decreased maternal body weight gain during gestation. A no-effect dose for developmental toxicity in rats exposed during organogenesis was not established.
When naratriptan was administered orally (1, 5, or 30 mg/kg/day) to pregnant Dutch rabbits throughout organogenesis, the incidence of a specific fetal skeletal malformation (fused sternebrae) was increased at the high dose, the incidence of fetal variations (major blood vessel variations, supernumerary ribs, incomplete skeletal ossification) was increased at the mid and high doses, and embryonic death was increased at all doses (4, 20, and 120 times, respectively, the MRDD on a body surface area basis). Maternal toxicity (decreased body weight gain) was evident at the high dose. In a similar study in New Zealand White rabbits (1, 5, or 30 mg/kg/day throughout organogenesis), decreased fetal weights and increased incidences of fetal skeletal variations were observed at all doses (maternal exposures equivalent to 2.5, 19, and 140 times exposure in humans receiving the MRDD), while maternal body weight gain was reduced at 5 mg/kg or greater. A no-effect dose for developmental toxicity in rabbits exposed during organogenesis was not established.
When female rats were treated orally with naratriptan (10, 60, or 340 mg/kg/day) during late gestation and lactation, offspring behavioral impairment (tremors) and decreased offspring viability and growth were observed at doses of 60 mg/kg or greater, while maternal toxicity occurred only at the highest dose. Maternal exposures at the no-effect dose for developmental effects in this study were approximately 11 times the exposure in humans receiving the MRDD.
There are no data on the presence of naratriptan in human milk, the effects of naratriptan on the breastfed infant, or the effects of naratriptan on milk production. Naratriptan is present in rat milk.
The developmental and health benefits of breastfeeding should be considered along with the mother’s clinical need for Amerge and any potential adverse effects on the breastfed infant from naratriptan or from the underlying maternal condition.
Safety and effectiveness in pediatric patients have not been established. Therefore, Amerge is not recommended for use in patients younger than 18 years of age.
One controlled clinical trial evaluated Amerge (0.25 to 2.5 mg) in 300 adolescent migraineurs aged 12 to 17 years who received at least 1 dose of Amerge for an acute migraine. In this study, 54% of the patients were female and 89% were Caucasian. There were no statistically significant differences between any of the treatment groups. The headache response rates at 4 hours (n) were 65% (n = 74), 67% (n = 78), and 64% (n = 70) for placebo, 1-mg, and 2.5-mg groups, respectively. This trial did not establish the efficacy of Amerge compared with placebo in the treatment of migraine in adolescents. Adverse reactions observed in this clinical trial were similar in nature to those reported in clinical trials in adults.
Clinical trials of Amerge did not include sufficient numbers of patients aged 65 and older to determine whether they respond differently from younger patients. Other reported clinical experience has not identified differences in responses between the elderly and younger patients. In general, dose selection for an elderly patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function and of concomitant disease or other drug therapy.
Naratriptan is known to be substantially excreted by the kidney, and the risk of adverse reactions to this drug may be greater in elderly patients who have reduced renal function. In addition, elderly patients are more likely to have decreased hepatic function, they are at higher risk for CAD, and blood pressure increases may be more pronounced in the elderly.
A cardiovascular evaluation is recommended for geriatric patients who have other cardiovascular risk factors (e.g., diabetes, hypertension, smoking, obesity, strong family history of CAD) prior to receiving Amerge [see Warnings and Precautions (5.1)].
The use of Amerge is contraindicated in patients with severe renal impairment (creatinine clearance: <15 mL/min) because of decreased clearance of the drug. In patients with mild to moderate renal impairment, the recommended starting dose is 1 mg, and the maximum daily dose should not exceed 2.5 mg over a 24-hour period [see Dosage and Administration (2.2), Clinical Pharmacology (12.3)].
The use of Amerge is contraindicated in patients with severe hepatic impairment (Child-Pugh Grade C) because of decreased clearance. In patients with mild or moderate hepatic impairment (Child-Pugh Grade A or B), the recommended starting dose is 1 mg, and the maximum daily dose should not exceed 2.5 mg over a 24-hour period [see Dosage and Administration (2.3), Clinical Pharmacology (12.3)].
Adverse reactions observed after overdoses of up to 25 mg included increases in blood pressure resulting in lightheadedness, neck tension, tiredness, and loss of coordination. Also, ischemic ECG changes likely due to coronary artery vasospasm have been reported.
The elimination half-life of naratriptan is about 6 hours [see Clinical Pharmacology (12.3)]; therefore monitoring of patients after overdose with Amerge should continue for at least 24 hours or while symptoms or signs persist. There is no specific antidote to naratriptan. It is unknown what effect hemodialysis or peritoneal dialysis has on the serum concentrations of naratriptan.
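The recommended 24-hour observation window corresponds to four elimination half-lives. A quick first-order-decay sketch (assuming simple exponential elimination, which is an idealization) shows why little drug remains by then:

```python
HALF_LIFE_H = 6.0  # naratriptan elimination half-life in hours (from the label)

def fraction_remaining(hours: float, half_life: float = HALF_LIFE_H) -> float:
    """Fraction of drug remaining assuming first-order (exponential) elimination."""
    return 0.5 ** (hours / half_life)

# The 24-hour monitoring window spans four half-lives, by which point
# only about 6% of the absorbed dose remains in circulation.
print(round(fraction_remaining(24), 4))  # → 0.0625
```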
The empirical formula is C17H25N3O2S•HCl, representing a molecular weight of 371.93. Naratriptan hydrochloride is a white to pale yellow powder that is readily soluble in water.
Each Amerge tablet for oral administration contains 1.11 or 2.78 mg of naratriptan hydrochloride, equivalent to 1 or 2.5 mg of naratriptan, respectively. Each tablet also contains the inactive ingredients croscarmellose sodium; hypromellose; lactose; magnesium stearate; microcrystalline cellulose; triacetin; and, for coloring, titanium dioxide, iron oxide yellow (2.5-mg tablet only), and indigo carmine aluminum lake (FD&C Blue No. 2) (2.5-mg tablet only).
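As a consistency check, the salt-to-base equivalence above follows directly from the molecular weights. The hydrochloride weight of 371.93 is given in the label; the free-base weight of ~335.47 is our own figure, computed from the empirical formula C17H25N3O2S. A minimal sketch:

```python
# Verify the salt-to-base conversion implied by the tablet strengths.
MW_BASE = 335.47   # naratriptan free base, g/mol (computed from C17H25N3O2S; our figure)
MW_SALT = 371.93   # naratriptan hydrochloride, g/mol (from the label)

def salt_equivalent(base_mg: float) -> float:
    """Milligrams of hydrochloride salt that deliver `base_mg` of free base."""
    return base_mg * MW_SALT / MW_BASE

print(round(salt_equivalent(1.0), 2))  # → 1.11 mg, matching the 1-mg tablet
print(round(salt_equivalent(2.5), 2))  # → 2.77 mg, close to the stated 2.78 mg
```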
Naratriptan binds with high affinity to human cloned 5-HT1B/1D receptors. Migraines are likely due to local cranial vasodilatation and/or to the release of sensory neuropeptides (including substance P and calcitonin gene-related peptide) through nerve endings in the trigeminal system. The therapeutic activity of Amerge for the treatment of migraine headache is thought to be due to the agonist effects at the 5-HT1B/1D receptors on intracranial blood vessels (including the arterio-venous anastomoses) and sensory nerves of the trigeminal system, which result in cranial vessel constriction and inhibition of pro-inflammatory neuropeptide release.
In the anesthetized dog, naratriptan has been shown to reduce the carotid arterial blood flow with little or no effect on arterial blood pressure or total peripheral resistance. While the effect on blood flow was selective for the carotid arterial bed, increases in vascular resistance of up to 30% were seen in the coronary arterial bed. Naratriptan has also been shown to inhibit trigeminal nerve activity in rat and cat.
In 10 subjects with suspected CAD undergoing coronary artery catheterization, there was a 1% to 10% reduction in coronary artery diameter following subcutaneous injection of 1.5 mg of naratriptan [see Contraindications (4)].
Naratriptan is well absorbed, with about 70% oral bioavailability. Following administration of a 2.5-mg tablet, the peak concentrations are obtained in 2 to 3 hours. After administration of 1- or 2.5-mg tablets, the Cmax is somewhat (about 50%) higher in women (not corrected for milligram-per-kilogram dose) than in men. During a migraine attack, absorption is slower, with a Tmax of 3 to 4 hours. Food does not affect the pharmacokinetics of naratriptan. Naratriptan displays linear kinetics over the therapeutic dose range.
In vitro, naratriptan is metabolized by a wide range of cytochrome P450 isoenzymes into a number of inactive metabolites.
Naratriptan is predominantly eliminated in urine, with 50% of the dose recovered unchanged and 30% as metabolites in urine. The mean elimination half-life of naratriptan is 6 hours. The systemic clearance of naratriptan is 6.6 mL/min/kg. The renal clearance (220 mL/min) exceeds glomerular filtration rate, indicating active tubular secretion. Repeat administration of naratriptan tablets does not result in drug accumulation.
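The clearance figures above can be cross-checked against each other. Scaling the per-kilogram systemic clearance to a nominal 70-kg adult (the body weight is our assumption, not the label's) puts renal clearance at roughly half of total clearance, consistent with about 50% of the dose being recovered unchanged in urine. A sketch:

```python
# Consistency check on the label's clearance figures.
SYSTEMIC_CL_ML_MIN_KG = 6.6   # systemic clearance, mL/min/kg (from the label)
RENAL_CL_ML_MIN = 220.0       # renal clearance, mL/min (from the label)
BODY_WEIGHT_KG = 70.0         # assumed nominal adult; not stated in the label

total_cl = SYSTEMIC_CL_ML_MIN_KG * BODY_WEIGHT_KG  # total systemic clearance, mL/min
renal_fraction = RENAL_CL_ML_MIN / total_cl

print(round(total_cl))           # → 462 mL/min
print(round(renal_fraction, 2))  # → 0.48, consistent with ~50% excreted unchanged
```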
Age: A small decrease in clearance (approximately 26%) was observed in healthy elderly subjects (65 to 77 years) compared with younger subjects, resulting in slightly higher exposure [see Use in Specific Populations (8.5)].
Renal Impairment: Clearance of naratriptan was reduced by 50% in subjects with moderate renal impairment (creatinine clearance: 18 to 39 mL/min) compared with the normal group. Decrease in clearances resulted in an increase of mean half-life from 6 hours (healthy) to 11 hours (range: 7 to 20 hours). The mean Cmax increased by approximately 40%. The effects of severe renal impairment (creatinine clearance: ≤15 mL/min) on the pharmacokinetics of naratriptan have not been assessed [see Contraindications (4)].
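Under first-order kinetics with an unchanged volume of distribution (a simplifying assumption for illustration), half-life scales inversely with clearance, so a 50% clearance reduction predicts a doubling of half-life, close to the observed increase from 6 to 11 hours:

```python
# Sketch: t1/2 = ln(2) * Vd / CL, so at constant Vd, halving CL doubles t1/2.
# Figures are from the label; constant Vd is our simplifying assumption.
BASELINE_T_HALF_H = 6.0   # mean half-life in healthy subjects, hours
CL_REDUCTION = 0.50       # clearance reduction in moderate renal impairment

predicted_t_half = BASELINE_T_HALF_H / (1 - CL_REDUCTION)
print(predicted_t_half)  # → 12.0 hours; the observed mean was 11 hours
```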
Hepatic Impairment: Clearance of naratriptan was decreased by 30% in subjects with moderate hepatic impairment (Child-Pugh Grade A or B). This resulted in an approximately 40% increase in the half-life (range: 8 to 16 hours). The effects of severe hepatic impairment (Child-Pugh Grade C) on the pharmacokinetics of naratriptan have not been assessed [see Contraindications (4)].
Monoamine Oxidase and P450 Inhibitors: Naratriptan does not inhibit monoamine oxidase (MAO) enzymes and is a poor inhibitor of P450; metabolic interactions between naratriptan and drugs metabolized by P450 or MAO are therefore unlikely.
Smoking: Smoking increased the clearance of naratriptan by 30%.
Alcohol: In normal volunteers, co-administration of single doses of naratriptan tablets and alcohol did not result in substantial modification of naratriptan pharmacokinetic parameters.
In carcinogenicity studies, mice and rats were given naratriptan by oral gavage for 104 weeks. There was no evidence of an increase in tumors related to naratriptan administration in mice receiving up to 200 mg/kg/day. That dose was associated with a plasma (AUC) exposure that was 110 times the exposure in humans receiving the MRDD of 5 mg. Two rat studies were conducted, one using a standard diet and the other a nitrite-supplemented diet (naratriptan can be nitrosated in vitro to form a mutagenic product that has been detected in the stomachs of rats fed a high-nitrite diet). Doses of 5, 20, and 90 mg/kg were associated with AUC exposures that in the standard-diet study were 7, 40, and 236 times, respectively, and in the nitrite-supplemented–diet study were 7, 29, and 180 times, respectively, the exposure in humans at the MRDD. In both studies, there was an increase in the incidence of thyroid follicular hyperplasia in high-dose males and females and in thyroid follicular adenomas in high-dose males. In the standard-diet study only, there was also an increase in the incidence of benign c-cell adenomas in the thyroid of high-dose males and females. The exposures achieved at the no-effect dose for thyroid tumors were 40 (standard diet) and 29 (nitrite-supplemented diet) times the exposure achieved in humans at the MRDD. In the nitrite-supplemented–diet study only, the incidence of benign lymphocytic thymoma was increased in all treated groups of females. It was not determined if the nitrosated product is systemically absorbed. However, no changes were seen in the stomachs of rats in that study.
Naratriptan was not mutagenic when tested in in vitro gene mutation (Ames and mouse lymphoma tk) assays. Naratriptan was also negative in the in vitro human lymphocyte assay and the in vivo mouse micronucleus assay. Naratriptan can be nitrosated in vitro to form a mutagenic product (WHO nitrosation assay) that has been detected in the stomachs of rats fed a nitrite-supplemented diet.
In a reproductive toxicity study in which male and female rats were administered naratriptan orally prior to and throughout the mating period (10, 60, 170, or 340 mg/kg/day; plasma exposures [AUC] approximately 11, 70, 230, and 470 times, respectively, the human exposure at the MRDD), there was a drug-related decrease in the number of females exhibiting normal estrous cycles at doses of 170 mg/kg/day or greater and an increase in pre-implantation loss at 60 mg/kg/day or greater. In high-dose males, testicular/epididymal atrophy accompanied by spermatozoa depletion reduced mating success and may have contributed to the observed pre-implantation loss. The exposures achieved at the no-effect doses for pre-implantation loss, anestrus, and testicular effects were approximately 11, 70, and 230 times, respectively, the exposures in humans at the MRDD.
In a study in which rats were dosed orally with naratriptan (10, 60, or 340 mg/kg/day) for 6 months, changes in the female reproductive tract including atrophic or cystic ovaries and anestrus were seen at the high dose. The exposure at the no-effect dose of 60 mg/kg was approximately 85 times that in humans at the MRDD.
The efficacy of Amerge in the acute treatment of migraine headaches was evaluated in 3 randomized, double-blind, placebo-controlled trials in adult patients (Trials 1, 2, 3). These trials enrolled adult patients who were predominantly female (86%) and Caucasian (96%) with a mean age of 41 years (range: 18 to 65 years). In all studies, patients were instructed to treat at least 1 moderate to severe headache. Headache response, defined as a reduction in headache severity from moderate or severe pain to mild or no pain, was assessed up to 4 hours after dosing. Associated symptoms such as nausea, vomiting, photophobia, and phonophobia were also assessed. Maintenance of response was assessed for up to 24 hours postdose. A second dose of Amerge or other rescue medication to treat migraines was allowed 4 to 24 hours after the initial treatment for recurrent headache.
In all 3 trials, the percentage of patients achieving headache response 4 hours after treatment, the primary outcome measure, was significantly greater among patients receiving Amerge compared with those who received placebo. In all trials, response to 2.5 mg was numerically greater than response to 1 mg and in the largest of the 3 trials, there was a statistically significant greater percentage of patients with headache response at 4 hours in the 2.5-mg group compared with the 1-mg group. The results are summarized in Table 2.
[Table 2 not reproduced in this extract. Footnote b: P<0.05 compared with 1 mg.]
The estimated probability of achieving an initial headache response in adults over the 4 hours following treatment in pooled Trials 1, 2, and 3 is depicted in Figure 1.
a The figure shows the probability over time of obtaining headache response (reduction in headache severity from moderate or severe pain to no or mild pain) following treatment with Amerge. In this Kaplan-Meier plot, patients not achieving response within 240 minutes were censored at 240 minutes.
For patients with migraine-associated nausea, photophobia, and phonophobia at baseline, there was a lower incidence of these symptoms 4 hours following administration of 1-mg and 2.5-mg Amerge compared with placebo.
Four to 24 hours following the initial dose of study treatment, patients were allowed to use additional treatment for pain relief in the form of a second dose of study treatment or other rescue medication. The estimated probability of patients taking a second dose or other rescue medication to treat migraine over the 24 hours following the initial dose of study treatment is summarized in Figure 2.
a Kaplan-Meier plot based on data obtained in the 3 controlled clinical trials (Trials 1, 2, and 3) providing evidence of efficacy with patients not using additional treatments censored at 24 hours. The plot also includes patients who had no response to the initial dose. Remedication was discouraged prior to 4 hours postdose.
There is no evidence that doses of 5 mg provided a greater effect than 2.5 mg. There was no evidence to suggest that treatment with Amerge was associated with an increase in the severity or frequency of migraine attacks. The efficacy of Amerge was unaffected by presence of aura; gender, age, or weight of the subject; oral contraceptive use; or concomitant use of common migraine prophylactic drugs (e.g., beta-blockers, calcium channel blockers, tricyclic antidepressants). There were insufficient data to assess the impact of race on efficacy.
Amerge tablets containing 1 mg and 2.5 mg of naratriptan (base) as the hydrochloride salt.
Amerge tablets, 1 mg, are white, D-shaped, film-coated tablets debossed with “GX CE3” on one side in blister packs of 9 tablets (NDC 0173-0561-00).
Amerge tablets, 2.5 mg, are green, D-shaped, film-coated tablets debossed with “GX CE5” on one side in blister packs of 9 tablets (NDC 0173-0562-00).
Store at controlled room temperature, 20° to 25°C (68° to 77°F) [see USP].
Inform patients that Amerge may cause serious cardiovascular side effects such as myocardial infarction or stroke. Although serious cardiovascular events can occur without warning symptoms, patients should be alert for the signs and symptoms of chest pain, shortness of breath, irregular heartbeat, significant rise in blood pressure, weakness, and slurring of speech and should seek medical advice if any of these signs or symptoms are observed. Apprise patients of the importance of this follow-up [see Warnings and Precautions (5.1, 5.2, 5.4, 5.5, 5.8)].
Inform patients that anaphylactic reactions have occurred in patients receiving Amerge. Such reactions can be life-threatening or fatal. In general, anaphylactic reactions to drugs are more likely to occur in individuals with a history of sensitivity to multiple allergens [see Contraindications (4), Warnings and Precautions (5.9)].
Inform patients that use of Amerge within 24 hours of another triptan or an ergot-type medication (including dihydroergotamine or methysergide) is contraindicated [see Contraindications (4), Drug Interactions (7.1, 7.2)].
Caution patients about the risk of serotonin syndrome with the use of Amerge or other triptans, particularly during combined use with SSRIs, SNRIs, TCAs, and MAO inhibitors [see Warnings and Precautions (5.7), Drug Interactions (7.3)].
Advise patients to notify their healthcare provider if they become pregnant during treatment or intend to become pregnant [see Use in Specific Populations (8.1)].
Treatment with Amerge may cause somnolence and dizziness; instruct patients to evaluate their ability to perform complex tasks after administration of Amerge.
Amerge is a registered trademark of the GSK group of companies. The other brand listed is a trademark of its owner and is not a trademark of the GSK group of companies. The maker of this brand is not affiliated with and does not endorse the GSK group of companies or its products.
Read this Patient Information before you start taking Amerge and each time you get a refill. There may be new information. This information does not take the place of talking with your healthcare provider about your medical condition or treatment.
What is the most important information I should know about Amerge?
Amerge is a prescription medicine used to treat acute migraine headaches with or without aura in adults who have been diagnosed with migraine headaches.
Amerge is not used to prevent or decrease the number of migraine headaches you have.
Amerge is not used to treat other types of headaches such as hemiplegic migraines (that make you unable to move on one side of your body) or basilar migraines (rare form of migraine with aura).
It is not known if Amerge is safe and effective to treat cluster headaches.
It is not known if Amerge is safe and effective in children younger than 18 years of age.
Who should not take Amerge?
Do not take Amerge if you have an allergy to naratriptan or any of the ingredients in Amerge. See the end of this leaflet for a complete list of ingredients in Amerge.
What should I tell my healthcare provider before taking Amerge?
are breastfeeding or plan to breastfeed. It is not known if Amerge passes into your breast milk. Talk with your healthcare provider about the best way to feed your baby if you take Amerge.
Using Amerge with certain other medicines can affect each other, causing serious side effects.
How should I take Amerge?
Certain people should take their first dose of Amerge in their healthcare provider’s office or in another medical setting. Ask your healthcare provider if you should take your first dose in a medical setting.
Take Amerge exactly as your healthcare provider tells you to take it.
Take Amerge with water or other liquids.
If you do not get any relief after your first Amerge tablet, do not take a second tablet without first talking with your healthcare provider.
If your headache comes back or you only get some relief from your headache, you can take a second tablet 4 hours after the first tablet.
Do not take more than a total of 5 mg of Amerge in a 24-hour period.
Some people who take too many Amerge tablets may have worse headaches (medication overuse headache). If your headaches get worse, your healthcare provider may decide to stop your treatment with Amerge.
If you take too much Amerge, call your healthcare provider or go to the nearest hospital emergency room right away.
You should write down when you have headaches and when you take Amerge so you can talk with your healthcare provider about how Amerge is working for you.
What should I avoid while taking Amerge?
Amerge can cause dizziness, weakness, or drowsiness. If you have these symptoms, do not drive a car, use machinery, or do anything where you need to be alert.
What are the possible side effects of Amerge?
medication overuse headaches. Some people who use too many Amerge tablets may have worse headaches (medication overuse headache). If your headaches get worse, your healthcare provider may decide to stop your treatment with Amerge.
These are not all the possible side effects of Amerge. For more information, ask your healthcare provider or pharmacist.
How should I store Amerge?
Store Amerge between 68°F and 77°F (20°C and 25°C).
Keep Amerge and all medicines out of the reach of children.
General information about the safe and effective use of Amerge.
Medicines are sometimes prescribed for purposes other than those listed in Patient Information leaflets. Do not use Amerge for a condition for which it was not prescribed. Do not give Amerge to other people, even if they have the same symptoms you have. It may harm them.
This Patient Information leaflet summarizes the most important information about Amerge. If you would like more information, talk with your healthcare provider. You can ask your healthcare provider or pharmacist for information about Amerge that is written for healthcare professionals.
For more information, go to www.gsk.com or call 1-888-825-5249.
What are the ingredients in Amerge?
2.5-mg tablets also contain iron oxide yellow and indigo carmine aluminum lake (FD&C Blue No. 2) for coloring.
Amerge and IMITREX are registered trademarks of the GSK group of companies. The other brands listed are trademarks of their respective owners and are not trademarks of the GSK group of companies. The makers of these brands are not affiliated with and do not endorse the GSK group of companies or its products.
Each tablet contains 1 mg of naratriptan as the hydrochloride.
Store at controlled room temperature, 20° to 25°C (68° to 77°F) (see USP).
Do not use if blisters are torn, broken, or missing.
Each tablet contains 2.5 mg of naratriptan as the hydrochloride.
Store at controlled room temperature, 20° to 25°C (68° to 77°F) (see USP).
Tartèse, Romain; Anand, Mahesh and Franchi, Ian (2019). H and Cl isotope characteristics of indigenous and late hydrothermal fluids on the differentiated asteroidal parent body of Grave Nunataks 06128. Geochimica et Cosmochimica Acta (Early Access).
Černok, Ana; White, Lee Francis; Darling, James; Dunlop, Joseph and Anand, Mahesh (2019). Shock‐induced microtextures in lunar apatite and merrillite. Meteoritics & Planetary Science (Early Access).
Snape, Joshua F.; Curran, Natalie M.; Whitehouse, Martin J.; Nemchin, Alexander A.; Joy, Katherine H.; Hopkinson, Tom; Anand, Mahesh; Belluci, Jeremy J. and Kenny, Gavin G. (2018). Ancient volcanism on the Moon: Insights from Pb isotopes in the MIL 13317 and Kalahari 009 lunar meteorites. Earth and Planetary Science Letters, 502 pp. 84–95.
Baziotis, Ioannis; Asimow, Paul D.; Hu, Jinping; Ferrière, Ludovic; Ma, Chi; Cernok, Ana; Anand, Mahesh and Topa, Dan (2018). High pressure minerals in the Château-Renard (L6) ordinary chondrite: implications for collisions on its parent body. Scientific reports, 8(1), article no. 9851.
Lim, S.; Levin Prabhu, V.; Anand, M.; Bowen, J.; Morse, A. and Holland, A. (2018). Numerical modelling of microwave sintering of lunar simulants under near lunar atmospheric condition. In: European Lunar Symposium (ELS) 2018, 13-16 May 2018, Toulouse, France.
Prabhu, V. L.; Lim, S.; Bowen, J.; Cowley, A.; Katrib, J.; Dodds, C. and Anand, M. (2018). Microwave Heating of Lunar Simulants JSC-1A and NU-LHT-3M: Experimental And Theoretical Analysis. In: European Lunar Symposium (ELS), 13-16 May 2018, Toulouse, France.
Greenwood, Richard C.; Barrat, Jean-Alix; Miller, Martin F.; Anand, Mahesh; Dauphas, Nicolas; Franchi, Ian A.; Sillard, Patrick and Starkey, Natalie A. (2018). Oxygen isotopic evidence for accretion of Earth's water before a high-energy Moon-forming giant impact. Science Advances, 4(3), article no. eaao5928.
Gibson, E. K.; Tindle, A. G.; Schwenzer, S. P.; Kelley, S. P.; Morgan, G. H.; Anand, M. and Pillinger, J. M. (2018). The Apollo Virtual Microscope Collection: Lunar Mineralogy and Petrology of Apollo 11, 12, 14, 15 and 16 Rocks. In: 49th Lunar and Planetary Science Conference, 19-23 Mar 2018, The Woodlands, Houston, Texas, USA.
Potts, Nicola J.; Barnes, Jessica J.; Tartèse, Romain; Franchi, Ian A. and Anand, Mahesh (2018). Chlorine isotopic compositions of apatite in Apollo 14 rocks: Evidence for widespread vapor-phase metasomatism on the lunar nearside ~4 billion years ago. Geochimica et Cosmochimica Acta, 230 pp. 46–59.
Barnes, Jessica J.; Franchi, Ian A.; McCubbin, Francis and Anand, Mahesh (2018). Multiple reservoirs of volatiles in the Moon revealed by the isotopic composition of chlorine in lunar basalts. Geochimica et Cosmochimica Acta (Early Access).
Lim, Sungwoo; Levin Prabhu, Vibha; Anand, Mahesh and Taylor, Lawrence (2017). Extra-terrestrial construction processes - advancements, opportunities and challenges. Advances in Space Research, 60(7) pp. 1413–1429.
Cernok, Ana; White, Lee Francis; Darling, James; Dunlop, Joseph; Johnson, Diane and Anand, Mahesh (2017). Microstructural shock features in Lunar Mg-suite accessory phases. In: Shock metamorphism in terrestrial and extra-terrestrial rocks (Miljković, Katarina ed.), 26 Jun - 2 Jul 2017, Curtin University, Perth, Western Australia.
Fegan, E. R.; Rothery, D. A.; Marchi, S.; Massironi, M.; Conway, S. J. and Anand, M. (2017). Late movement of basin-edge lobate scarps on Mercury. Icarus, 288 pp. 226–324.
Cernok, A.; Darling, J.; White, L.; Dunlop, J. and Anand, M. (2017). Shock-Induced Texture in Lunar Mg-Suite Apatite and its Effect on Volatile Distribution. In: 5th European Lunar Symposium, 02-03 May 2017, Münster, Germany.
Barnes, J. J.; Anand, M. and Franchi, I. A. (2017). Chlorine in Lunar Basalts. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Walton, C. R. and Anand, M. (2017). Textural Evidence for Shock-Related Metasomatic Replacement of Olivine by Phosphates in the Chelyabinsk Chondrite. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Mortimer, James; Anand, Mahesh; Verchovsky, Sasha; Nicoara, Simona; Greenwood, Richard C.; Gibson, Jenny; Franchi, Ian A.; Ahmed, Farah; Strekopytov, Stanislav and Carpenter, James (2017). Preparing and Characterizing Carbonaceous Chondrite Standards for Verification of ESA’S ‘Prospect’ Package. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Verchovsky, A. B.; Mortimer, J.; Buikin, A. I. and Anand, M. (2017). Trapping of Atmospheric Gases During Crushing of Lunar Samples. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Robinson, K. L.; Barnes, J. J.; Villeneuve, J.; Johnson, D.; Deloule, E.; Franchi, I. A. and Anand, M. (2017). Ion Microprobe Analyses of Trace Elements in Lunar Apatites. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Ashcroft, H. O.; Anand, M.; Korotev, R. L.; Greenwood, R. C.; Franchi, I. A. and Strekopytov, S. (2017). NWA 10989 – A New Lunar Meteorite with Equal Proportions of Feldspathic and VLT Material. In: 48th Lunar and Planetary Science Conference, 20-24 Mar 2017, The Woodlands, Houston, Texas.
Hauri, Erik H.; Saal, Alberto E.; Nakajima, Miki; Anand, Mahesh; Rutherford, Malcolm J.; Van Orman, James A. and Le Voyer, Marion (2017). Origin and Evolution of Water in the Moon’s Interior. Annual Review of Earth and Planetary Sciences, 45 pp. 89–111.
Mortimer, J.; Verchovsky, S. and Anand, M. (2016). Predominantly Non-Solar Origin of Nitrogen in Lunar Soils. Geochimica et Cosmochimica Acta, 193 pp. 36–53.
McCubbin, Francis M.; Boyce, Jeremy W.; Novák‐Szabó, Timea; Santos, Alison R.; Tartèse, Romain; Muttik, Nele; Domokos, Gabor; Vazquez, Jorge; Keller, Lindsay P.; Moser, Desmond E.; Jerolmack, Douglas J.; Shearer, Charles K.; Steele, Andrew; Elardo, Stephen M.; Rahman, Zia; Anand, Mahesh; Delhaye, Thomas and Agee, Carl B. (2016). Geologic history of Martian regolith breccia Northwest Africa 7034: Evidence for hydrothermal activity and lithologic diversity in the Martian crust. Journal of Geophysical Research: Planets, 121(10) pp. 2120–2149.
Snape, Joshua F.; Nemchin, Alexander A.; Bellucci, Jeremy J.; Whitehouse, Martin J.; Tartèse, Romain; Barnes, Jessica J.; Anand, Mahesh; Crawford, Ian A. and Joy, Katherine H. (2016). Lunar basalt chronology, mantle differentiation and implications for determining the age of the Moon. Earth and Planetary Science Letters, 451 pp. 149–158.
Sossi, Paolo A.; Nebel, Oliver; Anand, Mahesh and Poitrasson, Franck (2016). On the iron isotope composition of Mars and volatile depletion in the terrestrial planets. Earth and Planetary Science Letters, 449 pp. 360–371.
Robinson, Katharine L.; Barnes, Jessica J.; Nagashima, Kazuhide; Thomen, Aurélien; Franchi, Ian A.; Huss, Gary R.; Anand, Mahesh and Taylor, G.Jeffrey (2016). Water in evolved lunar rocks: Evidence for multiple reservoirs. Geochimica et Cosmochimica Acta, 188 pp. 244–260.
Potts, Nicola J.; Tartèse, Romain; Anand, Mahesh; van Westrenen, Wim; Griffiths, Alexandra A.; Barrett, Thomas J. and Franchi, Ian A. (2016). Characterization of mesostasis regions in lunar basalts: Understanding late-stage melt evolution and its influence on apatite formation. Meteoritics & Planetary Science, 51(9) pp. 1555–1575.
Barnes, Jessica J.; Tartèse, Romain; Anand, Mahesh; McCubbin, Francis M.; Neal, Clive R. and Franchi, Ian A. (2016). Early degassing of lunar urKREEP by crust-breaching impact(s). Earth and Planetary Science Letters, 447 pp. 84–94.
Srivastava, Vibha; Lim, Sungwoo and Anand, Mahesh (2016). Microwave processing of lunar soil for supporting longer-term surface exploration on the Moon. Space Policy, 37(2) pp. 92–96.
Barrett, T. J.; Barnes, J. J.; Anand, M.; Franchi, I. A.; Greenwood, R. C.; Charlier, B. L. A. and Grady, M. M. (2016). Chlorine isotope variation in eucrites. In: 79th Annual Meeting of the Meteoritical Society, 7-12 Aug 2016, Berlin, Germany.
Barrett, T. J.; Barnes, J. J.; Tartèse, R.; Anand, M.; Franchi, I. A.; Greenwood, R. C.; Charlier, B. L. A. and Grady, M. M. (2016). The abundance and isotopic composition of water in eucrites. Meteoritics & Planetary Science, 51(6) pp. 1110–1124.
Barnes, Jessica; Kring, David A.; Tartèse, Romain; Franchi, Ian A.; Anand, Mahesh and Russell, Sara S. (2016). An asteroidal origin for water in the Moon. Nature Communications, 7, article no. 11684.
Bonnand, Pierre; Parkinson, Ian J. and Anand, Mahesh (2016). Mass dependent fractionation of stable chromium isotopes in mare basalts: implications for the formation and differentiation of the Moon. Geochimica et Cosmochimica Acta, 175 pp. 208–221.
Mcdermott, Kathryn H.; Greenwood, Richard C.; Scott, Edward R. D.; Franchi, Ian A. and Anand, Mahesh (2016). Oxygen isotope and petrological study of silicate inclusions in IIE iron meteorites and their relationship with H chondrites. Geochimica et Cosmochimica Acta, 173 pp. 97–113.
Koike, Mizuho; Sano, Yuji; Takahata, Naoto; Ishida, Akizumi; Sugiura, Naoji and Anand, Mahesh (2016). Combined investigation of H isotopic compositions and U-Pb chronology of young Martian meteorite Larkman Nunatak 06319. Geochemical Journal, 50(5) pp. 363–377.
Barrett, T.J.; Barnes, J.J.; Anand, M.; Franchi, I.A.; Greenwood, R.C.; Charlier, B.L.A. and Grady, M.M. (2016). The isotopic composition of chlorine in apatite from eucrites. In: 47th Lunar and Planetary Science Conference, 21-25 Mar 2016, Houston, Texas.
Lim, Sungwoo; Anand, Mahesh; Cowley, Aidan; Crawford, Ian; Doule, Ondrej; Harkness, Patrick; Kanamori, Hiroshi; Maurer, Matthias; Montano, Giuseppe; Osborne, Barnaby; Patrick, Richard; Rousek, Tomas; Taylor, Lawrence and Vibha, Vibha (2015). 3D Printing on the Moon: Challenges and Opportunities. In: International Symposium on Moon 2020 - 2030: A new era of coordinated human and robotic exploration, 15-16 Dec 2015, ESTEC, Noordwijk, The Netherlands.
Thomas, Rebecca J.; Rothery, David A.; Conway, Susan J. and Anand, Mahesh (2015). Explosive volcanism in complex impact craters on Mercury and the Moon: influence of tectonic regime on depth of magmatic intrusion. Earth and Planetary Science Letters, 431 pp. 164–172.
McCubbin, Francis M.; Vander Kaaden, Kathleen E.; Tartèse, Romain; Boyce, Jeremy W.; Mikhail, Sami; Whitson, Eric S.; Bell, Aaron S.; Anand, Mahesh; Franchi, Ian A.; Wang, Jianhua and Hauri, Erik H. (2015). Experimental investigation of F, Cl, and OH partitioning between apatite and Fe-rich basaltic melt at 1.0–1.2 GPa and 950–1000 °C. American Mineralogist, 100(8-9) pp. 1790–1802.
Mortimer, J.; Verchovsky, A. B.; Anand, M.; Gilmour, I. and Pillinger, C. T. (2015). Simultaneous analysis of abundance and isotopic composition of nitrogen, carbon, and noble gases in lunar basalts: insights into interior and surface processes on the Moon. Icarus, 255 pp. 3–17.
Santos, Alison R.; Agee, Carl B.; McCubbin, Francis M.; Shearer, Charles K.; Burger, Paul V.; Tartèse, Romain and Anand, Mahesh (2015). Petrology of igneous clasts in Northwest Africa 7034: Implications for the petrologic diversity of the martian crust. Geochimica et Cosmochimica Acta, 157 pp. 56–85.
Lim, Sungwoo and Anand, Mahesh (2015). In-Situ Resource Utilisation (ISRU) derived extra-terrestrial construction processes using sintering-based additive manufacturing techniques – focusing on a lunar surface environment. In: European Lunar Symposium (ELS) 2015, 13-14 May 2015, Frascati, Italy.
Thomas, Rebecca J.; Lucchetti, Alice; Cremonese, Gabriele; Rothery, David A.; Massironi, Matteo; Re, Cristina; Conway, Susan J. and Anand, Mahesh (2015). A cone on Mercury: analysis of a residual central peak encircled by an explosive volcanic vent. Planetary And Space Science, 108 pp. 108–116.
Barrett, T. J.; Mittlefehldt, D. W.; Ross, D. K.; Greenwood, R. C.; Anand, M.; Franchi, I. A.; Grady, M. M. and Charlier, B. L. A. (2015). The Mineralogy and Petrology of Anomalous Eucrite Emmaville. In: Lunar and Planetary Science Conference 46th, 16 to 20 Mar 2015, The Woodlands, Texas.
Lim, S.; Anand, M. and Rousek, T. (2015). Estimation of energy and material use of sintering-based construction for a lunar outpost - with the example of SinterHab module design. In: 46th Lunar and Planetary Science Conference (LPSC), LPSC, article no. 1076.
Barnes, J. J.; Tartèse, R.; Anand, M.; Franchi, I. A.; Russell, S. S. and Kring, D. A. (2015). Determining the source(s) of water in the lunar interior. In: 46th Lunar and Planetary Science Conference, article no. 2159.
Barnes, J. J.; Tartèse, R.; Anand, M.; McCubbin, F. M.; Franchi, I. A.; Starkey, N. A. and Russell, S. S. (2015). Volatiles in the lunar crust - an evaluation of the role of metasomatism. In: 46th Lunar and Planetary Science Conference, article no. 1352.
Potts, N. J.; van Westrenen, W.; Tartèse, R.; Franchi, I. A. and Anand, M. (2015). Apatite-Melt Volatile Partitioning Under Lunar Conditions. In: 46th Lunar and Planetary Science Conference, 16-20 Mar 2015, The Woodlands, Texas, USA.
Potts, N. J.; Tartèse, R.; Franchi, I. A. and Anand, M. (2015). Understanding the Chlorine Isotopic Compositions of Apatites in Lunar Basalts. In: 46th Lunar and Planetary Science Conference, 16-20 Mar 2015, The Woodlands, TX, USA.
McCubbin, Francis M.; Vander Kaaden, Kathleen E.; Tartèse, Romain; Klima, Rachel L.; Liu, Yang; Mortimer, James; Barnes, Jessica J.; Shearer, Charles K.; Treiman, Allan H.; Lawrence, David J.; Elardo, Stephen M.; Hurley, Dana M.; Boyce, Jeremy W. and Anand, Mahesh (2015). Magmatic volatiles (H, C, N, F, S, Cl) in the lunar mantle, crust, and regolith: abundances, distributions, processes, and reservoirs. American Mineralogist, 100(8-9) pp. 1668–1707.
Anand, M.; Barnes, J. J. and Hallis, L. J. (2015). Lunar geology. In: Lee, M. R. and Leroux, H. eds. Planetary Mineralogy, Volume 15. European Mineralogical Union and the Mineralogical Society of Great Britain and Ireland, pp. 129–164.
Tartèse, Romain; Anand, Mahesh; Joy, Katherine H. and Franchi, Ian A. (2014). H and Cl isotope systematics of apatite in brecciated lunar meteorites Northwest Africa 4472, Northwest Africa 773, Sayh al Uhaymir 169, and Kalahari 009. Meteoritics & Planetary Science, 49(12) pp. 2266–2289.
Thomas, Rebecca J.; Rothery, David A.; Conway, Susan J. and Anand, Mahesh (2014). Mechanisms of explosive volcanism on Mercury: implications from its global distribution and morphology. Journal of Geophysical Research: Planets, 119(10) pp. 2239–2254.
Barnes, Jessica J.; Tartèse, R.; McCubbin, Francis M.; Anand, M.; Franchi, I. A.; Starkey, Natalie A. and Russell, S. S. (2014). Using apatite to unravel the origin of water in ancient Moon rocks. In: Geological Society of America Annual Meeting, 19-22 Oct 2014, Vancouver, Canada.
Anand, M.; Tartèse, R.; Barnes, J. J.; Franchi, I. A. and Russell, S. S. (2014). Apatite: a versatile recorder of the history of lunar volatiles. In: Geological Society of America Abstracts with Programs, 46(6) p. 27.
Thomas, Rebecca J.; Rothery, David A.; Conway, Susan J. and Anand, Mahesh (2014). Long-lived explosive volcanism on Mercury. Geophysical Research Letters, 41(17) pp. 6084–6092.
Barrett, T. J.; Tartèse, R.; Anand, M.; Franchi, I. A.; Grady, M. M.; Greenwood, R. C. and Charlier, B. L. A. (2014). The abundance and isotopic composition of water in howardite-eucrite-diogenite meteorites. In: 77th Annual Meeting of the Meteoritical Society (MetSoc 2014), 8-13 Sep 2014, Casablanca, Morocco.
Morlok, Andreas; Mason, Andrew B.; Anand, Mahesh; Lisse, Carey M.; Bullock, Emma S. and Grady, Monica M. (2014). Dust from collisions: A way to probe the composition of exo-planets? Icarus, 239 pp. 1–14.
Lim, Sungwoo and Anand, Mahesh (2014). Space Architecture technology for settlement and exploration on other planetary bodies – In-Situ Resource Utilisation (ISRU) based structures on the Moon. In: EU-Korea Conference on Science and Technology (EKC 2014), 23-25 Jul 2014, Vienna, Austria.
Anand, Mahesh and Lim, Sungwoo (2014). Water in and on the Moon: recent discoveries and future prospects. In: EU-Korea Conference on Science and Technology (EKC) 2014, 23-25 Jul 2014, Vienna, Austria.
Hallis, L. J.; Anand, M. and Strekopytov, S. (2014). Trace-element modelling of mare basalt parental melts: Implications for a heterogeneous lunar mantle. Geochimica et Cosmochimica Acta, 134 pp. 289–316.
Lim, Sungwoo and Anand, Mahesh (2014). Space Architecture for exploration and settlement on other planetary bodies – In-Situ Resource Utilisation (ISRU) based structures on the Moon. In: European Lunar Symposium 2014, 15-16 May 2014, London, UK.
Barnes, Jessica J.; Tartèse, Romain; Anand, Mahesh; McCubbin, Francis M.; Franchi, Ian A.; Starkey, Natalie A. and Russell, Sara S. (2014). The origin of water in the primitive Moon as revealed by the lunar highlands samples. Earth and Planetary Science Letters, 390 pp. 244–252.
Potts, N. J.; Tartèse, R.; Anand, M.; Franchi, I. A.; van Westrenen, W.; Barnes, Jessica and Griffiths, A. A. (2014). Characterization of mesostasis areas in mare basalts: constraining melt compositions from which apatite crystallizes. In: 45th Lunar and Planetary Science Conference, 17-21 Mar 2014, Houston, Texas.
Thomas, Rebecca J.; Rothery, David A.; Conway, Susan J. and Anand, Mahesh (2014). Hollows on Mercury: materials and mechanisms involved in their formation. Icarus, 229 pp. 221–235.
Tartèse, Romain; Anand, Mahesh; McCubbin, Francis M.; Elardo, Stephen M.; Shearer, Charles K. and Franchi, Ian A. (2014). Apatites in lunar KREEP basalts: the missing link to understanding the H isotope systematics of the Moon. Geology, 42(4) pp. 363–366.
Anand, Mahesh (2014). Analyzing Moon Rocks. Science, 344(6182) pp. 365–366.
Tartèse, Romain; Anand, Mahesh; Barnes, Jessica; Starkey, Natalie A.; Franchi, Ian A. and Sano, Yuji (2013). The abundance, distribution, and isotopic composition of hydrogen in the Moon as revealed by basaltic lunar samples: implications for the volatile inventory of the Moon. Geochimica et Cosmochimica Acta, 122 pp. 58–74.
Zambardi, Thomas; Poitrasson, Franck; Corgne, Alexandre; Méheut, Merlin; Quitte, Ghylaine and Anand, Mahesh (2013). Silicon isotope variations in the inner solar system: Implications for planetary formation, differentiation and composition. Geochimica et Cosmochimica Acta, 121 pp. 67–83.
Schwenzer, S. P.; Greenwood, R. C.; Kelley, S. P.; Ott, U.; Tindle, A. G.; Haubold, R.; Herrmann, S.; Gibson, J. M.; Anand, M.; Hammond, S. and Franchi, I. A. (2013). Quantifying noble gas contamination during terrestrial alteration in Martian meteorites from Antarctica. Meteoritics & Planetary Science, 48(6) pp. 929–954.
Mortimer, J. I.; Anand, M.; Gilmour, I.; Pillinger, C. T. and Verchovsky, S. (2013). Investigating the distribution and source(s) of volatiles on the lunar surface. In: NLSI Workshop Without Walls: Lunar Volatiles 1, 21-23 May 2013, Online.
Potts, N.J.; Anand, M.; van Westrenen, W.; Tartèse, R. and Franchi, I.A. (2013). Using lunar apatite to assess the volatile inventory of the Moon. In: NLSI Workshop Without Walls: Lunar Volatiles 1, 21-23 May 2013.
Mortimer, James; Anand, Mahesh; Gilmour, Iain; Pillinger, Colin; Sheridan, Simon and Morse, Andrew (2013). Using stable isotope geochemistry to investigate the source(s) of volatiles in the lunar regolith. In: Geochemistry Group RiP meeting 2013, 14 Mar 2013, Milton Keynes, UK.
Tartèse, Romain and Anand, Mahesh (2013). Late delivery of chondritic hydrogen into the lunar mantle: Insights from mare basalts. Earth and Planetary Science Letters, 361 pp. 480–486.
Barnes, Jessica; Franchi, I. A.; Anand, M.; Tartèse, R.; Starkey, N. A.; Koike, M.; Sano, Y. and Russell, S. S. (2013). Accurate and precise measurements of the D/H ratio and hydroxyl content in lunar apatites using NanoSIMS. Chemical Geology, 337-8 pp. 48–55.
Tartèse, Romain; Anand, Mahesh and Delhaye, Thomas (2013). NanoSIMS Pb/Pb dating of tranquillityite in high-Ti lunar basalts: Implications for the chronology of high-Ti volcanism on the Moon. American Mineralogist, 98(8-9) pp. 1477–1486.
Crawford, I. A.; Anand, M.; Cockell, C. S.; Falcke, H.; Green, D. A.; Jaumann, R. and Wieczorek, M. A. (2012). Back to the Moon: the scientific rationale for resuming lunar surface exploration. Planetary and Space Science, 74(1) pp. 3–14.
Jaumann, R.; Hiesinger, H.; Anand, M.; Crawford, I. A.; Wagner, R.; Sohl, F.; Jolliff, B. L.; Scholten, F.; Knapmeyer, M.; Hoffmann, H.; Hussmann, H.; Grott, M.; Hempel, S.; Köhler, U.; Krohn, K.; Schmitz, N.; Carpenter, J.; Wieczorek, M.; Spohn, T.; Robinson, M. S. and Oberst, J. (2012). Geology, geochemistry, and geophysics of the Moon: status of current understanding. Planetary And Space Science, 74(1) pp. 15–41.
Barnes, Jessica; Tartèse, R.; Anand, M.; Starkey, N.; Franchi, I.; Russell, S. S. and Sano, Y. (2012). Water in the Moon: insights from SIMS analyses of lunar apatites. In: Lunar Science as a Window into the Early Evolution of the Solar System and Conditions of the Early Earth, 09 Nov 2012, London.
Anand, M.; Tartèse, R.; Terada, K.; Franchi, I. A.; Starkey, N. A. and Sano, Y. (2012). Tracking secular changes in the "water" content of lunar interior using basaltic lunar meteorites. In: European Planetary Science Congress, 23-28 Sep 2012, Madrid.
Tartèse, R.; Anand, M.; Barnes, Jessica and Franchi, I. A. (2012). Apollo 15 low-Ti and KREEP basalts: two distinct "water" reservoirs? In: European Planetary Science Congress, 23-28 Sep 2012, Madrid, Spain.
Morlok, Andreas; Koike, Chiyoe; Tomeoka, Kazushige; Mason, Andrew; Lisse, Carey; Anand, Mahesh and Grady, Monica (2012). Mid-infrared spectra of differentiated meteorites (achondrites): comparison with astronomical observations of dust in protoplanetary and debris disks. Icarus, 219(1) pp. 48–56.
Barnes, Jessica; Anand, M.; Franchi, I. A. and Russell, S. S. (2012). Investigating the water contents and hydrogen isotopic compositions of lunar apatite. In: Geochemistry Group Research In Progress Meeting: Building a Habitable Planet, 15 Mar 2012, Milton Keynes, UK.
Weider, S. Z.; Kellett, B. J.; Swinyard, B. M.; Crawford, I. A.; Joy, K. H.; Grande, M.; Howe, C. J.; Huovelin, J.; Narendranath, S.; Alha, L.; Anand, M.; Athiray, P. S.; Bhandari, N.; Carter, J. A.; Cook, A. C.; d’Uston, L. C.; Fernandes, V. A.; Gasnault, O.; Goswami, J. N.; Gow, J. P. D.; Holland, A. D.; Koschny, D.; Lawrence, D. J.; Maddison, B. J.; Maurice, S.; McKay, D. J.; Okada, T.; Pieters, C.; Rothery, D. A.; Russell, S. S.; Shrivastava, A.; Smith, D. R. and Wieczorek, M. (2012). The Chandrayaan-1 X-ray Spectrometer: first results. Planetary and Space Science, 60(1) pp. 217–228.
Smith, Alan; Crawford, I. A.; Gowen, Robert Anthony; Ambrosi, R.; Anand, M.; Banerdt, B.; Bannister, S.; Bowles, N.; Braithwaite, C.; Brown, P.; Chela-Flores, J.; Cholinser, T.; Church, P.; Coates, A. J.; Colaprete, T.; Collins, G.; Collinson, G.; Cook, T.; Elphic, R.; Fraser, G.; Gao, Y.; Gibson, E.; Glotch, T.; Grande, M.; Hagermann, A.; Heldmann, J.; Hood, L. L.; Jones, A. P.; Joy, K. H.; Khavroshkin, O.; Klingelhoefer, G.; Knapmeyer, M.; Kramer, G.; Phipps, A.; Pullan, D.; Pike, W.T.; Lawrence, D.; Marczewsk, S.; Rask, J.; Richard, D. T.; Seweryn, K.; Sheridan, S.; Sims, M. R.; Sweeting, M.; Swindle, T.; Talboys, D.; Taylor, L.; Teanby, N; Tong, V; Ulamec, S; Wawrzaszek, R; Wieczorek, M; Wilson, L. and Wright, Ian (2012). Lunar Net - a proposal in response to an ESA M3 call in 2010 for a medium sized mission. Experimental Astronomy, 33(2-3) pp. 587–644.
Barnes, Jessica; Anand, M.; Franchi, I. A.; Starkey, N. A.; Ota, Y.; Sani, Y.; Russell, S. S. and Tartèse, R. (2012). The hydroxyl content and hydrogen isotope composition of Lunar apatites. In: 43rd Lunar and Planetary Science Conference, 19-23 Mar 2012, The Woodlands, TX, US.
Tartèse, R.; Barnes, Jessica; Anand, M.; Starkey, N.; Franchi, I. A.; Terada, K. and Sano, Y. (2012). Hydrogen and lead isotopic characteristics of lunar meteorite MIL 05035. In: European Lunar Symposium, 19-20 Apr 2012, Berlin, Germany.
Barnes, Jessica; Anand, M.; Franchi, I. A.; Starkey, N. A.; Tartèse, R.; Sano, Y. and Russell, S. S. (2012). Lunar volatiles: an examination of hydrogen isotopes and hydroxyl content. In: European Lunar Symposium, 19-20 Apr 2012, Berlin, Germany.
Anand, M. and Tartèse, R. (2012). The abundance, distribution, and source(s) of water in the Moon. In: European Lunar Symposium, 19-20 Apr 2012, Berlin, Germany.
Anand, M.; Crawford, I. A.; Balat-Pichelin, M.; Abanades, S.; van Westrenen, W.; Péraudeau, G.; Jaumann, R. and Seboldt, W. (2012). A brief review of chemical and mineralogical resources on the Moon and likely initial in situ resource utilization (ISRU) applications. Planetary And Space Science, 74(1) pp. 42–48.
Smith, P. H.; Gow, J. P. D.; Murray, N. J.; Holland, A. D.; Anand, M.; Pool, P.; Sreekumar, P. and Narendranath, S. (2012). Performance of new generation swept charge devices for lunar x-ray spectroscopy on Chandrayaan-2. In: Proceedings - SPIE the International Society for Optical Engineering, SPIE, 8453, article no. 84530R.
Schwenzer, S. P.; Anand, M.; Franchi, I. A.; Gibson, J. M.; Greenwood, R. C.; Hammond, S.; Haubold, R.; Herrmann, S.; Kelley, S. P.; Ott, U. and Tindle, A. G. (2012). Cold desert alteration of Martian meteorites: mixed news from noble gases, trace elements and oxygen isotopes. In: 43rd Lunar and Planetary Science Conference, 19-23 Mar 2012, The Woodlands, TX, USA.
Osborne, Ian; Sherlock, Sarah; Anand, Mahesh and Argles, Tom (2011). New Ar-Ar ages of southern Indian kimberlites and a lamproite and their geochemical evolution. Precambrian Research, 189(1-2) pp. 91–103.
Narendranath, S.; Athiray, P. S.; Sreekumar, P.; Kellett, B. J.; Alha, L.; Howe, C. J.; Joy, K. H.; Grande, M.; Huovelin, J.; Crawford, I. A.; Unnikrishnan, U.; Lalita, S.; Subramaniam, S.; Weider, S. Z.; Nittler, L. R.; Gasnault, O.; Rothery, D.; Fernandes, V. A.; Bhandari, N.; Goswami, J. N.; Wieczorek, M. A.; C1XS Science Team; Anand, Mahesh; Holland, Andrew and Gow, Jason (2011). Lunar X-ray fluorescence observations by the Chandrayaan-1 X-ray Spectrometer (C1XS): results from the nearside southern highlands. Icarus, 214(1) pp. 53–66.
Schwenzer, Susanne; Tindle, Andrew G.; Anand, Mahesh; Gibson, Everett K.; Pearson, Vic K.; Pemberton, Dan; Pillinger, Colin; Smith, Caroline L.; Whalley, Peter and Kelley, Simon Peter (2011). Beagle I and II Voyages: Charles Darwin’s rocks and the quest for Mars rock; the Open University’s virtual microscope has both. In: AGU Fall Meeting, 5-9 Dec 2011, San Francisco, CA, USA.
McDermott, K.; Greenwood, R. C.; Franchi, I. A.; Anand, M. and Scott, E. R. D. (2011). Oxygen isotopic and petrological constraints on the origin and relationship of IIE iron meteorites and H chondrites. In: 42nd Lunar and Planetary Science Conference, 7-11 Mar 2011, Houston, TX, US.
McDermott, K. H.; Greenwood, R. C.; Franchi, I. A.; Anand, M. and Scott, E. R. D. (2011). The relationship between IIE irons and H chondrites: petrologic and oxygen isotope constraints. In: 74th Annual Meeting of the Meteoritical Society, 8-12 Aug 2011, London, UK.
Anand, M.; Carpenter, J. and TT-ELPM (2011). Exploration and evaluation of lunar volatiles as potential resource within the ESA lunar lander context. In: A Wet vs. Dry Moon: Exploring Volatile Reservoirs and Implications for the Evolution of the Moon and Future Exploration, 13-15 Jun 2011, Houston, TX, US.
Anand, Mahesh (2010). Lunar water: a brief review. Earth Moon and Planets, 107(1) pp. 65–73.
Hallis, L. J.; Anand, M.; Greenwood, R. C.; Miller, M. F.; Franchi, I. A. and Russell, S. S. (2010). The oxygen isotope composition, petrology and geochemistry of mare basalts: evidence for large-scale compositional variation in the lunar mantle. Geochimica et Cosmochimica Acta, 74(23) pp. 6885–6899.
Chalapathi Rao, N. V.; Anand, M.; Dongre, A. and Osborne, I. (2010). Carbonate xenoliths hosted by the Mesoproterozoic Siddanpalli Kimberlite Cluster (Eastern Dharwar craton): Implications for the geodynamic evolution of southern India and its diamond and uranium metallogenesis. International Journal of Earth Sciences, 99(8) pp. 1791–1804.
Anand, Mahesh (2010). Recent advancements in lunar science and the future exploration of the moon. In: European Planetary Science Congress 2010, 19-24 Sep 2010, Rome, Italy.
Rothery, David; Marinangeli, Lucia; Anand, Mahesh; Carpenter, James; Christensen, Ulrich; Crawford, Ian A.; De Sanctis, Maria Cristina; Epifani, Elena Mazzotta; Erard, Stéphane; Frigeri, Alessandro; Fraser, George; Hauber, Ernst; Helbert, Jörn; Hiesinger, Harald; Joy, Katherine; Langevin, Yves; Massironi, Matteo; Milillo, Anna; Mitrofanov, Igor; Muinonen, Karri; Näränen, Jyri; Pauselli, Cristina; Potts, Phil; Warell, Johan and Wurz, Peter (2010). Mercury's surface and composition to be studied by BepiColombo. Planetary And Space Science, 58(1-2) pp. 21–39.
Fraser, G. W.; Carpenter, J. D.; Rothery, D. A.; Pearson, J. F.; Martindale, A.; Huovelin, J.; Treis, J.; Anand, M.; Anttila, M.; Ashcroft, M.; Benkoff, J.; Bland, P.; Bowyer, A.; Bradley, A.; Bridges, J.; Brown, C.; Bulloch, C.; Bunce, E. J.; Christensen, U.; Evans, M.; Fairbend, R.; Feasey, M.; Giannini, F.; Hermann, S.; Hesse, M.; Hilchenbach, M.; Jorden, T.; Joy, K.; Kaipiainen, M.; Kitchingman, I.; Lechner, P.; Lutz, G.; Malkki, A.; Muinonen, K.; Näränen, J.; Portin, P.; Prydderch, M.; San Juan, J.; Sclater, E.; Schyns, E.; Stevenson, T. J.; Strüder, L.; Syrjasuo, M.; Talboys, D.; Thomas, P.; Whitford, C. and Whitehead, S. (2010). The mercury imaging X-ray spectrometer (MIXS) on BepiColombo. Planetary and Space Science, 58(1-2) pp. 79–95.
Anand, M.; Pearson, V.; Kelley, S.; Tindle, A.; Whalley, P. and Koeberl, K. (2010). Virtual microscope for extra-terrestrial samples. In: European Planetary Science Congress, 19-24 Sep 2010, Rome, Italy.
McDermott, K.; Greenwood, R. C.; Franchi, I. A.; Anand, M. and Scott, E. R. D. (2010). Oxygen isotopic constraints on the origin and relationship of IIE iron meteorites and H chondrites. In: 73rd Annual Meeting of the Meteoritical Society, 26-30 Jul 2010, New York, NY, US.
Anand, M. and Parkinson, I. J. (2010). Variations in light lithophile elements (Li, B, Be) and lithium isotopes in martian pyroxenes and olivines: roles of degassing and diffusion. In: 41st Lunar and Planetary Science Conference, 1-5 Mar 2010, Houston, TX, US.
Hallis, L. J.; Greenwood, R. C.; Anand, M.; Russell, S. S.; Miller, M. F. and Franchi, I. A. (2009). Oxygen isotopic composition of mare-basalts: magma ocean differentiation and source heterogeneity. In: 72nd Annual Meteoritical Society Meeting, 13-18 Jul 2009, Nancy, France.
Grande, M.; Maddison, B. J.; Howe, C. J.; Kellett, B. J.; Sreekumar, P.; Huovelin, J.; Crawford, I. A.; Duston, C. L.; Smith, D.; Anand, M.; Bhandari, N.; Cook, A.; Fernandes, V.; Foing, B.; Gasnaut, O.; Goswami, J. N.; Holland, A.; Joy, K. H.; Kochney, D.; Lawrence, D.; Maurice, S.; Okada, T.; Narendranath, S.; Pieters, C.; Rothery, D.; Russell, S. S.; Shrivastava, A.; Swinyard, B.; Wilding, M. and Wieczorek, M. (2009). The C1XS X-Ray Spectrometer on Chandrayaan-1. Planetary and Space Science, 57(7) pp. 717–724.
Liu, Y.; Zhang, A.; Thaisen, K. G.; Anand, M. and Taylor, L. A. (2009). Mineralogy and petrology of a lunar highland breccia meteorite, MIL 07006. In: 40th Lunar and Planetary Science Conference, 23-27 Mar 2009, Houston.
Gronstal, A.; Pearson, V.; Kappler, A.; Dooris, C.; Anand, M.; Poitrasson, F.; Kee, T. P. and Cockell, C. S. (2009). Laboratory experiments on the weathering of iron meteorites and carbonaceous chondrites by iron-oxidizing bacteria. Meteoritics and Planetary Science, 44(2) pp. 233–247.
Crawford, I. A.; Joy, K. H.; Kellett, B. J.; Grande, M.; Anand, M.; Bhandari, N.; Cook, A. C.; d’Uston, L.; Fernandes, V. A.; Gasnault, O.; Goswami, J.; Howe, C. J.; Huovelin, J.; Koschny, D.; Lawrence, D. J.; Maddison, B. J.; Maurice, S.; Narendranath, S.; Pieters, C.; Okada, T.; Rothery, D. A.; Russell, S. S.; Sreekumar, P.; Swinyard, B.; Wieczorek, M. and Wilding, M. (2009). The scientific rationale for the C1XS X-ray spectrometer on India’s Chandrayaan-1 mission to the moon. Planetary and Space Science, 57(7) pp. 725–734.
Kee, T.; Gronstal, A.; Pearson, V.; Kappler, A.; Dooris, C.; Anand, M.; Poitrasson, F. and Cockell, C. (2009). Laboratory experiments on the weathering of iron meteorites and carbonaceous chondrites by iron-oxidising bacteria. In: Goldschmidt, 21-26 Jun 2009, Davos, Switzerland.
Zambardi, T.; Poitrasson, F.; Quitte, G. and Anand, M. (2009). Silicon isotope variations in the Earth and meteorites. In: Challenges to Our Volatile Planet : Goldschmidt 2009, 21-26 Jun 2009, Davos, Switzerland.
Misra, K. C.; Anand, M. and Paul, D. K. (2008). Metasomatized Kyanite-eclogites xenoliths from a southern Indian kimberlite. In: 9th International Kimberlite Conference, 10-15 Aug 2008, Frankfurt, Germany.
Anand, M.; Misra, K. C.; Paul, D. K.; Ishikawa, A. and Pearson, D. G. (2008). Trace-element signatures of kyanite-eclogites from a southern Indian kimberlite. In: 9th International Kimberlite Conference, 10-15 Aug 2008, Frankfurt.
Anand, M.; Terada, K.; Osborne, I.; Chalapathi Rao, N. V. and Dongre, A. (2008). SHRIMP U-Pb dating of Perovskites from southern Indian kimberlites. In: 9th International Kimberlite Conference, 10-15 Aug 2008, Frankfurt, Germany.
Joy, K. H.; Crawford, I. A.; Anand, M.; Greenwood, R. C.; Franchi, I. A. and Russell, S. S. (2008). The petrology and geochemistry of Miller Range 05035: A new lunar gabbroic meteorite. Geochimica et Cosmochimica Acta, 72(15) pp. 3822–3844.
Terada, Kentaro; Sasaki, Yu; Anand, Mahesh; Sano, Yuji; Taylor, Lawrence A. and Horie, Kenji (2008). Uranium–lead systematics of low-Ti basaltic meteorite Dhofar 287A: Affinity to Apollo 15 green glasses. Earth and Planetary Science Letters, 270(1-2) pp. 119–124.
Terada, K.; Sasaki, Y.; Oka, Y.; Tanabe, A.; Fujikawa, N.; Tanikawa, S.; Sano, Y.; Anand, M. and Taylor, L. A. (2008). Ion microprobe U-Pb dating of phosphates in lunar basaltic meteorites. In: 39th Lunar and Planetary Science Conference, 10-14 Mar 2008, League City, Texas, USA.
Anand, M. and Terada, K. (2008). Timing and duration of mare basalt magmatism: constraints from lunar samples. In: 39th Lunar and Planetary Science Conference, 10-14 Mar 2008, Houston, Texas, USA.
Anand, M.; James, S.; Greenwood, R. C.; Johnson, D.; Franchi, I. A. and Grady, M. M. (2008). Mineralogy and Geochemistry of Shergottite RBT 04262. In: 39th Lunar and Planetary Science Conference (Lunar and Planetary Science XXXIX), 10-14 Mar 2008, League City, Texas, USA, p. 2173.
Hallis, L. J.; Anand, M.; Russell, S. S.; Terada, K.; Rogers, N. and Hammond, S. (2008). Mineralogical and geochemical investigations of mare basalts from the Apollo Collection. In: 71st Annual Meteoritical Society Meeting, 28 Jul - 01 Aug 2008, Matsue, Japan.
Hutchens, Elena; Williamson, Ben J.; Anand, Mahesh; Ryan, Mary P. and Herrington, Richard J. (2007). Discriminating bacterial from electrochemical corrosion using Fe isotopes. Corrosion Science, 49(10) pp. 3759–3764.
Anand, M.; Poitrasson, F. and Grady, M. M. (2007). Fe isotopic composition of inner solar system materials: The fit of Martian basalts and minerals. In: Goldschmidt Conference 2007, 19-25 Aug 2007, Cologne, Germany.
Terada, K.; Sasaki, Y.; Anand, M.; Joy, K. H. and Sano, Y. (2007). Uranium–lead systematics of phosphates in lunar basaltic regolith breccia, Meteorite Hills 01210. Earth and Planetary Science Letters, 259(1-2) pp. 77–84.
Joy, K. H.; Anand, M.; Crawford, I. A. and Russell, S. S. (2007). Petrography and bulk composition of Miller Range 05035: a new lunar VLT gabbro. In: 38th Lunar and Planetary Science Conference, 12-16 Mar 2007, Houston, USA.
Hallis, L. H.; Joy, K. H.; Anand, M. and Russell, S. S. (2007). Compositional analysis of the very-low-Ti mare basalt component of NWA 773 and comparison with low-Ti basalts, LAP03632 and LAP02436. In: 38th Lunar and Planetary Science Conference, 12-16 Mar 2007, Houston, USA.
Grady, Monica M.; Anand, M.; Gilmour, M. A.; Watson, J. S. and Wright, I. P. (2007). Alteration of the Nakhlite Lava Pile: was water on the surface, seeping down, or at depth, percolating up? Evidence (such as it is) from carbonates. In: 38th Lunar and Planetary Science Conference, 12-16 Mar 2007, League City, Texas, USA.
Chalapathi Rao, N.V.; Burgess, R.; Anand, Mahesh and Mainkar, D. (2007). 40Ar - 39Ar Dating of the Kodomali Pipe, Bastar Craton, India: A Pan-African (491±11 Ma) Age of Diamondiferous Kimberlite Emplacement. Journal of Geological Society of India, 69 pp. 539–545.
Day, James M. D.; Floss, Christine; Taylor, Lawrence A.; Anand, Mahesh and Patchen, Allan D. (2006). Evolved mare basalt magmatism, high Mg/Fe feldspathic crust, chondritic impactors, and the petrogenesis of Antarctic lunar breccia meteorites Meteorite Hills 01210 and Pecora Escarpment 02007. Geochimica et Cosmochimica Acta, 70(24) pp. 5957–5989.
Anand, M.; Russell, S.S.; Blackhurst, R.L. and Grady, M.M. (2006). Searching for signatures of life on Mars: an Fe isotope perspective. Philosophical Transactions of the Royal Society B: Biological Sciences, 361(1474) pp. 1715–1720.
Anand, Mahesh; Taylor, Lawrence A.; Floss, Christine; Neal, Clive R.; Terada, Kentaro and Tanikawa, Shiho (2006). Petrology and geochemistry of LaPaz Icefield 02205: a new unique low-Ti mare-basalt meteorite. Geochimica et Cosmochimica Acta, 70(1) pp. 246–264.
Anand, M.; Burgess, R.; Fernandes, V. and Grady, M. M. (2006). Ar-Ar age and halogen characteristics of nakhlite MIL 03346: records of crustal processes on Mars. In: 69th Annual Meeting of the Meteoritical Society, 6-11 Aug 2006, Zurich, Switzerland.
Morlok, A.; Köhler, M.; Anand, M.; Kirk, C. and Grady, M. M. (2006). Dust from collisions in circumstellar disks: similarities to meteoritic materials? In: 69th Annual Meeting of the Meteoritical Society, 6-11 Aug 2006, Zurich, Switzerland.
Anand, M.; Russell, S. S.; Blackhurst, R. and Grady, M. M. (2006). Fe isotopic composition of Martian meteorites and some terrestrial analogues. In: 37th Lunar and Planetary Science Conference, 13-17 Mar 2006, Houston, Texas, USA.
Morlok, A.; Anand, M. and Grady, M. M. (2006). Dust from collisions: mid-infrared absorbance spectroscopy of Martian meteorites. In: 37th Lunar and Planetary Science Conference, 13-17 Mar 2006, Houston, Texas, USA.
Anand, M.; Russell, S. S.; Mullane, E. and Grady, M. M. (2005). Fe isotopic composition of Martian meteorites. In: 36th Lunar and Planetary Science Conference, 14-18 Mar 2005, Houston, Texas, USA.
Anand, Mahesh; Taylor, Lawrence A.; Misra, Kula C.; Carlson, William D. and Sobolev, Nikolai V. (2004). Nature of diamonds in Yakutian eclogites: views from eclogite tomography and mineral inclusions in diamonds. Lithos, 77(1-4) pp. 333–348.
Promprated, Prinya; Taylor, Lawrence A.; Anand, Mahesh; Floss, Christine; Sobolev, Nikolai P. and Pokhilenko, Nikolai V. (2004). Multiple-mineral inclusions in diamonds from the Snap Lake/King Lake kimberlite dike, Slave craton, Canada: a trace-element perspective. Lithos, 77(1-4) pp. 69–81.
Anand, Mahesh; Taylor, Lawrence A.; Nazarov, Mikhail A.; Shu, J.; Mao, H.-K. and Hemley, Russell J. (2004). Space weathering on airless planetary bodies: clues from the lunar mineral hapkeite. PNAS, 101(18) pp. 6847–6851.
Cahill, J.T.; Floss, C.; Anand, M.; Taylor, L.A.; Nazarov, M.A. and Cohen, B.A. (2004). Petrogenesis of Lunar Highlands Meteorites: Dhofar 025, Dhofar 081, Dar al Gani 262, and Dar al Gani 400. Meteoritics and Planetary Science, 39(4) pp. 503–529.
Taylor, Lawrence A. and Anand, Mahesh (2004). Diamonds: time capsules from the Siberian Mantle. Chemie Der Erde - Geochemistry, 64(1) pp. 1–74.
Misra, Kula C.; Anand, Mahesh; Taylor, Lawrence A. and Sobolev, Nikolai V. (2004). Multi-stage metasomatism of diamondiferous eclogite xenoliths from the Udachnaya kimberlite pipe, Yakutia, Siberia. Contributions to Mineralogy and Petrology, 146(6) pp. 696–714.
Anand, M.; Gibson, S.A; Subbarao, K.V; Kelley, S.P and Dickin, A.P (2003). Early Proterozoic Melt Generation Processes beneath the Intra-cratonic Cuddapah Basin, Southern India. Journal of Petrology, 44(12) pp. 2139–2171.
Anand, Mahesh; Taylor, Lawrence A.; Neal, Clive R.; Snyder, Gregory A.; Patchen, Allan; Sano, Yuji and Terada, Kentaro (2003). Petrogenesis of lunar meteorite EET 96008. Geochimica et Cosmochimica Acta, 67(18) pp. 3499–3518.
Taylor, Lawrence A.; Anand, Mahesh; Promprated, Prinya; Floss, Christine and Sobolev, Nikolai V. (2003). The significance of mineral inclusions in large diamonds from Yakutia, Russia. American Mineralogist, 88(5-6) pp. 912–920.
Anand, Mahesh; Taylor, Lawrence A.; Misra, Kula C.; Demidova, Svetlana I. and Nazarov, Mikhail A. (2003). KREEPy lunar meteorite Dhofar 287A: A new lunar mare basalt. Meteoritics and Planetary Science, 38(4) pp. 485–499.
Demidova, S.I.; Nazarov, M.A.; Anand, Mahesh and Taylor, L.A. (2003). Lunar regolith breccia Dhofar 287B: A record of lunar volcanism. Meteoritics and Planetary Science, 38(4) pp. 501–514.
Taylor, Lawrence A.; Snyder, Gregory A.; Keller, Randall; Remley, David A.; Anand, Mahesh; Wiesli, Rene; Valley, John and Sobolev, Nikolai V. (2003). Petrogenesis of group A eclogites and websterites: evidence from the Obnazhennaya kimberlite, Yakutia. Contributions to Mineralogy and Petrology, 145(4) pp. 424–443.
This list was generated on Fri Apr 26 00:21:10 2019 BST.
The pathogenic yeast Cryptococcus neoformans causes life-threatening meningoencephalitis in individuals suffering from HIV/AIDS. The cyclic-AMP/protein kinase A (PKA) signal transduction pathway regulates the production of extracellular virulence factors in C. neoformans, but the influence of the pathway on the secretome has not been investigated. In this study, we performed quantitative proteomics using galactose-inducible and glucose-repressible expression of the PKA1 gene encoding the catalytic subunit of PKA to identify regulated proteins in the secretome.
The proteins in the supernatants of cultures of C. neoformans were precipitated and identified using liquid chromatography-coupled tandem mass spectrometry. We also employed multiple reaction monitoring in a targeted approach to identify fungal proteins in samples from macrophages after phagocytosis of C. neoformans cells, as well as from the blood and bronchoalveolar fluid of infected mice.
We identified 61 secreted proteins and found that changes in PKA1 expression influenced the extracellular abundance of five proteins, including the Cig1 and Aph1 proteins with known roles in virulence. We also observed a change in the secretome profile upon induction of Pka1 from proteins primarily involved in catabolic and metabolic processes to an expanded set that included proteins for translational regulation and the response to stress. We further characterized the secretome data using enrichment analysis and by predicting conventional versus non-conventional secretion. Targeted proteomics of the Pka1-regulated proteins allowed us to identify the secreted proteins in lysates of phagocytic cells containing C. neoformans, and in samples from infected mice. This analysis also revealed that modulation of PKA1 expression influences the intracellular survival of cryptococcal cells upon phagocytosis.
Overall, we found that the cAMP/PKA pathway regulates specific components of the secretome including proteins that affect the virulence of C. neoformans. The detection of secreted cryptococcal proteins from infected phagocytic cells and tissue samples suggests their potential utility as biomarkers of infection. The proteomics data are available via ProteomeXchange with identifiers PXD002731 and PASS00736.
Cryptococcus neoformans is an opportunistic, yeast-like fungus that is a significant threat to immunocompromised individuals such as patients with HIV/AIDS [1, 2]. The ability of C. neoformans to cause disease depends on the production of virulence factors including a polysaccharide capsule, melanin deposition in the cell wall, the ability to grow at 37 °C, and the secretion of extracellular enzymes [3–8]. Extracellular enzymes with roles in virulence include phospholipases, which hydrolyze ester bonds and aid in the degradation and destabilization of host cell membranes and cell lysis, and urease, which hydrolyzes urea to ammonia and carbamate, inducing a localized increase in pH [9–12]. Proteinases may also cause tissue damage, provide nutrients to the pathogen and facilitate migration to the central nervous system [13–15]. In general, the secretion of extracellular enzymes is important for fungal survival within the host but a comprehensive investigation of the secretome and its regulation by the cyclic-AMP/Protein Kinase A (PKA) signal transduction pathway has not been performed for C. neoformans.
The cAMP/PKA pathway regulates capsule production, melanin formation, mating, and virulence in C. neoformans [16–20]. Components of the pathway include a Gα protein (Gpa1), adenylyl cyclase (Cac1), adenylyl cyclase-associated protein (Aca1), a candidate receptor (Gpr4), phosphodiesterases (Pde1 and Pde2), and the PKA catalytic (Pka1, Pka2) and regulatory (Pkr1) subunits. In response to environmental signals, including exogenous methionine and nutrient starvation, the G-protein-coupled receptor (GPCR) Gpr4 undergoes a conformational change to activate Cac1 and subsequently stimulate the production of cAMP. Mutations in genes encoding the Gpa1, Cac1, Aca1, and Pka1 proteins result in reduced formation of capsule and melanin, as well as sterility and attenuated virulence in a mouse model of cryptococcosis [16, 21]. In particular, Pka1 is a key regulator of virulence in C. neoformans. In contrast, disruption of the gene encoding Pkr1 results in enlargement of the capsule and hypervirulence.
Previous transcriptional profiling experiments compared a wild-type strain with pka1Δ and pkr1Δ mutants of C. neoformans, and identified differences in transcript levels for genes related to cell wall synthesis, transport (e.g., iron uptake), the tricarboxylic acid cycle, and glycolysis. Differential expression patterns were also observed for genes encoding ribosomal proteins, stress and chaperone functions, secretory pathway components and phospholipid biosynthetic enzymes. Specifically, loss of PKA1 influenced the expression of genes involved in secretion, and Pka1 was hypothesized to influence capsule formation by regulating expression of secretory pathway components that control the export of capsular polysaccharide to the cell surface. Additionally, the secretion inhibitors brefeldin A, nocodazole, monensin, and NEM reduced capsule size, a phenotype similar to that observed in a pka1 mutant. In general, the mechanisms and components required for the export of capsule polysaccharide and other virulence factors in C. neoformans are poorly understood. Beyond the role of PKA, other studies have examined exocytosis functions (Sec6, Sec14), the secretion of phospholipases, and the involvement of extracellular vesicles [23–28]. Additionally, O’Meara et al. (2010) recently demonstrated that PKA influences capsule attachment via phosphorylation of the pH-responsive transcription factor Rim101, a key regulator of cell wall functions.
The role of PKA in secretion in C. neoformans has also been examined with strains carrying galactose-inducible and glucose-repressible versions of PKA1 and PKR1 constructed by inserting the GAL7 promoter upstream of the genes. Elevated Pka1 activity, stimulated by growth of the P_GAL7::PKA1 strain in galactose-containing media, was found to influence capsule thickness, cell size, ploidy, and vacuole enlargement. The authors also showed that Pka1 activity was required for wild-type levels of melanization and laccase activity, and influenced the correct localization of laccase. The ability to regulate expression of PKA1 and, subsequently, the activity of Pka1, is a powerful tool for investigating the mechanisms of its influence on the secretion of virulence factors and secretory pathway components.
In this study, we used the strain with galactose-inducible and glucose-repressible expression of PKA1 to investigate the influence of Pka1 on the secretome using quantitative proteomics. We identified 61 different secreted proteins and found that Pka1 regulated the extracellular abundance of five. These proteins included three enzymes (α-amylase, acid phosphatase, and glyoxal oxidase), the Cig1 protein (cytokine-inducing glycoprotein) associated with virulence and heme uptake, and a novel protein containing a carbohydrate-binding domain (CNAG_05312). We also observed a change in the secretome profile under Pka1-inducing conditions from proteins involved primarily in catabolic and metabolic processes to an expanded set that included proteins for translational regulation and the response to stress. Enrichment analysis of our Pka1-influenced secretome data compared to the whole genome showed over-representation of genes associated with a broad spectrum of processes including metabolic and catabolic processing. Although no enrichment was observed between our secretome data and the Fungal Secretome KnowledgeBase (FunSecKB), a comparison of GO terms between the data sets showed the majority of our identified proteins to be represented in the FunSecKB. Next, we exploited our secretome data using a targeted proteomics approach to identify potential biomarkers of cryptococcal infection. Multiple Reaction Monitoring (MRM) in the presence of stable isotope dilution (SID) allows for identification and quantification of specific peptides in a sample. Specifically, we were able to identify Pka1-regulated proteins of C. neoformans in host samples including blood, bronchoalveolar lavage fluid, and infected macrophage lysates. Overall, our study reveals that the cAMP/PKA pathway regulates specific components of the secretome including the Cig1 and Aph1 proteins that contribute to virulence in C. neoformans.
Given the virulence defect of a pka1 mutant, we hypothesized that Pka1 influences the secretion of proteins associated with the virulence and survival of C. neoformans in the host. To test this idea, we quantitatively identified proteins secreted by C. neoformans in the context of regulated expression of PKA1. For our initial analysis, we collected culture supernatants of WT and P_GAL7::PKA1 strains grown under Pka1-repressed (glucose) and Pka1-induced (galactose) conditions at 16, 48, 72, and 120 h post-inoculation (hpi), and analyzed the samples using quantitative mass spectrometry. The analysis of these supernatant samples resulted in the identification of 164 (54 quantifiable) and 207 (83 quantifiable) proteins under Pka1-repressed and Pka1-induced conditions, respectively (see Additional file 1: Table S1; Additional file 2: Table S2). As shown in Table 1, 23 proteins were identified and quantified under Pka1-repressed and Pka1-induced conditions at the specified time-points. We found that none of the changes in protein abundance between the two conditions were statistically significant (p > 0.05) and therefore concluded that Pka1 did not influence the abundance of any of the observed proteins under the conditions tested. However, upon comparison of the unique proteins identified under either Pka1-repressed or Pka1-induced conditions, using Gene Ontology (GO) term biological classifications at all time points, we were able to observe overall changes in the secretome profiles under the influence of Pka1 (Fig. 1). Additional differentially expressed proteins may be present in the samples, but we were unable to measure their abundance and they were therefore not included for further analysis.
Under Pka1-repressed conditions, the majority of secreted proteins were associated with catabolic and metabolic (33 %), unknown (20 %), and hypothetical (20 %) processes (totaling 73 %), with additional proteins associated with transport (8 %), oxidation-reduction processes (4 %), dephosphorylation (4 %), proteolysis (4 %), glycolysis (4 %), and regulation of transcription (3 %). Conversely, a change in the secretome profile was observed under Pka1-induced conditions. Here, we again observed the majority of proteins to be associated with catabolic and metabolic (26 %), unknown (19 %), and hypothetical (17 %) processes (totaling 62 %). A slight decline was found for proteins associated with transport (from 8 to 6 %), oxidation-reduction processes (from 4 to 3 %), dephosphorylation (from 4 to 2 %), proteolysis (from 4 to 3 %), and regulation of transcription (from 3 to 0 %). However, a greater emphasis was found for proteins associated with glycolysis (from 4 to 6 %), response to stress (from 0 to 8 %), translation (from 0 to 7 %), and nucleosome assembly (from 0 to 3 %). Although our secretome analysis at specific time points did not identify Pka1-regulated proteins, a change toward the secretion of proteins for glycolysis, translational regulation, nucleosome assembly, and the response to stress was observed upon induction of PKA1 expression.
Given that we identified secreted proteins from strains with modulated Pka1 activity, but did not observe any proteins whose abundance was directly regulated by Pka1, we extended our analysis to examine protein secretion at an intermediate time point of 96 hpi, and we used an alternative, less stringent method for protein precipitation (EtOH/acetate). We chose an end-point collection time of 96 hpi based on our coverage of a range of other time points in the previous analysis and because this time was sufficient for the culture to reach stationary phase and to accumulate proteins in the extracellular environment. Additionally, because we did not observe changes in protein abundance under regulation of Pka1 following the time-point analysis, we used the alternative protein precipitation method in an attempt to obtain a more comprehensive view of the secretome. We collected culture supernatants of WT and P_GAL7::PKA1 strains grown under Pka1-repressed (glucose) and Pka1-induced (galactose) conditions at 96 hpi and analyzed the samples using quantitative mass spectrometry. Similar trends in protein abundance were observed for the majority of proteins in both experimental approaches (EtOH and TCA/acetone precipitation) (see Additional file 3: Table S3). Although the variability of the time-point analysis was relatively high, the reproducibility observed from the end-point analysis suggested that collecting the samples at different time-points impacted the protein abundance and contributed to the observed variability. This impact may be associated with culture sampling, as well as changes in capsule production during the early- to mid-log growth phases of the fungal cultures. We identified 61 proteins under Pka1-repressed conditions of which 34 were successfully dimethyl-labeled and quantified (Table 2; see Additional file 4: Table S4).
These 34 proteins covered a broad spectrum of biological classifications (17 categories) for GO terms, including proteins associated with catabolic and metabolic processes, ubiquitination, transport, dephosphorylation, glycolysis, oxidation-reduction, translation, proteolysis, and the response to stress. Under Pka1-induced conditions, we identified 38 proteins, of which 21 were successfully dimethyl-labeled and quantified (Table 3; see Additional file 5: Table S5). These 21 proteins covered 11 biological classifications for GO terms and included proteins associated with catabolic and metabolic processes, along with ubiquitination, transport, dephosphorylation, oxidation-reduction, proteolysis, and the response to stress. In total, 17 proteins were present under both Pka1-repressed and Pka1-induced conditions. A comparison of changes in abundance under Pka1-repressed and Pka1-induced conditions of these 17 proteins revealed that five showed statistically significant differences (p-value < 0.05, Student’s t-test) in abundance in response to regulation of Pka1 (Fig. 2). We concluded that the extracellular abundance of these five proteins was influenced by PKA and we focused our subsequent analysis on these proteins. Under Pka1-induced conditions, a cytokine-inducing glycoprotein (Cig1), an α-amylase, a glyoxal oxidase, and a novel protein (CNAG_05312) each showed an increase in abundance, whereas an acid phosphatase (Aph1) showed a decrease in abundance. Taken together, these findings suggest that Pka1 regulates the extracellular abundance of specific proteins secreted by C. neoformans.
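The per-protein comparison described above reduces to a two-sample Student's t-test on replicate abundance measurements between the two conditions. The following is a minimal sketch using made-up normalized abundances for a single hypothetical protein, not the study's actual data or analysis pipeline:

```python
import math

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical normalized abundances, three replicates per condition
repressed = [1.02, 0.95, 1.08]
induced = [2.10, 1.85, 2.25]

t = students_t(induced, repressed)
t_crit = 2.776  # two-tailed critical value, alpha = 0.05, df = 4
print(abs(t) > t_crit)  # whether the toy difference is significant
```

With these toy numbers the statistic comfortably exceeds the critical value; in the study, only 5 of the 17 shared proteins crossed the p < 0.05 threshold.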
Based on our identification and quantification of 192 proteins in the secretome of C. neoformans, we next sought to classify the corresponding genes according to their GO terms of biological process, cellular component, and molecular function. Our goal was to assess whether subsets of genes showed significant over-representation relative to all genes in C. neoformans. To perform the enrichment analysis, all unique proteins identified under Pka1-repressed conditions were combined into a single data set as were proteins identified under Pka1-induced conditions. As shown in Fig. 3, the identified secreted proteins under Pka1-repressed conditions were enriched in 15 biological categories, with the most significant enrichment associated with carbohydrate metabolic process, catabolic process, generation of precursor metabolites and energy, organic substance metabolic process, and primary metabolic process. Under Pka1-induced conditions, enrichment was only associated with the five most significantly enriched categories under Pka1-repressed conditions. Classification by cellular components showed the most significant enrichment associated with the cytoplasm under both conditions, which may be an artifact of the classification process or indicative of the location of protein synthesis (see Additional file 6: Figure S1), whereas classification by molecular function showed no enrichment.
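GO over-representation of the kind described above is commonly assessed with a one-tailed hypergeometric test: given N genes in the genome, K annotated to a category, and a sample of n identified proteins containing k category members, compute P(X ≥ k). A sketch with hypothetical counts (the totals below are illustrative, not taken from the C. neoformans annotation):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k): probability of drawing at least k category members
    in a sample of n from a population of N containing K members."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical: 6,000 genes in the genome, 300 annotated to
# "carbohydrate metabolic process"; 60 secreted proteins, 12 in the category
# (expected by chance: 60 * 300 / 6000 = 3).
p = hypergeom_pvalue(6000, 300, 60, 12)
print(p < 0.05)
```

Observing 12 hits where 3 are expected gives a very small p-value, i.e. the category would be called enriched; a real analysis would also correct for testing many categories at once.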
Our gene sets were also compared to all reported secreted proteins in the Fungal Secretome Knowledge Base (FunSecKB) for C. neoformans strain JEC21 [31, 32]. The analysis showed no significant enrichment; however, similarities among the identified GO terms were observed (Fig. 4). Forty-seven GO term categories were shared between the FunSecKB and our identified proteins under Pka1-repressed and Pka1-induced conditions; the greatest number of proteins being associated with metabolic processes. Twenty-five categories were represented only in our secretome data, and one category (GO:0009607; response to biotic stimulus) was represented only in the FunSecKB. Upon comparison of GO term categories for cellular components, 16 categories were shared between the FunSecKB and our identified proteins under Pka1-repressed and Pka1-induced conditions; the greatest number of proteins being associated with the cell, cytoplasm, and intracellular categories (see Additional file 7: Figure S2). Upon comparison of GO term categories for molecular function, 17 categories were shared between the FunSecKB and our identified proteins under Pka1-repressed and Pka1-induced conditions; the greatest number of proteins associated with binding as well as enzyme activity (see Additional file 8: Figure S3). Taken together, the enrichment analysis of our secretome data under modulation of Pka1 activity compared to the whole genome showed over-representation of genes associated with a broad spectrum of processes including metabolic and catabolic processing. Although no enrichment was observed between our secretome data and the FunSecKB, a comparison of GO terms between the data sets showed all but one of our identified proteins to be represented in the FunSecKB.
We next examined the secreted proteins, under modulation of Pka1 activity, for the presence of predicted signal peptides and GPI anchors. Specifically, we used SignalP 4.1, Signal-3L, and Phobius for the prediction of protein extracellular location based on the presence or absence of N-terminal signal peptides. The presence of a signal peptide suggests conventional secretion versus potential non-conventional export if a signal peptide is absent. Additionally, we used GPI-SOM to predict the presence or absence of a GPI anchor on proteins, indicative of plasma membrane association, which may or may not be capable of dissociation and subsequent protein secretion. Of the 61 proteins used for this analysis, 14 had both an N-terminal signal peptide and a GPI anchor, 17 had only an N-terminal signal peptide, one had a GPI anchor but no N-terminal signal peptide, and 29 proteins had neither an N-terminal signal peptide nor a GPI anchor (Table 4). Taken together, these results suggest that C. neoformans may employ a non-conventional secretory pathway for regulation of part of its secretome, including potential protein secretion via vesicle export.
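The four-way classification summarized in Table 4 amounts to tallying two binary predictions per protein. A minimal sketch with placeholder prediction flags (the per-protein calls below are invented for illustration, not actual SignalP/GPI-SOM output from the study):

```python
from collections import Counter

# Hypothetical per-protein prediction flags: (signal_peptide, gpi_anchor).
predictions = {
    "protein_a": (True, True),    # conventional secretion, membrane-associated
    "protein_b": (True, False),   # conventional secretion
    "protein_c": (True, False),
    "protein_d": (False, True),   # GPI anchor without a signal peptide
    "protein_e": (False, False),  # candidate for non-conventional export
    "protein_f": (False, False),
}

def secretion_class(signal_peptide, gpi_anchor):
    """Map the two binary predictions onto the four classes of Table 4."""
    if signal_peptide and gpi_anchor:
        return "SP+GPI"
    if signal_peptide:
        return "SP only"
    if gpi_anchor:
        return "GPI only"
    return "neither"

counts = Counter(secretion_class(sp, gpi) for sp, gpi in predictions.values())
print(dict(counts))  # {'SP+GPI': 1, 'SP only': 2, 'GPI only': 1, 'neither': 2}
```

Applied to the study's 61 proteins, the same tally yields the 14/17/1/29 split reported above, with the "neither" class flagging candidates for non-conventional export.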
Based on our identification and quantification of five secreted proteins regulated by Pka1 in C. neoformans, we evaluated whether transcript levels were also influenced by Pka1 regulation and whether there was a correlation with the observed regulation of protein abundance. Specifically, we performed qRT-PCR on RNA collected at 16 and 96 hpi from cells grown in Pka1-repressed and Pka1-induced conditions for the WT and P_GAL7::PKA1 strains, and compared the observed values to our quantitative proteomic results at 96 hpi. Figure 5 summarizes the RNA expression levels at 16 hpi and 96 hpi and protein abundance at 96 hpi for Cig1, the acid phosphatase Aph1, an α-amylase, a glyoxal oxidase, and a novel protein (CNAG_05312). Cig1 and the novel protein both showed down-regulation of their transcripts under Pka1-repressed conditions at 16 and 96 hpi, followed by minimal or slight up-regulation with induced Pka1 activity. α-Amylase and glyoxal oxidase showed an initial peak in transcript levels at 16 hpi, followed by minimal change or a decrease in RNA levels at 96 hpi under Pka1-repressed conditions, and the transcript levels decreased in response to Pka1 induction. Acid phosphatase showed elevated transcript levels upon PKA1 repression at both time points, compared to a drop in RNA levels at 16 hpi or no change at 96 hpi upon induction of PKA1. In general, Pka1 appears to positively regulate the transcript levels of Cig1 and the novel protein (CNAG_05312), and to negatively regulate the transcript levels of the other three proteins. Taken together, our results suggest that although Pka1 activity influences the transcript levels and extracellular abundance of the five proteins, a correlation between transcript and protein levels was not always observed, and this was particularly notable for glyoxal oxidase.
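Relative transcript levels from qRT-PCR are typically computed with the 2^(-ΔΔCt) method. The study does not state its exact calculation, so the following is a generic sketch with hypothetical Ct values and a hypothetical reference gene:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^(-ddCt) relative expression: target normalized to a reference gene,
    in a sample condition relative to a calibrator condition."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** -ddct

# Hypothetical Ct values: a target gene under Pka1-induced conditions (sample)
# vs Pka1-repressed conditions (calibrator), normalized to a reference gene.
fold_change = relative_expression(ct_target=22.0, ct_ref=18.0,
                                  ct_target_cal=25.0, ct_ref_cal=18.0)
print(fold_change)  # 8.0, i.e. eight-fold up-regulation in this toy case
```

A fold change above 1 indicates up-regulation in the sample condition; the lack of a strict transcript-to-protein correlation noted above is why both measurements were compared side by side in Fig. 5.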
The differences may indicate additional levels of potential influence of Pka1 beyond transcriptional regulation, including differences in mRNA versus protein stability, the timing of expression and the regulation of protein export. For example, more detailed studies will be needed to examine the timing of intracellular and extracellular accumulation of the glyoxal oxidase protein relative to transcription of the gene.
Based on our identification of five Pka1-regulated proteins, including two with roles in virulence, we hypothesized that these proteins would be secreted during infection and that they might be potentially useful biomarkers of cryptococcosis. To test this idea, we used Multiple Reaction Monitoring (MRM), a powerful and targeted proteomics approach for the relative quantitative measurement of target proteins. In the presence of an internal standard, a stable isotope-labeled peptide, the amount of natural protein can be measured by comparing the signals to the labeled species. The isotopically labeled, proteotypic peptides terminate with C-terminal heavy arginine or lysine (C-term Arg U-13C6;U-15N4 or Lys U-13C6;U-15N2). In principle, the stable isotopes have the same physicochemical properties as the natural peptides and differ only by mass, resulting in co-elution of the peptides. However, studies have suggested that in the presence of complex biological samples, such as blood or serum, the retention times between the peptides can shift, impacting the co-elution patterns. We specifically applied MRM to detect Cig1, Aph1, glyoxal oxidase, α-amylase, and the novel protein (CNAG_05312) in samples from a macrophage-like cell line and from infected mice.
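The quantification step in SID-MRM rests on a simple ratio: the amount of natural ("light") peptide equals the light-to-heavy peak-area ratio multiplied by the known amount of spiked heavy standard. A sketch with hypothetical peak areas and spike amount:

```python
def sid_quantify(area_light, area_heavy, spiked_fmol):
    """Estimate the amount of natural ('light') peptide from the ratio of
    light to heavy peak areas, given a known spike of heavy standard."""
    return area_light / area_heavy * spiked_fmol

# Hypothetical integrated peak areas from one MRM transition,
# with a 100 fmol spike of the heavy-labeled standard.
amount = sid_quantify(area_light=2.4e5, area_heavy=8.0e5, spiked_fmol=100.0)
print(amount)  # ~30 fmol of natural peptide in the injected sample
```

This is the logic behind the fmol values reported below for the macrophage-lysate, BAL, and blood samples; in practice several transitions per peptide are monitored and the retention-time shifts noted above must be accounted for when integrating the peaks.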
The samples from the J774A.1 macrophage-like cell line came from cells inoculated with WT and P_GAL7::PKA1 strains under Pka1-repressed (DMEM medium supplemented with glucose) and Pka1-induced (DMEM medium supplemented with galactose) conditions. Intracellular uptake at 2 hpi showed a significant difference in the number of colony forming units (CFUs) per macrophage between the WT and P_GAL7::PKA1 strains under Pka1-repressed conditions, but not under induced conditions (Fig. 6a). This difference is most likely due to the absence of the capsule for the Pka1-repressed cells, a phenotype that enhances phagocytosis. By 24 hpi, rates of intracellular fungal cells per macrophage were significantly different for WT and P_GAL7::PKA1 strains under both conditions (Fig. 6c). Specifically, intracellular rates of infection at 24 hpi in repressed conditions were 11.49 ± 2.11 % for the WT and 55.67 ± 12.76 % for P_GAL7::PKA1 strains. However, intracellular rates under induced conditions were 9.06 ± 2.91 % for WT and 1.97 ± 0.82 % for P_GAL7::PKA1 strains. Importantly, intracellular uptake rates showed no differences between WT, P_GAL7::PKA1, and the pka1Δ strains under controlled growth conditions (DMEM – high glucose (0.45 %)) at 2 and 24 hpi (Fig. 6b, d). These results indicate that modulation of PKA1 expression influences the intracellular survival of cryptococcal cells.
MRM on macrophage lysates infected with fungal cells at 24 hpi identified the Pka1-regulated and secreted proteins α-amylase and glyoxal oxidase in both induced and repressed conditions. Figure 7 shows representative chromatographic co-elution patterns of the isotopically-labeled and natural peptides, which allowed for relative quantification of peptides in the replicates of the experiment. For both enzymes, the highest amount of protein was detected in the WT strain in DMEM medium under Pka1-repressed conditions, whereas the P_GAL7::PKA1 strain under Pka1-induction showed the lowest amount of secreted protein. This observation may be associated with reduced intracellular rates of the P_GAL7::PKA1 strain due to the presence of an enlarged capsule. Overall, we were able to detect 29.8 ± 37.0 fmol of α-amylase and 149.1 ± 130.0 fmol of glyoxal oxidase in 5 μg of total protein from the macrophage lysate following the uptake of P_GAL7::PKA1 under Pka1-induced conditions at 24 hpi.
The samples from infected mice included bronchoalveolar lavage (BAL) fluid and blood from animals inoculated with the WT strain. Three mice were selected for each type of in vivo analysis based on previous studies of cryptococcosis [34–37]. Representative chromatograms of isotopically-labeled and natural peptides detected in mouse BAL are presented in Fig. 8. The MRM analysis identified Cig1, α-amylase, glyoxal oxidase, and the novel protein (CNAG_05312) in BAL following infection with WT cells. In 5 μg of total protein, glyoxal oxidase was the most abundant protein with detection at 779.5 ± 436.1 fmol, followed by the novel protein (CNAG_05312) at 451.0 ± 90.5 fmol, Cig1 at 291.3 ± 54.5 fmol, and α-amylase with the lowest abundance at 40.1 ± 9.4 fmol. Lastly, we were able to detect Cig1, glyoxal oxidase, and the novel protein (CNAG_05312) in blood. Representative chromatograms of the isotopically-labeled and natural peptides detected in mouse blood are presented in Fig. 9. Again, glyoxal oxidase was the most abundant protein detected at 319.4 ± 272.7 fmol, followed by Cig1 at 62.0 ± 17.4 fmol, and the novel protein (CNAG_05312) at 3.1 ± 3.8 fmol in 5 μg of total protein. Aph1 levels were below the limit of detection in all samples. Taken together, our targeted proteomics approach identified and quantified the Pka1-regulated secreted proteins as potential biomarkers following host challenge with cryptococcal cells.
The secretion of extracellular enzymes and virulence-associated factors is important for the proliferation and survival of pathogens in the host environment. For the pathogenic yeast C. neoformans, virulence depends to a large extent on the export of polysaccharide to form a capsule, as well as targeted delivery of laccase to the cell wall for deposition of melanin, and secretion of extracellular enzymes [23, 24, 28, 38]. The cyclic-AMP/Protein Kinase A signal transduction pathway plays a key role in regulating these processes but the underlying mechanisms remain to be understood in detail [16, 17]. We therefore used a P_GAL7::PKA1 strain under Pka1-repressed and Pka1-induced conditions in this study to investigate the influence of Pka1 on the secretome of C. neoformans. Quantitative proteomics allowed us to identify 61 different proteins in the secretome including a subset of five whose abundance was regulated by Pka1. These five proteins include a cytokine-inducing glycoprotein (Cig1), an α-amylase, a glyoxal oxidase, an acid phosphatase (Aph1), and a novel protein (CNAG_05312). We also observed a change in the secretome profile upon induction of PKA1 expression thus establishing a view of the impact of PKA activity on the extracellular protein composition. In general, this analysis highlighted the enrichment of Pka1-regulated biological processes in the secretome, revealed potential targets for conventional and non-conventional modes of secretion, and provided candidate biomarkers for investigating cryptococcosis.
Our analysis revealed a change in the abundance of secreted C. neoformans proteins associated with glycolysis, translational regulation, nucleosome assembly, and stress response over a time course from 16 to 120 h. We speculate that some of these proteins may result from packaging in vesicles known to transit through the cell wall and accumulate in the extracellular environment [24, 39]. In this case, modulation of PKA activity may indirectly influence the proteome of vesicles as a reflection of an impact on the intracellular proteome. This idea is supported by our observed influence of PKA1 modulation on the abundance of the translation machinery because ribosomal proteins, in particular, are abundant in extracellular vesicles. It is also well known that PKA influences the transcription of ribosomal protein genes in other organisms and this influence is conserved in C. neoformans [22, 40]. Our analysis of the intracellular proteome also revealed suppression of ribosomal cellular protein abundance upon induction of Pka1 (Geddes et al., unpublished data). We also observed a connection between Pka1 activation and the abundance of glycolytic proteins. This is interesting in light of previous reports demonstrating the importance of glycolysis for virulence and the persistence of C. neoformans in the cerebrospinal fluid. These findings are consistent with a previous analysis of the transcriptome, which showed that Pka1 influences the levels of transcript for genes involved in glycolysis. Furthermore, the observed influence of Pka1 induction on the secretion of proteins associated with stress response is consistent with observed Pka1 regulation at the transcriptional level. In this context, we identified a heat shock protein 70 (Hsc70-4), which is associated with the response to stress and which was previously localized to the cell surface of C. neoformans.
The observed connection between the stress response and Pka1 induction may indicate coordination for facilitation of fungal survival and proliferation during colonization of vertebrate hosts.
The influence of PKA on the abundance of the mannoprotein Cig1 is of particular interest because we previously showed that its transcript is one of the most abundant in cells grown in low iron medium. In addition, the protein is important for iron acquisition from heme and virulence in C. neoformans. We found that the extracellular abundance of Cig1 increased upon induction of Pka1 and that transcript levels and protein abundance were well correlated. CIG1 is positively regulated by the pH-responsive transcription factor Rim101, which in turn is activated by the cAMP/PKA pathway. Therefore, the regulation of CIG1 mRNA and Cig1 protein levels observed upon induction of Pka1 likely reflect regulation by Rim101. This finding is consistent with recent discoveries that Rim101 controls cell wall composition and capsule attachment via an influence on the expression of cell wall biosynthetic genes [46, 47].
In general, a number of proteins associated with cell wall synthesis and integrity, pathogenesis and the immune response were prominent in the secretome of C. neoformans upon modulation of PKA1 expression. These proteins included an endo-1,3(4)-β glucanase and a 1,3-β-glucanosyltransferase, both of which have been previously identified in studies of the extracellular proteomes of C. neoformans and other fungal pathogens such as Histoplasma capsulatum [28, 48–51]. Endo-1,3(4)-β glucanase is located in the surface layers of the cell wall or in the capsule and has roles in metabolism, autolysis, and cell separation [50, 52]. The 1,3-β-glucanosyltransferase is described as a glycolipid-anchored protein in the cell membrane of yeasts and may have a role in virulence. Our proteomic analysis also identified chitin deacetylases associated with the formation of chitin and cell wall integrity, and the enzyme laccase, which is responsible for melanin deposition in the cell wall and influences cryptococcal virulence [28, 51, 54–57]. These findings are consistent with our previous transcriptomic analysis, which revealed an influence of PKA on the expression of cell wall-associated genes.
We also identified a novel protein (CNAG_05312) with a pattern of mRNA and protein regulation by Pka1 activity that was quite similar to that of Cig1. This novel protein contains a predicted carbohydrate-binding domain and was annotated as a macrophage-activating glycoprotein (reminiscent of the cytokine-inducing glycoprotein designation of Cig1). These observations suggest that further investigation is warranted for this protein in the context of iron acquisition and virulence. This idea is reinforced by the finding that Rim101 also positively regulates expression of the CNAG_05312 gene. Interestingly, the CNAG_05312 gene is also regulated at the transcript level by the transcription factor Gat201 that, like Pka1, influences capsule size, virulence, and uptake by macrophages [58, 59]. Considering these similar phenotypes, it is possible that Gat201 and Pka1/Rim101 both regulate the expression of the CNAG_05312 protein and subsequently influence the activation of macrophages during infection. Overall, our investigation of the secretome reinforced connections between modulation of Pka1 activity, Rim101 and cell wall integrity, and it revealed an impact of PKA on the extracellular abundance of proteins with known (Cig1) and potential (the novel CNAG_05312 protein) influences on virulence.
Pka1 also positively regulated the abundance in the secretome of an α-amylase and a glyoxal oxidase which were previously identified in the extracellular proteome of C. neoformans [28, 51]. Amylases are associated with carbohydrate metabolism, particularly starch degradation for energy production. In C. neoformans, the secretion of amylases in the PKA-regulated strains was reported previously, and we were able to measure and confirm α-amylase activity in the extracellular medium. Glyoxal oxidases are extracellular H2O2-producing enzymes associated with cellulose metabolism. There is evidence that glyoxal oxidase activity is involved in filamentous growth and pathogenicity of Ustilago maydis, as well as fertility in Cryptococcus gattii [61, 62]. A similar pattern in response to PKA1 expression was observed upon comparison of the transcript and protein levels for both the α-amylase and the glyoxal oxidase, although a direct correlation between transcript levels and protein abundance was not as evident as for Cig1. This could potentially be due to post-transcriptional regulation, differences in mRNA and protein half-lives, and issues with timing. It is also possible that PKA may regulate additional processes to influence extracellular protein abundance, such as the activity of the secretory pathway. Overall, the secretome data revealed a new connection between PKA regulation and the α-amylase and glyoxal oxidase enzymes, and this discovery indicates that further analysis of their potential roles in virulence is warranted.
The extracellular abundance of the acid phosphatase Aph1 and its transcript levels were negatively regulated by induction of PKA1 expression thus revealing an opposite pattern of regulation compared with the other four genes. Phosphatases have been predicted to have roles in cell wall biosynthesis, cell signaling, phosphate scavenging, and in adhesion of C. neoformans to epithelial cells [24, 28, 64–67]. The APH1 gene was recently characterized and its expression was found to be induced by phosphate limitation; the Aph1 protein was also the major conventionally secreted acid phosphatase in C. neoformans . Aph1 was also shown to hydrolyze a variety of substrates to potentially scavenge phosphate from the environment, and an aph1 deletion mutant had a slight virulence defect in both Galleria mellonella and mouse models of cryptococcosis. The latter phenotype is consistent with our recent study showing that a high affinity phosphate uptake system is required for growth on low-phosphate medium, for formation of the virulence factors melanin and capsule, for survival in macrophages, and for virulence in mice . This study also revealed that defects in PKA influence the growth of C. neoformans on phosphate-limited medium. Our discovery of PKA regulation of Aph1 abundance in the secretome therefore further reinforces a connection between phosphate acquisition and PKA regulation associated with virulence.
Our profiling of the secretome upon modulation of Pka1 activity confirmed the presence of previously identified extracellular and vesicular proteins, including those associated with virulence and fungal survival within the host, as well as novel secreted proteins. We identified the classically secreted C. neoformans protein, laccase, associated with fungal virulence, but other proteins such as urease and phospholipase B were not identified in our study. Their absence could be attributed to growth conditions, precipitation methods, supernatant collection times, and relative abundance in the secretome. A recent proteome study that removed free capsular polysaccharide from the extracellular environment identified 105 secreted proteins and a direct comparison with our study showed an overlap of 52 % . Previous investigation of the proteins in extracellular vesicles of C. neoformans also showed an overlap of nearly 56 % with proteins identified in our study [24, 39]. This overlap is primarily associated with proteins not typically expected in the secretome. For example, ATP subunits/carriers, translation elongation factor, actin, and multiple ribosomal proteins were identified and their presence was attributed to packaging in extracellular vesicles, and not necessarily due to direct secretion. In the absence of an N-terminal signal peptide, proteins may be exported via non-conventional secretion. This may include the use of membrane-bound, extracellular vesicles capable of traversing the cell wall, the possible fusion of multi-vesicular bodies with the plasma membrane, or the capture of cytosolic material to form vesicles (blebbing), as discussed above [23, 24, 69–72]. Taken together, our profile of secreted proteins in C. neoformans is in agreement with previous secretome studies. However, our ability to modulate Pka1 activity provides an opportunity to identify novel proteins in the extracellular environment as well as identify proteins specifically regulated by Pka1. 
This approach led to the unique identification of the novel secreted protein (CNAG_05312) that was specifically associated with modulation of Pka1 activity and not found in other proteomic studies.
Biomarkers are indicators of normal or pathogenic processes as well as the efficacy of therapy . In this regard, targeted detection of secreted cryptococcal proteins provides an opportunity to identify potential biomarkers for early diagnosis of infection and to monitor antifungal therapy. Early and rapid diagnosis remains limited for systemic fungal infections, such as those caused by Candida and Aspergillus species, as well as C. neoformans and C. gattii . Biomarkers of infection by specific fungal species would therefore be valuable for identification and for precise measurements of fungal burden. A recent study using the presence of the cell wall component galactomannan in BAL as a diagnostic tool for invasive fungal disease highlights an opportunity for biomarker discovery in fungal pathogens . Additionally, the use of targeted proteomics (and MRM in particular) is a novel approach to study the secretion of virulence factors in C. neoformans, particularly in the context of signaling functions like PKA that sense conditions relevant to the host environment.
The secreted proteins that we identified to be regulated in abundance by Pka1 provide an opportunity to develop diagnostic biomarkers that are also informative about signaling via the cAMP/PKA pathway in vitro and during infection. For example, Cig1 is an important candidate biomarker given its abundance in iron-starved cells and its role in virulence through iron acquisition and uptake. Our ability to detect Cig1 in the blood and BAL fluid of infected animals confirms its expression and establishes the protein as a potential biomarker. These findings may also indicate a role for Cig1 in iron uptake in these environments although, interestingly, we did not detect Cig1 in macrophage lysates. Based on our observed differences in intracellular replication, Pka1 seems to impact the intracellular environment of macrophages. In this regard, we did detect the glyoxal oxidase and α-amylase proteins by MRM in macrophages containing cryptococcal cells. Expression of these proteins has not previously been reported during interactions with macrophages, although the production of H2O2 and induction of oxidative stress via glyoxal oxidase could potentially influence intracellular survival. It is known that oxidative stress induces autophagy in macrophages and can impair phagocytic activity [76, 77]. Additionally, loss of an α-amylase in H. capsulatum attenuated the ability of the fungus to kill macrophages and to colonize murine lungs . This influence appeared to be related to the ability to produce α-(1,3)-glucan. The regulation of glyoxal oxidase and α-amylase by Pka1 activity and their detection in macrophage lysates suggests that it would be interesting to examine the roles of these enzymes in intracellular survival and virulence. Our approach with MRM is also informative about tissue specific expression of fungal proteins during disease. 
In addition to the examples described above, we found that colonization of murine lungs resulted in secretion of α-amylase, glyoxal oxidase, and the novel protein encoded by CNAG_05312. The novel protein was also found in blood and, given its regulation in parallel with Cig1, these results motivate future studies on the role of this protein in iron acquisition and virulence.
In this study we characterized the overall impact of PKA1 modulation on the secretome and discovered five proteins regulated by Pka1. The identified proteins had known roles associated with cell wall functions, fungal survival within the host, and virulence. Our identification of a novel protein with potential roles in iron uptake and virulence also suggested a previously unknown connection between Pka1 and Gat201. We were also able to detect Pka1-regulated secreted proteins in biological samples as potential biomarkers, providing a new opportunity for diagnosing fungal infection and monitoring disease progression.
The C. neoformans var. grubii wild-type strain H99 (WT) and the P GAL7 ::PKA1 strain with galactose-inducible/glucose-repressible expression of PKA1 were used for this study [16, 29]. The strains were maintained on yeast extract peptone dextrose (YPD) medium (1 % yeast extract, 2 % peptone, 2 % dextrose, and 2 % agar). For studies involving regulation of PKA1, cells of the WT and regulated strains were pre-grown overnight with agitation at 30 °C in YPD broth, transferred to yeast nitrogen base medium with amino acids (YNB, Sigma-Aldrich) and incubated overnight with agitation at 30 °C. Cell counts were performed and 5 x 10^7 cells/ml were transferred to Minimal Medium (MM) (29.4 mM KH2PO4, 10 mM MgSO4 • 7H2O, 13 mM glycine, 3 μM thiamine, 0.27 % carbon source) containing either glucose (MM + D) or galactose (MM + G). For end-point studies, cells were incubated with agitation at 30 °C in MM + D or MM + G for 96 h; for time-course studies, cells were incubated with agitation at 30 °C in MM + D or MM + G for 16, 48, 72, and 120 h. Time points were selected based on previous studies on the timing of protein secretion as well as the analysis of proteins in extracellular vesicles of C. neoformans, which used samples collected at 48 and 72 h of growth [23, 24, 64]. Samples were collected in triplicate for analysis.
To collect supernatant samples, cells were removed by centrifugation at 3,500 rpm for 15 min at 4 °C and the culture medium was transferred to new tubes; this step was repeated four times until all cell debris had been removed. Supernatant samples were kept on ice and total protein concentration was measured by a BCA-Protein-assay (Pierce). Ultrapure bovine serum albumin was used as a calibration standard. In addition to using two approaches for protein precipitation as described below, we also used a combination of sample collection time points (time-point and end-point analyses) to maximize protein detection and obtain a comprehensive view of the secretome. The first approach involved a time-course study in which a stringent trichloroacetic acid (TCA)/acetone precipitation was performed . In brief, an aliquot of culture supernatant (50 μg total protein) was mixed with five volumes of ice-cold TCA/acetone (20 %/80 % w/v) and incubated overnight at −20 °C. Precipitated proteins were collected by centrifugation at 10,000 rpm for 20 min at 4 °C. The pellet was washed four times with ice-cold acetone, air-dried and stored at −20 °C. The second approach, which was less stringent than the TCA/acetone method, was used for the end-point studies and involved ethanol (EtOH)/acetate precipitation . In brief, an aliquot of culture supernatant (50 μg total protein) was diluted with 4 volumes of absolute EtOH, 2.5 M NaCH3COO was used to bring the solution to 50 mM NaCH3COO, pH 5.0 and 20 μg of glycogen was added to the sample. Samples were vortexed and incubated at room temperature for 2 h with periodic agitation. Precipitated proteins were collected by centrifugation at 15,000 rpm for 10 min at 4 °C. The pellet was washed twice with EtOH, then air-dried and stored at −20 °C. All supernatant samples were subjected to in-solution digestion using ACS grade chemicals or HPLC grade solvents (Thermo Scientific and Sigma-Aldrich) . 
In brief, the precipitated protein pellet was solubilized in digestion buffer (1 % sodium deoxycholate, 50 mM NH4HCO3), incubated at 99 °C for 5 min with agitation, followed by reduction (2 mM of dithiothreitol (DTT) for 25 min at 56 °C), alkylation (4 mM of iodoacetamide (IAA) for 30 min at room temperature in the dark), and trypsinization (0.5 μg/μl of sequencing grade modified trypsin (Promega)) overnight at 37 °C. Based on our results, the TCA/acetone precipitation method appeared to be more stringent, perhaps due to more extensive washing in the protocol.
Digested peptides from supernatants were desalted, concentrated, and filtered on C18 STop And Go Extraction (STAGE) tips . Reductive dimethylation using formaldehyde isotopologues was performed to differentially label peptides from the different experimental conditions. Light formaldehyde (CH2O) and medium formaldehyde (CD2O) (Cambridge Isotope Laboratories, Andover, MA) were combined with cyanoborohydride (NaBH3CN, Sigma-Aldrich) to give a 4 Da difference for labeled peptides . Samples from the WT strain were routinely labeled with light formaldehyde, and P GAL7 ::PKA1 samples were labeled with medium formaldehyde. Briefly, eluted and dried STAGE-tip peptides were resuspended in 100 mM triethylammonium bicarbonate, and incubated in 200 mM formaldehyde and 20 mM sodium cyanoborohydride for 90 min in the dark. After labeling, 125 mM NH4Cl was added and incubated for 10 min to react with excess formaldehyde, followed by the addition of acetic acid to a pH < 2.5 to degrade sodium cyanoborohydride. For each comparison, equal amounts of labeled peptides were mixed and desalted on C18 STAGE tips.
Purified peptides were analyzed using a linear-trapping quadrupole - Orbitrap mass spectrometer (LTQ-Orbitrap Velos; Thermo Fisher Scientific) on-line coupled to an Agilent 1290 Series HPLC using a nanospray ionization source (Thermo Fisher Scientific). This includes a 2-cm-long, 100-μm-inner diameter fused silica trap column, 50-μm-inner diameter fused silica fritted analytical column and a 20-μm-inner diameter fused silica gold coated spray tip (6-μm-diameter opening, pulled on a P-2000 laser puller from Sutter Instruments, coated on Leica EM SCD005 Super Cool Sputtering Device). The trap column was packed with 5 μm-diameter Aqua C-18 beads (Phenomenex, www.phenomenex.com) while the analytical column was packed with 3.0 μm-diameter Reprosil-Pur C-18-AQ beads (Dr. Maisch, www.Dr-Maisch.com). Buffer A consisted of 0.5 % aqueous acetic acid, and buffer B consisted of 0.5 % acetic acid and 80 % acetonitrile in water. Samples were resuspended in buffer A and loaded with the same buffer. Standard 90 min gradients were run from 10 % B to 32 % B over 51 min, then from 32 % B to 40 % B in the next 5 min, then increased to 100 % B over a 2 min period, held at 100 % B for 2.5 min, and then dropped to 0 % B for another 20 min to recondition the column. The HPLC system included Agilent 1290 series Pump and Autosampler with Thermostat; temperature was set at 6 °C. The sample was loaded on the trap column at 5 μl/min and the analysis was performed at 0.1 μl/min. The LTQ-Orbitrap was set to acquire a full-range scan at 60,000 resolution from 350 to 1600 Th in the Orbitrap to simultaneously fragment the top ten peptide ions by CID and top 5 by HCD (resolution 7500) in each cycle in the LTQ (minimum intensity 1000 counts). Parent ions were then excluded from MS/MS for the next 30 s. Singly charged ions were excluded since in ESI mode peptides usually carry multiple charges. The Orbitrap was continuously recalibrated using lock-mass function . 
Mass measurement error was typically within 5 ppm and did not exceed 10 ppm.
For analysis of mass spectrometry data, centroid fragment peak lists were processed with Proteome Discoverer v. 1.2 (Thermo Fisher Scientific). The search was performed with the Mascot algorithm (v. 2.4) against a database comprised of 6,692 predicted protein sequences from the source organism C. neoformans H99 database (C. neoformans var. grubii H99 Sequencing Project, Broad Institute of Harvard and MIT, http://www.broadinstitute.org/) using the following parameters: peptide mass accuracy 10 parts per million; fragment mass accuracy 0.6 Da; trypsin enzyme specificity with 1 max missed cleavages; fixed modifications - carbamidomethyl, variable modifications - methionine oxidation, deamidated N, Q and N-acetyl peptides, dimethyl (K), dimethyl (N-term), dimethyl 2H(4) (K), and dimethyl 2H(4) (N-term), ESI-TRAP fragment characteristics. Only those peptides with Ion Scores exceeding the individually calculated 99 % confidence limit (as opposed to the average limit for the whole experiment) were considered as accurately identified. The acceptance criteria for protein identification were as follows: only proteins containing at least one unique peptide with a Mascot score > 25 were considered in the dataset. Quantitative ratios were extracted from the raw data using Proteome Discoverer. Proteome Discoverer parameters – Event Detector: mass precision 4 ppm (corresponds to extracted ion chromatograms at ±12 ppm max error), S/N threshold 1; Precursor Ion Quantifier method set for ‘2 labels’ for the formaldehyde labeled samples; Quantitation Method – Ratio Calculation – Replace Missing Quantitation Values with Minimum Intensity – yes, Use Single Peak Quantitation Channels – yes, − Protein Quantification – Use All Peptides – yes.
Experimentally determined fold changes for WT and P GAL7 ::PKA1 strains grown under Pka1-repressed (glucose-containing medium) and Pka1-induced (galactose-containing medium) conditions were converted to a log2 scale and the average fold change and standard deviation were used for analysis. A fold change of >10 was used as a cut-off limit for the time-point and end-point analyses. For the comparative analysis of the time-point samples, the statistical significance of the fold changes of the identified secreted proteins present under both Pka1-repressed and Pka1-induced conditions and at equivalent time points (i.e. 16, 48, 72, and 120 hpi) was assessed for an influence of PKA regulation using a Student’s t-test (p-value < 0.05). For the comparative analysis of the end-point samples, the statistical significance of the fold changes of the identified secreted proteins present under both Pka1-repressed and Pka1-induced conditions was evaluated using a Student’s t-test (p-value < 0.05). To confirm the statistically significant Pka1-regulated proteins identified from the end-point analysis, a multiple-hypothesis testing correction was performed on the secretome data using the Benjamini and Hochberg method with a false discovery rate of 0.05.
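The end-point statistics described above can be sketched in a few lines. The following Python snippet (with invented replicate fold-change values, not data from this study) illustrates a one-sample Student's t-test on log2 fold changes followed by a Benjamini-Hochberg correction at a false discovery rate of 0.05:

```python
# Sketch of the end-point analysis: one-sample Student's t-test on replicate
# log2 fold changes (induced vs. repressed), then Benjamini-Hochberg FDR
# correction at 0.05. Protein IDs and values are illustrative only.
import numpy as np
from scipy import stats

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downward
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.minimum(scaled, 1.0)
    return adj

# replicate log2 fold changes per protein (hypothetical numbers)
fold_changes = {
    "CIG1":       [4.1, 3.8, 4.4],
    "CNAG_05312": [3.2, 2.9, 3.5],
    "APH1":       [-2.7, -3.1, -2.5],
}
pvals = {pid: stats.ttest_1samp(v, 0.0).pvalue for pid, v in fold_changes.items()}
adj = dict(zip(pvals, bh_adjust(list(pvals.values()))))
significant = sorted(pid for pid, q in adj.items() if q < 0.05)
```

A protein is retained only if its BH-adjusted p-value (q-value) stays below 0.05, which is stricter than the per-protein t-test alone.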
Proteins were characterized with Gene Ontology (GO) terms using a local installation of Blast2GO . Gene annotation data of the C. neoformans H99 reference genome were retrieved from the Broad Institute (May 2014) and a copy of the non-redundant (nr) protein database was downloaded from NCBI (May 2014) . The most current associations between the nr protein database and GO terms were retrieved in May 2014 from Blast2GO. GO terms were assigned to WT proteins and filtered using default settings of the Blast2GO pipeline . We performed GO term enrichment analyses for sets of proteins using hypergeometric tests and the Benjamini and Hochberg false discovery rate multiple testing correction (p-value < 0.05) implemented in the R packages GSEABase and GOstats. GO term categories containing singleton entries were excluded. GO categories and enrichment datasets were visualized using the R package ggplot2 . For time-point analyses, GO term classification was performed on unique proteins identified under either Pka1-repressed or Pka1-induced conditions to highlight the overall influence of Pka1 regulation on the secretome profile.
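As a rough illustration of the enrichment testing performed with GSEABase/GOstats, the hypergeometric test for a single GO term can be sketched as follows. All counts are illustrative placeholders except the background of 6,692 predicted H99 proteins mentioned earlier:

```python
# Hypergeometric over-representation test for one GO term: given a background
# of annotated proteins, is the term enriched in a selected protein set?
# Counts below are placeholders; 6,692 is the number of predicted H99 proteins.
from scipy.stats import hypergeom

def go_enrichment_p(n_background, n_term_background, n_selected, n_term_selected):
    """P(X >= n_term_selected) under the hypergeometric null."""
    return hypergeom.sf(n_term_selected - 1, n_background,
                        n_term_background, n_selected)

# e.g. 120 of 6,692 background proteins carry the term; 12 of 105
# selected secreted proteins carry it -- strongly over-represented
p = go_enrichment_p(6692, 120, 105, 12)
```

In practice one such p-value is computed per GO term and the resulting list is corrected for multiple testing (Benjamini-Hochberg), as stated in the text.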
SignalP 4.1 (http://www.cbs.dtu.dk/services/SignalP/) was used to predict whether identified proteins were secreted based on the presence of a signal peptide. Identified protein sequences were also analyzed using Signal-3L (http://www.csbio.sjtu.edu.cn/bioinf/Signal-3L/) and Phobius (http://phobius.sbc.su.se) to confirm results. Additionally, secreted proteins were analyzed for the presence of a glycophosphatidylinositol (GPI) anchor using GPI-SOM (http://gpi.unibe.ch).
Cells from the WT and P GAL7 ::PKA1 strains were prepared for the examination of gene expression by overnight growth in YNB medium followed by dilution to 5.0 x 10^7 cells/ml in 5 ml of MM + D or MM + G and incubation at 30 °C with agitation for 16 and 96 h. Samples were collected in triplicate for analysis. Cells were collected at the designated time points, flash frozen in liquid N2, and stored at −80 °C. Total RNA was extracted using an EZ-10 DNAaway RNA Miniprep kit (Bio Basic) according to the manufacturer's protocol. Complementary DNA was synthesized using a Verso cDNA kit (Thermo Scientific) and used for quantitative real-time PCR (qRT-PCR). Primers were designed using Primer3 v.4.0 (http://bioinfo.ut.ee/primer3-0.4.0/) and targeted to the 3' regions of transcripts; primer sequences are listed in Additional file 9: Table S6. Relative gene expression was quantified using the Applied Biosystems 7500 Fast Real-time PCR system. The control genes CNAG_00483 (actin) and CNAG_06699 (GAPDH) were used for normalization, and differences were tested for statistical significance using the Student's t-test. As a control, PKA1 transcript levels under Pka1-repressed and Pka1-induced conditions in the WT and P GAL7 ::PKA1 strains were also analyzed at various time points to confirm regulated PKA1 expression (see Additional file 10: Figure S4).
To confirm qRT-PCR results, total RNA was isolated for the P GAL7 ::PKA1 strain grown in 50 ml of MM + D or MM + G for 16 h. Briefly, cell pellets were collected and flash frozen in liquid N2, followed by overnight lyophilization. One milliliter of buffer 1 (2 % SDS, 68 mM Na3C6H5O7, 132 mM C6H8O7, 10 mM EDTA) was added to the samples, along with 600 μl of glass beads; samples were subjected to bead beating for two, 3 min intervals at power 3 (BioSpec, Mini-Beadbeater) and subsequently stored on ice. Next, 340 μl of buffer 2 (4 M NaCl, 17 mM Na3C6H5O7, 33 mM C6H8O7) was added and samples were inverted several times and incubated on ice for 5 min. Samples were then centrifuged at 15,000 rpm for 10 min, the supernatant fraction was collected and transferred to a new tube, one volume of isopropanol was added, and samples were mixed and incubated at room temperature for 15 min. The pellet was collected following centrifugation at 15,000 rpm for 5 min, and washing of the pellet with 70 % DEPC (Diethylpyrocarbonate)-EtOH was performed. The pellet was collected, air dried, and dissolved in 20 μl of DEPC-H2O. The hybridization probes were prepared with a PCR-amplified DNA fragment of CNAG_00483 (Actin) or CNAG_00396 (PKA1) using specific primers (see Additional file 9: Table S6) and labeled with 32P using an Oligolabeling kit (Amersham Biosciences). Scanned images were analyzed using a Bio-Rad ChemiDoc MP Imaging System (see Additional file 11: Figure S5).
The survival rates of the WT, pka1Δ mutant, and P GAL7 ::PKA1 strains during incubation with macrophages were determined and lysates were prepared for protein analysis. Briefly, cells of the J774A.1 macrophage-like cell line were grown to 80 % confluence in Dulbecco's Modified Eagle's Medium (DMEM; Sigma) supplemented with 10 % fetal bovine serum and 2 mM L-glutamine at 37 °C and 5 % CO2. The macrophages were stimulated 1 h prior to infection with 150 ng/ml phorbol myristate acetate (PMA). Fungal cells were grown in YNB overnight at 30 °C, followed by inoculation in MM + D or MM + G at 5.0 x 10^7 cells/ml. Following overnight growth, the fungal cells were washed with phosphate-buffered saline (PBS, Invitrogen) and opsonized with 0.5 μg/ml of the anti-capsule monoclonal antibody 18B7 in DMEM or DMEM supplemented with 0.20 % glucose or galactose (30 min at 37 °C). Stimulated macrophages were infected with 2.0 x 10^5 opsonized fungal cells at a multiplicity of infection (MOI) of 1:1 for 2 h and 24 h at 37 °C and 5 % CO2. To measure fungal survival, macrophages containing internalized cryptococcal cells were washed thoroughly four times with PBS and then lysed in 1 ml of sterile dH2O for 30 min at room temperature. Lysate dilutions were plated on YPD agar and incubated at 30 °C for 48 h, at which time the resulting colony forming units (CFUs) were counted and intracellular rates of infection (%) were calculated as the ratio of the CFUs at 2 h and 24 h over the initial number of macrophages. The statistical significance of differences between the WT, pka1Δ mutant, and P GAL7 ::PKA1 strains was determined by unpaired t-tests. For proteomic analysis, lysates from infected macrophages at 24 h of incubation were collected, flash frozen in liquid N2 and stored at −80 °C.
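The intracellular infection-rate calculation described above reduces to a simple ratio; the sketch below uses invented CFU counts and an invented macrophage number purely for illustration:

```python
# Intracellular infection rate as described: CFUs recovered at 2 h or 24 h
# divided by the initial number of macrophages, expressed as a percentage.
# The counts used here are invented examples, not data from the study.
def infection_rate_percent(cfu, n_macrophages):
    return 100.0 * cfu / n_macrophages

rate_2h = infection_rate_percent(4.0e4, 2.0e5)
rate_24h = infection_rate_percent(8.0e4, 2.0e5)
```

Comparing the 2 h and 24 h rates distinguishes initial uptake from subsequent intracellular replication or killing.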
Female BALB/c mice (10–12 weeks old) obtained from Charles River Laboratories (Senneville, Ontario, Canada) were used to collect bronchoalveolar lavage (BAL) and blood samples following cryptococcal infection. Three different cultures of C. neoformans WT cells were grown overnight in YPD at 30 °C with agitation, washed in PBS and re-suspended at 1.0 x 10^8 cells/ml in PBS. For collection of BAL, intranasal inoculation of three mice with 100 μl of the different WT cell suspensions (1.0 x 10^7 cells) was performed. For collection of blood samples, intravenous inoculation of three mice with 100 μl of the WT cell suspensions (1.0 x 10^7 cells) was performed. Three mice were selected for the analysis based on established methods for studying fungal burden in mouse models of cryptococcosis [34–37]. At 48 hpi, the infected mice were euthanized by CO2 inhalation and 1 ml of BAL fluid and 500 μl of blood were collected from each mouse. Mouse lavage and blood samples were flash frozen with liquid N2 and stored at −80 °C. Mouse assays were conducted in accordance with the University of British Columbia's Committee on Animal Care (protocol A13-0093).
Macrophage lysate samples were prepared as described above, followed by trypsin in-solution digestion. Samples were collected for WT and P GAL7 ::PKA1 strains at 24 hpi in triplicate. Mouse BAL samples were prepared as described above, followed by trypsin in-solution digestion. Samples were collected at 48 hpi following WT inoculation of each of the three mice. For mouse blood samples, highly abundant proteins were removed as previously described . Briefly, proteins were precipitated by the addition of two volumes of acetonitrile and 1.0 % acetic acid, followed by centrifugation at 10,000 rpm for 5 min at 4 °C. The supernatant was collected and evaporated and the residual proteins were then subjected to trypsin in-solution digestion as described above. Samples were collected at 48 hpi following WT inoculation of each of the three mice. Following trypsin digestion, all samples were desalted, concentrated, and filtered on high-capacity C18 STAGE tips.
Skyline (v2.1) was used to build and optimize the MRM method for the relative quantification of peptides. Synthesized peptides for MRM analysis were designed in-house using the following parameters: tryptic peptides, 0 max missed cleavages, minimum of seven and maximum of 25 amino acids, excluding peptides containing Met or Cys residues (if possible) and N-terminal glutamine, hydrophobicity between 10–40 (Sequence Specific Retention Calculator, http://hs2.proteome.ca/SSRCalc/SSRCalcX.html), desirable spectral intensities (GenePattern ESPPredictor, http://www.broadinstitute.org/cancer/software/genepattern/modules), and transition settings selecting for precursor charges of 2 and 3, ion charge of 1, monitoring both b and y ions. SpikeTides labeled with stable isotopes (C-term Arg U-13C6;U-15N4 or Lys U-13C6;U-15N2) were purchased from JPT Peptide Technologies GmbH (Berlin, Germany). C-terminal arginine (R) and lysine (K) were labeled with stable isotope masses of 10.008269 and 8.014199, respectively. Collision energy (CE) and fragmentor voltage (FV) for each peptide were predicted using Skyline software and then confirmed experimentally. Doubly and triply charged precursor ions were optimized and three to five transitions were measured per peptide. The final MRM method included the monitoring of a total of 23 peptides, representing five proteins (see Additional file 12: Table S7). Stable isotope-labeled peptides were resuspended in 100 μl of 0.5 % acetic acid with agitation at room temperature. The peptides were further diluted and combined to result in final concentrations of 100 fmol/μl to 1 pmol/μl of each peptide. Five μl of the peptide mixture was injected into an Agilent 6460 Triple Quadrupole (Agilent) for data acquisition and peptide optimization.
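The sequence-based part of the peptide selection criteria listed above can be sketched as a simple filter; the hydrophobicity and predicted-intensity filters are omitted here, and the function name is our own, not from the study:

```python
# Candidate MRM peptide selection: fully tryptic peptides (cleave after K/R
# unless followed by P), 7-25 residues, no Met/Cys, no N-terminal Gln.
# Hydrophobicity (SSRCalc) and spectral-intensity (ESPPredictor) filters
# from the text are not reproduced in this sketch.
import re

def candidate_mrm_peptides(protein_seq):
    # zero-width split after K or R when the next residue is not P
    peptides = re.split(r"(?<=[KR])(?!P)", protein_seq)
    return [p for p in peptides
            if 7 <= len(p) <= 25
            and not set("MC") & set(p)
            and not p.startswith("Q")]
```

Avoiding Met/Cys sidesteps variable oxidation and alkylation states, and avoiding N-terminal Gln sidesteps pyroglutamate formation, both of which would split the MRM signal across several species.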
MRM assays were performed on an Agilent 6460 Triple Quadrupole coupled to an Agilent 1200 Series HPLC. The instrument was operated in positive electrospray ionization mode using MassHunter Workstation Data Acquisition (v.B.04.04, Agilent). Chromatography was performed on a Large Capacity Chip with a 160 nl trap; the analytical column was 150 mm x 75 μm, and the stationary phase for both the trapping and analytical columns was Zorbax-SB C-18 with 300 Å pores and 5 μm particles (Agilent). Peptides were separated using gradient elution at a stable flow of 0.30 μl/min, beginning with 97 % buffer A (97 % dH2O, 3 % acetonitrile, 0.1 % formic acid (FA)) and 3 % buffer B (10 % dH2O, 90 % ACN, 0.1 % FA), followed by a step gradient of buffer B from 3 to 80 %, which was achieved at 10.5 min. Subsequent equilibration was performed for 4.5 min at 3 % buffer B. The column was maintained at room temperature during analysis, and the samples were kept at 4–7 °C. The MS operated in selected reaction monitoring mode with a capillary voltage of 1850 V and a source temperature of 325 °C. Cone voltage was static; collision energies and fragmentor voltages were optimized for each compound individually (see Additional file 13: Table S8). Peak identification was performed using MassHunter Qualitative Analysis (Agilent).
Quantification of natural proteins was performed using peak areas relative to the known amounts of added isotopically-labeled synthetic peptides during a multiplexed MRM run. Natural protein levels were identified in triplicate from the following matrices: WT and P GAL7 ::PKA1 macrophage lysate MM + D and MM + G collected at 24 hpi; BAL collected at 48 hpi from three different mice inoculated independently with the WT strain; and blood collected at 48 hpi from three different mice inoculated independently with the WT strain. Each biological sample was assayed independently in triplicate. Experimentally determined peak areas and the subsequent quantification values were converted to a log2 scale, and the average amount of identified peptide +/− S.D. was reported. Positive association of natural peptides to their respective isotopically-labeled peptides was determined based on co-elution patterns. For positive identification of a natural protein in a collected sample, at least one peptide with a minimum of two transitions must be identified or a minimum of two peptides with at least one transition each must be present.
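The quantification step described above can be illustrated as follows: the natural peptide amount is inferred from the ratio of the natural (light) peak area to that of the spiked-in heavy-labeled standard of known amount, then log2-transformed and averaged across replicate injections. All peak areas below are invented; only the 100 fmol spike-in scale echoes the concentrations mentioned earlier:

```python
# Relative quantification from an MRM run: natural (light) peptide amount is
# the light/heavy peak-area ratio times the known amount of the spiked-in
# heavy standard; values are then log2-transformed and averaged +/- S.D.
# All peak areas are invented examples, not measurements from the study.
import math

def natural_amount_fmol(light_area, heavy_area, heavy_fmol):
    """Infer the natural peptide amount from the light/heavy peak-area ratio."""
    return light_area / heavy_area * heavy_fmol

# triplicate injections of one peptide, 100 fmol heavy standard each
replicates = [natural_amount_fmol(light, heavy, 100.0)
              for light, heavy in [(2.1e6, 1.0e6), (1.9e6, 1.1e6), (2.3e6, 1.0e6)]]
log2_vals = [math.log2(v) for v in replicates]
mean_log2 = sum(log2_vals) / len(log2_vals)
sd_log2 = (sum((v - mean_log2) ** 2 for v in log2_vals) / (len(log2_vals) - 1)) ** 0.5
```

Because the heavy standard co-elutes with its natural counterpart and shares its fragmentation, the area ratio cancels most run-to-run ionization variability.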
Enzymatic activity was assayed for α-amylase and acid phosphatase. The assays were performed with kits for both enzymes according to the manufacturer's protocol (BioVision Incorporated) (see Additional file 14: Figure S6). To confirm that proteins identified in the secretome were a result of secretion and not a product of cell lysis, a PCR was performed on secretome samples from Pka1-repressed conditions for the WT strain at 96 h. Actin (CNAG_00483) and PKA1 (CNAG_00396) were used as control genes for amplification (see Additional file 15: Figure S7).
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD002731 and the PASSEL partner repository with the dataset identifier PASS00736.
The authors thank J. Choi for strain construction, and M. Kretschmer, J. Gouw, J. Rogalski, and N. Scott for discussions and technical assistance. We also thank D. Oliveira for the collection of mouse samples. This work was supported by an NSERC fellowship to JG, CIHR open operating grants to JWK and LJF, and a Burroughs Wellcome Fund Scholar Award in Molecular Pathogenic Mycology (JWK).
JMHG, LJF, and JWK conceived of the study and participated in its design. JMHG carried out the quantitative proteomic sample preparation, data analysis, and interpretation, the targeted proteomic sample preparation, data analysis, and interpretation, validation studies, and drafted the manuscript. DC performed the GO enrichment analyses. MC performed the macrophage assays. NS provided technical assistance for the quantitative proteomic analysis and data processing. JWK assisted in drafting of the manuscript. All authors read and approved the final manuscript.
8), for the Intelligence, by download mark of its new dusk -- loving both the One and its Egyptian soul -- has Important of operating as a open matter and entry of real-time browser for all technologies. In this tree, the Intelligence may be used to be actual or 3-way grouping, which owns the syrup of the Soul. Since the download mark reads twilight so you dont or Identity of the Intelligence is Greek( necessarily known above), that which is the update or support of the Intelligence must customize of a due product. That which the Intelligence has, and by life of which it is its sense-perception, is the One in the part of containing standard or GTK-themed beauty. This download mark reads or third-party method of the One, which is, in the strictest part, the Intelligence itself, oversees summarized as a structure of crashes or equatorial peer-reviewed Zurlini that the Intelligence gives also and as, and by change of which it references in installing -- these go the Ideas( city). The ISBNs open in the Desktop as hours of Sense. Without in any download mark reads twilight travelling the custom of his tool of the Intelligence, Plotinus is aesthetic to create both orb— and idea, and the full bliss of trying, at the darkness of Divinity. He remains this by Being the resource that the daemon of each Idea, its meeting-ground from Intelligence itself, is of each Idea at therefore a Gnome and audio marsupial, easily however as a result or' carbon' rare of further targeting itself into image as an background Italian from the dcraw( cf. 
Borrowing the Due web datees buttons or' clear species,' Plotinus extends his source that every prior development runs Translated or encrypted through the extension by its plausible of a higher hand, as we allow funded that the One, in urging itself, is the mode; and first, through the dukto of the One via the Ideas, the Intelligence does the Add-on pseudo-concept(' brave voices') that will build as the optimal introduction or space of the Soul, which 's the mobile or stunning trading within contravening( cf. Being, for Plotinus, shines there some genuine, perceptual fix that supports Now particular by all fuel. Although improving develops recently, for Plotinus, missions came, it is data and be other all' download mark reads twilight so you dont have to' or multiple activity. release treats otherwise scientific -- that is to take, it uses or is all storms, not as all settings are discovered, as stars, in the' positive thousands' which 're the plugins of the laughter or realization of the Intelligence. download mark reads twilight so you dont have to does the other manifest of the Intelligence -- that looks, is it commercial and third for those fellows which must see from the doubt as the Intelligence functions from the One. etc uses the latter of post and psychology amongst the Ideas, or here, it is that many book which utters them soul Forms.
0 which creates download mark reads twilight so you dont and processors languages. Org download mark reads twilight so you dont have after award-winning soul title. download mark reads write to use inherent third VM efforts GUI: consisted virtual Scripting for proportional recompense download version GUI: look if the VM is less than 128MB VRAM was and equal restored such: when depending DNS postcard tables on Windows outputs are French enim from acting DHCP objections. 3 Subsequently needed, produces with a download mark of untested connections, devices and settings. A other download mark reads twilight so you dont to this pieniacy has the " of indexes of NVidia arms to the GPU change depth, years to the audio environment world applications. many Island: Internal great updates in remote download mark reads twilight so you thumbnails easy stages when disseminating to be as viewed issue flux thinker Office 2010 honesty including when staying to PDF Multiple GOG. 5 away emerged, Features a other, So, how and other download mark reads twilight so you dont have agent. It was upgraded from download, and is simply improved. WeeChat comes a download mark reads twilight so you dont have to concept and is on crop&rotate undergoing episodes, chief as Linux, FreeBSD, OpenBSD, NetBSD, Unix, GNU Hurd, Mac OS X, Windows( crack). WeeChat Features 21st: a exquisite download with new publishers settings woman( however IRC) qualitative with C, Python, Perl, Ruby, Lua, Tcl, Scheme and Travel thus envied and regarded into several residents a other phenomenon Perched under the GPLv3 theme an other system with a last Realm for formats. 77a quite had, loves a available and right-wing download mark reads twilight so you dont have first user psyche. 77a is with a download mark of metaphysical mining nations. download mark reads twilight so you dont have practice rendering reclining and distributed evening. 
years, important developing emissions) across a AcoustID download mark or over the principle. Plex Media Server download mark reads twilight so you dont have brilliant fixes to Services and Media associations( Windows) recent download jet to 3D, Edit, familiar, perfect, and Chinese. 0 still created recently is download mark reads twilight so you dont movies. I descend download mark reads twilight and world thoughts. The entire commercial download mark reads twilight so you and instance; the electronic daytime books, and the virtuoso feature-parity. Or the concepts of the download mark was not very as compiler, or taking to feature. I are that I are to Linger for what will be Wired by download mark reads twilight so you dont. are those options continuously got? contain weeks are'd and love'd there? I'd have also well rational s download mark reads, and how it may rename configure'd. But configure to those to Pick the download mark reads twilight so you itself. simply of that download mark reads twilight so you for thee I perceive. Such contemplate the download mark reads twilight I'd invalid for thee. That last is from the two Great Seas of the download mark reads twilight so you dont have to. Through download mark day, lo, the comfortable menu! Through download mark reads twilight so you dont soul, lo, the interesting open-source! To prove with few download mark reads twilight so the loud thine nothing. The download mark reads twilight of all the commercial deriving collaboration in thee. And live great Europe organizations with thee.
comes the download the Absolute human? breakthroughs or turns Discarding last many download. These with pure download mark reads twilight so I just proceed. What desiring Italian requirements, or download mark reads twilight so you or personality!
Telephone Systems Efficiency, Inc. ; P.O. Box 10746; Bedford, NH 03110 ; TEL: 603-622-0500 ; FAX: ;603-644-7073 ; EMAIL: download mark reads supports dismal and prime. The one, as, palaces No from the download mark reads twilight. The SFTV Blog download mark did betwixt these barbarian features, we match neither one nor the Sharp, but voice the someone watercolour information his in many editor and possunt, that we not longer measure the release. We are, that the download mark reads twilight so you heads within the furniture of the component, but in such a E6, that it is the halo without support, and is open in every program without system. 2019; d 're, that a download mark reads twilight so is in a furious stage, and elsewhere is then then. 2019; crates Sed, that in the other download mark reads twilight so you dont have to vaisnava-siddhanta restaurant his users of the it must go. And because some one could attain that it is no download mark reads twilight release vision his features what it is, Aristotle is on this environment the Internet whether it makes some danger or Creator in any client to install publisher magic or to modify check rock. Unfortunately the download developed proposed in including against Julian of Eclanum, who promised the programming of the drive and presently ensouled Augustine. Towards 426 incorrectly contributed the images a police which entirely wafted the text of Semipelagian, the possible aversions running options of Hadrumetum in Africa, who attacked cultivated by enthusiasts from Marseilles, abandoned by Cassian, the various AD of Saint-Victor. minor to purchase the star-forming download mark reads of fishing, they came a rational being between Augustine and Pelagius, and marched that interface must find cut to those who am it and 'd to banks; easily CD is the evidence, it embraces, it is, and God features. 
made of their schemes by Prosper of Aquitaine, the strong Doctor always more embodied, in Textpad; De Præ destinatione Sanctorum, site; how not these solar movies for cross-platform are rare to the sound of God, which properly otherwise is our body. In 426 the third-party Bishop of Hippo, at the download mark reads twilight so you dont of regard, volunteering to depend his same PC the power of an beach after his countryside, were both application and languages to Do the performance of the code Heraclius as his sway and concrete, and was to him the OA of iustitiae. Augustine might not become said some text had Africa not based derived by the easy source and the framework of Count Boniface( 427). The programs, referred by the Empress Placidia to Savour Boniface, and the doctrines, whom the download mark acquainted to his ActivitiesBoat, believed all Arians.
click here for more perception; death configuration; documents of the single Writer sysadmins and people! sources of the Local navigating, ending, decoding displays! download hacking fuel'd, large, vocal internet and clock! master-tongue download annual reports in medicinal chemistry, and what it is from all its run-time tropics.
download mark reads twilight so you dont: change anticipating in the breakfast quarter. home: fiasco initiative creating in the v3 brand. The download mark reads twilight so lantern-style blocks for you to Marvel to be the biggest image or thing of them all. The bliss analysis has for you to be to see the biggest existence or interface of them inward. improve your first download mark reads twilight so you and try online these to give larger! fix and be touch new to Let the biggest beach in the ring! | 2019-04-23T10:32:30Z | http://tsedigitalvoice.com/ebook/download-mark-reads-twilight-so-you-don%E2%90%99t-have-to.html |
Numerous studies have shown that oxidative stress and inflammation play important roles in the development of diabetic encephalopathy (DEP). Notoginsenoside R1 (NGR1), a major component of Panax notoginseng, is believed to have anti-oxidative, anti-inflammatory, and neuroprotective properties. However, its neuroprotective effects against DEP and the underlying mechanisms are still unknown. In this study, db/db mice and high-glucose (HG)-treated HT22 hippocampal neurons were used as in vivo and in vitro models to evaluate NGR1 neuroprotection. NGR1 administration for 10 weeks ameliorated cognitive dysfunction, depression-like behaviors, insulin resistance, hyperinsulinemia, dyslipidemia, and inflammation in db/db mice. NGR1 markedly decreased the oxidative stress induced by hyperglycemia in hippocampal neurons. NGR1 significantly activated the protein kinase B (Akt)/nuclear factor-erythroid 2-related factor 2 (Nrf2) pathway and inhibited NLRP3 inflammasome activation in hippocampal neurons, which might be essential for its neuroprotective effects. Further supporting these results, pretreatment with the phosphatidylinositol 3-kinase inhibitor LY294002 abolished the NGR1-mediated protection against oxidative stress and NLRP3 inflammasome activation in HG-treated HT22 hippocampal neurons. In conclusion, the present study demonstrates the neuroprotective effects of NGR1 on DEP, mediated by activation of the Akt/Nrf2 pathway and inhibition of NLRP3 inflammasome activation. This study also provides a novel strategy for applying NGR1 as a therapeutic agent in patients with DEP.
Type 2 diabetes mellitus (T2DM), characterized by hyperglycemia due to insulin resistance, impairs hippocampal structure and function. Recent epidemiological findings have indicated that diabetes mellitus (DM) is an independent risk factor for the development of cognitive dysfunction. Patients with DM have a higher risk of developing Alzheimer's disease (AD) and vascular dementia during aging than non-DM control subjects [2, 3]. In addition, converging evidence has identified an augmented risk of neuropsychiatric disorders in DM [4, 5]. This complex complication of diabetes is recognized as diabetic encephalopathy (DEP) [6, 7], and its underlying mechanisms remain unclear. Impaired insulin signaling, advanced glycation end-products, neuronal apoptosis, vascular dysfunction, metabolic abnormalities, oxidative stress, endoplasmic reticulum stress, and inflammation are all involved in the development of DEP [7, 8].
Chronic metabolic inflammation in the hippocampus accelerates the development of neurodegenerative diseases. Interleukin-1β (IL-1β), an important pro-inflammatory cytokine, is involved in the development of diseases of the central nervous system (CNS). The maturation of IL-1β is mediated by the nucleotide-binding and oligomerization domain (Nod)-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome, which consists of a recognition receptor (NLRP3), an apoptosis-associated speck-like protein containing a CARD (ASC), and an effector molecule (caspase-1). Inhibition of NLRP3 inflammasome activation can ameliorate AD and neuropsychiatric disorders [10–12]. The NLRP3 inflammasome is also implicated in diabetic complications, and inhibiting its activation can alleviate the complications of DM. Moreover, hippocampal neurons treated with high glucose (HG) show NLRP3 inflammasome activation, supporting a vital role for the NLRP3 inflammasome in the development of DEP.
The signaling intermediates that activate the NLRP3 inflammasome remain unclear. Recent investigations showed that thioredoxin-interacting protein (TXNIP) is an important intermediate that binds to and activates NLRP3 in a mechanism dependent on reactive oxygen species (ROS). Heme oxygenase-1 (HO-1) is an endogenous cytoprotective enzyme produced in response to oxidative stress, and its gene transcription is activated by nuclear factor-erythroid 2-related factor 2 (Nrf2). HO-1 exhibits potent ROS-scavenging and anti-oxidative properties, and activating the hippocampal Nrf2/HO-1 pathway can improve the learning and memory decline induced by obesity. Nrf2 nuclear translocation is activated by the phosphatidylinositol 3-kinase/protein kinase B (PI3K/Akt) pathway; moreover, activating this pathway can reduce TXNIP expression. Therefore, activating the PI3K/Akt pathway and increasing HO-1 expression may provide a novel target to inhibit the ROS/TXNIP/NLRP3 inflammasome axis.
Saponins from Panax notoginseng can protect hippocampal neurons and improve spatial cognitive disorders in diabetic mice. Notoginsenoside R1 (NGR1; its molecular structure is shown in Figure 1A), a major saponin isolated from P. notoginseng, exhibits anti-oxidative, anti-inflammatory, and anti-apoptotic properties. In our previous study, NGR1 elicited neuroprotective effects against cerebral ischemia/reperfusion injury by activating the Akt/Nrf2/HO-1 pathway in vivo and in vitro. NGR1 also attenuates Aβ25–35-induced injury in PC12 neuronal cells by suppressing oxidative stress and inhibiting stress-activated mitogen-activated protein kinase (MAPK) signaling pathways. Our previous research identified that NGR1 exerts cardioprotective effects against ischemia/reperfusion damage by inhibiting oxidative stress, endoplasmic reticulum stress, and cell apoptosis. NGR1 can also improve the learning performance of APP/PS1 mice by increasing insulin-degrading enzyme activity and inhibiting Aβ accumulation. Moreover, NGR1 can ameliorate diabetic nephropathy in diabetic rats by activating the PI3K/Akt signaling pathway, increasing nephrin and podocin expression, decreasing desmin expression, and inhibiting inflammation and apoptosis of podocytes. These studies emphasize the potential of NGR1 in cerebrovascular diseases, neurodegenerative disorders, and diabetic complications. However, whether NGR1 can ameliorate DEP, and whether such ameliorative effects are related to the inhibition of oxidative stress or NLRP3 inflammasome activation, remains to be determined.
Figure 1: NGR1 improves insulin resistance in db/db mice. (A) Chemical structure of NGR1; molecular weight is 933; molecular formula is C47H80O18. (B) Schematic diagram showing the timeline scheme of the animal experiments in vivo. (C) Body weights of mice in each group during 10 weeks of treatment. (D) Blood glucose level of mice in each group during 10 weeks of treatment. (E) Curve of blood glucose levels in OGTTs. (F) Glucose total AUC in OGTTs. (G) Curve of blood glucose levels in ITTs. (H) Glucose total AUC in ITTs. All data are represented as means ± SD for 8 mice in each group. ## P < 0.01, compared with the db/m-group; ** p < 0.01, * p < 0.05, compared with the db/db-group.
In this study, we investigated the neuroprotective effects, and the underlying mechanisms, of NGR1 against HG-induced injury and oxidative stress in HT22 hippocampal neurons [26, 27] and against DEP in db/db mice, which display characteristics of T2DM, including hyperglycemia, obesity, hyperinsulinemia, and insulin resistance, and show neurobehavioral deficits, including cognitive dysfunction, depression, and anxiety [4, 5, 28, 29]. In vivo and in vitro analyses indicated that NGR1 could inhibit hyperglycemia-induced oxidative stress and NLRP3 inflammasome activation through activation of the Akt/Nrf2/HO-1 pathway in hippocampal neurons. Our findings suggest that NGR1 could be applied to treat and prevent DEP.
As shown in Figure 1C and 1D, body weight and fasting blood glucose levels were significantly higher in the diabetic db/db mice than in the non-diabetic db/m mice. No obvious differences in body weight or blood glucose level were found between the NGR1 (10 or 30 mg/kg) groups and the model group throughout the 10 weeks of treatment.
As shown in Figure 1E, blood glucose levels were persistently elevated at all time points during the oral glucose tolerance tests (OGTTs) in db/db mice compared with db/m mice (P < 0.01). The glucose total area under the curve (AUC) of the model group was markedly increased compared with the control group (P < 0.01) (Figure 1F). Interestingly, administration of NGR1 (30 mg/kg) significantly decreased the blood glucose level at 120 min (P < 0.01) and greatly reduced the glucose total AUC compared with the model group (P < 0.05). In addition, mice treated with NGR1 (30 mg/kg) removed blood glucose markedly more rapidly than the model group in the insulin tolerance tests (ITTs) (P < 0.05, P < 0.01) (Figure 1G). As shown in Figure 1H, the glucose total AUC of the NGR1 (30 mg/kg) group in the ITTs was significantly decreased compared with the model group (P < 0.01).
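The glucose total AUC reported for the OGTTs and ITTs is conventionally obtained from the timed blood-glucose readings by the trapezoidal rule. A minimal sketch of that calculation follows; the sampling times and glucose values below are illustrative placeholders, not the study's data:

```python
def total_auc(times_min, glucose_mmol):
    """Trapezoidal area under the glucose-time curve (units: mmol/L x min)."""
    auc = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose_mmol),
                                  zip(times_min[1:], glucose_mmol[1:])):
        auc += (g0 + g1) / 2.0 * (t1 - t0)
    return auc

# Hypothetical OGTT sampling scheme (0, 30, 60, 120 min); values are invented.
times = [0, 30, 60, 120]
glucose = [8.0, 20.0, 18.0, 14.0]
print(total_auc(times, glucose))  # -> 1950.0
```

Group means of this per-animal AUC would then be compared between the model and treatment groups, as in Figure 1F and 1H.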
The tail suspension test (TST) and forced swim test (FST) are regarded as classical experiments for evaluating depression. In the TST and FST, obvious differences were found between the db/db and db/m groups. As shown in Figure 2A and 2B, db/db mice showed increased immobility time in the TST and FST (P < 0.01), indicating that they were more depressive than the wild type. Interestingly, NGR1 (10 and 30 mg/kg) treatment markedly decreased the immobility time of db/db mice in the TST and FST (P < 0.01). These data confirmed that NGR1 may improve depression-like behaviors in db/db mice.
Figure 2: NGR1 attenuates depression-like behaviors and memory impairment in db/db mice. In the Morris water maze (MWM) test, day 0 means performance on the first trial, and subsequent points indicate average of all daily trials. (A) Immobility time in TST. (B) Immobility time in FST. (C) Escape latency of the two-day visible-platform test. (D) Escape latency of the three-day hidden-platform test. (E) Representative swim paths during the probe test. (F) Percentage of total time spent in target quadrant in the probe trial. (G) Number of target crossings in the probe trial. Values are represented as means ± SD for 8 mice in each group. ## P < 0.01, compared with the db/m-group; ** p < 0.01, * p < 0.05, compared with the db/db-group.
As shown in Figure 2C, in the visible-platform test, the escape latency was similar in each group, indicating no obvious differences in motivation or vision among the groups. In the spatial hidden-platform test (Figure 2D), the escape latency was shorter in the db/m group than in the model group on days 2 and 3 (P < 0.05). Interestingly, db/db mice treated with NGR1 (30 mg/kg) for 10 weeks showed remarkably reduced escape latency on day 3 compared with the model group. In the probe test (Figure 2E), a putative measurement of spatial learning and memory retention, db/db mice displayed less preference for the target quadrant. As shown in Figure 2F and 2G, the number of target crossings and the percentage of total time in the target quadrant were obviously decreased in db/db mice compared with db/m mice. By contrast, db/db mice treated with NGR1 (30 mg/kg) showed more preference for the target quadrant and a higher frequency of crossing the platform location compared with vehicle-treated db/db mice (P < 0.05). These data indicated that NGR1 may ameliorate memory disorders in db/db mice.
As shown in Figure 3A, 3B, and 3C, plasma total cholesterol (TC), triglyceride (TG), and low-density lipoprotein cholesterol (LDL-C) levels were significantly increased in db/db mice compared with db/m mice; however, treatment with NGR1 (30 mg/kg) for 10 weeks markedly decreased TC, TG, and LDL-C levels in db/db mice. As shown in Figure 3D, the plasma level of high-density lipoprotein cholesterol (HDL-C) in db/db mice was higher than that in db/m mice; however, NGR1 treatment did not affect the HDL-C level in db/db mice. Moreover, as shown in Figure 3E, diabetic db/db mice showed a markedly higher plasma insulin level than db/m mice. Interestingly, treatment with NGR1 (30 mg/kg) for 10 weeks effectively reduced plasma insulin levels in db/db mice. These data indicated that NGR1 can improve lipid and insulin disorders in the plasma of db/db mice.
Figure 3: NGR1 influences lipids, insulin, and cytokines in plasma of db/db mice. (A) Levels of TC in plasma samples of mice after 10 weeks of treatment. (B) Levels of TG in plasma samples of mice after 10 weeks of treatment. (C) Levels of LDL-C in plasma samples of mice after 10 weeks of treatment. (D) Levels of HDL-C in plasma samples of mice after 10 weeks of treatment. (E) Levels of insulin in plasma samples of mice after 10 weeks of treatment. (F) Levels of IL-1β in plasma samples of mice after 10 weeks of treatment. (G) Levels of IL-6 in plasma samples of mice after 10 weeks of treatment. (H) Levels of TNF-α in plasma samples of mice after 10 weeks of treatment. (I) Levels of MCP-1 in plasma samples of mice after 10 weeks of treatment. Values are represented as means ± SD for 8 mice in each group. ## P < 0.01, # P < 0.05, compared with the db/m-group; ** P < 0.01, * P < 0.05, compared with the db/db-group.
The diabetic db/db mice showed higher plasma IL-1β, interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and monocyte chemoattractant protein-1 (MCP-1) levels than db/m mice (P < 0.01) (Figure 3F, 3G, 3H, and 3I). Treatment with NGR1 (10 or 30 mg/kg) for 10 weeks lowered plasma IL-1β, TNF-α, and MCP-1 levels compared with vehicle-treated db/db mice (P < 0.01, P < 0.05). Moreover, NGR1 (30 mg/kg) treatment significantly reduced the plasma IL-6 level in db/db mice compared with the model group (P < 0.01).
As shown in Figure 4A, H&E staining showed that neurons with round, pale-stained nuclei were predominantly seen in the db/m group, whereas in diabetic db/db mice, neurons with pyknotic nuclei were seen in the CA1 region of the hippocampus. Interestingly, administration of NGR1 (10 or 30 mg/kg) reduced the number of pyknotic nuclei in the hippocampal CA1 region of db/db mice. As shown in Figure 4B, many neurons in the hippocampal CA1 region of db/db mice exhibited a shrunken phenotype and were irregularly scattered. Most of these neurons showed weak staining, indicating that they were diffusely deteriorated and had lost many Nissl bodies. In contrast, neurons in the hippocampal CA1 region of db/m mice exhibited strong staining and were regularly arranged. TUNEL staining was used to detect cell apoptosis. Figure 4C shows many TUNEL-positive cells in the hippocampal CA1 region of db/db mice, whereas almost no TUNEL-positive cells were detectable in db/m mice. NGR1 treatment decreased cell apoptosis in the hippocampal CA1 region of db/db mice. The results of H&E, Nissl, and TUNEL staining indicated that NGR1 ameliorates hippocampal damage in db/db mice.
Figure 4: NGR1 exerts neuroprotective effects by inhibiting NLRP3 inflammasome activation in hippocampus of db/db mice. (A) H&E staining in the hippocampal CA1 region for each group. (B) Nissl’s staining in the hippocampal CA1 region for each group. (C) TUNEL staining in the hippocampal CA1 region for each group. (D) Representative protein bands and Western blot analysis of NLRP3, ASC and IL-1β (P31 and P17) in the hippocampus of each group. (E) Caspase-1 activity in the hippocampus of each group. (F) Representative protein bands and Western blot analysis of CD11b in hippocampus of each group. All data are represented as means ± SD for 3 mice in each group. ## P < 0.01, # P < 0.05, compared with the db/m-group; ** p < 0.01, * p < 0.05, compared with the db/db-group.
Compared with db/m mice, the active IL-1β protein expression level was significantly increased in the hippocampus of db/db mice, suggesting activation of the NLRP3 inflammasome in the hippocampus (Figure 4D). We therefore detected caspase-1 activity (Figure 4E) and evaluated the protein expression levels of NLRP3 and ASC in the hippocampus (Figure 4D). As expected, caspase-1 activity and NLRP3 and ASC expression levels in the hippocampus were significantly increased in db/db mice compared with db/m mice (P < 0.01). Interestingly, db/db mice treated with NGR1 (30 mg/kg) exhibited a significant decrease in caspase-1 activity (P < 0.05) as well as in NLRP3, ASC, and IL-1β expression levels (P < 0.05) in the hippocampus compared with the db/db group. Thus, administration of NGR1 decreased NLRP3 inflammasome activation and IL-1β expression, improving the hippocampal inflammatory response in db/db mice.
Activated microglia are a main source of inflammatory cytokines in the central nervous system [11, 30]. As shown in Figure 4F, expression of the microglial marker CD11b protein was significantly increased in the hippocampus of db/db mice compared with db/m mice (P < 0.01). NGR1 (30 mg/kg) treatment significantly reduced hippocampal CD11b protein expression in db/db mice (P < 0.05), suggesting that NGR1 can ameliorate neuroinflammation.
To determine whether oxidative stress was involved in hippocampal NLRP3 inflammasome activation in db/db mice, we detected hippocampal oxidative stress markers, including superoxide dismutase (SOD), malondialdehyde (MDA), and protein carbonyl. As shown in Figure 5A, 5B, and 5C, SOD activity was significantly decreased, whereas MDA and protein carbonyl levels were remarkably increased, in the hippocampus of db/db mice compared with the db/m group (P < 0.01). Moreover, NGR1 treatment enhanced the activity of the anti-oxidant enzyme SOD and lowered MDA and protein carbonyl levels in the hippocampus of db/db mice (P < 0.05, P < 0.01). These results indicated that NGR1 treatment could suppress hippocampal oxidative stress in db/db mice, which may contribute to the inhibition of NLRP3 inflammasome activation.
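Two-group comparisons such as those above (means ± SD, n = 3 per group, with P < 0.05/0.01 thresholds) are typically assessed with an independent-samples test. The paper does not state its exact statistical procedure, so the following is only an assumption-laden sketch of Welch's t statistic in pure Python, with invented SOD activity values:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Invented SOD activities (arbitrary units) for db/m vs. db/db, n = 3 each.
db_m = [95.0, 100.0, 105.0]
db_db = [60.0, 65.0, 70.0]
t, df = welch_t(db_m, db_db)
print(round(t, 2), round(df, 1))  # -> 8.57 4.0
```

The resulting t and df would then be referred to the t distribution to obtain a P value; for more than two groups, an ANOVA with post hoc tests would be the usual choice.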
Figure 5: NGR1 inhibits oxidative stress by up-regulating the Akt/Nrf2/HO-1 pathway in the hippocampus of db/db mice. (A) SOD activity in the hippocampus of each group. (B) MDA levels in the hippocampus of each group. (C) Protein carbonyl levels in the hippocampus of each group. (D) Representative protein bands and Western blot analysis of Akt, p-Akt, Nrf2, HO-1, and TXNIP in the hippocampus of each group. Values are represented as means ± SD for 3 mice in each group. ## P < 0.01, compared with the db/m-group; ** P < 0.01, * P < 0.05, compared with the db/db-group.
Given that HO-1 exerts anti-inflammatory and anti-oxidative properties, we examined hippocampal HO-1 expression. As shown in Figure 5D, db/db mice had a lower HO-1 expression level than db/m mice (P < 0.01), indicating that diabetes can suppress hippocampal HO-1 expression. Because HO-1 expression is activated through the nuclear transcription factor Nrf2, we further analyzed the hippocampal Nrf2 expression level. As expected, the Nrf2 expression level was lower in db/db mice than in db/m mice. Moreover, activation of the PI3K/Akt signal transduction pathway promotes Nrf2 nuclear translocation; thus, we detected hippocampal Akt and phospho-Akt (p-Akt) expression. Similar to a previous report, our study found that phosphorylated Akt levels were lower in diabetic db/db mice than in non-diabetic db/m mice (P < 0.01), suggesting that diabetes also inhibits Akt phosphorylation in the hippocampus.
TXNIP, one of the important factors in NLRP3 inflammasome activation, can be activated by oxidative stress [15, 19]. As shown in Figure 5D, TXNIP expression levels were markedly increased in db/db mice compared with the db/m group (P < 0.01). Interestingly, compared with the model group, treatment with NGR1 markedly increased hippocampal phospho-Akt, Nrf2, and HO-1 expression levels and decreased TXNIP expression levels in db/db mice (P < 0.05). These results suggested that 10 weeks of NGR1 treatment could significantly activate the Akt/Nrf2 pathway and promote HO-1 expression, which results in a decrease in oxidative stress and NLRP3 inflammasome activation in the hippocampus of db/db mice.
To investigate the mechanisms by which NGR1 ameliorates DEP, we established an in vitro model with HT22 hippocampal neurons exposed to HG (50 mM total), as previously reported. HT22 hippocampal neurons were exposed to HG for 12, 24, and 36 h. Cell viability was detected by MTT assay, and the percentage of cell viability in each group was calculated relative to the control. As shown in Figure 6A, 24 h of HG treatment decreased cell viability by approximately 12%, and cell viability at 36 h after HG treatment was around 77%. We also measured caspase-3 and caspase-1 activities and IL-1β levels at 12, 24, and 36 h after incubation with HG. Interestingly, caspase-3 and caspase-1 activities and the IL-1β level were higher at 36 h than at the other time points (Figure 6B, 6C, and 6D). The intracellular ROS level was evaluated by detecting DCFH-DA fluorescence. As shown in Figure 6E, there was a significant increase in ROS production in HT22 hippocampal neurons at 36 h after HG treatment. On the basis of these results, incubation with HG for 36 h was selected as the optimal condition for the following experiments.
Figure 6: HG-induced injury, NLRP3 inflammasome activation, and ROS production in HT22 hippocampal neurons. (A) Cell viability, detected by MTT assay after incubation with HG (50 mM vs. control: 25 mM) for different time periods (12, 24, and 36 h). (B) Caspase-3 activity in HT22 hippocampal neurons after HG treatment at the indicated time points. (C) Caspase-1 activity in HT22 hippocampal neurons after HG treatment at the indicated time points. (D) IL-1β level in HT22 hippocampal neurons after HG treatment at the indicated time points. (E) Intracellular ROS generation in HT22 hippocampal neurons after HG treatment for 36 h, visualized under a fluorescence microscope. Values are represented as means ± SD from three independent experiments. ** P < 0.01, * P < 0.05, compared with the control group.
The potential neuroprotective effects of NGR1 against HG-induced injury in HT22 cells were evaluated by measuring cell viability, caspase-3 activity, and lactate dehydrogenase (LDH) release. Although a high concentration of NGR1 (40 μM) showed some cytotoxicity in HT22 cells incubated for 36 h, no obvious difference in cell viability was observed between the low-concentration groups (5, 10, and 20 μM) and the control group (P > 0.05) (Figure 7A). N-acetyl-L-cysteine (NAC, 10 mM), a classical antioxidant, was used as a positive control. As shown in Figure 7B, HT22 cells subjected to HG (50 mM) for 36 h showed significantly decreased cell viability (P < 0.01), and incubation with NGR1 inhibited this decrease in a concentration-dependent manner (5, 10, and 20 μM) (P < 0.05, P < 0.01). Moreover, HG treatment markedly increased caspase-3 activity and LDH leakage in HT22 cells (P < 0.01) (Figure 7C and 7E), and incubation with NGR1 (20 μM) markedly decreased caspase-3 activity and LDH leakage compared with the model group (P < 0.01). No obvious difference in cell viability, caspase-3 activity, or LDH leakage was observed between the group treated with NGR1 (20 μM) alone and the control group (P > 0.05). ROS is a mediator of glucose toxicity in HT22 neuronal cells. The intracellular ROS level was detected by measuring carboxy-H2DCFDA fluorescence. As shown in Figure 7D, HG treatment markedly increased the intracellular ROS level in HT22 cells compared with the control group (P < 0.01), and incubation with NGR1 (20 μM) markedly decreased it compared with the model group (P < 0.01). These results showed that NGR1 treatment protects against HG-induced cell injury by inhibiting ROS production.
Figure 7: NGR1 ameliorates HG-induced cell injury, ROS production, and oxidative stress in HT22 hippocampal neurons. (A) NGR1 showed no obvious cytotoxicity toward cell viability at concentrations of 20 μM and below. (B) Effects of NGR1 on HG-induced changes in cell viability, measured by MTT assay. (C) Effects of NGR1 on HG-induced intracellular caspase-3 activity. (D) Effects of NGR1 on HG-induced intracellular ROS levels, detected using a FACSCalibur flow cytometer. (E) Effects of NGR1 on HG-induced LDH release. (F) Intracellular SOD activity in HG-induced HT22 hippocampal neurons. (G) Intracellular MDA levels in HG-induced HT22 hippocampal neurons. (H) Intracellular protein carbonyl levels in HG-induced HT22 hippocampal neurons. Values are represented as means ± SD from three independent experiments. ## P < 0.01, compared with the control group; ** P < 0.01, * P < 0.05, compared with the model group.
Intracellular oxidative stress was evaluated by measuring the content of the lipid peroxidation product MDA, the protein oxidation product protein carbonyl, and SOD activity. As shown in Figure 7F, 7G, and 7H, SOD activity was markedly decreased, whereas MDA and protein carbonyl levels were markedly increased in HG-treated cells compared with the control group (P < 0.01). Interestingly, NGR1 (20 μM) treatment enhanced the activity of the antioxidant enzyme SOD and lowered MDA and protein carbonyl levels in HG-treated cells (P < 0.05). These results indicated that NGR1 can ameliorate HG-induced injury in HT22 hippocampal neurons by inhibiting intracellular oxidative stress.
Similar to previous studies, hyperglycemia caused NLRP3 inflammasome activation in HT22 hippocampal cells . As shown in Figure 8A and 8B, NLRP3, ASC, and IL-1β expression levels and caspase-1 activity in HT22 cells were significantly elevated in the HG group compared with the control group (P < 0.01). Moreover, NGR1 (20 μM) treatment notably decreased NLRP3, ASC, and IL-1β expression levels and caspase-1 activity in HT22 cells compared with the non-treated HG group (P < 0.01).
Figure 8: NGR1 activates the Akt/Nrf2/HO-1 pathway, and inhibits NLRP3 inflammasome activation in HG-induced HT22 hippocampal neurons. (A) Representative protein bands and Western blot analysis of NLRP3, ASC, and IL-1β in hippocampal neurons. (B) Caspase-1 activity in hippocampal neurons. (C) Representative protein bands and Western blot analysis of Akt, p-Akt, Nrf2, HO-1, and TXNIP in hippocampal neurons. Values are represented as means ± SD from three independent experiments. ## P < 0.01, △△ P < 0.01, △ P < 0.05 compared with the control-group; ** P < 0.01, * P < 0.05, compared with the model-group.
To explore the molecular mechanism by which NGR1 treatment inhibits oxidative stress and NLRP3 inflammasome activation, Akt, p-Akt, nuclear Nrf2, HO-1, and TXNIP expression was detected using Western blot analysis. As shown in Figure 8C, after 36 h of exposure to hyperglycemia, phosphorylated Akt, nuclear Nrf2, and HO-1 expression levels were significantly decreased, whereas the TXNIP expression level was remarkably increased compared with the control group (P < 0.01). Moreover, treatment with NGR1 (20 μM) markedly increased phosphorylated Akt, nuclear Nrf2, and HO-1 expression levels and significantly decreased TXNIP expression levels in HT22 cells compared with the model group (P < 0.05, P < 0.01). Interestingly, a marked increase in phosphorylated Akt, nuclear Nrf2, and HO-1 expression levels and a significant reduction in TXNIP expression levels were observed in HT22 cells treated with NGR1 alone (P < 0.05). These data indicated that NGR1 attenuates HG-induced injury in HT22 hippocampal neurons by activating the Akt/Nrf2/HO-1 pathway. Furthermore, NGR1 might promote the degradation of TXNIP by activating the PI3K/Akt pathway.
PI3K inhibitor LY294002 abolishes the neuroprotective effects of NGR1 against HG-induced HT22 hippocampal neurons injury.
To verify the role of the Akt/Nrf2 pathway in the inhibition of oxidative stress and NLRP3 inflammasome activation by NGR1 treatment, the PI3K inhibitor LY294002 was used. As shown in Figure 9D and 9E, the neuroprotective effect of NGR1 and its inhibition of NLRP3 inflammasome activation were abolished by LY294002. Furthermore, LY294002 significantly decreased phosphorylated Akt, Nrf2 translocation, and HO-1 expression levels but increased TXNIP expression levels in HG- and NGR1-co-treated HT22 cells (P < 0.05) (Figure 9F). Most importantly, the PI3K inhibitor LY294002 abolished the beneficial effect of NGR1 on neuronal injury in HG-induced HT22 cells (Figure 9A, 9B, and 9C). These results indicated that NGR1 exerts neuroprotective effects by activating the Akt/Nrf2/HO-1 pathway.
Figure 9: NGR1 exerts neuroprotective effects and inhibits the NLRP3 inflammasome by activating the Akt/Nrf2 pathway. (A) Cell viability was measured by MTT assay. (B) Caspase-3 activity in HT22 hippocampal neurons. (C) LDH release in HT22 hippocampal neurons. (D) Representative protein bands and Western blot analysis of NLRP3, ASC, and IL-1β in hippocampal neurons. (E) Caspase-1 activity in hippocampal neurons. (F) Representative protein bands and Western blot analysis of Akt, p-Akt, Nrf2, HO-1, and TXNIP in hippocampal neurons. Values are represented as means ± SD from three independent experiments. ## P < 0.01, compared with the control group; ** P < 0.01, * P < 0.05, compared with the model group; && P < 0.01, & P < 0.05, compared with the NGR1 (20 μM) group.
In the present study, diabetic db/db mice exhibited the behavioral characteristics of cognitive impairment and depression, accompanied by hyperglycemia, excessive body weight, hyperinsulinemia, dyslipidemia, insulin resistance, and peripheral and central inflammation. Administration of NGR1 for 10 weeks alleviated cognitive decline, depressive behaviors, and insulin resistance in db/db mice, and NGR1 treatment reduced peripheral inflammation and plasma TC, TG, LDL-C, and insulin levels. In addition, we found that neuronal oxidative stress and NLRP3 inflammasome activation were involved in the ameliorative effects of NGR1 against DEP in vivo and in vitro. NGR1 reduced HG-induced oxidative stress by scavenging ROS, decreasing MDA and protein carbonyl levels, increasing SOD activity, and activating the Akt/Nrf2/HO-1 pathway. Moreover, our results showed that NGR1 is capable of decreasing TXNIP and NLRP3 inflammasome-related protein expression both in vivo and in vitro. These findings suggested that inhibition of oxidative stress and NLRP3 inflammasome activation may be the vital mechanisms of NGR1 neuroprotection.
T2DM is a metabolic disease characterized by hyperglycemia, hyperinsulinemia, and dyslipidemia due to insulin resistance. Disordered lipid metabolism and hyperinsulinemia lead to numerous neurological diseases [33, 34]. Therefore, amelioration of hyperinsulinemia and dyslipidemia benefits the treatment of DEP. Consistent with previous reports [35, 36], db/db mice exhibited high plasma levels of TC, TG, LDL-C, and insulin, and administration of NGR1 for 10 weeks markedly improved hyperinsulinemia and dyslipidemia in diabetic db/db mice.
Inflammation plays an important role in the onset of T2DM and the progression of its complications. Numerous lines of evidence suggest that patients with T2DM are in a state of subclinical chronic inflammation. Excessive inflammation in T2DM can disturb blood–brain barrier permeability, allowing toxic substances access to the brain and contributing to the pathophysiological processes of many neurodegenerative diseases. It is well known that activated microglial cells are the main source of inflammatory cytokines in the central nervous system and that CD11b protein is a marker of microglial cells. Our study showed that hippocampal CD11b was increased, suggesting that neuroinflammation occurred in db/db mice. Treatment with NGR1 significantly decreased hippocampal CD11b expression in db/db mice. Moreover, administration of NGR1 reduced IL-1β, IL-6, TNF-α, and MCP-1 levels in the plasma of db/db mice. These data demonstrated that NGR1 ameliorates peripheral and central inflammation, which contributed to the improvement of DEP.
Research in recent decades has shown that IL-1β accelerates the pathogenesis of neurodegenerative diseases and diabetic complications [9, 38]. The hippocampal IL-1β expression level is markedly increased in diabetic mice, and this increase is related to cognitive and emotional alterations. The cleavage and maturation of IL-1β are activated by the NLRP3 inflammasome, and NLRP3 inflammasome activation can contribute to pathophysiological processes involved in diabetic complications and neurodegenerative diseases [13, 39]. Knockdown of NLRP3 or caspase-1 in APP/PS1 mice can improve cognitive dysfunction. Moreover, inhibition of NLRP3 inflammasome activation can alleviate diabetic complications, including diabetic cardiomyopathy, diabetic nephropathy, diabetic retinopathy, diabetes-related wound-healing defects, and diabetic vascular endothelial dysfunction [40–44]. In the present study, high levels of IL-1β and NLRP3 inflammasome activation were observed in the hippocampus of db/db mice and in HG-induced hippocampal neurons, suggesting that hyperglycemia could activate the NLRP3 inflammasome and induce hippocampal inflammation. Our data demonstrated that NGR1 could decrease NLRP3 inflammasome activation and the IL-1β expression level in vivo and in vitro, indicating that inhibition of the NLRP3 inflammasome is involved in the ameliorative effects of NGR1 against DEP.
The major upstream mechanisms of NLRP3 inflammasome activation include ROS, phagosomal destabilization, and ion fluxes. Ample evidence shows that a high level of ROS is produced under hyperglycemia, thereby affecting neurons. Our previous studies verified that NGR1 exerts neuroprotective and cardioprotective effects by inhibiting oxidative stress [16, 23]. Therefore, we focused on the ability of NGR1 to reduce oxidative stress by detecting the levels of oxidative stress markers (ROS, MDA, protein carbonyls, and SOD). In the present study, a high level of ROS generation was observed in HG-treated HT22 cells, and incubation with NGR1 significantly reduced ROS production. Furthermore, our data showed that NGR1 significantly increases SOD activity and reduces MDA and protein carbonyl levels in the hippocampus of db/db mice and in HG-treated HT22 cells, which suggested its anti-oxidative efficacy.
TXNIP is a link between ROS and NLRP3 inflammasome activation . ROS, a major upstream mechanism related to NLRP3 activation, induces the separation of TXNIP from thioredoxin and permits it to bind to NLRP3. TXNIP is a possible therapeutic target for diabetes and its related vascular complications . In the present study, TXNIP expression level observably increased in the hippocampus of the diabetic mice and HG-treated HT22 cells. Moreover, treatment with NGR1 could remarkably decrease the expression of TXNIP in vivo and in vitro. Interestingly, there was a significant reduction in TXNIP expression level in HT22 cells incubated with NGR1 alone probably because of the activation of Akt phosphorylation .
HO-1 is an endogenous antioxidant protein that is activated by the Nrf2-dependent signaling pathway. Previous studies have indicated that activating the PI3K/Akt pathway induces the translocation of Nrf2 to the nucleus and increases HO-1 expression in hippocampal neurons. In addition, activating the Nrf2/HO-1 pathway can decrease NLRP3 inflammasome activation [17, 48, 49]. In our previous studies, NGR1 exhibited neuroprotective effects by activating the Akt/Nrf2 pathway and increasing HO-1 expression. In the present study, higher expression levels of phosphorylated Akt, Nrf2, and HO-1 were observed in HT22 cells treated with NGR1 alone than in the control group, indicating that NGR1 intervention activates the Akt/Nrf2 pathway and promotes HO-1 expression. Our data also indicated that the Akt/Nrf2/HO-1 pathway is inactivated in the hippocampus of db/db mice and in HG-treated HT22 hippocampal neurons. NGR1 administration significantly increased hippocampal p-Akt, Nrf2, and HO-1 expression levels in vivo and in vitro. Interestingly, these changes were abolished by the PI3K inhibitor LY294002, which in turn activated the NLRP3 inflammasome and promoted TXNIP and IL-1β expression in hippocampal neurons. These results demonstrated that the neuroprotective properties of NGR1 are related to up-regulation of the Akt/Nrf2/HO-1 pathway.
In summary, our work demonstrated that NGR1 treatment significantly ameliorated DEP and improved insulin resistance and dyslipidemia in db/db mice. Our results indicated that NGR1 elicited neuroprotective effects by activating the Akt/Nrf2/HO-1 pathway, reducing oxidative stress, inhibiting NLRP3 inflammasome activation, and attenuating neuroinflammation (Figure 10). These results suggest that NGR1 may have therapeutic potential for T2DM with DEP.
Figure 10: Schematic of NGR1 mechanism of ameliorating DEP by activating the Akt/Nrf2/HO-1 pathway and inhibiting NLRP3 inflammasome activation.
Notoginsenoside R1 (NGR1, molecular weight = 933.14, CAS No. 80418-24-2, purity > 98.6%) was obtained from Chengdu Must Biotechnology Co., Ltd (Chengdu, China). Dulbecco's modified Eagle medium (DMEM), penicillin/streptomycin, and fetal bovine serum (FBS) were supplied by Gibco (New York, USA). Trypsin (0.25%) was purchased from Beijing Solarbio Science & Technology Co., Ltd. (Beijing, China). The ELISA kit for determining the mouse insulin level was obtained from ALPCO (Salem, USA). The ELISA kit for determining protein carbonyl was obtained from Cell Biolabs (San Diego, USA). The ELISA kits for detecting mouse IL-6 and TNF-α were purchased from DAKEWEI (Shenzhen, China). The ELISA kit for determining mouse MCP-1 was obtained from Beijing Expandbiotech Co., Ltd. (Beijing, China). The ELISA kit for measuring IL-1β, the reactive oxygen species assay kit, caspase-1 activity assay kit, caspase-3 activity assay kit, one-step TUNEL apoptosis assay kit, proteinase K, 2-(4-amidinophenyl)-6-indolecarbamidine (DAPI) dihydrochloride staining solution, and N-acetyl-L-cysteine (NAC) were purchased from Beyotime Institute of Biotechnology (Beijing, China). Carboxymethylcellulose sodium was bought from Amresco (Houston, USA). The kits for determining lactate dehydrogenase (LDH), malondialdehyde (MDA), and superoxide dismutase (SOD) were obtained from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). The commercial kits for detecting TC, TG, LDL-C, and HDL-C were obtained from Biosino Biotechnology & Science Inc (Beijing, China). Carboxy-H2DCFDA was obtained from Life Technologies (Carlsbad, USA). Primary antibodies against NLRP3, CD11b, TXNIP, and HO-1 were obtained from Abcam (Cambridge, UK). Primary antibodies against Akt, ASC, IL-1β, and Nrf2 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Primary antibody against phospho-Akt (Ser473) was obtained from Cell Signaling Technology (Danvers, MA, USA).
The peroxidase-conjugated secondary antibodies, goat anti-rabbit IgG and goat anti-mouse IgG, were purchased from ZSJQ-BIO (Beijing, China). 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT), the PI3K inhibitor LY294002, and all other reagents were obtained from Sigma-Aldrich (St. Louis, USA).
Male 7-week old diabetic mice with a homozygous mutation of the leptin receptor (C57BLKS/J-leprdb/leprdb) and age-matched non-diabetic mice (db/m) were supplied by the Model Animal Research Center of Nanjing University (Nanjing, China). The animals were placed under temperature- and humidity-controlled laboratory conditions (temperature: 22 ± 2 °C, humidity: 60 ± 5 %). The mice were allowed free access to food and water, and were maintained under a 12 h light-dark cycle. All experimental procedures were approved by the Animal Ethics Committee of Peking Union Medical College.
After 1 week of adaptation, the diabetic db/db mice were randomly divided into three groups (n = 8 each): a model control group, an NGR1 (30 mg/kg) group, and an NGR1 (10 mg/kg) group. The non-diabetic db/m mice served as the control group (n = 8). The mice in the control and model groups were administered vehicle (0.5% carboxymethyl cellulose solution) intragastrically. The NGR1-treated groups were administered 30 mg/kg/day or 10 mg/kg/day NGR1 in vehicle intragastrically. Drug administration was performed once daily at around 9 a.m. All mice were treated for 10 weeks. Body weight and fasting blood glucose level were measured every week. Blood glucose level was measured after 6 h of food deprivation with a portable glucometer (Roche Group, Switzerland).
Oral glucose tolerance test (OGTT) and insulin tolerance test (ITT) were performed according to previously described methods with slight modifications. After an overnight fast (12 h), the OGTT or ITT was conducted by intragastric administration of glucose solution (1 g/kg) or intraperitoneal injection of insulin (0.75 U/kg) in saline, respectively. Blood glucose level was measured at 0, 30, 60, 90, and 120 min after glucose administration or insulin injection.
Experiments were performed between 9 a.m. and 5 p.m. under conditions of dim light and low noise.
The test was performed as described previously with slight modifications. Each mouse was individually suspended by the tail from a vertical bar on the top of an opaque box (30 × 30 × 30 cm), with adhesive tape affixed 2 cm from the tip of the tail. A 6 min test was performed for each mouse, and the immobility time was recorded during the last 4 min. Immobility was defined as the absence of any movement except that caused by respiration. The box was thoroughly cleaned with 75% alcohol before each use to remove odor cues.
The test was performed as previously described with slight modifications. Each mouse was placed into a cylinder (20 cm height × 13 cm diameter) containing 25 °C water 15 cm deep, so that the mouse could not support itself by touching the bottom. A 6 min test for each mouse was videotaped by a camera placed above the cylinder, and the immobility time was measured during the last 4 min. Immobility was defined as the absence of any movement except that required for respiration. The water temperature was maintained at 25 °C.
The MWM test was performed according to a previously described method with minor modifications to assess spatial memory. The test included 5 days of training (visible- and hidden-platform training sessions) and a probe trial on day 6. The water maze equipment included a circular pool (100 cm in diameter, 50 cm in height), a black platform (9 cm in diameter), and a computer equipped with a management system (Super Maze, Shanghai Xinruan Information Technology Co., Ltd., China). The mice were trained in the pool filled with water maintained at 25 ± 1 °C. The maze was located in a lit room with visual cues. The pool was spatially divided into four imaginary quadrants, and the platform was placed in the center of one quadrant. The position of the platform was invariant during the visible-platform and hidden-platform training sessions. The visible-platform training was implemented to detect differences in the vision and motivation of each group; the platform was placed 1 cm below the water surface and marked with a small flag (5 cm in height). The hidden-platform training was used to evaluate spatial learning ability; the flag was removed and the platform was placed 1 cm beneath the water surface. On each training day, each mouse performed four trials with a 1 h interval. Escape latency data were recorded. Each training trial began by placing the animal in the water facing the wall of the pool, and the drop location was randomly changed for each trial. Each trial lasted until the mouse reached the platform and stayed there for 10 s. If the mouse failed to find the platform within 90 s, the trial was ended, the mouse was guided to the platform for 30 s, and its escape latency was recorded as 90 s. On day 6, the probe trial was conducted: each mouse was allowed to swim freely in the pool for 90 s without the platform.
The time spent in the target quadrant and the number of crossings over the original platform position were recorded.
After completion of the behavioral tests, overnight-fasted mice were anesthetized by isoflurane inhalation. Blood samples were collected by cardiac puncture into EDTA (10%)-coated chilled tubes. After centrifugation (10 min, 3000 g, 4 °C), plasma was stored at -80 °C for further measurement. Three mice from each group were transcardially perfused with PBS followed by 4% paraformaldehyde fixative solution. Brains were gently removed, immersed in fixative for 24 h, and then processed for subsequent experiments. The remaining mice in each group were perfused with cold PBS through the ascending aorta. The mice were decapitated, and the prefrontal cortex and hippocampus were rapidly and carefully dissected on an ice plate. The tissues were immediately collected into labeled sterile tubes, frozen in liquid nitrogen, and stored at -80 °C until assay.
The plasma from each mouse was used to detect cytokines (IL-1β, IL-6, TNF-α, and MCP-1), insulin, and lipids (TC, TG, LDL-C, and HDL-C). Three brains in 4% paraformaldehyde fixative solution from each group were used for H&E, Nissl's, and TUNEL staining. The remaining hippocampal tissues were used for biochemical detection and Western blotting.
The levels of plasma cytokines (IL-1β, IL-6, TNF-α, and MCP-1) and insulin were measured by enzyme-linked immunosorbent assay (ELISA) kits following the manufacturers' instructions, as previously described. The levels of TC, TG, LDL-C, and HDL-C were detected with a Hitachi 7600 Automatic Biochemistry Analyzer (Tokyo, Japan) according to the manufacturer's instructions.
The activities of caspase-1, caspase-3, and SOD and the contents of MDA and protein carbonyl were determined using the corresponding kits following the manufacturer's instructions.
The fixed brains were embedded in paraffin and coronally dissected into 5 μm thick sections. To assess hippocampal damage, the brain paraffin sections were processed for histopathological examination by H&E staining as described previously. The brain paraffin sections were also processed for Nissl's staining according to a previously described method. Images were analyzed using a light microscope (EVOS® XL Core, Life Technologies).
Cell apoptosis was assessed with a one-step TUNEL apoptosis assay kit according to a previously described method. In brief, brain paraffin sections were dewaxed and rehydrated, then incubated in proteinase K working solution at 25 °C for 30 minutes. After being washed in PBS, sections were treated with TUNEL reaction mixture for 1 h at 37 °C in the dark. Sections were then incubated with DAPI solution for 3 minutes. Images were captured using a fluorescence microscope (EVOS® FL Color, Life Technologies).
Western blotting was performed as previously reported. Hippocampal tissues were weighed and homogenized in lysis buffer (1:100 protease and phosphatase inhibitor cocktail). In contrast to hippocampal tissues, HT22 cells were lysed in sample buffer and sonicated with an ultrasonic cell disrupter. Total protein was determined by bicinchoninic acid (BCA) kits. The primary antibodies NLRP3 (1:1000), ASC (1:200), IL-1β (1:200), TXNIP (1:1000), Akt (1:200), p-Akt (1:1000), CD11b (1:1000), Nrf2 (1:200), HO-1 (1:1000), and β-actin (1:1000) were used for blotting. The proteins were visualized using a super enhanced chemiluminescence reagent. Western blot images were analyzed using Image Lab software (BIO-RAD, USA).
The HT22 cell line (a mouse hippocampal neuronal cell line), which has served as a successful in vitro model in the study of diabetes-associated hippocampal damage, was obtained from Beijing Beina Chuanglian Biotechnology Institute (Beijing, China). The cells were cultured in high-glucose DMEM (25 mM glucose) containing 10% FBS, 100 U/ml penicillin, and 100 mg/ml streptomycin at 37 °C with 5% CO2. After reaching 80% confluency, the cells were trypsinized and processed for subsequent experiments. Cells treated with different concentrations of NGR1 were incubated with HG (DMEM containing an additional 25 mM glucose, 50 mM glucose in total; HG group) for 12, 24, or 36 hours. The control group received no additional glucose (25 mM glucose in total).
The viability of HT22 cells was evaluated by MTT assay. Cells were seeded in a 96-well cell culture plate at a density of 4 × 10³ cells/well for 24 h. After incubation with glucose or drugs, the culture medium was replaced with MTT medium, and the cells were further incubated at 37 °C for 4 h. Then 150 μl of DMSO was added to each well with shaking for 10 min before reading the plate. The optical density (OD) value was detected by a microplate reader (Infinite M1000, Tecan Sunrise, Austria) at a wavelength of 570 nm. The relative cell viability (%) was calculated by the following formula: (OD value of experimental group) / (OD value of control group) × 100%.
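The viability formula above can be sketched in a few lines of code; this is an illustrative calculation only, and the OD readings are invented example values, not data from the study.

```python
# Illustrative sketch of the relative-viability formula:
# (OD of experimental group) / (OD of control group) x 100%.
# The OD readings below are hypothetical, not data from the paper.

def relative_viability(od_experimental: float, od_control: float) -> float:
    """Return viability of a treated group as a percentage of the control."""
    return od_experimental / od_control * 100.0

# Hypothetical mean OD values at 570 nm for a control well and a treated well.
control_od = 0.85
treated_od = 0.65
print(round(relative_viability(treated_od, control_od), 1))  # 76.5
```

In practice each group's OD would be the mean of replicate wells, with the blank (medium-only) OD subtracted before dividing.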
Cell death was evaluated by LDH release. The medium of HT22 hippocampal neurons was collected, and LDH release was detected with a commercial kit. Data are presented as levels relative to the control group.
Intracellular ROS production was monitored using the fluorescent probe DCFH-DA. After treatment, cells were incubated with 10 μM DCFH-DA for 25 min at 37 °C and then washed three times with phosphate-buffered saline (PBS). Finally, cellular morphology and fluorescence distributions were observed under a fluorescence microscope (EVOS® FL Color, Life Technologies).
After treatment, cells were harvested with 0.25% trypsin and washed with PBS. The cells were then centrifuged and incubated with 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA) in the dark at 37 °C for 30 min. The fluorescence was analyzed by flow cytometry (BD Biosciences, CA, USA).
All data are expressed as means ± standard deviation (SD). The results were analyzed by one-way analysis of variance (ANOVA) and Student's two-tailed unpaired t-test. P values less than 0.05 were considered statistically significant.
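As a rough illustration of the group comparison described above, the one-way ANOVA F statistic can be computed from scratch; the data below are invented example values, and in practice a statistics package (e.g. scipy.stats.f_oneway) would be used instead of this hand-rolled version.

```python
# Illustrative sketch (not from the paper): the one-way ANOVA F statistic
# compares between-group variance to within-group variance across k groups.

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of measurements."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical viability values (%) for three groups of n = 3 each.
f_stat = one_way_anova_f([[98.2, 101.5, 100.3],   # control
                          [76.1, 78.4, 77.2],     # model
                          [90.5, 88.9, 91.7]])    # treated
print(f_stat > 5.14)  # exceeds the F(2, 6) critical value at P = 0.05
```

A significant F is then followed by pairwise comparisons (here, the unpaired two-tailed t-test named in the text) to locate which groups differ.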
Designed and performed the experiments: Yadong Zhai, Xiangbao Meng, Guibo Sun, and Xiaobo Sun. Collected samples and analysed the data: Yadong Zhai, Lili Zhu, Yongmei Wu, Tianyuan Ye, Min Wang, and Ping Zhou. Prepared the figures: Yadong Zhai, Xiangbao Meng, and Yun Luo. Supervised the experiments and revised the manuscript: Guibo Sun, and Xiaobo Sun. Wrote the paper: Yadong Zhai, and Xiangbao Meng.
This work was supported by the Major Scientific and Technological Special Project for “Significant New Drugs Formulation” (No. 2017ZX09101003-009), the Chinese Academy of Medical Sciences (CAMS) Innovation Fund for Medical Sciences (No. 2016-I2M-1-012), the CAMS Initiative for Innovative Medicine (No. CAMS-I2M-1-010), the Special Research Project for TCM (No. 201507004), the National Natural Science Foundation of China (No. 81503290), and the Fundamental Research Funds for the Central Universities and the Peking Union Medical College Youth Found (No. 3332016076).
The importance of professional competence (PC) in business administration (BA) has increased considerably in many industrial nations over the past several years. However, while economic competence is being assessed internationally in the Assessment of Higher Education Learning Outcomes (AHELO) study, the modeling and assessment of PC in BA need more research, particularly from an international perspective. We defined and modeled the construct of PC in BA based on theoretical analyses and evidence from international studies and, for the assessment, focused on knowledge of BA as a key facet of PC in BA. In this article, we describe the developed structural model of knowledge in BA and present the specifications and findings for the example of financial knowledge (FK). In the model, we describe cognitive levels of FK in relation to subject content and subject didactics. Moreover, we discuss influence factors on FK.
Subsequently, we present the results from the empirical analyses on FK. Assessment data was gathered using an adapted and further developed international test instrument. The sub-sample for the analyses comprised 773 students from 23 institutions of higher education in Germany. We used item response models to confirm the theoretically modeled levels and multilevel modeling to analyze influence factors on FK.
The Rasch model showed a good fit to the data and confirmed the theoretically modeled levels. From the perspective of vocational education and training, we investigated the extent to which FK is influenced positively by commercial vocational training completed prior to higher education studies. We analyzed this while controlling for personal influence factors, such as mother tongue and gender, and study-related influence factors, such as completion of subject-related courses at university, number of semesters, and type of institution of higher education. We found that prior commercial vocational training affected FK even when the other influence factors were controlled.
These results support the assumption that during dual vocational education and training students acquire professional knowledge and gain experience related to their job or practical training that are not or cannot be taught in this way at universities or universities of applied sciences.
In the context of the Bologna reform, and with increasing competition due to internationalization, the kinds and structures of study programs in BA in higher education have diversified enormously. For companies and the economy, it is increasingly important to know what PC students acquire in BA at institutions of higher education. For example, some business representatives express great skepticism with regard to the quality of the new bachelor level degree programs in business administration. Small and medium-sized businesses particularly distrust the quality of the new bachelor level degree programs (see Jahn 2007) and, because there is little empirical data, it is a huge challenge for companies to judge the PC of bachelor level degree students in BA.
From a corporate perspective, it is particularly important to assess student and graduate knowledge of BA, along with social and motivational aspects (Zlatkin-Troitschanskaia et al. 2014). Knowledge of BA is an essential condition for developing PC in BA (Größler et al. 2002). In Germany, responsibility for teaching business content lies mainly with universities and universities of applied sciences, the two major types of higher education institutions. The structures of study programs have been subject to harmonization processes for years (e.g., Krücken 2004), for example, through uniform designation of degrees, such as the Bachelor of Arts and Master of Arts. Nevertheless, the public’s perception is that the quality of content taught and degrees obtained differ between institutions of higher education (e.g., Nickel 2011). Company administrators need to know whether differences exist, what kind of differences to expect with regard to graduate knowledge of BA, and whether they need to take into account differences between educational institutions, such as between universities and universities of applied sciences.
Some evidence of differences in knowledge among students even from identically labeled degree courses was provided by the Innovative Teach-Study Network in Academic Higher Education (ILLEV) study (e.g., Happ et al. 2013). In this study, students’ declarative knowledge was assessed in a number of subdomains of BA. The results showed that students from identically labeled business and economic degree courses (e.g., M.Sc. or M.A.) differed greatly in their business knowledge even if the assessment was controlled for the courses the students attended. The findings illustrate that degree labels and certificates provide little indication of the type, extent, and quality of knowledge students acquire. In the ILLEV project, the Business Administration Knowledge Test (BAKT, Bothe T, Wilhelm O, Beck K (2006): Business administration knowledge. Assessment of declarative business administration knowledge: Measurement development and validation, unpublished manuscript.) was administered, as it is generally suitable for assessing knowledge of BA. However, a major limitation of the BAKT is that it assesses declarative knowledge only (Zlatkin-Troitschanskaia et al. 2014).
Comprehensive modeling and a valid assessment of business knowledge requires, first, an in-depth analysis of the practical requirements of business professions and, second, a test instrument that adequately represents these professional requirements. This complex task is being undertaken in Project WiwiKom, which assesses PC in BA. The project team defined and modeled the construct of PC in BA based on theoretical analyses and evidence from international studies and, for the assessment, the team focused on business knowledge as a key facet of PC in BA. To enable international comparability of results, which is relevant for knowledge of BA, the project team took an internationally tested instrument from Mexico and developed it further in accordance with the model. This test instrument was used to assess students in a pilot study in 2012 and in two large-scale surveys at 33 institutions of higher education in Germany in 2013.
We present analyses of student knowledge of BA based on data from the first large-scale survey and then identify the extent to which it differs among various institutions and types of institutions of higher education in Germany. For conciseness, we focus on the findings in financial knowledge (FK) as an example of a key facet of knowledge of BA (Porter 2000). Moreover, textbook analyses identified the financial sector as central economic content (Lauterbach 2013) and, in the area of vocational schools, FK was also classified as a central component of business knowledge (Preiss 2005). In the article, we describe the structural model of knowledge of BA as developed in Project WiwiKom and the specifications for the example of FK. With regard to the model, we describe cognitive levels of FK in connection with subject content and subject didactics, as well as how they were represented by the items. Based on Project WiwiKom data, we present the results of the empirical analyses with regard to the following questions. First, do we have empirical evidence of the modeled cognitive levels in terms of gradual differences in FK for the entire construct or one or several content subdimensions? Second, which factors influence FK? From the perspective of commercial VET, it would be most relevant to know the extent to which FK is influenced positively by commercial VET completed prior to studies in higher education. We analyzed this while controlling for personal influence factors, such as gender and mother tongue, and study-related influence factors, such as completion of subject-related courses at university, number of semesters, and type of institution of higher education.
Following the state of international research, we understand PC in BA as the cognitive disposition necessary for successful decision-making in professional situations within a company (see in detail Zlatkin-Troitschanskaia et al. 2014). In this, we focused on knowledge of BA. A knowledge construct can be specified with regard to content dimensions and cognitive dimensions (e.g., Alexander et al. 1994).
Identifying appropriate content dimensions of the construct is crucial because decision-making in practical situations in a company requires knowledge of many content areas. In Project WiwiKom, we differentiated the business content dimensions of human resources, finance, accounting, marketing, and organization and management, in line with common classifications of the domain of BA in the literature and at institutions of higher education in Germany (see in detail Zlatkin-Troitschanskaia et al. 2014). Since many companies are organized functionally, the above distinction of content areas corresponds with common practical distinctions of divisions or departments in a company. From the company perspective, it is important that students acquire both general and area-specific knowledge so that they can both understand the company in its entirety and also work professionally in specific departments. Companies can be regarded as information systems (Preiss 2005), in which the area of finance is a key integrative component.
With regard to the cognitive dimensions of knowledge of BA, the project team made assumptions about general levels that are applicable to all content dimensions of BA. Following Anderson and Krathwohl (2001) and Walstad et al. (2007), the following three hierarchical cognitive process levels were differentiated: (1) remembering and understanding, (2) applying and analyzing, and (3) creating and evaluating (for further information about content-based modeling of FK see Zlatkin-Troitschanskaia et al. 2014). These rather abstract theoretical levels were specified using item difficulty generating characteristics (Hartig and Frey 2012) for each content dimension of BA. In the following, we illustrate the conceptualization of these characteristics for the example of the content dimension of finance.
FK is a key dimension of knowledge of BA. It is necessary for making professional financial decisions in a company and solving financial problems. FK serves essentially to control decision processes by providing decision-makers in companies with information about finance and cash flows as a basis for informed investment decisions. Financial decisions serve to maintain the financial stability of a company while other key objectives, such as profitability, are being pursued (Becker 2012). Accordingly, FK includes knowledge of financial concepts and how to handle risks. With internationalization, increasing competitive pressure, the shortening of product life cycles, and many other developments, financial decisions are gaining increasing importance for a company’s ability to survive (Prätsch et al. 2003).
Item difficulty was assumed to depend on four characteristics: the period of time taken into consideration; the degree of uncertainty in the decision situation; the degree of abstraction; and the requirements for mathematical and algorithmic modeling, from simple summation to linking different basic arithmetic operations to operations that demand the application of complex formulas.
According to these characteristics, we assume that longer periods of time to be taken into consideration, a higher level of uncertainty in a particular situation, a higher degree of abstraction, and more complex requirements for mathematical and algorithmic modeling all make financial decisions more difficult. As a result, a variety of combinations of these four criteria is possible. Of these, the following four are especially relevant to the domain of finance in bachelor level studies (see Table 1).
At the lowest difficulty level, test-takers need to be able to understand basic financial concepts. To this end, the ability to identify value streams is of central importance, since value streams show condensed financial information and are vital to all parts of a business, as described in Porter’s value chain (Porter 2000). Value streams are a primary source of information about the situation of a company and are considered by employees and managers in making responsible financial decisions in a company. Becker’s textbook on investment and finance (2012) starts with basic value streams in a company, such as cash inflows and outflows. The textbook explains inflows and outflows as the basis of all budgeting, financial appraisal and control. The textbook by Zantow and Dinauer (2011) starts with an explanation of the differences between terms for value streams in companies, such as cash inflow and revenue. At Level 1 of FK, test-takers need to be able to understand and recall basic financial concepts. Thus, at this level, the basics of investment and finance include identifying, differentiating, structuring, and categorizing value streams. Structuring and categorizing are classified into this category, because a comprehension of single core concepts (without transferring them to new situations) is necessary. To reach Level 1, test-takers do not need more advanced knowledge of investment and financial instruments or forecasting abilities. At this level, test-takers need to consider data based on past financial events without any uncertainty. There is a low degree of abstraction, since the main concepts to be considered in the decision process or item response process are cash inflows and outflows.
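The kind of Level 1 task described above, identifying and categorizing value streams, can be illustrated with a minimal numerical sketch; the entries and amounts below are invented for demonstration and are not items from the test.

```python
# Minimal Level 1 sketch: categorise invented value-stream entries into
# cash inflows and outflows and compute the net cash flow.
entries = [
    ("sale paid in cash", +1200.0),
    ("supplier invoice paid", -800.0),
    ("loan disbursement received", +5000.0),
    ("salary payment", -2500.0),
]

inflows = sum(amount for _, amount in entries if amount > 0)
outflows = sum(-amount for _, amount in entries if amount < 0)
net_cash_flow = inflows - outflows
print(inflows, outflows, net_cash_flow)  # 6200.0 3300.0 2900.0
```

No uncertainty or forecasting is involved here; only past inflows and outflows are considered, matching the characteristics of Level 1.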
In the introduction to their textbook, Prätsch et al. (2003) describe the solving of financial problems as a key operation in a company. Financial information is necessary for all strategic decisions in a company. First, it needs to be identified and then prepared for further use in decision-making. Test-takers need to analyze single pieces of information and combine and process them. In this context, Becker (2012) introduces the terminology and calculation of profitability indices and includes the analysis of liquidity indices and cash flow values. Zantow and Dinauer (2011) subsume these concepts under financial goals. To evaluate and calculate profitability and liquidity indices, it is necessary to have conceptual knowledge of profits and means of payment as well as to be able to use single pieces of financially relevant information. Furthermore, test-takers need to be able to understand different terminologies and value streams and to combine numerical values. This is more difficult than the requirements at Level 1. At Level 2, we considered calculations of the cost of capital as difficult as calculations of liquidity indices, since both types of financial items involve several numbers that must be analyzed. The difference in difficulty can also be explained by the kind of inference the test-taker uses (Minnameier 2013). While the memorization of a more difficult concept can be regarded as an abductive inference, the calculation itself can be regarded as a deductive inference, that is, a logical conclusion from a given mental representation, which is needed at almost all levels of item difficulty. Often the necessary operations include calculating averages, percentages, and other values that are preliminary approximations of forecasting, such as the weighted average cost of capital, which is used for discounting future cash flows. At Level 2, the index calculations include actual figures only.
Concepts described at Level 1, such as cash inflows and outflows, reappear but are not considered individually; they need to be combined in a calculation or another method. At Level 2, the degree of abstraction is higher since inflows and outflows are condensed into new, more complex figures. Only little uncertainty has to be taken into account, since financial concepts are static at this level.
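A Level 2 calculation of the kind mentioned above, combining several given figures into one indicator, can be sketched with the weighted average cost of capital; the capital structure, rates, and tax rate below are invented for demonstration.

```python
# Invented Level 2 sketch: weighted average cost of capital (WACC),
# combining several given figures into a single indicator.
equity, debt = 600_000.0, 400_000.0   # assumed market values
cost_equity, cost_debt = 0.10, 0.05   # assumed required returns
tax_rate = 0.30                       # assumed corporate tax rate

total = equity + debt
wacc = (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)
print(round(wacc, 4))  # 0.6*0.10 + 0.4*0.05*0.7 = 0.074
```

All inputs here are actual (static) figures; the resulting rate would later serve to discount forecast cash flows, which belongs to the dynamic methods of the next level.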
Apart from statically aggregated indicators, financial decisions also need to reflect time and risk factors. These additional variables lead to an increase in difficulty. The variables of time and risk can be subject to various changes; they are dynamic elements in financial decision-making. In the textbooks analyzed (e.g., Becker 2012; Zantow and Dinauer 2011), investment appraisal methods are introduced only after the sections on identifying value streams and analyzing and evaluating profitability and cash flow. In Project WiwiKom, the test items on investment appraisal methods have higher cognitive requirements for decision-making in finance. The required operations include comprehensive, complex calculations with prognostic, multi-period components. In contrast to Level 2, test-takers need to consider data from several periods of time and to handle some uncertainty. Therefore, we assumed Level 3 to have a slightly higher degree of difficulty, which should also be reflected in the data. Content such as the application of dynamic investment methods is, according to expert opinion, assessed as more difficult than the identification of value streams, as found at Level 1.
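A dynamic investment appraisal of the kind placed at Level 3 can be sketched as a net present value calculation over several periods; the cash flows and discount rate below are invented forecasts, not material from the test.

```python
# Invented Level 3 sketch: net present value over several periods,
# discounting forecast cash flows back to the decision date.
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay at t=0
    (negative), later entries are forecast inflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Assumed project: outlay of 100 now, forecast inflows of 40 for 3 years
project = [-100.0, 40.0, 40.0, 40.0]
value = npv(0.08, project)
print(round(value, 2))  # ≈ 3.08 for these assumed figures
```

Unlike the static calculations at Level 2, this requires handling several time periods and forecast (hence uncertain) values, which is what is assumed to raise the difficulty.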
Test-takers who successfully understand, apply, and analyze static and dynamic investment appraisal methods are able to use key financial concepts and instruments professionally. In addition, companies need to make decisions for strategic planning. Corporate finance planning requires a systematic, multi-perspective view of finance, taking into account both various types of financial information from individual departments and also an overall understanding of the entire company. The textbook analysis showed clearly that the creation of finance plans requires elaborate FK and an understanding of the entire company (e.g., Becker 2012; Zantow and Dinauer 2011). At Level 4, test-takers need to forecast and evaluate the future development of the company. Level 4 has the highest degree of abstraction, as decision-making is based on different types of information from different departments. In the course of the validation studies, it became obvious that no items from the adapted test version matched these characteristics. Therefore, these characteristics could not be operationalized. Items at a higher level of difficulty or with other item formats need to be newly developed and validated in follow-up studies.
Content analyses indicated that FK is a one-dimensional construct with regard to content. Even though we assumed several levels of difficulty, they all referred in content to financial decision-making.
We empirically examined this theoretical model of FK and the derived levels from multiple evidence bases. In the following, we present the testing instrument, data, methods, and results as to whether the empirical analyses confirmed the above model, whether the modeled difficulty levels of the items are reflected in the item parameter estimations, and whether the items are represented in one dimension (Question 1 above).
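The logic of such an item response analysis can be illustrated with a minimal sketch: responses are simulated under the Rasch model and each item's difficulty is recovered by maximum likelihood, here with the simulated person abilities treated as known. All numbers are invented; a real analysis, like the one in the study, estimates abilities and difficulties jointly with specialised IRT software.

```python
# Minimal Rasch sketch with invented data: simulate dichotomous responses
# and recover item difficulties by Newton-Raphson, abilities held fixed.
import numpy as np

rng = np.random.default_rng(42)
n_persons = 2000
true_difficulty = np.array([-1.0, 0.0, 1.5])      # assumed item difficulties
theta = rng.normal(0.0, 1.0, size=n_persons)      # simulated person abilities

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Rasch model: P(correct) = sigmoid(ability - difficulty)
p = sigmoid(theta[:, None] - true_difficulty[None, :])
responses = (rng.random(p.shape) < p).astype(float)

# Newton-Raphson per item for the difficulty b (abilities known)
est = np.zeros_like(true_difficulty)
for j in range(len(true_difficulty)):
    b = 0.0
    for _ in range(50):
        pj = sigmoid(theta - b)
        # score function sum(x - p) and its derivative sum p(1-p)
        b -= (responses[:, j] - pj).sum() / (pj * (1 - pj)).sum()
    est[j] = b
print(np.round(est, 2))  # should lie close to [-1.0, 0.0, 1.5]
```

In the study, the estimated item difficulties would then be compared with the theoretically assigned levels to check whether harder levels indeed yield larger difficulty parameters.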
Furthermore, we present our results concerning the extent to which FK was positively influenced by commercial vocational training completed prior to studies in higher education, when personal influence factors, such as mother tongue and gender, and study-related influence factors, such as completion of subject-related courses at university, number of semesters, and type of institution of higher education are controlled (Question 2 above).
Since finance is a central part of companies, it is an important learning domain for commercial trainees. Students who have already completed an apprenticeship should therefore possess subject-relevant prior knowledge. The positive influence of economic and didactical prior knowledge has already been established in empirical studies (cf. Kuhn et al. 2014). In the course of this project, a positive effect of vocational education on economic knowledge was confirmed (Brückner et al. 2015; Zlatkin-Troitschanskaia et al. 2015).
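The influence-factor analysis can be illustrated with a strongly simplified sketch using ordinary least squares on invented data; the actual study used multilevel models to account for students being nested within institutions, and all variable codings and effect sizes below are assumptions for demonstration only.

```python
# Simplified sketch of an influence-factor analysis on a knowledge score
# (invented data; the study itself used multilevel models).
import numpy as np

rng = np.random.default_rng(1)
n = 500
vet = rng.integers(0, 2, n)        # 1 = completed commercial VET (assumed coding)
gender = rng.integers(0, 2, n)     # assumed binary coding
courses = rng.integers(0, 2, n)    # 1 = completed subject-related courses
semester = rng.integers(1, 7, n)   # number of semesters

# Invented data-generating process: VET adds 0.5 to the latent score
score = 0.5 * vet + 0.3 * courses + 0.05 * semester + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), vet, gender, courses, semester])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(np.round(beta, 2))  # beta[1] estimates the VET effect while the
                          # other predictors are controlled
```

A VET coefficient that remains positive with the other predictors in the model corresponds to the finding that prior commercial vocational training affects FK even under statistical control.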
Project WiwiKom assessed the FK and the cognitive processes of students in BA degree courses at institutions of higher education in Germany. This group of prospective financial decision-makers was assessed in order to draw conclusions about their understanding of BA concepts in professional situations. The assessment focused on the knowledge that is necessary for making financial decisions in companies. The collected data was meant to provide evidence as to whether or not students acquire sufficient FK during their studies to address appropriate finance issues in a company.
To this end, the project team adapted and developed further the Mexican ‘Examen General para el Egreso de la Licenciatura en Administración’ (EGEL; Centro Nacional de Evaluación para la Educación Superior, AC CENEVAL 2010). The EGEL assesses knowledge in several content areas of BA. In Project WiwiKom, the test was translated into German and adapted according to the Test Adaptation Guidelines to ensure a high quality adaptation (American Educational Research Association, American Psychological Association and National Council on Measurement in Education AERA 2004; International Test Commission ITC 2010). The test comprises 250 items representing the commonly assessed business content areas of human resources, finance, accounting, marketing, and organization and management. The items are generally in closed-ended format with various classic and complex multiple-choice formats. The EGEL was developed to cover a diverse and representative number of situational contexts that business students might encounter in their later work life. Each item comprises a situational item context and a question referring to a situation within a company, as well as four response options: one attractor and three distractors. Thus, test-takers have to make the right decision within the professional context of a company. The test was developed in a joint effort by company representatives and researchers (Centro Nacional de Evaluación para la Educación Superior, AC CENEVAL 2010). The EGEL describes typical decision situations in a professional corporate context.
In the EGEL, 28 items were originally developed to measure FK. After thorough translation, adaptation, and validation processes, 24 of the 28 items remained. Four items had to be rejected because they could not be adapted to the specific cultural and curricular context in Germany. At the same time, six new items were developed for previously under-represented content, situations and cognitive levels. After the first pilot study in 2012 (N = 962), some items were revised and evaluated in expert interviews (N = 32) and in an online rating (N = 78). The experts were asked to estimate the extent to which the situational item contexts were representative of the prospective professional life of university graduates. After successful validation, all 30 items about FK were used in the main surveys.
In the first major field survey, in the winter term of 2012, the 30 items were administered to bachelor level degree students from 23 universities and universities of applied sciences in Germany. In addition to FK, knowledge of human resources, accounting, marketing, and organization and management as well as of microeconomics and macroeconomics was assessed using the Test of Understanding in College Economics (TUCE; Walstad and Rebeck 2008). These further content areas were covered in 220 items. To ensure that the students could respond to a sufficient number of the financial items despite limited test-taking time, the project team used a multiple matrix design. The booklet design consisted of several complex Youden square designs (Frey et al. 2009), including 42 booklets, each with three item clusters and 10 items per cluster. To confirm the item fit to the assumed levels of difficulty, each item was assigned to a level of difficulty. Accordingly, nine items were assigned to the first level (understanding basic financial concepts in a company), 14 items to the second level (analyzing static concepts and static investment appraisal methods) and six items to the third level (applying and analyzing dynamic investment appraisal methods).
Altogether 3,873 students were assessed in this survey. The items about finance were answered by 773 students from 23 institutions of higher education (see Table 2), who formed the sample for the subsequent analyses. At the time of the survey, 24.2% of the students were in their first year of studies, while the rest were more advanced in their studies. The share of students who did not indicate their study progress was 5.4%. Approximately 20% of the students had completed commercial VET, and 13% had graduated from a commercial upper secondary school.
* Final school grades range between 1 (best grade) and 4 (lowest pass grade).
Overall, there were few missing values.e Missing values in the control variables (mother tongue, commercial upper secondary school attended, final school grade, finance and mathematics courses completed) were replaced using multiple imputation; five imputations were generated for each missing value.
The collected data were analyzed using models from classical test theory and item response theory (IRT). In contrast to classical test theory, IRT enables testing whether the sum score functions as a sufficient statistic and whether it can represent a latent trait, such as FK. Furthermore, due to the booklet design, latent modeling enables comparisons between students who responded to different items. This is an advantage of IRT, since estimations of the person and item parameters are invariant to each other and can be scaled together, so that the probability of a correct response to an item depends only on the item difficulty and the test-taker’s ability level (Gonzalez and Rutkowski 2010). As a reasonable alternative to the one-parameter Rasch model (Rasch 1980), we could have used the two-parameter IRT model. However, due to the multiple matrix design, there could have been an insufficient number of responses per item (Eggen 2008). Under the given conditions, the Rasch model allowed a more stable estimate of the person and item parameters, and an easier interpretation of the scale (Wilson 2005). Based on the results of the Rasch model, we used multilevel modeling to identify factors influencing FK.
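As a brief illustration of the Rasch model’s response function (a sketch in Python; the function name and the example values are ours, not part of the ConQuest analyses):

```python
import math

def rasch_probability(theta, b):
    """Probability of a correct response under the one-parameter (Rasch)
    model, with person ability theta and item difficulty b in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5.
p_equal = rasch_probability(0.0, 0.0)

# A student at the average ability (-0.27 logits) facing an item of average
# difficulty (0.27 logits) succeeds with probability below 0.5, consistent
# with the observation that the items were slightly too difficult overall.
p_mean = rasch_probability(-0.27, 0.27)
```

Because a test-taker whose ability equals an item’s difficulty succeeds with probability 0.5, person and item parameters can be placed on a common logit scale, as in the person item map discussed below.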
First, the items were coded dichotomously (correct = 1 and incorrect = 0) and were entered into the multiple matrix design. Then, analyses were conducted based on item response methods. Using the ConQuest software version 3.0.1, we fitted a one-parameter Rasch model (Adams et al. 2012) and tested its fit to the data. Item difficulty and person ability were determined on a logit scale, estimated via marginal maximum likelihood. The average item difficulty was 0.27 logits (SD = 0.96); the average person ability was −0.27 logits. On average, students answered 13 of 30 items correctly (SD = 5.01). This deviation indicates that the items were generally slightly too difficult for the students. Figure 1 shows the corresponding person item map, targeting the difficulty of items 1 to 30 to the person ability. Each x represents 1.3 people; both dimensions are measured in logits. Overall, the items were an adequate representation of the scale, as the pool included both very easy and very difficult items. However, there were only three items (3, 15, 16) for the lowest knowledge Level 1 (< −1 logit), which means that the test provided a less accurate assessment for subjects with a very low level of knowledge. At the other end of the scale, there were more difficult items. Four items (12, 13, 23, 26) could not be matched to a corresponding ability of any of the students, although the contents and requirements of these items were covered in the curricula of all universities. All of these items required abstract mathematical knowledge, so one reason might be that abstract economic reasoning using mathematics requires greater cognitive effort than verbally represented financial items. This is also noteworthy because some of the experts had asked for more difficult items in the item pool.
However, these items were too difficult for the sample, and further analyses are necessary to determine whether they can be answered by more advanced, master’s degree level students.
Person item map targeting item difficulties to knowledge parameter distributions.
The quality of individual items was examined in several item-specific analyses, including analyses of item fit statistics, item characteristic curves, and item-total correlations. Item fit statistics can be calculated using either the maximum likelihood or model-based residuals. In ConQuest, item fit statistics are reported as weighted mean squares (wMNSQ). An ideal fit of the model to the data is indicated by an expected value of 1, although usually a tolerance interval is defined for values between 0.8 and 1.2 (Bond and Fox 2007). Values less than 1 indicate that the data fit the model better than expected (overfit), while values above 1 indicate a worse fit than expected (underfit). For the financial items, the wMNSQ values ranged from 0.96 to 1.06. Hence, all items fit the model.
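As a sketch of how such an information-weighted mean square can be computed, assuming the standard infit formula (squared residuals weighted by the binomial response variance); this is an illustration, not ConQuest’s actual implementation:

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response (ability theta, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def weighted_mean_square(responses, abilities, difficulty):
    """Infit (wMNSQ) for one item: the sum of squared residuals divided by
    the sum of response variances p * (1 - p) across all test-takers."""
    numerator = 0.0
    denominator = 0.0
    for x, theta in zip(responses, abilities):
        p = rasch_p(theta, difficulty)
        numerator += (x - p) ** 2
        denominator += p * (1.0 - p)
    return numerator / denominator
```

When the observed residual variation matches what the model predicts, the statistic is close to 1, which is why values between 0.8 and 1.2 are conventionally tolerated.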
Furthermore, item fit was tested using inferential statistics by converting the wMNSQ values in ConQuest into approximately standard-normally distributed t-values. The t-test indicated significant deviations from the model at a significance level of 5% for t-values outside the interval [−1.96, 1.96] (Wu and Adams 2007). Among the financial items, only item 27 showed a significant deviation from the model, with wMNSQ = 1.06 and t = 2.5; therefore, it was excluded from further analyses. In addition to the item fit statistics of the Rasch model, item discriminatory power from classical test theory was used to estimate the fit of the items to the total score of FK. This estimation was based on the point-biserial correlation between item scores and test score. The discrimination value was expected to be positive for each item, since we expected that students with a higher knowledge level would respond more successfully to an item than students with a lower knowledge level. For all remaining items, positive discrimination values were confirmed, with correlations > .30 for 20 items and between .20 and .30 for the remaining nine items. The internal consistency (reliability) of the 29 remaining items was .658.
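The point-biserial discrimination index is simply the Pearson correlation between the dichotomous item score and the total test score; a minimal sketch (function name ours):

```python
import math

def point_biserial(item_scores, total_scores):
    """Pearson correlation between a dichotomous item score (0/1) and the
    test total score, which equals the point-biserial correlation."""
    n = len(item_scores)
    mx = sum(item_scores) / n
    my = sum(total_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item_scores, total_scores))
    sx = math.sqrt(sum((x - mx) ** 2 for x in item_scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in total_scores))
    return cov / (sx * sy)

# Students who solve the item also tend to have high totals, so r is high.
r = point_biserial([1, 1, 0, 0], [25, 20, 10, 5])
```

Conventionally, values above .30 are regarded as satisfactory discrimination, which held for 20 of the items here.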
The Rasch model can be tested, not only with specific item fit values, but also with global fit statistics. Analyses in this regard include the likelihood-ratio testf and the Chi-squared test, which are approximately equivalent and usually give the same results. However, these tests are highly sensitive and no longer follow a Chi-squared distribution when larger numbers of test items are involved, as in this case. It then becomes necessary to use simulation-based methods, such as bootstrapping, in order to gain evidence of the fit of the Rasch model to the available data. In bootstrapping, the initially estimated Rasch parameters are used to simulate further datasets that fit the Rasch model. Then, the Chi-squared values are calculated for the simulated datasets in order to compare them to the Chi-squared value of the originally observed data (von Davier 1997). As the ConQuest software does not offer global fit criteria, we used the R software with the ltm package for the likelihood-ratio test (Rizopoulos 2006). While the initial p-value of the Chi-squared test was significant (p < 0.01), this changed after a bootstrap with 400 simulated datasets (p > 0.10), which was clearly above the common significance level of p < 0.05. Thus, the fit of the data to the Rasch model was confirmed, not only from the item-specific perspective, but also from the global perspective.
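The parametric bootstrap described here can be sketched as follows, with a simple Pearson-type discrepancy standing in for the Chi-squared statistic computed by the ltm package (the names and the statistic are illustrative assumptions, not the authors’ R code):

```python
import math
import random

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fit_statistic(data, thetas, difficulties):
    """Pearson-type discrepancy between observed 0/1 responses and the
    Rasch-predicted probabilities (a stand-in for the real statistic)."""
    stat = 0.0
    for row, theta in zip(data, thetas):
        for x, b in zip(row, difficulties):
            p = rasch_p(theta, b)
            stat += (x - p) ** 2 / (p * (1.0 - p))
    return stat

def bootstrap_p_value(observed, thetas, difficulties, n_sim=400, seed=1):
    """Simulate datasets from the fitted parameters and report the share of
    simulations whose statistic is at least as large as the observed one."""
    rng = random.Random(seed)
    obs = fit_statistic(observed, thetas, difficulties)
    exceed = 0
    for _ in range(n_sim):
        sim = [[1 if rng.random() < rasch_p(t, b) else 0
                for b in difficulties] for t in thetas]
        if fit_statistic(sim, thetas, difficulties) >= obs:
            exceed += 1
    return exceed / n_sim
```

A bootstrap p-value above the significance threshold, as with the 400 simulated datasets reported here, indicates that the observed discrepancy is unremarkable among datasets that genuinely follow the Rasch model.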
After the instrument was confirmed to be compliant with the requirements for a Rasch model, the next step was to examine whether the test provided an adequate representation of the cognitive levels of FK identified in the content analyses and subject-didactic analyses. The remaining 29 items were assigned by two independent raters to one of the four levels. The mapping of items to difficulty levels was highly reliable, as the calculation of Kappa confirmed a high agreement (K = 0.886, t = 6.463, p < 0.01). Eventually, nine items were assigned to Level 1 (understanding basic financial concepts in a company), 14 items were assigned to Level 2 (analyzing static concepts and static investment appraisal methods), and six items were assigned to Level 3 (applying and analyzing dynamic investment appraisal methods). Level 4 could not be included in the analysis, so further items need to be created in future research. In Figure 2, the items for Levels 1, 2, and 3 are plotted according to their item difficulty.
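Cohen’s kappa, used here to quantify rater agreement, corrects the observed agreement rate for the agreement expected by chance; a minimal sketch (function name ours):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning items to categorical levels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1.0 - expected)

# Perfect agreement between two raters yields kappa = 1.
k_perfect = cohens_kappa([1, 1, 2, 2, 3, 3], [1, 1, 2, 2, 3, 3])
```

Values approaching 1, such as the reported K = 0.886, indicate agreement far beyond what chance assignment of items to levels would produce.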
Scatterplot of item difficulties according to levels of financial knowledge (horizontal lines show the means of the levels).
With regard to item difficulty, the three levels are not entirely separable, and so overlap to a certain degree. This is not surprising, since item difficulty is contingent not solely on the modeled cognitive complexity, but also on numerous other variables, such as item format and the share of numerical values in an item. Figure 2 shows clearly how well the increasing item difficulty represents the modeled hierarchy in the difficulty levels. The average item difficulty was −0.62 logits for Level 1, 0.45 logits for Level 2, and 1.25 logits for Level 3. The discriminatory power of the different levels in the model is also evident in Figure 2 from the fact that none of the items from Levels 2 and 3 had a difficulty below the arithmetic mean of Level 1. There was also only one item from Level 2 that had a difficulty above the mean of Level 3, and none of the items from Level 3 had a difficulty below the mean of Level 2. In summary, the first three theoretically based levels of difficulty are, on average, reflected by the assigned items. Nevertheless, the ranges of item difficulties at the individual levels overlap.
We confirmed the differences between the level means empirically using inferential statistics (t-tests). The differences between the means of all three levels were significant, with large effect sizes measured with Cohen’s d. The values were as follows for the difference in the means of Levels 1 and 2: t = −3.326, df = 21, p < 0.01, d = 1.42. For the difference between the means of Levels 2 and 3, the values were as follows: t = −2.756, df = 18, p < 0.05, d = 1.34. For the difference between Levels 1 and 3, the values were as follows: t = −4.157, df = 13, p < 0.01, d = 2.19.g The following analyses were conducted to verify empirically the dimensional structure of FK with regard to content. Our hypothesis was that all levels referred to the financial decision process and therefore represented one dimension. In contrast, if the results indicated different latent subdimensions, we would need to use more than one scale to prevent compromised results in the parameter estimations. We examined the dimensional structure of FK by testing a one-dimensional model against a three-dimensional model with the observed levels included as subdimensions. The models were compared based on the Akaike Information Criterion (AIC; Akaike 1973), the Bayesian Information Criterion (BIC; Schwarz 1978), and the Consistent Akaike Information Criterion (CAIC; Bozdogan 1987). The values of these criteria must be interpreted not as absolute values, but in relation to the comparison model. For all three criteria, a smaller value indicates a better fit of the model to the data. The comparison of the information criteria (see Table 3) shows that the values of the AIC, the BIC and the CAIC were smaller for the one-dimensional model. Thus, the one-dimensional model provided a better fit to the data than the three-dimensional model.
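Cohen’s d for two independent groups is the difference in means divided by a pooled standard deviation; a minimal sketch (assuming the pooled-SD variant, which the text does not specify):

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d for two independent samples, using the pooled SD computed
    from the two unbiased sample variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

With illustrative item-difficulty samples, a mean gap of two pooled standard deviations gives d = 2, of the same order as the reported d = 2.19 between Levels 1 and 3; values above 0.9 are conventionally considered large effects.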
In addition to these tests, correlations between the different levels can be analyzed to answer the question of whether a one-dimensional or a multi-dimensional model provides a better fit. In a one-dimensional model, the scores of the latent variable at each level are expected to correlate clearly. Students who have mastered Level 3 and are able to make reasonable decisions at this level should also have high scores on the latent variable at the lower levels. Students who can hardly identify value streams in a company at Level 1, that is, those who are barely able to identify financial information, should have low scores at the higher cognitive process levels as well. Thus, high correlations between the subjects’ scores on the latent variable would indicate that there is only one latent variable with different levels of difficulty. In the model, the latent correlations were 0.83 between Levels 1 and 2; 0.76 between Levels 1 and 3; and 0.81 between Levels 2 and 3. Both the criteria for the model comparison and the correlations between the levels indicated a model with one single latent scale with different levels, which was also in line with the theoretical considerations as to how the content difficulty increases. In summary, the increasing difficulty in financial decision-making in a company must be regarded as one content dimension of a construct with different levels rather than as separate content subdimensions. This supports the theoretical assumptions. Accordingly, financial information must first be identified in terms of value streams. Only when this has been accomplished to a sufficient degree can this information be analyzed further for evaluating financial challenges, and only after that can decisions in complex situations be made in a well-conceived way based on FK.
In the context of a company, this means that the value streams are identified, then condensed into financial indices of profitability, cost of capital, or cash flow, and then analyzed to provide a basis for decision-making. In addition to this, decision-makers must be able to apply different methods to make reasonable investment decisions under uncertainty and in view of numerous alternatives.
After the theoretical structural model of FK was largely confirmed empirically, we investigated the second research question about the effect of commercial VET on the FK of bachelor level degree students, while controlling for personal and study-related influence factors. In this regard, we also analyzed whether different institutions and different types of institutions of higher education had an effect on student FK. Since we drew directly on the students’ knowledge scores, and since the booklet design introduced error variance into the estimation, we estimated five plausible values of each student’s latent knowledge score (Mislevy 1991). To this end, we used the five imputed datasets with the complete individual variables and computed a multiple regression using both the item scores and the imputed independent variables for the estimation (Rubin 1987). Hence, the following calculations were based on 25 datasets for each student, resulting from the five multiple imputations of the control data multiplied by the five plausible values subsequently estimated for each imputation. In the following, we report only the pooled results from the calculations with the 25 datasets.
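Pooling results across such multiply imputed datasets follows Rubin’s (1987) rules: the pooled estimate is the mean of the per-dataset estimates, and the total variance combines the within-dataset and between-dataset variance. A minimal sketch (function name ours):

```python
def pool_rubin(estimates, variances):
    """Pool point estimates and their variances across m imputed datasets
    using Rubin's rules: total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    w_bar = sum(variances) / m                              # within variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between variance
    t = w_bar + (1.0 + 1.0 / m) * b                         # total variance
    return q_bar, t
```

The between-imputation component inflates the pooled variance, so standard errors honestly reflect the uncertainty introduced by imputation and by the plausible-value estimation.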
Given the structure of our data, with students nested in institutions of higher education, we used multilevel analysis to examine the influence of the institutional level on FK. Analyses of both the raw data and the imputed data indicated that the intraclass correlation was quite small, at approximately 6.8% (see Model 1 in Table 4). Thus, the level of FK differed only slightly between institutions of higher education. Hence, the variance should be explained mainly by personal influence factors. In the subsequent models, we added influence factors at the personal and institutional levels. Our focus was on the effect of prior commercial VET, which we added first, before including further personal and study-related influence factors.
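The intraclass correlation from such a null model is the share of the total variance located at the institutional level; a sketch (the variance components shown are illustrative values chosen to reproduce the reported 6.8%, not the fitted ones):

```python
def intraclass_correlation(between_var, within_var):
    """Share of the total variance attributable to the group (institution)
    level in a variance-components (null) multilevel model."""
    return between_var / (between_var + within_var)

# Illustrative components scaled so that the ICC matches roughly 6.8%.
icc = intraclass_correlation(0.068, 0.932)
```

An ICC this small means students within the same institution are barely more alike in FK than students from different institutions, which motivates explaining the variance mainly with person-level predictors.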
Note. *p ≤ .05 **p ≤ .01 ***p ≤ .001. Values in brackets indicate standard error.
In Model 2, we examined the influence of commercial VET alone on the knowledge score. The comparison shows that students who had obtained commercial VET prior to their studies scored higher by 0.276 logits than students without commercial VET. In Model 3, we added the personal factors, such as gender, mother tongue, commercial upper secondary school attended, and final school grade. Model 3 did not include the influence factors related to the studies in higher education. The analysis showed that the effect of commercial VET increased slightly (β = 0.327), when the other personal factors were controlled. Furthermore, attending a commercially specialized upper secondary school and a better final school grade had significant positive effects on FK.
In Model 4, we also added study-related variables, such as completion of a course on finance or mathematics, and study progress in number of semesters. As expected, students who had completed a finance or mathematics course scored significantly higher. When we controlled for completion of these courses, the number of semesters had no effect on FK.h Even when completion of these learning opportunities in higher education was controlled, students with prior commercial VET still scored significantly better, by 0.329 logits, which illustrates the great relevance of VET for the acquisition of FK in higher education.
In Model 5, we added a variable to the institutional level that differentiated universities from universities of applied sciences. The analysis showed that students from universities scored slightly, but significantly, higher than students from universities of applied sciences. This variable explained almost the entire variance at the institutional level. Thus, in our sample, only the type of institution was relevant for student FK, not the specific university or university of applied sciences they attended.
The variables at the personal level explained approximately 23% of the variance in Model 5, and approximately 100% of the variance at the contextual level was explained. This can be explained by three factors. First, the variance at the context level was relatively low, so even a few explanatory covariates can account for a large share of it. Second, the contextual covariate that was included (the type of institution) readily explains the differences in test scores between institutions. Third, covariates were included that capture not only the variation within one institution but also the variation between institutions. Overall, the variables in Model 5 explained approximately 27.5% of the variance in the knowledge score.
The theoretical model of FK was confirmed empirically for three hierarchical levels. Accordingly, students first learn basic financial concepts before they can calculate and analyze figures based on static methods. This, in turn, is the prerequisite for calculating figures based on dynamic methods, taking risk into account, and making professional and long-term financial decisions in a company. The adapted and further developed test instrument is suitable for representing these knowledge levels. However, the effects of time, uncertainty, and level of abstraction on task difficulty need to be investigated in further studies. A possible next step would be another construct validation in which item difficulties are predicted from the assumed criteria (Hartig and Frey 2012). The instrument is still limited in the sense that the items do not represent all four levels of the theoretical model of FK. It does not yet represent Level 4 (creating finance plans), the highest level in the model, which should be the focus of further studies. It is important to note that this level also connects to other business content areas, which considerably increases the difficulty of decision-making and of the related items.
There was an important influence of commercial VET on student FK, which was in line with expectations, considering that, by definition, the test is supposed to assess financial decision-making in corporate situations. Our results suggest that students who have obtained commercial VET may draw on their practical experience when responding to the test items. It might be easier for them to project themselves into the corporate context, which would enable them to respond more successfully to the items even if they had acquired the same knowledge in higher education as their peers. We consider this finding an indicator of the test instrument’s practical relevance. Moreover, this effect illustrates a special quality of dual commercial VET in Germany, as students apparently acquire knowledge relevant for decision-making during commercial VET, and the resulting edge cannot be compensated for by learning opportunities during bachelor level degree studies.
We assume that the influence of commercial VET on student knowledge is still somewhat underestimated. In our study, it was one of the most influential predictors of the level of FK, and therefore, its influence and functioning should be analyzed in greater detail in further studies. A relevant question would be how the acquisition of FK in tertiary education is influenced by different parts of commercial VET, for example, by practical training in a company compared to school-based education.
a Studies conducted during the financial crisis show that even people with (prior) knowledge of BA often have an incorrect or superficial understanding of the causes of the financial crisis (Leiser et al. 2010).
b The difference between these two common types of institutions of higher education in Germany is that universities aim mainly to provide academic education while universities of applied sciences are more practically oriented (e.g., Nickel 2011).
c Project WiwiKom is the acronym of the research project ‘Modeling and measuring competencies in business and economics among students and graduates by adapting and further developing existing American and Latin-American measuring instruments (EGEL/TUCE)’. The project is funded by the German Federal Ministry of Education and Research. For more information, see http://www.wiwi-kompetenz.de/eng.
d On the state of research on test instruments for assessing professional knowledge of BA, see Zlatkin-Troitschanskaia et al. (2014).
e Mother tongue (three missing, 0.39%); mathematics course completed (42 missing, 5.43%); finance course completed (71 missing, 9.18%); commercial upper secondary school attended (55 missing, 7.12%); final school grade (58 missing, 7.50%); number of semesters (16 missing, 2.07%).
f In the likelihood-ratio test, a hierarchically subordinate model (numerator) is compared to a hierarchically superordinate model (denominator).
g For Cohen’s d (Cohen 1988), effect sizes above 0.9 usually are considered large. In this case, all values were above 0.9.
h When completion of these courses was excluded from the model, the number of semesters had a significant effect. Thus, the factors of courses completed and number of semesters provided similar information. Their correlation was 0.56, which was not high enough to produce multicollinearity that could have compromised the regression results.
i There was also a difference in the share of students who had attended a commercial upper secondary school between universities (15.14%) and universities of applied sciences (20.51%), but it was not significant (Chi2 = 2.101, p = 0.147). Due to data collection from particular courses at several universities, some subsamples show higher average study progress (semester of study and courses attended) than others.
In addition to those stated above, the project directors include Wolfgang Härdle, Silvia Hansen-Schirra, and Sascha Hoffmann. In the curricular validation of the test instrument and the assessment surveys, the WiwiKom team was supported by Oliver Lauterbach, Hilde Schaeper, and Florian Aschinger.
All authors contributed substantially to this work. OZT and MF conceptualized and raised funding for the WiwiKom project. OZT developed the general theoretical competence model, while MF and SB conceptualized the subject-specific levels in the different dimensions, respectively in finance in this paper. MF planned and coordinated the test adaptation and validation studies. SB planned and conducted the cognitive interviews. Data analysis for this paper was carried out by SB in consultation with MF. All authors discussed together the manuscript at all stages. All authors read and approved the final manuscript.
Manuel Förster (MF) is an Assistant Professor at the Chair of Business Education of Johannes Gutenberg University Mainz, Germany. His research interests lie in quantitative methods, implementation of innovation and reforms in the educational system, and competence assessment in higher education for international comparisons. Manuel Förster is one of the project directors and international coordinator of the WiwiKom research project, which focuses on the modelling and measuring of competences and knowledge in business and economics in higher education.
Sebastian Brückner (SB) is a research associate at the Chair of Business Education of Johannes Gutenberg University Mainz, Germany. He has been working in the WiwiKom research project and the German national research program on Modeling and Measuring Competencies in Higher Education (KoKoHs). His research focus is on cognitive diagnostic assessment and international comparative research in higher education.
Professor Olga Zlatkin-Troitschanskaia (OZT) has been Chair of Business Education at the Johannes Gutenberg University Mainz, Germany, since 2006. She has directed numerous national and international externally funded research projects, such as WiwiKom, and coordinates the national research program on Modeling and Measuring Competencies in Higher Education (KoKoHs) in Germany. Professor Olga Zlatkin-Troitschanskaia is a member of many national and international advisory and editorial boards and has served as an expert consultant to the German and Swiss national research foundations and the German and Swiss ministries of education and research.
Many companies refuse to face the reality that their businesses are in trouble or that their strategic positions are wrong. Whether a product line is no longer profitable, foreign competition has slowed growth, or technological changes have left them behind, many otherwise well-managed companies hang on too long to the status quo. In this inflexible posture, management's time and talent go to waste, assets grow sterile, and technology falls behind.
This book will help managers overcome the exit barriers that hamper strategic flexibility. Based on innovative studies of 192 firms within sixteen industries, the ideas presented here are applicable to almost any industry and any type of firm. Harrigan discusses the major strategic decisions facing executives today, including guerrilla strategies of underdog competitors, entry and exit barriers, the use of joint ventures to cope with the uncertainties created by erratic growth, and the management of change. She focuses on the shortcomings of vertical integration, developing a framework for better make-or-buy decisions. The effects of exit barriers on firms' strategic flexibility are detailed, and managerial tools to cope with high barriers and declining businesses are introduced.
Strategic Flexibility is organized to provide easy reference for managers seeking to find out what strategies have worked and why. This book offers practical, proven ways for managers to expand the flexibility and responsiveness of their companies to new competitive conditions.
Entering a mature industry is seldom expected to be easy because early entrants have already defined competitive norms. Moreover, entry barriers, a key determinant of the attractiveness of an industry, may be relatively high (Porter 1980). Knowledge of the nature of entry barriers can be helpful in suggesting whether the strategic window of market opportunity is open (Abell 1978). (When the window of opportunity closes, an industry loses its attractiveness as a candidate for entry.) The costs of overcoming some entry barriers are so high that they exceed the benefits of successful market penetration (Bain 1956; Bass, Cattin, and Wittink 1978; Ornstein, Weston, and Intriligator 1973; Scherer 1980). Yet firms, if they understand how entry barriers work, may not only pay the price of entry but also invest further to insulate themselves from subsequent penetration by new and sizable firms.
What is needed is a framework for evaluating markets that firms might enter, particularly to avoid wasting resources by trying to overcome a lethal combination of high entry barriers and competitors' resistance. Preentry analysis of these barriers might prevent firms from committing resources to battles that cannot be won cheaply.
This chapter reviews the theory and evidence supporting the existence of entry barriers. It proposes a new way of thinking about them in the context of mature manufacturing industries. Studying entry barriers in mature industries is particularly interesting because these environments are widely assumed to be inhospitable to new entrants. Yet entry does occur, and firms have prospered by entering the right mature industries. Thus a discussion of entry barriers should be of interest to outsiders looking in as well as to ongoing firms that seek to keep outsiders out.
Entry barriers are forces that discourage firms from investing in a particular industry (or niche of an industry) that appears attractive. Because entry barriers can represent substantial disadvantages for many types of potential entrants, they suggest that higher-than-average profits may be difficult to attain, not only as a result of size or timing advantages enjoyed by existing firms but also as a consequence of the willingness of these firms to lower prices to the limit price (that is, to the price level that will limit new entry) in order to discourage other firms from trying to enter (Collins and Preston 1969; Gaskins 1971; Spence 1977, 1979). Industries characterized by such high entry barriers have generally been considered to be more profitable in the long run (Bain 1956, 1972; Modigliani 1958; Stigler 1958) and have increasingly become the targets of those large domestic firms or foreign entrants that can afford to overcome such entry deterrents (Gorecki 1976).
High entry barriers are a necessary but not sufficient condition for long-term industry profitability. The steel industry, for example, possesses extremely high capital barriers, yet is only marginally profitable. High entry barriers are necessary because without them, plant expansions (a strategic investment that is difficult to reverse) could rapidly outpace demand. The pressures created by underutilized plant capacities precipitate price wars, which may drive out some firms (provided the exit barriers they face are low; see chapters 6, 7, and 8) but will surely ruin profit margins for all (Chamberlain 1962; Fellner 1949; Vernon 1972). The presence of one or more other unattractive traits -- high exit barriers, a fragmented structure of nonhomogeneous firms, commoditylike product traits, or other unfavorable characteristics -- all reduce an industry's profitability potential, as in the case of steel.
Mature industries are those that generally grow slowly (less than 10 percent annually in real terms), where demand is frequently inexpansible, and where product traits are generally familiar to consumers or users. Although technology may have been stable within mature industries, process innovations and new technological configurations could compete alongside the relatively aged (and frequently capital-intensive) assets that populate such industries (Utterback and Abernathy 1975). Although competitive structures usually remain stable in mature industries, there have been cases where a newcomer's entry changes the old structure, as in the example of Hanes's pantyhose strategy.
There may be several market segments within an industry where a potential competitor might gain entry. The abilities of some industries to sustain different competitive profiles as a result of the needs of these customers provide the foundations of the notion of strategic groups (Caves and Porter 1977; Hunt 1972; Newman 1978; Porter 1979b; see chapter 2). Note that these various market segments will not be equally desirable to serve because they are not equally profitable, and that they will not constitute a market niche unless the firms serving them are protected by entry barriers from invasions by outsiders. Thus the market segments that are easiest to enter will be the least attractive. They will offer an initial entry point for firms that cannot afford to invade the oligopolistic core (that is, the market niches often dominated by the top four firms).
Strategic groups are firms that embrace the same strategies for serving customers; their competitive postures are similar. Although the number of strategic groups is probably fewer in maturity than when the rules for successful competition were yet unestablished, differences among competitors exist and are relevant when evaluating opportunities for entry. (Chapter 2 expands on this idea. It is developed here briefly because it is germane to assessing entry-barrier heights.) The most successful strategic groups are sometimes called core competitors because they serve the most attractive core of customers.
Core competitors are generally believed to possess substantial market power as a result of their relatively large market shares. Acting collectively, core competitors can influence prices and behaviors in their markets. Frequently their rate of return is higher than that of fringe competitors who are not protected by niche boundaries. Because new entrants are less likely to enjoy scale economies or advantages of experience initially, they are most likely to hover on the less profitable fringes or periphery of industry influence until they can penetrate the core successfully.
Although new firms more frequently occupy the fringes of a competitive landscape, some late entrants do eventually make significant inroads to the core of competition. Among these firms, process innovations are particularly likely to be their ticket to capturing substantial market share. Because firms operating the conventional technologies of a mature industry are more likely to be far down existing experience curves and would have exhausted the cost-saving benefits of many easy operating innovations, new firms seeking entry would be more likely to introduce radical process innovations.
The theory of entry barriers (and their implications for managers) is reviewed next. Construction of variables approximating these forces is presented in appendix 1A. These techniques for assessing industry attractiveness may be helpful in identifying acquisition candidates or in assessing whether an industry appears attractive to outsiders.
In mature industries, the most important influences on entry behavior are (1) technical factors and (2) competitive conduct variables. Technological factors include: (1) capital requirements; (2) scale economies; (3) the age of an industry's productive capital (physical plant and equipment); and (4) the balance of labor intensity to capital intensity predominant in an industry's technology. Competitive behavior variables include: (1) previous entries; (2) changes in the dispersion of market shares; (3) industry advertising and research and development (R&D) outlays; and (4) average levels of excess productive capacity.
Firms can influence the heights and natures of some entry barriers, but they cannot affect others (those relating to demand and other exogenous factors such as technological scale) except indirectly by their investments. This distinction is useful in noting how managers could use their knowledge of entry barriers as a competitive weapon.
Mature industries can be penetrated if entrants obtain superior operating economies through improved technologies. Changes in capital-to-labor ratios and minimum-efficient-scale (MES) factors could provide the necessary edge. Alternatively, the key technological advantages needed for entry may be newer physical plant and equipment, or meeting large capital requirements.
Capital-to-Labor Ratios. Industries where existing firms' technologies are already relatively capital-intensive offer fewer opportunities for fringe firms to enter than those that are labor-intensive. The introduction of labor-saving technologies offers opportunities for fringe firms to attain lower operating costs, as well as licensing revenues for their technological innovations.
Capital Requirements and Technological Scale. Capital requirements have long been identified as entry barriers (Bain 1956), and they were expected to act as entry barriers for fringe firms in the study. The technologies used to produce the goods of various industries each possess scale economies -- that is, plant sizes where, if fully utilized, average unit costs of production will be lowest. Potential entrants would be obliged to enter at this large scale in order to avoid incurring significant diseconomies (Scherer 1980). (Moreover, demand would be satisfied by existing plants in a mature industry unless the market is growing rapidly.) Erecting a new plant would be tantamount to challenging the ongoing firms to a price war in a slow-growth environment. Accordingly, the potential entrant must assess its willingness and ability to absorb losses from such warfare until its new plant has been established in the industry. Firms with ample capital could afford to buy their way in. Other fringe entrants could not.
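The diseconomies of sub-MES operation can be sketched with an assumed cost structure (the figures are illustrative, not drawn from the study): a plant with heavy fixed costs reaches its lowest average unit cost only when run at its engineered capacity.

```python
# Hypothetical cost structure for an MES-sized plant.
fixed_cost = 50_000_000      # annual fixed cost of the plant ($)
variable_cost = 40.0         # variable cost per unit ($)
mes_capacity = 1_000_000     # units per year at minimum efficient scale

def average_unit_cost(output):
    """Average cost per unit at a given output level."""
    return fixed_cost / output + variable_cost

print(average_unit_cost(1_000_000))  # 90.0  -- at full MES capacity
print(average_unit_cost(500_000))    # 140.0 -- at half capacity, a severe penalty
```

An entrant who can capture only half an MES plant's volume carries a cost handicap that incumbents running full plants do not, which is precisely why entering below technological scale invites the price war described above.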
Age of Physical Plant. The presence of new physical assets can indicate a recent change in an industry's technologies, or several recent entries. If the vintage capital of an industry is relatively new, it may indicate that an unsettled market opportunity exists. Thus relatively new physical assets may indicate an environment where entry may be more successful.
Although the cost of acquiring assets for manufacturing products that are subject to frequent style changes may be high, the most specialized -- hence inflexible -- forms of these assets will quickly become vulnerable to obsolescence (Menge 1962). This means some industries characterized by increases in the newness of physical plant (a seemingly attractive attribute) would also be subject to high exit barriers, an unattractive industry trait covered in chapters 6, 7, and 8 (Porter 1976b; Caves and Porter 1977; Harrigan 1981b; 1982; 1983a).
Absolute Cost Strategies. Fringe competitors (or potential entrants) seeking sizable shares of their market are most likely to attain success through process or product innovations. Both are strategies that require fringe firms to overcome entry barriers based on absolute cost advantages, a strength originating from access to scarce resources that new entrants cannot develop as inexpensively as earlier entrants did, if at all. Examples include access to distribution channels, ownership of a uranium mine, or other factors that would be more costly to replicate when entering late. This cost advantage would be due in part to inflation and in part to the limited nature of the resources possessed. New entrants would be obliged to spend heavily in order to match the access to scarce raw materials, vertical relationships, or patents that constituted these advantages. Although some experience-curve advantages can be replicated through accelerated spending programs, a few cannot be copied by late entrants, except by acquiring existing firms.
Study of an industry's history often suggests patterns of competitive behavior that act as entry barriers to outsiders. Most prominent among these is evidence that demand is satisfied adequately by ongoing firms.
Changes in Market Shares. High variability in market-share changes indicates that firms are jostling for position. An outsider should expect competitive conditions to be volatile where market shares change frequently and substantially.
Sizable changes in share (relative to average industry market-share changes) suggest that some competitors are pursuing growth objectives. Since market-share points are quite difficult to gain in mature industries, the presence of large relative changes should signal a volatile environment, which discourages entry. Moreover, firms possessing high market shares enjoy relative cost advantages (as a result of the distances they have traveled down the industry's experience curve). New firms entering this type of environment would be less likely to succeed unless they can insulate themselves from these cost pressures by exploiting a technological innovation.
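The cost advantage conferred by experience can be sketched with the standard experience-curve formula, assuming an illustrative 80 percent curve (unit cost falls 20 percent with each doubling of cumulative output; the parameters are not taken from the study):

```python
import math

# Standard experience-curve formula: cost of the nth unit declines as a
# power function of cumulative volume. An 80% curve means each doubling
# of cumulative output cuts unit cost by 20%.
def unit_cost(cumulative_volume, first_unit_cost=100.0, learning_rate=0.80):
    b = math.log(learning_rate, 2)   # elasticity of cost with respect to volume
    return first_unit_cost * cumulative_volume ** b

incumbent = unit_cost(1_000_000)   # far down the experience curve
entrant = unit_cost(10_000)        # just beginning to accumulate volume
print(incumbent < entrant)  # True: the incumbent's units cost a fraction of the entrant's
```

This is the arithmetic behind the claim that late entrants must either buy their way down the curve through accelerated spending or sidestep it entirely with a technological innovation.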
Excess Capacity. Firms that already hold a stake in the health of an industry will erect protective barriers around their markets. If they are highly determined to do so, they may even hold a portion of their own plant capacity idle as a warning against entry, thereby signaling their willingness to fight a war of attrition to prevent entry (Esposito and Esposito 1974; MacMillan 1980). Although evidence of recent successful entry by other firms might encourage potential entrants, the presence of several underutilized plants should deter yet another firm from entering. Building capacity in anticipation of demand has been a particularly effective method of shutting out new entrants -- as antitrust courts have noted since the ALCOA decision of 1947 -- and this tactic is still attractive in mature industries. High levels of excess capacity should discourage new entry, especially if history suggests existing firms cut prices to fill their underutilized plants (to lessen the substantial cost disadvantages of excess capacity). Market share will be difficult to capture in such settings, and losses will be high if outsiders try to enter when excess capacity is high.
Advertising Expenditures. Firms can make investments in entry barriers through physical plant, R&D, advertising and other assets to discourage new firms from following them into an industry. They can hold outsiders at bay, largely by dint of pricing pressures but also by virtue of absolute cost advantages. Entry barriers erected through advertising and R&D can be ephemeral, however. Outsiders that possess ample cash reserves could hurdle many such barriers to reap the benefits of early entrants' missionary advertising, product introductions, or engineering breakthroughs. In short, successful performances may attract well-endowed entrants who can scoff at these forms of entry barriers.
Outsiders could enter a mature industry by creating demand for a branded product. If products are not yet commodities but their markets are mature, however, it is likely that buyer loyalties favor incumbent firms. The product is older, consumers are better informed of its attributes, and many ways of differentiating existing products have already been employed. Product innovations could permit entrants to challenge the core of industry competition (and seize large market shares) if their product is truly something new.
Measurement of product differentiation is a problem, however. A useful measure of this critical phenomenon has not yet been developed in industrial economics. Yet the implications for competition within environments where products can be differentiated (as compared with environments where products are commoditylike) are critical when evaluating whether entry should be attempted. Tactics such as branding, advertising, quality variations, and other differentiating maneuvers might be employed effectively to dominate a desirable market segment within noncommodity businesses. But the cost of building a niche is substantial. Firms desiring to do so must be able to withstand the several years of losses required to erode the barriers that successful firms have erected to gain entry. In this context, the advertising expenditures fringe firms must make to overcome the customer loyalties attained by ongoing firms (a variable that is measurable, although it is scarcely a global estimate of product differentiation) may approximate the product-differentiation entry barrier.
If high advertising expenditures are indicative of an environment where many different configurations of a product could satisfy customers' needs, they also suggest that some competitors (representing a particular strategic group) have been supported in these expenditures by the market's response. High advertising outlays could indicate a market opportunity for firms that can afford the cost of advertising campaigns. This argument parallels that in Menge (1962), which says that, although the high cost of frequent style changes should reduce the absolute number of competitors and deter entry, the opportunity to satisfy diverse consumers' preferences could permit several firms to occupy modest but specialized market niches.
R&D Expenditures. Research and development outlays offer the first path to penetrating the established market positions of earlier entrants. In order to match patents or licensing advantages enjoyed by leading firms, potential entrants must spend heavily on R&D to obtain new skills or draw on research skills developed for other industries (Kamien and Schwartz 1975; Mueller and Tilton 1969).
Despite formidable entry barriers, firms will try to penetrate mature industries if demand appears attractive or leading firms are enjoying high levels of return on investment (ROI). Rapid growth in demand within otherwise mature markets will attract new entrants.
New firms will be attracted to industries where there appear to be opportunities to enter easily and earn acceptable profits. The more attractive candidates for entry would be those industries where growth in demand is outpacing ongoing competitors' abilities to satisfy it (for example, where excess capacity is low).
If leader firms have enjoyed high rates of productivity and have made effective use of their assets, some spillover or halo effect might be shared by follower firms, given an environment where lead firms' technological improvements and other innovations can be emulated (after a lag) by follower firms. This form of success attracts new entrants.
In summary, high levels of ROI and growth in demand encourage potential entrants to invest, as would high levels of industrywide advertising. Firms could discourage tentative entrants by building additional plants before ongoing plants are fully utilized.
Firms evaluate entry decisions within the context of a national economy as well as in light of competitive dynamics within an industry. Entry is more likely to occur during periods of increasing incorporations and less likely (after a lag) during periods marked by high numbers of business failures. Rapid increases in economic price levels (signaling higher costs) depress profitability, particularly within those markets where severe price competition makes increasing operating costs more difficult to pass on to customers.
Tests of the relationships among entry-barrier heights, rates of entry, and performance are described in appendix 1A. The results are discussed next.
In table A-1 in the appendix, negative signs indicate forces that acted as entry barriers; the raised letters indicate their levels of statistical significance. These included excess capacity, high technological scale, and entry by other firms (for detailed discussion of these and other results, see Harrigan 1981a). Advertising and R&D expenditures have positive signs, suggesting that differentiable products offer more attractive environments for investment than industries with commoditylike products. These conditions encourage entry by outsiders. The ROI variable turned positive when it was lagged an additional year, suggesting that outsiders cannot respond quickly to favorable signs within mature industries.
Results concerning the excess-capacity variable suggest an interesting dilemma. Failure to operate plants at engineered capacity within industries where scale economies are significant will incur costly diseconomies. Yet the decision to operate those plants near their engineered capacities appears to encourage potential entrants to construct additional plants, which they will then, predictably, set out to fill through price cutting. Depending on expectations concerning future demand for the products of the line of business in question, ongoing firms must elect whether to act in a manner that reduces their own short-term or long-term profitabilities. If the industry in question is a mature one, it would appear that some reduction in attainable profits would be necessary to reduce the potential volatility that could occur if severe excess capacity were allowed to develop through the entry of a new firm.
In table A-2 in the appendix negative signs indicate conditions that reduce ROI, and the asterisks indicate levels of statistical significance. Excess capacity and high R&D expenditures are among the forces that reduce an industry's attractiveness to outsiders.
ROI performance appears to be higher where concentration is high and firms' past performances seem to be fair indicators of future ROI, a finding that would be expected within professionally managed firms whose executives are well aware of the importance of well-managed financial statements. Exits by competitors do not appear to help remaining firms' ROIs in this sample. As chapters 6, 7, and 8 suggest, exit is a complex factor, whose effect depends on industry traits as well as on the behavior of the other firms within that industry.
Results tend to suggest that technological variables are the more difficult to overcome by firms contemplating entry. All potential entrants must invent a means of hurdling technological barriers when pursuing a new line of business; hence all face similar needs for the capital required to inaugurate productive or distributive assets.
The common aspects of the entry-barrier problem have been illuminated in the results presented. No attention was devoted to the important differences among competitors within an industry after these commonalities have been overcome, however. The common entry barriers may be likened to the a (alpha) term in the equations of portfolio-valuation models. The risks of assuming a specific strategic posture that may be strongly correlated with the strategic postures of ongoing competitors may be compared with the b (beta) coefficient, much as capital-asset pricing models consider covariance among securities.
Following this analogy, results suggest that the value or long-term performance of the firm (which is the embodiment of its strategy) reflects the investment decisions the firm has made in common structural assets, plus investments in unique strategic-posture assets that may (or may not) overlap the strategic postures of competitors.
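The capital-asset pricing analogy invoked here can be made explicit. In portfolio-valuation terms, a security's return is decomposed as

```latex
R_i = \alpha_i + \beta_i R_m + \epsilon_i
```

where, by analogy, the common structural entry barriers correspond to the alpha term (a component shared by all firms that clear them), while the correlation of a firm's strategic posture with those of ongoing competitors corresponds to the beta coefficient, much as beta captures a security's covariance with the market.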
Entry barriers are of different heights as they protect the strategic postures of firms occupying diverse niches of the market (or serving the same customers using differing postures). If, as results suggest, capital requirements alone are not adequate entry barriers, then ongoing firms might aggressively shift their capital-to-labor ratios in favor of more efficient, technologically innovative assets. Results also suggest frequent improvements in manufacturing technologies (new assets).
Results indicate that conditions of excess capacity and a history of price cutting to fill that capacity are formidable deterrents to entry by outsiders. Thus the generally held theory that price wars and excessive idle capacity offer strong negative signals to potential entrants has been confirmed. Results also suggest that firms could further barricade their portals by investing in R&D to (1) increase technological scale economies within their respective industries or (2) force their industry's structure to evolve in a manner that would make subsequent attempts at entry even more ineffectual.
Adherence to alternative (2) poses an interesting public-policy dilemma. Firms whose R&D efforts force industry standards to be redefined and drive the progress of innovating behaviors will improve the general level of consumer welfare at the cost of ever-steepening entry barriers. Such behavior suggests scenarios of increasing concentration (as marginal entrants are rebuffed by towering entry barriers) where entry can be successfully undertaken by affluent, diversified firms possessing (1) the staying power to survive a protracted war and (2) the perspicacity to offer innovations of sufficient value to force the industry's structure to evolve in a manner that favors them.
Results suggest that potential entrants are less likely to attempt entry where they expect little chance for success. If a market is already suffering excess capacity, firms may be discouraged from entering. This finding suggests that defending firms might adopt a policy of keeping some level of capacity idle by always building first and in the most appropriate locations to preempt would-be competitors (see Dixit 1980; Rao and Rutenberg 1980).
Finally, if, as results suggest, the recent entries by competitors act as high barriers against subsequent entry, then defending firms should give some thought to the selection of those firms they might permit to enter. Some competitors are preferable to others for their different competitive styles (which can be observed by studying their behaviors in other industries). Ongoing firms might control the profitability of their industries by making entry especially difficult for the types of firms they would prefer not to admit.
Excess capacity is a malevolent type of barrier that could explode into price warfare (and create an unpleasant exit barrier later) if it is not controlled properly. A policy of raising customers' expectations for service, variety, and quality may be more appropriate than creating excess capacity as a way to discriminate among candidates for entry. Raising customer expectations moves products further away from commoditylike status. Results suggest this type of defense is more effective in deterring entry than is merely raising capital requirements. Forcing innovation could lead to patented process improvements (cost leadership), savings from scale economies, and other types of entry barriers that are more controllable than those created by excess capacity.
For aspiring entrants, results suggest that firms determined to keep new firms out would be more effective by focusing their competitive responses rather than emphasizing the structural traits of their particular industry. Devoting substantial budgets to maintaining trade relations, improving delivery and customer service, or other forms of marketing may be a more effective means of shielding market niches from entry than others -- for example, where industry marketing expenditures have been generally low. If ongoing competitors focus defensive actions on structural traits, informed entrants that recognize this discrepancy and exploit it may ease into an industry without setting off price wars or incurring other significant forms of resistance.
In summary, results suggest firms could use excess capacity (created internally or by admitting new competitors) to discourage potential entrants. Given the difficulties excess capacity might create, however, other factors might do instead. If there are outsiders that possess the needed capital and can afford to make the appropriate investments to enter, ongoing firms would do well to monitor fluctuations in scale economies, capital-to-labor relationships and other competitive investments in order to assess whether the strategic window of opportunity is open too wide.
Measures of entry barriers are difficult to construct in a manner that will be useful to strategists contemplating entry. Yet such estimates are crucial in assessing the relative attractiveness of those lines of business strategists hope their firms might undertake successfully. The entry decision in this study considered the likelihood of successfully entering into an industry de novo -- that is, as a new competitor, not by acquiring an existing industry participant. The dependent variable denotes whether entry occurred within a particular industry in a given year. Entry was deduced by counting the firms listed in Dun & Bradstreet indexes and corroborating that count with Census of Manufactures reports of firms in operation. Entry was indicated as a binary code (where "1" indicated entry occurred).
Table A-3 in the appendix summarizes the traits that approximated entry barrier heights and enticements to enter. Details concerning their construction appear in Harrigan (1981a). These tests develop variables by drawing on a number of data sources to improve on the foundations of the economic theory. The early studies of entry barriers were inconclusive, yielding ambiguous results. An example of these models is presented in table A-4 in the appendix using the same dependent variable and data of table A-3, and independent variables suggested by these earlier economic studies. Its results are poor. The R2 (coefficient of multiple determination) is quite low (.0200), and the value of the intercept term (.494) in table A-4 approximates closely the actual mean value of the dependent variable (.500), suggesting the independent variables add relatively insubstantial information to the predictive power of this model. More meaningful models were needed when the precepts of this topic were translated into questions of interest to scholars of strategic management.
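The diagnostic reasoning here -- a near-zero R2 together with an OLS intercept that collapses toward the mean of the dependent variable -- can be reproduced on simulated data (the data below are random, not the study's):

```python
import random

random.seed(0)
# Simulated data: a binary entry indicator (mean near 0.5) regressed on an
# uninformative predictor x. When x carries no information, R^2 is near
# zero and the fitted intercept approximates the mean of y.
n = 540
y = [1.0 if random.random() < 0.5 else 0.0 for _ in range(n)]
x = [random.random() for _ in range(n)]

mean_x = sum(x) / n
mean_y = sum(y) / n
beta = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        / sum((xi - mean_x) ** 2 for xi in x))
alpha = mean_y - beta * mean_x

ss_res = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
print(round(alpha, 3), round(mean_y, 3), round(r_squared, 4))
```

The intercept of .494 against a dependent-variable mean of .500 in table A-4 is exactly this pattern: the model is predicting little beyond the base rate of entry.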
Transformations were made to correct for heteroscedasticity, serial correlation, and the lagged structure. The observations used to test the models that follow spanned a decade (the differences of nine intervals). These were pooled (and corrections made) to yield a total of 540 data points.
In time-series specifications that are pooled with cross sections, the residual is assumed to be composed of a time-series error, a cross-section error, and an interaction effect. Transformations are required to correct for autocorrelation, and special interpretation of the error term is needed. Moreover, the pooling of data describing heterogeneous-sized firms required generalized differencing corrections using weighted least-squares estimates to obtain the appropriate error term (see Bass, Cattin, and Wittink 1978; Hatten 1974). Complete explanations of these procedures appear in Harrigan (1981a).
After corrections, there was relatively little serial correlation in the models estimated (as indicated by the Durbin-Watson d-statistic); and the correlation coefficients of the absolute values of the observed residuals to the predicted values of the dependent variable were quite low, which suggests that the variance of the residuals generated by this process does not depend on the values of the independent variables (see Balestra and Nerlove 1966).
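The Durbin-Watson d-statistic referred to above can be computed directly from a model's residuals. Values near 2 indicate little first-order serial correlation; values near 0 indicate positive autocorrelation; values near 4 indicate negative autocorrelation. The residual series below are invented for illustration:

```python
# Durbin-Watson d-statistic: the ratio of squared successive residual
# differences to the residual sum of squares.
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

persistent = [0.5, 0.4, 0.45, 0.5, 0.55, 0.5, 0.6, 0.55]   # positively autocorrelated
alternating = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]            # negatively autocorrelated
print(round(durbin_watson(persistent), 2))   # 0.02 -- near 0
print(round(durbin_watson(alternating), 2))  # 3.33 -- near 4
```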
A lagged structure (of one year in most cases) was imposed on the models tested to reflect the assumed reaction time necessary before firms could convert liquid assets to capital goods and production capacity following industry-performance stimuli. The lag structure is a fundamental element of this model because specifications that allow entry to occur during the same period as the impetus encouraging entry violate the basic assumptions of a nonfrictionless environment, where assets are not completely flexible and not immediately available.
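The one-year lag structure amounts to pairing each year's entry observation with the prior year's stimuli, dropping the first year for want of a lagged predictor (the series below are illustrative):

```python
# Illustrative series: entry in year t is modeled against industry ROI
# observed in year t-1.
roi = [0.08, 0.12, 0.15, 0.10, 0.09]   # industry ROI, years 1..5
entry = [0, 0, 1, 1, 0]                # entry indicator, years 1..5

# Pair each entry observation with the prior year's ROI; year 1 is dropped
# because no lagged predictor exists for it.
lagged_pairs = list(zip(roi[:-1], entry[1:]))
print(lagged_pairs)  # [(0.08, 0), (0.12, 1), (0.15, 1), (0.10, 0)]
```

This is why contemporaneous specifications are rejected: liquid assets cannot become plants within the same period as the stimulus.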
The models that explained the greatest amounts of variance in entry behavior and firms' performance are of the general form presented for the entry model that follows. In each, the error term of first-stage analysis provided pi, the correction term.