#include <stdio.h>

int putc(int ch, FILE *stream);
The putc( ) function writes the character contained in the least significant byte of ch to the output stream pointed to by stream. Because character arguments are elevated to integer at the time of the call, you can use character values as arguments to putc( ). putc( ) is often implemented as a macro.
The putc( ) function returns the character written if successful, or EOF if
an error occurs. If the output stream has been opened in binary mode, EOF is a valid value for ch. This means that you may need to use ferror( ) to determine whether an error has occurred.
Related functions are fgetc( ), fputc( ), getchar( ), and putchar( ).
On Tue, 2004-06-29 at 19:10, Colin Paul Adams wrote:
> >>>>> "Bruno" == Bruno Dumon <bruno@outerthought.org> writes:
>
> >> Yes. At least, only those whose parent is not an html/xhtml
> >> element.
>
> Bruno> So the namespace of an element made using xsl:element is
> Bruno> inherited by literally-defined elements?
>
> Afraid not - as I realised two minutes ago :-)
>
> Bruno> Let's hope so :-)
>
> Well, on what I've coded so far - definitely not (when I hit the
> reload button - I have to look twice - generation is zero time to my perceptions).
yeah, you'd have to be fast to see the difference between 50 and 75
milliseconds, but it sure would be noticeable if you have dozens of
concurrent requests to handle. But I'm still not implying there is such
a degradation in performance -- in fact, likely not.
>
> However, the existing stylesheets are not valid anything, so it is
> defeating my purpose (of generating valid xhtml) from proceeding as I
> have been - at least without further changes.
BTW, what's the advantage of using XHTML? Which browsers support it?
>
> I had just got as far as changing a span into an xsl:element - and
> then looked at its child.
> It was at that point I realised what you have just pointed out
> (above), and sighed.
> But I then positively groaned as I realised the contents of the span
> was a table - illegal - span can only contain inline elements - html
> 4.01 or any xhtml (I don't bother to check earlier DTDs).
>
> So what do I do?
fix it ;-)
>
> The span concerned is just apparently surrounding the table with core
> attributes - which can equally go on the table itself - so the obvious
> solution is to just transfer these attributes to the table element. No
> difference as far as (x)html is concerned.
> But maybe there is a knock-on effect somewhere? Perhaps there is java
> code somewhere that expects this structure? I don't know. Can someone
> advise me please?
It might help if you mention where this is done (stylesheet/template).
--
Bruno Dumon
Outerthought - Open Source, Java & XML Competence Support Center
bruno@outerthought.org bruno@apache.org
Introduction
IBM® WebSphere® Transformation Extender, formerly called IBM DataStage TX and hereafter referred to as WebSphere TX, was first introduced in 2005. WebSphere TX maps provide complex, any-to-any and many-to-many transformation capabilities that can be executed either standalone or embedded into other IBM products, such as WebSphere Message Broker, WebSphere ESB, and WebSphere DataPower SOA Appliances.
In 2010, IBM acquired Cast Iron Systems and began to offer clients an end-to-end platform to integrate cloud applications from providers such as ADP, Amazon, NetSuite, and Salesforce.com, with on-premise applications from providers like JD Edwards and SAP. By using Cast Iron's hundreds of pre-built templates, you can eliminate expensive custom coding and complete enterprise cloud integrations in the space of days instead of the usual weeks or months. You can achieve these results using a physical appliance, a virtual appliance, or a cloud service.
Cast Iron includes built-in support for XSLT for XML transformations, and it provides integration between various cloud service providers through its support for Web services. This article describes one way to extend Cast Iron's built-in transformation capabilities beyond simple XML transforms, by calling WebSphere TX maps and exposing them to the cloud as Web services.
Prerequisites
This article assumes that you have WebSphere TX V8.3, WebSphere Cast Iron Studio V6.0, WebSphere Application Server V7, and WebSphere Integration Developer V7. It also assumes that you have a basic understanding of WebSphere TX and that you know how to create a simple WebSphere TX map with one input and one output card.
Hosting a WebSphere TX map as a Web service
WebSphere TX provides a Software Development Kit (SDK) that includes a Java API. Starting with WebSphere TX V8.3, the SDK is bundled with Design Studio. The Java API enables any Java program to create objects representing WebSphere TX maps, configure them, and run them.
The first step in this process is to create a WebSphere TX map. You can use one of the example maps that come with Design Studio, such as the states map, which rearranges statistics on US states. It is located in the Design Studio directory under examples\general\states.
- Open the states.mms file in Design Studio and build the map called master to generate an executable map called master.mmc. You can test the map by running it in Design Studio. It should read the input file sts.txt and generate an output file output.txt that looks like this:
OH,257*IN,142*MI,154*WI,80*MT,5*ID,8*WY,3*CO,21*NM,8*AZ,15*UT,13*NV,4*MN,48*MO,67*ND,9*SD,9*KS,27
- You need to make one small modification to this map to make it Web service friendly. Open the states.mtt type tree and locate the United States group under the Input category. Then open its properties and change the Group Subclass / Format / Component Syntax / Delimiter / Value to <LF>. This change prevents the map from running in Design Studio, but don't worry; it will make for a good Web service.
Figure 1. Changing the delimiter
- Analyze and save the type tree then rebuild the map to generate a new master.mmc executable map.
- In order to run this map from a Cast Iron Orchestration, you must first expose it as a Web service. There are numerous ways to expose maps as Web services. A simple way is to use IBM Integration Developer (formerly known as WebSphere Integration Developer), which provides a simple wizard for generating a Web service from a Java class. Since WebSphere TX provides a Java API, there is a good fit. Open the Web perspective in IBM Integration Developer and create a new Dynamic Web Project. Name the project and then click Finish:
Figure 2. Creating a dynamic Web project
- Next, you need to write Java code to invoke the WebSphere TX map before it can be turned into a Web service. Before you can do that, make sure that the project understands the WebSphere TX Java API. Open the project's properties, navigate to Java Build Path and add dtxpi.jar (located in your Design Studio or in the WebSphere TX installation directory) as an external JAR:
Figure 3. Configuring the Java build path
- Create a new Java Class in your new project, name it, and click Finish:
Figure 4. Creating a Java class
- You will be presented with an empty Java class in the main editor. Fortunately, you do not have much code to write and you can use one of the WebSphere TX SDK examples as a starting point. Open Example5.java in a text editor -- it is located in the examples\dk\dtxpi\java directory under your Design Studio installation.
- Copy and paste the import statements into your new Java class:
import com.ibm.websphere.dtx.dtxpi.MAdapter;
import com.ibm.websphere.dtx.dtxpi.MCard;
import com.ibm.websphere.dtx.dtxpi.MConstants;
import com.ibm.websphere.dtx.dtxpi.MMap;
import com.ibm.websphere.dtx.dtxpi.MStream;
import com.ibm.websphere.dtx.dtxpi.MException;
- Copy and paste the main() method into your new Java class, and change its signature to
public String runMap(String szInputBuffer)
- You will pass the input buffer in with each call to the Web service, so the static definition of szInputBuffer must be removed:
//private static String szInputBuffer = "This is my input data";
- Since this class will be invoked repeatedly, move the code that initializes the WebSphere TX API into a static block:
static {
    try {
        MMap.initializeAPI(null);
    } catch (Throwable t) {
        t.printStackTrace();
    }
}
- You must also remove or comment out the line of code that terminates the WebSphere TX API:
//MMap.terminateAPI();
- The states map only has one output card, so modify the line of code that looks for the output card to point to output card 1 instead of card 2:
card = map.getOutputCardObject(1);
- Create a new String to hold the output from the map:
String outData = "";
- Change the comment telling you to do something with the data produced by the map, to a line of code that appends each page to the output String:
outData += new String(page);
- Make your runMap method return the output String:
return outData;
- Finally, change the name of the map to point to the master.mmc file you built in Design Studio:
MMap map = new MMap("C:\\IBM\\WebSphere Transformation Extender 8.3\\examples\\general\\states\\master.mmc");
- You should now have a Java class that builds without error. If you don't, don't worry because one is attached to this article.
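For reference, after all of the edits above the class should look roughly like the sketch below. It is pieced together from the steps in this article, requires dtxpi.jar on the classpath, and elides the input-card setup and output paging loop (copy those from the SDK's Example5.java), so it is not runnable as-is:

```java
import com.ibm.websphere.dtx.dtxpi.MCard;
import com.ibm.websphere.dtx.dtxpi.MException;
import com.ibm.websphere.dtx.dtxpi.MMap;

public class MapRunner {

    // Initialize the WebSphere TX API once; this class is invoked repeatedly.
    static {
        try {
            MMap.initializeAPI(null);
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    public String runMap(String szInputBuffer) {
        String outData = "";
        try {
            // Point at the executable map built in Design Studio.
            MMap map = new MMap("C:\\IBM\\WebSphere Transformation Extender 8.3\\examples\\general\\states\\master.mmc");

            // ... configure input card 1 with szInputBuffer and run the map,
            //     exactly as in the SDK's Example5.java ...

            // The modified states map has a single output card.
            MCard card = map.getOutputCardObject(1);
            // For each output page produced by the map, append it to the result,
            // again following Example5.java's paging loop:
            //     outData += new String(page);
        } catch (MException e) {
            e.printStackTrace();
        }
        return outData;
    }
}
```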
- Create a new Web service in your new project. To open the Web Service wizard, select File => New => Other/Web Services:
Figure 5. Finding the Web service wizard
- You need a bottom-up Java bean Web service, and you should specify your Java class as the service implementation:
Figure 6. Running the Web service wizard
- Click Next and accept all of the defaults on the following screens. You should now have a Web service, and you may (depending on your IBM Integration Developer configuration) have deployed it already. If it is not deployed, then log on to WebSphere Application Server and deploy the EAR manually.
- You can test the Web service using the IBM Integration Developer Web Services Explorer. The argument to the szInputBuffer parameter is the contents of the sts.txt file in the folder where your states map is located.
Importing the WSDL into WebSphere Cast Iron Studio
- The Web service wizard you ran in IBM Integration Developer created a WSDL definition file. You can locate it in your project under WebContent/WEB-INF/wsdl. This file can be imported into a new WebSphere Cast Iron Studio project.
- Open WebSphere Cast Iron Studio and create a new project. From the Project tab, right-click on WSDLs and select Add Document. Browse for your WSDL file and click OK:
Figure 7. Import the WSDL
Creating the orchestration
Create an orchestration to invoke this Web service. You can use any endpoints, but for simplicity, use HTTP endpoints:
- From the Activities tab, drag an HTTP Receive Request activity onto the orchestration. Drag an HTTP Send Response activity on the orchestration and drop it to the right of the Receive Request.
- Select the Receive Request and click on it to open its configuration. In the checklist, select Pick Endpoint and click New. Choose a port number and click OK. Again, in the checklist, select Configure and choose a URL, such as /runthemap. Leave the message type as Text and check Requires a Reply:
Figure 8. Configuring the Receive Request activity
- Select the Send Response activity and click on it to open its configuration. Notice that when you select Configure in the checklist, the Reply To already shows Receive Request and the message type is already Text.
Figure 9. Configuring the Send Response activity
- From the Activities tab, drag a Web Services / Invoke Service activity onto the orchestration and drop it between the Receive Request and Send Response activities:
Figure 10. Adding the Invoke Service activity
- Select it to open its configuration. In the checklist, select Pick Endpoint and click New. Click Browse next to the WSDL Name field and browse for the WSDL you imported. Click OK and then OK again.
- All that remains now is to map the variables through Cast Iron. Starting on the left, click the Receive Request activity and in the checklist click Map Outputs. Locate the body, select it, and click Copy. Select Body from the output parameters and choose Create:
Figure 11. Mapping outputs from Receive Request activity
- Select the Invoke Service activity and in its checklist select Map Inputs. Click Select Inputs and choose body. Map the body to the szInputBuffer:
Figure 12. Mapping inputs to the Invoke Service activity
- From the checklist, select Map Outputs, then click Select Outputs, and choose body. Map runMapReturn to the body:
Figure 13. Mapping outputs from the Invoke Service activity
- Select the Send Response activity, open its configuration, and from the checklist select Map Inputs. Click Select Inputs and choose body. Map the input body to the output body:
Figure 14. Mapping inputs to the Send Response activity
- Save the project.
Testing the orchestration
- The easiest way to test your new orchestration is to run it inside Cast Iron Studio. To do this, click the Verify tab and then click the green triangle to start the orchestration:
Figure 15. Verifying the orchestration
- From the Tools menu, open the HTTP Post Test Utility and specify your URL. If you do not know your host name, you can find it by looking in the Cast Iron error log. The port and the path are the ones you specified above. Click Show Response and paste the contents of the sts.txt file (from the states example in Design Studio) into the Message to post text box. Click Submit to send the data to the orchestration. You should see the output from the map in the response window:
Figure 16. The HTTP Post utility
Conclusion
This article showed you how to create a Web service that calls the WebSphere TX Java API to run a map. You created a Cast Iron orchestration that uses the Invoke Service activity to call this Web service, thereby running a map. Although this article used a simple example map, a simple example of the Java API, and a simple orchestration, you can extend the concepts to any WebSphere TX map hosted in any Web services container and called from any Cast Iron orchestration.
Resources
- WebSphere Cast Iron resources
- WebSphere Cast Iron Studio information center
A single Web portal to all WebSphere Cast Iron Cloud Integration documentation, with conceptual, task, and reference information on installing, configuring, and using WebSphere Cast Iron.
- WebSphere Cast Iron Cloud Integration product page
Product descriptions, product news, training information, support information, and more.
- WebSphere Cast Iron Cloud Integration support
A portal for support problems and their solutions, plus downloads, fixes, problem tracking, and more.
- Transformation Extender resources
- WebSphere Transformation Extender V8.3 library.
A guide to all WebSphere Transformation Extender information, including books, release notes, and information centers for recent versions of the product.
Course Overview
Ember.js Training Courses:
A Guiding Light for Developers
Understanding Ember.js involves examining the rich and varied applications of this framework. This front-end JavaScript framework was created for building heavyweight web apps. Angular has many angles, Backbone covers the fundamentals of development, and Knockout delivers a punch, but for the ultimate fireworks you can rely on Ember.js. It is a wonderful framework for helping developers create precise and useful front-end applications: by convention, for instance, a UserController looks for a UserView and a user template.
Ember.js Training: Enlightening Developers
Ember.js training is a valuable skill to acquire, and early persistence is a plus point: saving a record is easy with Ember.js. Auto-updating templates mean that once a property is created and displayed, any change to it is reflected instantly. Ember.js also ships its own object system, complete with a friendly API.

Ember carries an Array object with methods such as sortBy and filterBy. If building intricate front-end applications is important to you, consider Ember.js training; front-end apps of considerable complexity can be created using the basics of the API.

Ember is a lot of fun to work with and opens up the potential for creating complex front-end apps with a clean, easy-to-read code base. Ember.js is wonderful for shipping enhancements and bug fixes while maintaining code-base stability.
Prerequisites for Ember.js Training
Developers need fundamental know-how of JavaScript and Ruby on Rails, and a Rails development environment must be set up on your system. Ember.js comes in two flavors, JavaScript and CoffeeScript. Integration tests for Ember involve either RSpec integration tests or Cucumber.
Ember Inspector: Light in the Dark
Through this Ember.js training you will learn that the Ember Inspector is an essential tool for debugging Ember.js. The Chrome extension helps you see what is going on inside the app. Once the inspector is installed, refresh the browser and open the Chrome dev tools; the tab titled Ember contains the important tools. The Ember Inspector makes the intricate workings of the app clearly visible.
Ember.js Objects: An Independent System
Ember.js implements its own object system. The base object is Ember.Object, and all other objects extend it: Ember.Controller and Ember.View are just two of the Ember.Objects you will use, and Ember.Object itself can be used directly. A common use of Ember.Object is the creation of services that encapsulate a particular piece of logic.
A subclass of Ember.Object is created by extending it. Objects comprise properties, functions, and observers. Properties come in plain and computed forms. Observers are functions that fire whenever a property they watch changes; though they look like computed properties, they are declared with observes.
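As a framework-free sketch of that idea (this is illustrative plain JavaScript, not Ember's actual implementation), an observer is just a callback fired from a property's setter when the value actually changes:

```javascript
// Framework-free sketch of an observer: a setter that fires registered
// callbacks whenever the value actually changes (illustrative, not Ember's API).
function observable(initial) {
  let value = initial;
  const observers = [];
  return {
    get() { return value; },
    set(next) {
      if (next !== value) {
        value = next;
        observers.forEach((fn) => fn(next)); // fire on change, like an observer
      }
    },
    observe(fn) { observers.push(fn); },
  };
}

const name = observable("Alice");
const seen = [];
name.observe((v) => seen.push(v));
name.set("Alice");                 // no change, observers stay quiet
name.set("Bob");                   // change fires the observer
console.log(seen);                 // -> [ 'Bob' ]
```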
Through this Ember.js training you will learn that the Ember object system makes it easy to write modular, reusable code. Any existing object in the app can be extended; extending objects is the pattern used throughout Ember, letting you extract common functionality or pull in functionality from other objects. Ember objects also have an init function that runs when an object is created.

init is where set-up work happens; if you use more specific Ember objects such as Route or Controller, follow their conventions instead. You can also add more properties and observers by reopening an object. Base objects function much like classes in other languages, and you can even define the equivalent of class methods on them. The ability to compose object-oriented JavaScript is one of Ember's key features. Everything in Ember starts with routes.
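The extend/create/init pattern described above can be sketched in a few lines of plain JavaScript (illustrative only -- the real Ember.Object adds far more, such as computed properties, mixins, and reopening):

```javascript
// Minimal, framework-free sketch of an extend/create/init object system,
// loosely modeled on Ember.Object (illustrative only -- not Ember's code).
const BaseObject = {
  init() {},                      // set-up hook, like Ember's init
  extend(props) {                 // "subclass" via prototype chaining
    return Object.assign(Object.create(this), props);
  },
  create(overrides) {             // instantiate, apply overrides, run init
    const obj = Object.assign(Object.create(this), overrides);
    obj.init();
    return obj;
  },
};

const Greeter = BaseObject.extend({
  name: "world",
  init() {
    this.greeting = "Hello, " + this.name + "!";
  },
});

const g = Greeter.create({ name: "Ember" });
console.log(g.greeting); // -> Hello, Ember!
```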
Ember Routing: Basics
This section covers the mechanics of routing in Ember. By default, Ember uses the browser's hashchange event to find out whether the route has changed, and it implements its own HashLocation object for dealing with this. With HashLocation, the Ember route is visible after the # in the URL. You may not want to serve the Ember app directly from the root URL; in that case, the root URL should be specified to Ember. Note that not all browsers support the history API.
Through this Ember.js training you will learn that Ember can also be configured with AutoLocation, which uses HistoryLocation if the user's browser supports it and falls back to HashLocation otherwise. Both HashLocation and HistoryLocation implement Ember.js's location API, and you can write your own location class that responds to the same API.
Through this Ember.js training you will learn that a resource can be used to nest routes within another resource, and a route can be used for new UI that does not require a particular record; there are also dead-end routes. Within Ember.js, the UI for an active route stays visible: if you can see the route in the URL bar, it is active and its UI should be visible.
Then there is the Ember object flow, where activation of a route flows downward to the linked objects. The Ember Route object is different from the Router, which creates named URL routes. The Route object is a particular kind of Ember object that performs setup and manages what happens when its URL is visited.
The store is an Ember Data construct through which you fetch persisted records of a given type. Routes are the place from which you reach across the application; Route objects are your key ally.
Three Controllers, One Model
Controllers handle non-persistent logic associated with a specific piece of UI, and wrap a model or an array of models. Functions, properties, and observers on controllers work, with a few exceptions, much like those on normal Ember objects.
The three types of controllers are ObjectController, ArrayController, and Controller. ObjectController is used when the controller's route revolves around a single model; ArrayController when the route revolves around an array of models; and plain Controller when the route is not fetching any models at all. Another critical use for controllers is handling template actions.
Ember Views: Organizing the Code
Action handlers go inside the actions object on the controller -- an Ember convention for organizing code. The view can be seen as a wrapper around the template: it holds the JavaScript that executes against the template and manages class names and attributes. The view does not carry the current model, however. Views also have numerous hooks you can use.
Ember: Shorthand is Supreme
Ember provides numerous computed shorthand functions for creating common properties. Views literally wrap templates in a div with a generated ember id, and this wrapping element can easily be customized.
Ember will create an attribute named after the property, with the attribute's value taken from the property's return value. If the attribute should have a different name than the property, that can be specified.
A UserView will look for a user template by default, but if a different one is required, the templateName can be specified. User-interaction events such as click, mouseEnter, and doubleClick are handled per view: define a function with the event's name and it will be called when the event occurs. Event listeners apply to the entire view, so clicking anywhere within it will trigger the click function.
Ember Template:
Ember templates are Handlebars files. The templates have direct access to the controller's properties, and also to the view's properties when prefixed with view. Handlebars provides if, else, and unless helpers, which accept only a single argument -- no ands or ors are permitted, so a combined boolean requires a computed property on the controller. The Ember template is designed to contain zero application logic.

Handlebars offers two ways to loop through a collection: the first gives access to the current object as this (within Emblem, the this is implicit, though you can access it if required), while the second names the loop variable explicitly.
Templates are equipped with helpers that render a view, template, or controller, letting you reuse and organize logic. Ember also provides a link-to helper for transitioning to another route; you pass it the route's name and any models to send along.
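Putting those pieces together, a template might combine if, each, and link-to like this sketch (the property and route names here are invented for illustration):

```handlebars
{{#if isLoggedIn}}
  <ul>
    {{#each user in users}}
      <li>{{link-to user.name "user.profile" user}}</li>
    {{/each}}
  </ul>
{{else}}
  <p>Please log in.</p>
{{/if}}
```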
Ember.js is one of the more recent members of the JavaScript framework family. It was born of a project called SproutCore, created in 2007 and heavily employed by Apple for apps such as MobileMe. Ember.js is perfect for building precise web apps, doing away with boilerplate and providing a standard app architecture.
Ember integrates a templating engine called Handlebars, which provides the important feature of two-way data binding. Ember also offers auto-updating templates, state management, and computed properties, making it an advanced and important player. jQuery is Ember's only dependency, and Ember.js also helps with structured data management. A code block typically begins with the App namespace and extends one of the prepackaged views, such as TextField.
Views can also define arbitrary properties and functions, alongside the built-in helper functions. Once the TextField view is defined, the corresponding view helper code is added to the HTML file. Any template helper beginning with the word view refers to a View defined in the JavaScript code.
This portion of the template comprises a view helper and a button tag with an additional helper. If the field is empty, the text in the placeholder attribute is shown in the input field; it disappears as the user types. Ember lets developers apply standard HTML5 attributes within the constructed views. The second attribute shows Ember's data bindings at work.
Ember.js employs a set of conventions to help determine what you are trying to achieve. As the variable's contents change, the value in the input field is automatically updated, and vice versa. The same function is called when the user presses the Enter key within the text field. Ember provides a number of helper functions to make writing applications easier.
Through this Ember.js training you will learn that these functions are available on every Ember object and provide rapid access to properties and functions. Ember's data bindings are bidirectional: as soon as a value is typed into the input field, the valueBinding attribute of the input field view is updated. Once the username has been obtained, a test ensures it is not empty.
Why You Should Learn Ember.js Training?
Selecting an opinionated framework means you can move between projects with ease: proper patterns follow, and leveraging Ember's conventions gets the job done. Ember makes it easy to onboard more engineering talent, and building on an established JS framework gives you accessibility, norms, and conventions out of the box.
Within Ember, there is strict separation of concerns between the different objects that are exposed; each part of the framework has an extremely distinct role.
Another reason to learn this framework is that a fresh release of Ember ships every few weeks, and seamless, smooth upgrades are part of why Ember has become so valued. Ember.js also has an intricate feedback loop with the developer community, keeping it aligned with the needs of web app development. Fresh releases bring new features, with breaking changes handled through migrations.
Deprecated features remain supported until a later revision. Ember.js has a front-end-first philosophy: most features are not constrained by back-end limitations, and the front-end team can define APIs and work against test data ahead of back-end development. With a single command, you get a mock back end for the application that serves fixture data in the same fashion as a genuine back end.
Ember also has a fast-growing ecosystem of add-ons and components, most of them shared within the growing community of users. Internal member components are available, as are Ember add-ons. Ember.js works really well on mobile and keeps browsers moving at the speed of light. HTMLbars is the new Ember.js templating library.

Ember itself runs extremely fast and works well on mobile browsers, making it a framework of choice, and the new templating library is an asset that will further improve performance.
Ember.js CLI: Three-Fold Advantage:
Ember implements web components and respects W3C specs, and it uses ES6 modules for organizing code into separate files. Ember CLI adds Content Security Policy (CSP) support for enhanced security. Ember CLI is the standard tool for Ember.js: it streamlines Ember apps and makes it easy to share common code among different Ember applications.
This makes Ember developers extremely productive: it takes care of the work needed to wire code and third-party libraries into the app. Being able to write tests instead of spending time on plumbing code is another advantage of Ember.js.
Other JavaScript frameworks offer command-line utilities, but none beats Ember CLI. Ember developers can write their applications in less time.
Developers on other stacks have the twin worry of developing app code and updating builds while adding third-party libraries. Ember CLI has some major advantages over tools like Yeoman and Grunt; it uses Broccoli.js as its build pipeline.
Moreover, its conventions are baked in, and it integrates with Ember components, QUnit testing, and more. Code becomes more future-friendly through the use of the ES6 transpiler for compiling Ember.js. Because Ember.js is an opinionated framework, it lowers the amount of time spent deciding on the best code architecture.
Developers can also join Ember projects faster, spending less time getting familiar with the code base. Ember CLI saves developers the hassles of organizing code into modules, setting up build tools, fashioning mock servers for front-end apps, and composing common functionality within the application -- much of this through reuse of add-ons from the community.
Ember.js Training Conclusion:
Thus, Ember.js has many advantages: it is a sparkling combination of efficiency and creativity as far as web development tools and frameworks are concerned. Ember.js is known as a leading web development framework -- truly a developer's delight -- and its tooling outshines that of nearly every other stack, including Grunt- and Yeoman-based setups.
Launch a CI Service in 100 Lines of Clojure!
There are many factors that contribute to the quality of the code we produce. Undoubtfully, adopting Continuous Integration is one of the biggest leaps one can make when closing the gap between The Holy Grail Of Software Engineering.
Over time there have emerged quite a few CI services that make it easy to integrate changes to our code. Unless you've been living in vacuum for the past few years, you must've heard names like Jenkins, TravisCI or CircleCI.
But what if I told you, you could roll your own YetAnotherCI in just around 100 lines of Clojure? If this sounds interesting, make sure to follow along and ship YACI with us.
Preliminaries
While various CI services may have some differences between them, ultimately they have one goal:
- They have to run some code,
- and they have to report what the outcome is.
The code will usually involve triggering tests or trigerring a build, but in principle it doesn't matter what kind of code it runs, as long as the end user is satisfied with the result it produces (much like in classic programs).
When designing a CI service we also have to take into account that there will be many users, submitting jobs concurrently – and their submissions should be treated more or less equally. In order to ensure that we'll have to employ some sort of a queueing mechanism.
Then, we have another factor – we have to assign these jobs to our worker machines. We can't place all jobs on a single machine, or else it will choke under the load. We have to enforce some limits and make sure worker nodes only get more work when they are ready to receive it. That's where we arrive to the next point – we need a scheduler that will poll our queue and assign jobs accordingly.
Now that we know what we're up to, let's get down to work.
Orchestration
Luckily for us, all the things I have mentioned above have already been done, and instead of writing our own queues and schedulers we can just roll a container orchestrator like Kubernetes which will take care of all that.
If you want to get an overview of what Kubernetes is, [Kelsey Hightower gave a great talk on Kubernetes] () at PuppetConf 2016.
However, for the purpose of this article it will suffice that you know that you can just give your tasks to Kubernetes and it will run them using available resources.
Conveniently enough, Kubernetes also has an API we can easily interact with.
That's a huge leap forwards, because what we now have to implement is basically a wrapper around this API, that will hide the details from our end users.
But first let's get a Kubernetes cluster we could work on. Incidentally, I already had one in my pocket (check yours too) – but if you don't have one, you can get one up and running from most cloud providers in a matter of minutes.
Alright, now that we have our cluster, let's proxy it, so the API is accessible locally.
mewa@sea$ kubectl proxy Starting to serve on 127.0.0.1:8001
Let's confirm it's working:
mewa@sea$ curl localhost:8001/api { "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "35.205.116.105" } ] }
Great, let's get down to writing our API wrapper.
Writing our API
As mentioned earlier, we have one goal — to run code and report. Since CI jobs can be long running we're going to split it into 2 separate endpoints: one for posting a job, another for retrieving info about it.
We're going to use ring for our API server.
(require '[ring.adapter.jetty :as jetty])
Next, we'll create
/run and
/get endpoints, which will run their respective handlers. We'll expose them at port 4000.
(defn route "Given a map of HANDLERS, returns a Ring handler which matches requst URIs on map keys and executes handlers associated with those keys" [handlers] (fn [request] (if-let [handler (handlers (:uri request))] (into {:headers {"Content-Type""application/json"}} (handler request)) {:status 404 :body "Not found"}))) (defn -main [& args] (jetty/run-jetty (route {"/run" run-handler "/get" get-handler}) {:port 4000}))
Before we can actually write these handlers we'll need code for interfacing with Kubernetes API. The only obstacle (well, not really) here is that we must first generate it using
swagger-codegen.
mewa@see$ curl localhost:8001/swagger.json -o api.json mewa@sea$ java -jar swagger-codegen-cli-2.3.1.jar \ generate -i api.json -l clojure -o kubernetes
Unfortunately, as I'm writing this article, this won't work due to a bug in
swagger-codegen. You can either pull relevant code from my repo or apply required patch to the generated code manually.
But minor hurdles won't stop us from shipping YACI, will they? Let's proceed with code for posting new jobs.
(require '[kubernetes.api.batch-v- :as k8sbatch]) (defn new-job "Create job which executes CMD" [cmd] (let [job-name (str "k8s-job-" (java.util.UUID/randomUUID))] (k8sbatch/create-batch-v1-namespaced-job "default" {:metadata {:name job-name} :spec {:template {:spec {:containers [{:image "alpine" :name "k8s-job" :command ["sh" "-c" cmd]}] :restartPolicy "Never"}}}}) {:name job-name}))
If you've taken a look at Kubernetes job specs you'd see it's actually a 1:1 mapping.
In order to verify it's working, we'll need to supply a context for the Kubernetes API. Let's create a helper function that will take a function and run it in the context of our proxied Kubernetes API.
(require '[kubernetes.core :as core]) (def kube-config {:base-url ""}) (defn run-k8s [f & args] (core/with-api-context kube-config (try (apply f args) (catch Exception e {:error (or (ex-data e) e)}))))
Let's run it in a REPL
repl=> (run-k8s new-job "echo success") {:name "k8s-job-5079d2cf-acc7-4787-96d6-12d83be720a6"}
and verify the job was created
mewa@sea$ kubectl get jobs NAME DESIRED SUCCESSFUL AGE k8s-job-5079d2cf-acc7-4787-96d6-12d83be720a6 1 1 1m mewa@sea$ kubectl get pods \ --selector job-name=k8s-job-5079d2cf-acc7-4787-96d6-12d83be720a6 \ --show-all NAME READY STATUS RESTARTS AGE k8s-job-5079d2cf-acc7-4787-96d6-12d83be720a6-2fs6t 0/1 Completed 0 1m mewa@sea$ kubectl logs k8s-job-5079d2cf-acc7-4787-96d6-12d83be720a6-2fs6t success
Great, it seems to work. Now let's retrieve these logs programatically.
The steps required are identical to what we've been doing in with
kubectl.
- Retrieve pods for job
id,
- Retrieve logs for pods returned
(require '[kubernetes.api.core-v- :as k8scorev]) (defn job-pods "Get pods for job ID and return a channel with information about each pod" [id] (map (fn [v] {:name (get-in v [:metadata :name]) :status (get-in v [:status :phase])}) (:items (k8scorev/list-core-v1-namespaced-pod "default" {:label-selector (str "job-name=" id)})))) (defn pod-logs "Get logs for each item in channel PODS and return a channel with logs for each pod" [podinfo] (let [pod (:name podinfo) status (:status podinfo)] {:pod pod :status status :log (k8scorev/read-core-v1-namespaced-pod-log pod "default")}))
Now we can easily retrieve our logs. Note that we have to force evaluation of returned sequence, with
doall, to get side effects (run API calls).
(defn get-job "Get information for job with given ID" [id] (let [pods (job-pods id)] (doall (map pod-logs pods))))
Last thing that's left is hooking these functions in as request handlers in our API. It's pretty straight-forward.
(require '[cheshire.core :refer :all]) (defn run-handler "Ring handler which parses REQUEST and starts a job" [request] (let [handler (fn [cmd] (run-k8s new-job cmd)) resp (-> request :body slurp handler)] {:status 200 :body (generate-string resp {:pretty true})})) (defn explode-query "Explode query string Q into a map" [q] (reduce #(apply assoc %1 %2) {} (map #(s/split % #"=") (s/split q #"&")))) (defn get-handler "Ring handler which parses REQUEST and returns job info" [request] (if-let [id ((explode-query (:query-string request)) "id")] (let [jobinfo (run-k8s get-job id)] {:status 200 :body (generate-string jobinfo {:pretty true})}) {:status 400 :body "id query param is required"}))
With our handlers being ready, it's time to package our API and run it on our servers.
mewa@sea$ lein uberjar mewa@sea$ java -jar k8s-0.1.0-SNAPSHOT-standalone.jar
Let's test our api in another terminal.
mewa@sea$ curl localhost:4000/run -d 'echo great success' { "name" : "k8s-job-730cdd63-cefb-44f2-b760-cde00c24a35e" } mewa@sea$ curl localhost:4000/get?id=k8s-job-730cdd63-cefb-44f2-b760-cde00c24a35e [ { "pod" : "k8s-job-730cdd63-cefb-44f2-b760-cde00c24a35e-xsz7k", "status" : "Succeeded", "log" : "great success\n" } ]
What can I say, the log is pretty accurate. Great success!
Before we finish, let's just do a final check to make sure I wasn't lying to you:
mewa@sea$ wc -l < src/k8s/core.clj 95
95 lines of code, including docstrings! If we wrote this in Java our imports would have more!
Wrapping up
Now that our CI is packaged all that's left to do is to buy an
.io domain, hire a marketing team and start collecting money!
I hope you had as much fun reading through this as I had preparing it. As always, code used in this article is present on my Github.
And when you ship — remember, the
io part is crucial!
Originally published on marcinchmiel | https://functional.works-hub.com/learn/launch-a-ci-service-in-100-lines-of-clojure-66844?utm_source=rss&utm_medium=automation&utm_content=66844 | CC-MAIN-2020-16 | refinedweb | 1,606 | 59.64 |
US7580953B2 - System and method for schema lifecycles in a virtual content repository that integrates a plurality of content repositories
Info
- Publication number: US7580953B2
- Application number: US 11098781
- Authority: US
- Grant status: Grant
- Prior art keywords: node, content, embodiments, user
This application claims priority to the following applications, each of which is hereby incorporated by reference in its entirety:
This application is a Continuation in Part of “SYSTEM AND METHOD FOR CONTENT LIFECYLES”; U.S. patent application Ser. No. 10/911,099; Inventors: Rodney McCauley et al., filed on Aug. 4, 2004; and “SYSTEM AND METHOD FOR CONTENT LIFECYCLES”; U.S. Provisional Application No. 60/561,796, Inventors: Rodney McCauley et al., filed on Apr. 13, 2004.
This application is related to the following co-pending applications which are each hereby incorporated by reference in their entirety:
SYSTEM AND METHOD FOR DELEGATED ADMINISTRATION, U.S. patent application Ser. No. 10/279,543, Filed on Oct. 24, 2002, Inventors: Philip B. Griffin, et al.,
SYSTEM AND METHOD FOR RULE-BASED ENTITLEMENTS, U.S. patent application Ser. No. 10/279,564, Filed on Oct. 24, 2002, Inventors: Philip B. Griffin, et al.,
SYSTEM AND METHOD FOR HIERARCHICAL ROLE-BASED ENTITLEMENTS, U.S. patent application Ser. No. 10/367,177, filed on Feb. 14, 2003, Inventors: Philip B. Griffin, et al.,
METHOD FOR ROLE AND RESOURCE POLICY MANAGEMENT, U.S. patent application Ser. No. 10/367,462 filed on Feb. 14, 2003, Inventors: Philip B. Griffin, et al.,
METHOD FOR ROLE AND RESOURCE POLICY MANAGEMENT OPTIMIZATION, U.S. patent application Ser. No. 10/366,778, filed on Feb. 14, 2003, Inventors: Philip B. Griffin, et al.,
METHOD FOR DELEGATED ADMINISTRATION, U.S. patent application Ser. No. 10/367,190, filed on Feb. 14, 2003, Inventors: Philip B. Griffin, et al.
SYSTEM AND METHOD FOR CONTENT VERSIONING, U.S. patent application Ser. No. 60/561,780, filed on Apr. 13, 2004, Inventors: Rodney Mc Cauley, et al.,
SYSTEM AND METHOD FOR CONTENT AND SCHEMA VERSIONING, U.S. patent application Ser. No. 60/561,783, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR CONTENT AND SCHEMA LIFECYCLEs, U.S. patent application Ser. No. 60/561,785, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
METHODS FOR DELEGATED ADMINISTRATION, U.S. patent application Ser. No. 10/819,043, filed on Apr. 6, 2004, Inventors: Manish Devgan, et al.,
SYSTEM AND METHOD FOR VIRTUAL CONTENT REPOSITORY DEPLOYMENT, U.S. patent application Ser. No. 60/561,819, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR VIRTUAL CONTENT REPOSITORY ENTITLEMENTS, U.S. patent application Ser. No. 60/561,778, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR CONTENT TYPE MANAGEMENT, U.S. patent application Ser. No. 60/561,648, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR CUSTOM CONTENT LIFECYLES, U.S. patent application Ser. No. 60/561,782, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR CONTENT TYPE VERSIONS, U.S. patent application Ser. No. 60/561,759, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR INFORMATION LIFECYCYLE WORKFLOW INTEGRATION, U.S. patent application Ser. No. 60/561,646, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR BATCH OPERATIONS IN A VIRTUAL CONTENT REPOSITORY, U.S. patent application Ser. No. 60/561,799, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR VIEWING A VIRTUAL CONTENT REPOSITORY, U.S. patent application Ser. No. 60/561,647, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
SYSTEM AND METHOD FOR SEARCHING A VIRTUAL CONTENT REPOSITORY, U.S. patent application No. 60/561,818, filed on Apr. 13, 2004, Inventors: Rodney McCauley, et al.,
FEDERATED MANAGEMENT OF CONTENT REPOSITORIES, U.S. patent application Ser. No. 10/618,513, filed on Jul. 11, 2003, Inventors: James Owen, et al.,
VIRTUAL REPOSITORY CONTENT MODEL, U.S. patent application Ser. No. 10/618,519, filed on Jul. 11, 2003, Inventors: James Owen, et al.,
VIRTUAL CONTENT REPOSITORY BROWSER, U.S. patent application Ser. No. 10/618,379, filed on Jul. 11, 2003, Inventors: Jalpesh Patadia, et al.,
SYSTEM AND METHOD FOR A VIRTUAL CONTENT REPOSITORY, U.S. patent application Ser. No. 10/618,495, filed on Jul. 11, 2003, Inventors: James Owen, et al.,
VIRTUAL REPOSITORY COMPLEX CONTENT MODEL, U.S. patent application Ser. No. 10/618,380, filed on Jul. 11, 2003, Inventors: James Owen, et al.,
SYSTEM AND METHOD FOR SEARCHING A VIRTUAL REPOSITORY CONTENT, U.S. patent application Ser. No. 10/619,165, filed on Jul. 11, 2003, Inventor: Gregory Smith,
VIRTUAL CONTENT REPOSITORY APPLICATION PROGRAM INTERFACE, U.S. patent application Ser. No. 10/618,494, filed on Jul. 11, 2003, Inventors: James Owen, et al.,
SYSTEMS AND METHODS FOR PORTAL AND WEB SERVER ADMINISTRATION, U.S. patent application Ser. No. 10/786,742, Inventors: Christopher Bales, et al., filed on Feb. 25, 2004.
The present disclosure relates to content management and, in particular, to versioning content and providing definable content lifecycles.
Aspects of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. It should be noted that references to "an", "one" and "various" embodiments in this disclosure are not necessarily to the same embodiment, and such references mean at least one. A VCR integrates a plurality of content repositories such that they appear and behave as a single content repository from the standpoint of application layer 120. The VCR can also add content services, such as versioning and lifecycles, to repositories that natively lack them.
In various embodiments, a display template (or "template") can be used to render content for presentation.
In various embodiments and by way of an illustration, display templates can be implemented using HTML (Hypertext Markup Language) and JSP (Java® Server Pages). By way of a further illustration, such a display template can be accessed from a web page through a JSP tag. For a user or a process to be in a role, they must belong to PMembers and satisfy the Membership Criteria. For purposes of illustration, assume the following policies:
Policy1=Printer504+Read/View+Marketing
Policy2=Printer504+All+Engineering

By way of illustration, the Browse privilege can be considered the least dominant of the privileges for the Printer504 node. Addition of any other privilege will implicitly include Browse. For example, if the next step up is the Read/View capability, selection of Read/View will implicitly include the Browse privilege. Instances of a schema (e.g., content nodes) can inherit these policies unless overridden by a more local policy. For purposes of illustration, assume the following policies:
Policy5=Press_Release+Read/View+Everyone
Policy6=Press_Release+All+Public_Relations

Instances of a schema (e.g., content nodes) would inherit these property policies unless overridden by a more local policy.
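The policy and inheritance behavior described above can be sketched as follows. This is an illustrative model only; the patent specifies no implementation, and the Python names here (POLICIES, HIERARCHY, capabilities_for) are invented for the sketch.

```python
# Hypothetical sketch of the policy model: a policy grants a capability
# on a resource to a role; the most local policy along a node's
# hierarchy wins, and any granted privilege implicitly includes Browse.

POLICIES = {
    # resource -> list of (capability, role), matching Policy1/2/5/6 above
    "Printer504": [("Read/View", "Marketing"), ("All", "Engineering")],
    "Press_Release": [("Read/View", "Everyone"), ("All", "Public_Relations")],
}

HIERARCHY = {
    # child -> parent; a content node instance inherits from its schema
    "Press_Release_123": "Press_Release",
}

def capabilities_for(resource, roles):
    """Walk from RESOURCE up the hierarchy; the first (most local)
    matching policies decide, overriding anything inherited."""
    node = resource
    while node is not None:
        grants = [cap for cap, role in POLICIES.get(node, []) if role in roles]
        if grants:
            return set(grants) | {"Browse"}  # Browse is implied
        node = HIERARCHY.get(node)
    return set()

# A Marketing user gets Read/View (plus implied Browse) on Printer504;
# a Public_Relations user inherits All on a Press_Release instance.
marketing = capabilities_for("Printer504", {"Marketing"})
pr = capabilities_for("Press_Release_123", {"Public_Relations"})
```

A more local policy on Press_Release_123 itself would shadow the schema's policies, since the walk stops at the first level where any policy matches.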
In various embodiments, content and schema nodes can follow lifecycles. In certain aspects of these embodiments, a lifecycle can set forth: a set of states through which a node can pass; actions that can occur as part of or resulting from state transitions; and actors that can participate in the lifecycle. By way of illustration, lifecycles can be used to model an organization's content approval process. In various embodiments, lifecycles can be nested within lifecycles. This allows for complex lifecycles to be compartmentalized for easy manipulation and development. Various embodiments include a lifecycle definition, an extensible lifecycle system, an interactive lifecycle design tool to generate and/or modify lifecycle definitions, and means for lifecycles to interact with other systems. If a content repository does not natively support lifecycles, support can be provided by the VCR.
In various embodiments, a lifecycle can be associated with, or be a property of, a node. In aspects of these embodiments, if a lifecycle is associated with a hierarchy node, the children of the hierarchy node will also be associated with the lifecycle. Likewise, if a lifecycle is associated with a schema, nodes instantiated based on the schema will also be associated with the lifecycle. Lifecycles can also be directly associated with content nodes.
In various embodiments and by way of illustration, a node can transition from a current state to a new state. Before, during or after a transition, one or more actions can be performed. Actions can optionally operate on and/or utilize the node. Actions can include any type of processing that can be invoked in the course of the lifecycle. By way of an example, actions can include function/method calls, remote procedure calls, inter-process communication, intra-process communication, interfacing with hardware devices, checking a node into/out of version control, assigning the node to a user, group or role, performing some kind of processing on the node (depending on any policies that may be defined on the node), providing a notification to users, groups and/or roles, and other suitable processing. Actions can also be specified as command(s), directive(s), expression(s) or other constructs that can be interpreted or mapped to identify required processing. For example, high-level action directives such as “publish” could cause a content node to be published, and an e-mail or other message to be sent to certain parties. It will be apparent to those of skill in the art that any action is within the scope and the spirit of the present disclosure.
An exemplary lifecycle for a content node representing a news article is illustrated in Table 6 and in the accompanying figure.

The exemplary lifecycle begins when a news article is created and enters the Draft state 204.
The news article can be modified by user(s) and/or process(es) while in the Draft state and then submitted for approval. By way of an example, a user can check-out the news article (assuming it is under version control), modify it, and then check-in the article with the changes. Before checking the article in, the user can change the state property from "Draft" to "Ready for Approval" in order to bring about a transition to the Ready for Approval 208 state. By way of a further illustration, a user interface can present a button or a menu option that the creator can select when finished editing the article. Once selected, the article can be automatically submitted to the lifecycle where it can progress to the next state. In this illustration, the transition through decision point D1 206 to the Ready for Approval state is constrained to users in the Creator role. Thus, only a user/process that created the article can cause the article to transition into the Ready for Approval state.
The transition from Draft to Ready for Approval also has an accompanying action, Submit. By way of an example, this action can cause a notification to be sent to those interested in reviewing articles for approval. Alternatively, or in addition to this, the news article can be assigned to users/groups/roles. In this way, users/processes that are in the assigned users/groups/roles can review it while it is in the Ready for Approval state. From the Ready for Approval state, there is a transition through decision point D2 210. The D2 decision point specifies that a user/process in the Approver role can cause a transition to the Draft state 204 or to the Published state 212. If the transition is to the Draft state, the action associated with the transition will be to Reject the article. A rejected article will repeat the lifecycle path from Draft to Ready for Approval. If the transition is to the Published state, however, the action will be to Accept the article. Once the article is in the Published state, a user/process in the role of Editor or of Creator can cause a transition to the Retired state 216. A user in the role of Creator can cause a transition to the Draft state. Transitioning from the Published state to the Draft state causes an Update action whereas transitioning from the Published state to the Retired state causes a Retire action.
In aspects of these embodiments, roles can be organized into a role hierarchy such that superior roles can skip state transitions required of inferior roles. By way of illustration, suppose the Approver role was superior to the Creator role. If the current lifecycle state of the article was Draft, a user in the role of Approver could skip the Ready for Approval state and transition the article all the way to the Published state. In one embodiment, actions associated with the decision points D1 and D2 could be automatically invoked even though the Ready for Approval state was skipped.
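The news-article lifecycle just described can be sketched as a role-guarded state machine. This is an illustrative Python sketch; the transition table mirrors the states, roles and actions of Table 6, but the class and method names are invented and the patent defines no concrete API.

```python
# Illustrative state machine for the news-article lifecycle: each
# transition names the roles allowed to take it and the action that
# accompanies it. All identifiers are invented for this sketch.

TRANSITIONS = {
    # (from_state, to_state): (allowed_roles, action)
    ("Draft", "Ready for Approval"): ({"Creator"}, "Submit"),
    ("Ready for Approval", "Draft"): ({"Approver"}, "Reject"),
    ("Ready for Approval", "Published"): ({"Approver"}, "Accept"),
    ("Published", "Draft"): ({"Creator"}, "Update"),
    ("Published", "Retired"): ({"Creator", "Editor"}, "Retire"),
}

class Node:
    def __init__(self):
        self.state = "Draft"
        self.actions = []  # actions fired so far (e.g. notifications)

    def choices(self, roles):
        """Transition targets available to a user holding ROLES."""
        return [to for (frm, to), (allowed, _) in TRANSITIONS.items()
                if frm == self.state and allowed & roles]

    def transition(self, to_state, roles):
        spec = TRANSITIONS.get((self.state, to_state))
        if spec is None or not (spec[0] & roles):
            raise PermissionError(
                f"{roles} cannot move {self.state} -> {to_state}")
        self.actions.append(spec[1])  # e.g. Submit notifies reviewers
        self.state = to_state

article = Node()
article.transition("Ready for Approval", {"Creator"})  # creator submits
article.transition("Published", {"Approver"})          # approver accepts
```

A role hierarchy as described above could be layered on top by expanding a superior role into the set of roles it dominates before intersecting with each transition's allowed roles.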
In various embodiments and by way of illustration, lifecycles can be defined using a text editor and/or an IDE. From a text editor a user can create a full lifecycle definition in a language (e.g., XML). In a graphical environment, a user can create different states and then connect them together to represent transitions. In an embodiment, a graphical depiction of a lifecycle can appear as in the accompanying figure.
In various embodiments, third party lifecycle engines can be invoked. This allows additional functionality to be seamlessly incorporated into the lifecycle model. In one embodiment, this can be accomplished from within a lifecycle through lifecycle actions. In another embodiment, third party lifecycles can be invoked through a callback mechanism. By way of illustration, the VCR API can invoke a third party lifecycle in response to certain events, such as when a content node/scenario has been modified and/or its state property has changed. In this illustration, a process which implements a third party lifecycle can register to receive callbacks when these events occur. The callback notification can also include the VCR node identifier and optionally context information such as information about the user/process that caused the event.
In various embodiments, lifecycles can be utilized from other processes. The VCR API includes a lifecycle interface to allow access to a node's lifecycle definition. In addition, the lifecycle interface allows a process to drive a node through the lifecycle by providing functionality such as the ability to ascertain a node's current state, place the node in a new state based on transition choices available from its current state, and invoke actions associated with a state transition.
Various embodiments of the system include a version control capability that is available for nodes such that a history of states is maintained over the lifetime of a node. In various embodiments, one version of a node is considered the published version. In certain aspects of these embodiments, versions are given names (or other suitable identifiers) and can be stored or accessed via a version list associated with the particular node. In aspects of these embodiments, the VCR can provide support for versioning if the repository in which the node is persisted does not.
In various embodiments, a version of a node can also include identification of the user/process that last modified the version and a description or comment. A node under version control can be moved or copied within the VCR. In various embodiments, version history travels with the node during a move or a copy: when a user or a process moves a content node, the history of that node moves with it. In the case of a roll back, the parent version of the roll back can be indicated in the history. In various embodiments, a node's version can be "rolled back" (i.e., restored) to a previous version. In aspects of these embodiments, upon roll back the selected content node version to restore becomes the latest version of the content node. In certain of these embodiments, the rolled back version is automatically given a new version identifier (e.g., old version number+1).
Each node can have a lock (e.g., semaphore or other suitable means for controlling access to the node). Locks prevent a node from being modified by more than one user/process at the same time. When a node is checked-out of version control, the user acquires the lock. Lock acquisition prevents others from checking-out versions associated with the node until it is checked-in. Locks can be employed on the node level or on the version level. If version-level locks are employed, it is possible for more than one version of a node to be checked-out at the same time. Version control can be turned on/off for a given node, repository, and/or VCR. In one embodiment, a node that does not utilize version control has only a single version in its version list.
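The check-out/check-in, lock, save and roll-back behavior described above can be sketched as follows. This is illustrative Python; the method names are invented, since the patent specifies behavior rather than an API.

```python
# Sketch of node-level version control: check-out takes the lock,
# check-in releases it and appends a new version, save keeps the lock
# without creating a version, and roll-back re-issues an old version
# as the new latest one. Names are invented for this sketch.

class VersionedNode:
    def __init__(self, content):
        self.versions = [content]  # version list; last entry is latest
        self.comments = []         # check-in descriptions
        self.lock_holder = None
        self.working_copy = None

    def check_out(self, user):
        if self.lock_holder is not None:
            raise RuntimeError(f"locked by {self.lock_holder}")
        self.lock_holder = user
        self.working_copy = self.versions[-1]

    def save(self, content):
        # Stays in the user's workspace; no new version is generated.
        self.working_copy = content

    def check_in(self, user, comment=""):
        if user != self.lock_holder:
            raise PermissionError("lock held by another user")
        self.versions.append(self.working_copy)
        self.comments.append(comment)
        self.lock_holder = self.working_copy = None

    def roll_back(self, index):
        # The restored version becomes the latest (old latest + 1).
        self.versions.append(self.versions[index])

node = VersionedNode("v1 text")
node.check_out("alice")
node.save("v2 text")
node.check_in("alice", "edited intro")
node.roll_back(0)   # version 3 now holds the content of version 1
```

Version-level locks, as mentioned above, would move lock_holder from the node onto individual versions so more than one version could be checked out at once.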
Versioning includes the ability for a user or a process to check-out a node version for editing. By way of illustration, a user can click on an item in a tree browser view of a VCR.
In one embodiment, a user interface can provide a rendering of a node's properties. For example, a user can select an 'Edit Content' button in order to edit the node's properties. This action can attempt to check-out the node if it is not already checked-out. In various embodiments, a given user's checked-out nodes appear in the user's workspace.
Upon check-in, a new version of the node is available in the system. In various embodiments, checking-in a node causes the node to appear in the VCR but not in the user's workspace. A user/process can enter a description of the changes made to the node during check-in time. The description can be saved along with the version. Checking a node into the workspace also causes the associated lock to be released (so that others can edit) and, if versioning is turned on, creates a new version of the node. However, if the user merely saves their work rather than checking it in, the node will remain in the workspace and a new version will not be generated. In aspects of these embodiments, if a lifecycle is associated with the node, checking the node in can submit the node to the lifecycle.
In various embodiments, a user/process can navigate to a node and perform a delete action. In one embodiment, deleting a node changes its state (e.g., to Retired or Deleted). A user can view all deleted nodes at a later time and choose to permanently delete them or to un-delete them. In another embodiment, deleting a node permanently removes it from the VCR. In one embodiment, a node can be deleted regardless of its lifecycle state. In one embodiment, a node cannot be deleted if it is checked-out. In one embodiment, deleting a node causes all of the node's children to be deleted. In one embodiment, only a node's checked-in children are deleted. In yet another embodiment, a deleted node (and, optionally, its children) can be un-deleted.
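One of the delete embodiments above, soft deletion that cascades to checked-in children and can later be undone, can be sketched as follows. This is illustrative Python with invented names; the patent describes several alternative embodiments and this sketch picks just one.

```python
# Sketch of soft deletion: deleting marks a node (and its checked-in
# children) Deleted rather than removing it, refuses checked-out
# nodes, and can be undone later. Names are invented for this sketch.

class TreeNode:
    def __init__(self, name, checked_out=False):
        self.name = name
        self.checked_out = checked_out
        self.state = "Published"
        self.children = []

    def delete(self):
        if self.checked_out:
            raise RuntimeError("cannot delete a checked-out node")
        self.state = "Deleted"
        for child in self.children:
            if not child.checked_out:  # only checked-in children cascade
                child.delete()

    def undelete(self):
        self.state = "Published"
        for child in self.children:
            if child.state == "Deleted":
                child.undelete()

memos = TreeNode("2004 Memos")
memos.children = [TreeNode("Advertising"), TreeNode("Budget", checked_out=True)]
memos.delete()      # Advertising is deleted too; Budget is skipped
memos.undelete()    # restores the subtree
```

A permanent-delete embodiment would remove the node from its parent's children list instead of changing its state.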
A user interface according to various embodiments and by way of illustration can include an interactive graphical tree browser as is well known in the art to allow users to explore and interact with the VCR and their workspace. A tree browser presents a hierarchical view of nodes and schemas. The indentation level of a node indicates parent/child relationships. In various embodiments, the tree browser can present one or more views of the VCR. These views can include (but are not limited to), published nodes, unpublished nodes, retired nodes, deleted nodes, assigned nodes, locked nodes, and nodes waiting for approval. In aspects of these embodiments, a user can customize the tree browser to include one or more of these views. The views can be presented as separate trees or as a merged tree. In various embodiments, views can be automatically customized to particular user(s). For example, roles and/or polices that adorn nodes can be used to filter the view. By way of illustration, the assigned items view and waiting for approval view will only show nodes that are applicable to a given user. In one embodiment, this can be accomplished by examining roles in lifecycle transitions and filtering out nodes for lifecycles that a given user cannot interact with.
By way of illustration, a tree browser can expose VCR 400, the federated root. It contains two repositories (Neptune 402 and Pluto 412) and a Public hierarchy node 426. In various embodiments, nodes can be decorated with a folder or related visual symbol to indicate their purpose. In certain aspects of these embodiments, selection of a folder icon or hierarchy node name causes the contents of the node to expand beneath it in the tree. In further aspects of these embodiments, selection of any type of node can allow a user to edit the properties of the node. In one embodiment, schemas can be defined anywhere in the VCR, including directly beneath the federated root (not shown). Schemas that are not defined in a repository (e.g., 428, 430 and 434) are considered virtual in various embodiments. The Neptune repository 402 contains two repository nodes: Press Releases 404 and Old Schemas 408. Press Releases contains a content node named Japan Account 406, which is currently locked (e.g., checked-out) as indicated by the padlock icon next to its name. Only the user who has checked-out the node can edit it. Others can optionally view it depending on roles and/or privileges. The Old Schemas hierarchy node contains a schema named Account Schema 410 which is currently unlocked. In aspects of these embodiments, a node's properties can be viewed by selecting the node in the tree browser. If the selected node is not locked, the system can automatically attempt to obtain a lock on behalf of the user.
The Pluto repository includes a schema 414 and two top-level hierarchy nodes (416, 418). One of the hierarchy nodes, 2003 Memos 416, has a folder symbol that is a solid color, the other has an outline of a folder symbol 418. In one embodiment, a special visual symbol (e.g., a solid folder icon) can indicate to a user that the hierarchy node has a schema and/or a lifecycle associated with it. In various embodiments, associating a schema and/or a lifecycle with a hierarchy node results in the schema and/or lifecycle being imposed on or inherited by the children of the hierarchy node. The 2004 Memos hierarchy node contains another hierarchy node 420, an unlocked content node 422 and a locked content node 424.
In various embodiments, the user interface can provide a logical Workspace folder 436 that provides quick access to a user's new, checked-out and assigned items. Assigned items are those items which are assigned to one or more users, groups and/or roles according to a lifecycle or some other suitable means. In this illustration, there are two nodes assigned to the user: Japan Account 444 and Internal Memo Schema 446. The user has checked-out Japan Account node since it appears in the VCR tree with a padlock beside it unlike the Internal Memo Schema which is not currently checked-out. The user has not checked-out the Staff Change schema 434 since it does not appear in their Workspace (i.e., another user or process has checked it out). In various embodiments, by selecting the Staff Change schema 434 the user can discover who holds the lock and when they obtained it.
In various embodiments and by way of a further illustration, new nodes can be created. Newly created nodes can appear in the workspace of the user that created them until they are published in the VCR. Aspects of certain of these embodiments allow a user to create new nodes through a user interface that enables the user to select in which VCR, repository or hierarchy node the new node will reside. The user can indicate whether a version history should be maintained for the new node and can add properties to it or base its properties on a schema. A lifecycle for the new node can also be specified. A tree browser can be updated to reflect the addition of the new node.
The RepositoryManager 502 can serve as a representation of a VCR from an application program's 500 point of view. In aspects of these embodiments, the RepositoryManager attempts to connect all available repositories (e.g., 512-516) to the VCR. The RepositoryManager can invoke a connect( ) method on the set of Repository objects. In various embodiments, the RepositoryManager returns a list of repository session objects (e.g., 506-510) to the application program, one for each repository for which a connection was attempted. Any error in the connection procedure can be described by the session object's state. In another embodiment, the RepositoryManager can connect to a specific repository given the repository name. In various embodiments, the name of a repository can be a URI (uniform resource identifier).
Referring to
By way of illustration, repository 622 provides NodeOps 610, WorkspaceOps 612 and SearchOps 614. Repository 624 provides NodeOps 616, WorkspaceOps 618 and SearchOps 620. API-level objects/interfaces communicate with their corresponding SPI-level objects/interfaces. In this way, an operation on an API-level object can be distributed to each repository such that the repositories can work in parallel to perform the requested operation. Accordingly, an operation that might take on average time M*N to perform on all repositories sequentially might in theory require only time M, where N is the number of repositories in the VCR and M is the average time per repository.
The NodeOps 604 provides create, read, update, delete methods for nodes and node properties in the VCR. In aspects of these embodiments, nodes and properties can be operated on based on an identifier, a path in the VCR or through any other suitable relative or absolute reference. When the API NodeOps 604 receives a request to perform an action, it can map the request to one or more SPI NodeOps (610, 616) which in turn fulfill the request using their associated repositories. In this way, applications and libraries utilizing the API see a single VCR rather than individual content repositories. NodeOps functionality exposed in the API can include the following:
- Update a given node's properties and property definitions.
- Copy a given node to a new location in a given hierarchy along with all its descendants.
- Create a new content node underneath a given parent.
- Create a new hierarchy node underneath a given parent.
- Perform a full cascade delete on a given node.
- Retrieve all the nodes in a given node's path including itself.
- Retrieve content node children for the given parent node.
- Retrieve hierarchy node children for the given parent node.
- Retrieve a node based on its ID.
- Retrieve a node based on its path.
- Retrieve the children nodes for the given hierarchy node.
- Retrieve the parent nodes for the given hierarchy node.
- Retrieve all the nodes with a given name.
- Retrieve the Binary data for given node and property ids.
- Move a node to a new location in the hierarchy along with all its descendants.
- Rename a given node and, implicitly, all of its descendants' paths.
- Get an iterator object which can be used to iterate over a hierarchy.
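As a rough, purely illustrative JavaScript sketch (the function and field names below are hypothetical and not part of any actual API), a path-based NodeOps request could be routed to the owning repository by inspecting the first path segment under the federated root:

```javascript
// Hypothetical sketch: map a federated VCR path to the repository that
// owns it. The first path segment under the federated root is assumed
// to name the repository (e.g. '/Neptune/Press Releases/Japan Account').
function resolveRepository(path, repositories) {
    var repoName = path.split('/')[1];  // segment 0 is '' for a leading '/'
    return repositories.hasOwnProperty(repoName)
        ? repositories[repoName] : null;
}
```

Under these assumptions, `resolveRepository('/Neptune/Press Releases', { Neptune: repoA })` would return `repoA`, while a path under an unknown repository would yield null.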
In various embodiments, WorkspaceOps 606 exposes services for versioning, including the services to check-in/check-out nodes, node/property locking, access node version history, lifecycle manipulation, labeling, and jobs. When the API WorkspaceOps 606 receives a request to perform an action, it can map the request to one or more SPI WorkspaceOps (612, 618) which in turn fulfill the request using their associated repositories. WorkspaceOps functionality exposed in the API can include:
- check-in: Unlocks the node and saves it along with its working version.
- check-out: Locks the node so that only the user/process that locked it may save or check it in, and creates a new working version.
- copy: Recursively copies the published source node to the destination.
- create: Creates a new Node and also a working version for it, if attached to the node.
- delete: Deletes a node version with the given version.
- get: Gets the Node at the given path.
- get versions: Returns all versions for the given Virtual Node.
- save: Saves the node and the working version of the node (if attached to the node), which is the current version on the node.
- submit: Submits the node to its lifecycle.
In various embodiments, SearchOps 608 provides API searching services for retrieving nodes, properties, and/or property values throughout the entire VCR based on one or more search expressions. When the API SearchOps 608 receives a request to perform an action, it can map the request to one or more SPI SearchOps (614, 620) which in turn fulfill the request using their associated repositories. The API SearchOps 608 combines the search results from each SPI SearchOps into a single result set. In various embodiments, result sets can be refined by performing further searches on the items in the result set.
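A minimal JavaScript sketch of this fan-out-and-merge behavior follows; the repository objects and their search method are hypothetical stand-ins for the SPI-level calls:

```javascript
// Sketch: fan a search expression out to each repository's SearchOps
// and merge the per-repository results into one combined result set.
function federatedSearch(repos, expression) {
    var combined = [];
    for (var i = 0; i < repos.length; i++) {
        // each repos[i].search stands in for an SPI-level search call
        combined = combined.concat(repos[i].search(expression));
    }
    return combined;
}
```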
Search expressions can include (but are not limited to) one or more logical expressions, Boolean operators, nested expressions, variables, identifiers, node names, function/method invocations, remote procedure calls, mathematical functions, mathematical operators, string operators, image operators, and Structured Query Language (SQL). Search expressions can also include support for natural language queries, keyword searching, fuzzy logic, proximity expressions, wildcard expressions, and range search types. In various embodiments, the result set can be tailored according to roles/policies in effect on the items that satisfy the search expressions. Items which a user/process does not have permission to view can be filtered during the search or after the results have been gathered.
In aspects of these embodiments, search results can be ranked according to ranking algorithms and criteria. In one embodiment, a ranking algorithm can rank the result set according to what extent items in the result set satisfy the search expression(s). It will be apparent to those of skill in the art that many other ranking algorithms are possible and fully within the scope and spirit of the present disclosure. In various embodiments, multiple ranking algorithms can be applied to the result set. In one embodiment, the ranking criteria for a given ranking algorithm can be adjusted by a user/process.
In various embodiments, jobs provide the ability to perform VCR operations on sets of nodes. By way of illustration, a job can be used to check-in and check-out a set of nodes as a group, or send a group of nodes through a lifecycle together. In aspects of these embodiments, a job identifier and/or a label can be associated with a node to indicate its inclusion in a particular job and/or label set. In one embodiment, if a job becomes ready for approval, all nodes in the job will reach this state. In various embodiments, a label can be used to tag a repository or a group of nodes. By way of illustration, this provides a way to refer to a set of nodes with different versions. By way of further illustration, labels can be used in search expressions.
In various embodiments, information in the VCR can be exported in an external format. In aspects of these embodiments, the external format can be XML or another suitable language/representation (e.g., HTML, natural language, a binary file) that can preserve the hierarchical structure of the information. Exporting all or some of the VCR nodes allows “snapshots” of the VCR for backing up the VCR, transporting information in the VCR to another repository, and reloading the nodes at a later date. In various embodiments, a node and all of its children will be exported by an export process. By way of an example, if the federated root was chosen to be exported, the entire VCR would be exported. By way of a further example, an export process can recursively traverse the VCR (e.g., depth-first or breadth-first traversal), serializing information associated with each node that is visited (e.g., content, hierarchy and schema nodes). Aspects of these embodiments have a “preview” mode where it is reported what information (e.g., nodes, lifecycles, roles, policies) would be exported.
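The recursive export traversal described above can be sketched in JavaScript as follows; the node field names (path, properties, children) are illustrative only and do not come from any actual repository API:

```javascript
// Sketch of a depth-first export: serialize each visited node into a
// flat list whose entries preserve the hierarchy via their paths.
function exportNodes(node, out) {
    out = out || [];
    out.push({ path: node.path, properties: node.properties || {} });
    var children = node.children || [];
    for (var i = 0; i < children.length; i++) {
        exportNodes(children[i], out);  // recurse into each child
    }
    return out;
}
```

An import process would walk such a list in reverse fashion, re-creating each node at the place its path indicates.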
In various embodiments, an import process can do the work of the export process in reverse by de-serializing each node (and other information) and adding it at the appropriate place in the VCR namespace. In another embodiment, the import process can install nodes beneath a chosen node rather than in their original location. As with the export process, aspects of these embodiments have a “preview” mode where it is reported what information (e.g., nodes, lifecycles, roles, policies) would be imported into the VCR. In addition to node properties, various embodiments allow the export and import of version history, roles and/or policies associated with content and schema nodes.
Box 702 represents one or more content administration tools which can be used to create, modify and delete information in the VCR. These tools can take advantage of the VCR's content services. Box 704 represents one or more tools that can operate on repositories without the need for content services. By way of example, these can include bulk content loaders, content searching tools and content tags. A content manager API component 712 can be used to manage interaction between the VCR and its integrated subsystems.
Various embodiments.
Various embodiments include a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computing device to perform any of the features presented herein. The storage medium can include, but is not limited to, any type of physical media including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Various embodiments include a computer program product that can be transmitted over one or more public and/or private networks wherein the transmission includes instructions which can be used to program a computing device to perform any of the features presented herein.
Approach: Loading JavaScript on demand
The JavaScript code for a page falls into two groups—code required to render the page, and code required to handle user interface events, such as button clicks. The code to render the page is used to make the page look better, and to attach event handlers to, for example, buttons.
Although the rendering code needs to be loaded and executed in conjunction with the page itself, the user interface code can be loaded later, in response to a user interface event, such as a button click. That reduces the amount of code to be loaded upfront, and therefore the time during which rendering of the page is blocked. It also reduces your bandwidth costs, because the user interface code is loaded only when it's actually needed.
On the other hand, it does require separating the user interface code from the rendering code. You then need to invoke code that potentially hasn't loaded yet, tell the visitor that the code is loading, and finally invoke the code after it has loaded.
Let's see how to make this all happen.
Separating user interface code from render code
Depending on how your JavaScript code is structured, this could be your biggest challenge in implementing on-demand loading. Make sure the time you're likely to spend on this and the subsequent testing and debugging is worth the performance improvement you're likely to gain.
A very handy tool that identifies which code is used while loading the page is Page Speed, an add-on for Firefox. Besides identifying code that doesn't need to be loaded upfront, it reports many speed-related issues on your web page.
Information on Page Speed is available at.
OnDemandLoader library
Assuming your user interface code is separated from your render code, it is time to look at implementing actual on-demand loading. To keep it simple, we'll use OnDemandLoader, a simple low-footprint object. You'll find it in the downloaded code bundle in the folder OnDemandLoad in the file OnDemandLoader.js.
OnDemandLoader has the following features:
- It allows you to specify the script, in which it is defined, for each event-handler function.
- It allows you to specify that a particular script depends on some other script; for example Button1Code.js depends on library code in UILibrary1.js. A script file can depend on multiple other script files, and those script files can in turn be dependent on yet other script files.
- It exposes function runf, which takes the name of a function, arguments to call it with, and the this pointer to use while it's being executed. If the function is already defined, runf calls it right away. Otherwise, it loads all the necessary script files and then calls the function.
- It exposes the function loadScript, which loads a given script file and all the script files it depends on. Function runf uses this function to load script files.
- While script files are being loaded in response to a user interface event, a "Loading..." box appears on top of the affected control. That way, the visitor knows that the page is working to execute their action.
- If a script file has already been loaded or if it is already loading, it won't be loaded again.
- If the visitor does the same action repeatedly while the associated code is loading, such as clicking the same button, that event is handled only once.
- If the visitor clicks a second button or takes some other action while the code for the first button is still loading, both events are handled.
A drawback of OnDemandLoader is that it always loads all the required scripts in parallel. If one script automatically executes a function that is defined in another script, there will be a JavaScript error if the other script hasn't loaded yet. However, if your library script files only define functions and other objects, OnDemandLoader will work well.
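To make the dependency handling concrete, here is a rough sketch of how a loader can expand one script into the full set of files to fetch, assuming dependency entries of the form { src, dependentOn } as OnDemandLoader uses (the actual OnDemandLoader.js implementation may differ):

```javascript
// Sketch: expand one script into the duplicate-free set of files to
// fetch, dependencies first. Entries look like
// { src: 'a.js', dependentOn: ['b.js'] }.
function collectScripts(src, dependencies) {
    var bySrc = {};
    for (var i = 0; i < dependencies.length; i++) {
        bySrc[dependencies[i].src] = dependencies[i];
    }
    var ordered = [], seen = {};
    (function visit(s) {
        if (seen[s]) { return; }     // already scheduled: skip
        seen[s] = true;
        var entry = bySrc[s];
        var deps = entry ? entry.dependentOn : [];
        for (var j = 0; j < deps.length; j++) { visit(deps[j]); }
        ordered.push(s);             // a script follows its dependencies
    })(src);
    return ordered;
}
```

Note that this only computes the set and a safe order; OnDemandLoader itself fires off the requests in parallel.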
Initializing OnDemandLoader
OnDemandLoading.aspx in folder OnDemandLoad in the downloaded code bundle is a worked-out example of a page using on-demand loading. It delays the loading of JavaScript files by five seconds, to simulate slowly loading files. Only OnDemandLoader.js loads at normal speed.
If you open OnDemandLoading.aspx, you'll find that it defines two arrays—the script map array and the script dependencies array. These are needed to construct the loader object that will take care of the on-demand loading.
The script map array shows the script file, in which it is defined, for each function:
var scriptMap = [
{ fname: 'btn1a_click', src: 'js/Button1Code.js' },
{ fname: 'btn1b_click', src: 'js/Button1Code.js' },
{ fname: 'btn2_click', src: 'js/Button2Code.js' }
];
Here, functions btn1a_click and btn1b_click live in script file js/Button1Code.js, while function btn2_click lives in script file js/Button2Code.js.
The second array defines which other script files it needs to run for each script file:
var scriptDependencies = [
{
src: '/js/Button1Code.js',
testSymbol: 'btn1a_click',
dependentOn: ['/js/UILibrary1.js', '/js/UILibrary2.js']
},
{
src: '/js/Button2Code.js',
testSymbol: 'btn2_click',
dependentOn: ['/js/UILibrary2.js']
},
{
src: '/js/UILibrary2.js',
testSymbol: 'uifunction2',
dependentOn: []
},
{
src: '/js/UILibrary1.js',
testSymbol: 'uifunction1',
dependentOn: ['/js/UILibrary2.js']
}
];
This says that Button1Code.js depends on UILibrary1.js and UILibrary2.js, and that Button2Code.js depends on UILibrary2.js. Further, UILibrary1.js relies on UILibrary2.js, while UILibrary2.js doesn't require any other script files.
The testSymbol field holds the name of a function defined in the script. Any function will do, as long as it is defined in the script. This way, the on-demand loader can determine whether a script has been loaded by testing whether that name has been defined.
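Under these assumptions, the loaded-check itself can be very small. A sketch (the actual OnDemandLoader.js may do more, such as recognizing stub functions):

```javascript
// Sketch: decide whether a script still needs loading by probing for
// its test symbol on the global object (window, in a browser).
// The entry's field names mirror the scriptDependencies entries above.
function isScriptLoaded(entry, globalObj) {
    return typeof globalObj[entry.testSymbol] === 'function';
}
```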
With these two pieces of information, we can construct the loader object:
<script type="text/javascript" src="js/OnDemandLoader.js">
</script>
var loader = new OnDemandLoader(scriptMap, scriptDependencies);
Now that the loader object has been created, let's see how to invoke user interface handler functions before their code has been loaded.
Invoking not-yet-loaded functions
The point of on-demand loading is that the visitor is allowed to take an action for which the code hasn't been loaded yet. How do you invoke a function that hasn't been defined yet? Here, you'll see two approaches:
- Call a loader function and pass it the name of the function to load and execute
- Create a stub function with the same name as the function you want to execute, and have the stub load and execute the actual function
Let's focus on the first approach first.
The OnDemandLoader object exposes a loader function runf that takes the name of a function to call, the arguments to call it with, and the current this pointer:
function runf(fname, thisObj) {
// implementation
}
Wait a minute! This signature shows a function name parameter and the this pointer, but what about the arguments to call the function with? One of the amazing features of JavaScript is that you can pass as few or as many parameters as you want to a function, irrespective of the signature. Within each function, you can access all the parameters via the built-in arguments array. The signature is simply a convenience that allows you to name some of the arguments.
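A quick self-contained illustration of this (nothing here is specific to OnDemandLoader):

```javascript
// sum declares no named parameters, yet accepts any number of arguments
function sum() {
    var total = 0;
    for (var i = 0; i < arguments.length; i++) {
        total += arguments[i];
    }
    return total;
}

// relay forwards everything after its first two arguments to fn, using
// the same slice-and-apply pattern a function like runf can use
function relay(fn, thisObj) {
    var rest = Array.prototype.slice.call(arguments, 2);
    return fn.apply(thisObj, rest);
}

// sum(1, 2, 3) returns 6; relay(sum, null, 4, 5) returns 9
```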
This means that you can call runf as shown:
loader.runf('myfunction', this, 'argument1', 'argument2');
If for example, your original HTML has a button as shown:
<input id="btn1a" type="button" value="Button 1a"
onclick="btn1a_click(this.value, 'more info')" />
To have btn1a_click loaded on demand, rewrite this to the following (file OnDemandLoading.aspx):
<input id="btn1a" type="button" value="Button 1a"
onclick="loader.runf('btn1a_click', this, this.value,
'more info')" />
If, in the original HTML, the click handler function was assigned to a button programmatically as shown:
<input id="btn1b" type="button" value="Button 1b" />
<script type="text/javascript">
window.onload = function() {
document.getElementById('btn1b').onclick = btn1b_click;
}
</script>
Then, use an anonymous function that calls loader.runf with the function to execute:
<input id="btn1b" type="button" value="Button 1b" />
<script type="text/javascript">
window.onload = function() {
document.getElementById('btn1b').onclick = function() {
loader.runf('btn1b_click', this);
}
}
</script>
This is where you can use the second approach—the stub function. Instead of changing the HTML of your controls, you can load a stub function upfront before the page renders (file OnDemandLoading.aspx):
function btn1b_click() {
loader.runf('btn1b_click', this);
}
When the visitor clicks the button, the stub function is executed. It then calls loader.runf to load and execute its namesake that does the actual work, overwriting the stub function in the process.
This leaves behind one problem. The on-demand loader checks whether a function with the given name is already defined before initiating a script load. And a function with that same name already exists—the stub function itself.
The solution is based on the fact that functions in JavaScript are objects. And all JavaScript objects can have properties. You can tell the on-demand loader that a function is a stub by attaching the property "stub":
btn1b_click.stub = true;
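If a page has many handlers, the stubs can also be generated rather than written out by hand. The helper below is hypothetical (it is not part of OnDemandLoader.js), but shows the pattern:

```javascript
// Hypothetical helper: install a named stub that defers to loader.runf
// and flags itself with .stub so the loader knows to load the real code.
function makeStub(fname, loaderObj, globalObj) {
    var stub = function () {
        var args = [fname, this].concat(
            Array.prototype.slice.call(arguments));
        return loaderObj.runf.apply(loaderObj, args);
    };
    stub.stub = true;           // mark as placeholder
    globalObj[fname] = stub;    // e.g. window.btn1b_click = stub
    return stub;
}

// In a page: makeStub('btn1b_click', loader, window);
```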
To see all this functionality in action, run the OnDemandLoading.aspx page in folder OnDemandLoad in the downloaded code bundle. Click on one of the buttons on the page, and you'll see how the required code is loaded on demand. It's best to do this in Firefox with Firebug installed, so that you can see the script files getting loaded in a Waterfall chart.
Preloading
Now that you have on-demand loading working, there is one more issue to consider: trading off bandwidth against visitor wait time.
Currently, when a visitor clicks a button and the code required to process the click hasn't been loaded yet, loading starts in response to the click. This can be a problem if loading the code takes too much time.
An alternative is to initiate loading the user interface code after the page has been loaded, instead of when a user interface event happens. That way, the code may have already loaded by the time the visitor clicks the button; or at least it will already be partly loaded, so that the code finishes loading sooner. On the other hand, this means expending bandwidth on loading code that may never be used by the visitor.
You can implement preloading with the loadScript function exposed by the OnDemandLoader object. As you saw earlier, this function loads a JavaScript file plus any files it depends on, without blocking rendering. Simply add calls to loadScript in the onload handler of the page, as shown (page PreLoad.aspx in folder OnDemandLoad in the downloaded code bundle):
<script type="text/javascript">
window.onload = function() {
document.getElementById('btn1b').onclick = btn1b_click;
loader.loadScript('js/Button1Code.js');
loader.loadScript('js/Button2Code.js');
}
</script>
You could preload all your user interface code, or just the code you think is likely to be needed.
Now that you've looked at the load on demand approach, it's time to consider the last approach—loading your code without blocking page rendering and without getting into stub functions or other complications inherent in on-demand loading.
Approach: Loading JavaScript without blocking
The idea behind this approach is to load all (or almost all) script files without blocking rendering of the page. That puts the rendered page sooner in front of the visitor.
There are a couple of ways to achieve this, each trading off more work for a better visitor experience:
- Moving all <script> tags to the end of the page
- Separating user interface code and render code
- Introducing page loading indicator
Let's go through each of them.
Moving all <script> tags to the end of the page
On a basic level, loading JavaScript code without blocking page rendering is really easy to accomplish—simply take your script tags out of the head of the page, and move them to the end of the body. That way, the page will have rendered before the script tags have a chance to block anything.
Page ScriptAtEnd.aspx in folder LoadJavaScriptWithoutBlocking in the downloaded code bundle has an example for this. Similar to the test site you used in the previous section, to simulate a slowly loading JavaScript file, this site delays JavaScript files by five seconds; just make sure you run it in IIS 7.
The example page has both render code and user interface code (ScriptAtEnd.aspx):
<script type="text/javascript" src="js/Code.js"></script>
<script type="text/javascript">
// Execute render code
beautify();
// Attach event handlers for user interface
attachEventHandlers();
</script>
In this example, the render code simply makes the background of the page yellow, and turns the caption of the middle button yellow as well. That may be very simplistic, but it makes it perfectly clear whether the render code has executed.
When you load ScriptAtEnd.aspx, you'll see the "raw" version of the page until the code finishes loading and changes the page's appearance. That's not necessarily what you want.
This brings us to the next iteration.
Separating user interface code and render code
The code (if there is any) required to render the page tends to be much smaller than that required to handle user interface events. So, to prevent showing the raw page to visitors, it can make sense to separate out the render code, and load that upfront. It will block rendering of the page while it is loading, but in this case, this is precisely what we want.
Use the Page Speed add-on for Firefox to figure out which JavaScript functions are not needed to render the page.
After you've separated the code, you'll wind up with something similar to the following (page SeparateScripts.aspx):
<head runat="server">
<script type="text/javascript" src="js/Beautify.js"></script>
</head>
<body>
... page contents
<script type="text/javascript">
beautify(); // run code that helps render the page
</script>
<script type="text/javascript" src="js/UICode.js"></script>
<script type="text/javascript">
attachEventHandlers();
</script>
</body>
The render code is now loaded in the head of the page, so that it blocks rendering of the rest of the page. It is executed at the end of the page by calling the function beautify, because before that point there is no page content to work with. Only then is the user interface code loaded, so that it doesn't block the call to beautify.
If you run SeparateScripts.aspx, you should no longer see the raw version of the page. However, if you click any of the buttons while the user interface code is being loaded, nothing happens. This may cause your visitors to think that your site is broken, and move on to some other site, such as your competitor's.
If you look closely, you'll see that the browser shows that the page is busy while the user interface code is loading. However, you can't rely on your visitors to see this. So, let's add a better "page loading" indicator.
Introducing page loading indicator
Introducing a page loading indicator consists of creating the indicator itself, and introducing code to make it go away once all the code has completed loading.
The page loading indicator itself can be any <div> tag with a "Loading ..." text. The code below fixes the indicator just below the top of the browser window, so that it stays there even if the visitor scrolls the page (PageLoadingIndicator.aspx):
<div id="pageloading" style="position:fixed; top: 10px;
left: 50%;" >Loading ...</div>
After all the code is loaded and you've attached the event handlers, make the loading indicator disappear by setting its display style to none:
<script type="text/javascript">
attachEventHandlers();
document.getElementById('pageloading').style.display = 'none';
</script>
Alternatively, or in addition, you could disable all input elements after the page is rendered. This dims the captions of all buttons and prevents them from being "depressed" when clicked (PageLoadingIndicator.aspx):
<script type="text/javascript">
beautify();
function disableButtons(disable) {
var inputTags = document.getElementsByTagName('input');
var inputTagsLen = inputTags.length;
for (var i = 0; i < inputTagsLen; i++) {
inputTags[i].disabled = disable;
}
}
disableButtons(true); // Disable all input elements
</script>
Then after the code is loaded, re-enable them again (PageLoadingIndicator.aspx):
<script type="text/javascript">
attachEventHandlers();
document.getElementById('pageloading').style.display = 'none';
disableButtons(false); // Re-enable all input elements
</script>
If you run JavaScript code that changes the color of button captions while they are disabled, Firefox, Google Chrome, and Safari will apply the color right away, thereby removing the "disabled" look. Internet Explorer, however, changes only the caption color after the buttons have been re-enabled.
In the solutions you've seen so far, loading of the user interface code is initiated after the page has loaded and rendered.
However, if the HTML of your page takes a long time to load, you will want to start loading at the beginning of the page, so that the code loads in parallel with the HTML.
You can achieve this with the OnDemandLoader object that you saw in the previous section. You can get it to load one or more sets of script files while the page itself is loading, and to call a function when each set is done. Finally, it exposes an onallscriptsloaded event that fires when all script files have loaded, which can be used to remove the page loading indicator.
This solution is in page ParallelLoading.aspx, folder LoadJavaScriptWithoutBlocking in the downloaded code bundle. It breaks into the following parts:
- Initialize the loader object
- Start loading the code
- Ensure that the code runs after the page is rendered
Let's go through each part.
Initializing the loader object
The first step is to prepare two pieces of information: the script map array, showing for each function the script file in which it is defined, and the script dependencies array, showing for each script file which other script files it depends on.
Here, the script map contains the two functions you've already seen: attachEventHandlers to attach the event handlers after the user interface code has loaded, and beautify to execute the render code:
var scriptMap = [
{ fname: 'attachEventHandlers', src: 'js/UICode.js' },
{ fname: 'beautify', src: 'js/Beautify.js' }
];
Also, list both script files in the array with dependencies:
var scriptDependencies = [
{
src: 'js/UICode.js',
testSymbol: 'attachEventHandlers',
dependentOn: []
},
{
src: 'js/Beautify.js',
testSymbol: 'beautify',
dependentOn: []
}
];
If you need to load additional script files, list them in the dependentOn array, along the following lines:
{
src: 'js/UICode.js',
testSymbol: 'attachEventHandlers',
dependentOn: ['js/UILibrary1.js', 'js/UILibrary2.js',
'js/UILibrary3.js']
}
Finally, create the loader object:
<script type="text/javascript" src="js/OnDemandLoader.js">
</script>
var loader = new OnDemandLoader(scriptMap, scriptDependencies);
With the loader object established, it is now time to start loading the JavaScript code.
You want the code to load while the page is loading, which means that the JavaScript that initiates the loading needs to sit towards the top of the page. Here is what it looks like (ParallelLoading.aspx):
<body>
<div id="pageloading" ...>Loading ...</div>
<script type="text/javascript">
loader.onallscriptsloaded = function() {
document.getElementById('pageloading').style.display =
'none';
disableButtons(false);
}
loader.runf('beautify', null);
loader.runf('attachEventHandlers', null);
</script>
If the onallscriptsloaded property on the OnDemandLoader object is set to a function, the object runs that function when it finds that all script files in the dependencies list have been loaded. That feature is used here to hide the "Loading" box and re-enable the buttons after all the script files have been loaded.
You came across loader.runf in the Load on demand section. It tells the loader to make sure that all code required to run the beautify and attachEventHandlers functions has been loaded, and to call those functions once the code has been loaded.
Ensuring that code runs after the page is rendered
There is one last problem to solve: the script files may finish loading before the page itself finishes rendering. This can happen when the script files are in browser cache, left there by another page. The problem is that if the code runs before the page is rendered, it may fail to properly update the appearance of the page, or to attach event handlers to all user interface elements because the page isn't fully there yet.
The way to solve this is to call the beautify and attachEventHandlers functions not only after they have been loaded, but also after the page has finished rendering. That way, you're sure that these functions will be executed properly, even if the script files were quick to load. You do need a try-catch when you call the functions after the page has finished rendering, in case their code hasn't been loaded yet:
try { attachEventHandlers(); } catch (err) { }
try { beautify(); } catch (err) { }
This means that these functions are called twice—once when the code finishes loading, and once after the page has finished rendering. You don't know in which order this happens. It also means that you need to make sure that the functions do not cause an error message if they run while the page isn't rendered yet.
How do we find out when the page is completely rendered? Here are the usual options:
- Create a handler for the page's onload event. This is the most natural solution, but here it has a big problem. When the OnDemandLoader object starts loading a script file, it does so by inserting a <script> tag into the DOM, as shown in the following code:
var se = document.createElement('script');
se.src = src;
document.getElementsByTagName('head')[0].appendChild(se);
This method loads a script file without blocking rendering of the page, except that in Firefox, it blocks the onload event. This means that if the render code loads quickly and the user interface code takes a long time, execution of the render code will still be delayed until the user interface code finishes loading, which is not good.
- Place a <script> tag at the end of the page containing calls to attachEventHandlers and beautify. Unfortunately, Firefox not only blocks onload, but also all script tags until all the code is loaded.
- Place an invisible element at the very end of the page, and periodically check whether that element is rendered. If it is, the whole page will have rendered. This slightly pollutes the HTML and couples the invisible element and the polling code because they have to refer to the same ID.
You could make the first two options work by loading the JavaScript code asynchronously using XMLHttpRequest instead of inserting a <script> tag in the DOM. However, that would stop you from loading script files from any host but the one used by your site. For example, you then couldn't load the jQuery library from the Google CDN.
So in this example, we'll use the third method, based on polling for the end of page rendering.
To implement the polling solution, first place an invisible element at the very end of the page:
<div id="lastelement" style="display: none;"></div>
</body>
Then run the polling code at the beginning of the page, in the same script block where you started loading the JavaScript code:
function pollpage() {
var lastelement = document.getElementById('lastelement');
if (lastelement == null) {
setTimeout("pollpage();", 100);
return;
}
try { attachEventHandlers(); } catch (err) { }
try { beautify(); } catch (err) { }
if (document.getElementById('pageloading').
style.display != 'none') {
disableButtons(true);
}
}
pollpage();
The function pollpage first checks whether the element with ID lastelement exists. If not, the page hasn't finished rendering yet, and so it calls setTimeout to have itself called again 100 milliseconds from now.
Otherwise, the page has rendered, and the function calls attachEventHandlers and beautify. Now that all the buttons will have rendered, this is also a good time to disable all the buttons. However, if the JavaScript code has already loaded, you obviously don't want to do that. So it checks whether the "Loading" box has already been made invisible by the OnDemandLoader object.
Finally, the code calls the pollpage function to start polling.
All this is expressed in the following flowchart:
That concludes the four approaches to improving JavaScript loading. We'll now look at two related topics: improving ad loading and improving loading of CSS files.
Improving ad loading
If you use an ad network such as DoubleClick or Google AdWords, they will have given you code to place on your pages along the following lines:
<script src="?....."></script>
This loads some JavaScript from the ad server, which then places the actual ad on your site. Easy.
Normally, this works fine. However, the ad server is slow at times. The problem is that while the browser is waiting for the ad server, it holds off rendering the page below the ad. If there is a long delay, this will not look good.
You could prevent this by loading the ads in iframes. However, this will prevent your ad slots from showing variable-sized ads. It also creates a big empty space on your page if the ad fails to load.
A neat way to solve this problem is to load the ads after the entire page is rendered, and then move the ads to their ad slots.
In this approach, you place an empty <div> tag at the spot where you want an ad to appear. You can give it the size of the eventual ad (if it has fixed size and you don't mind the empty space on your page) and a progress image to make it look nice (page AdDoesNotDelayPage.aspx, folder AdLoad in the downloaded code bundle):
<div id="ad" style="width: 486px; height: 60px; background:
#ffffff url('/images/throbber.gif') no-repeat center;">
</div>
Then at the end of the page, you place the ad loading script within its own <div> tag. Make that tag invisible (display value is none) so that your visitors don't see it while the ad is loading:
<script type="text/javascript">
var scriptloaded = false;
var divloaded = false;
function adloaded() {
...
}
</script>
<div id="tempad" style="display:none">
<script type="text/javascript" src="?....."
onload="scriptloaded=true; adloaded()"
onreadystatechange="if (this.readyState=='complete') {
scriptloaded=true; adloaded(); }">
</script>
</div>
<script type="text/javascript">
divloaded = true;
adloaded();
</script>
The function adloaded() will move the <div> to the actual ad slot. But before that can be done, not only the script, but also the <div> needs to have loaded. Otherwise, there will be a JavaScript error when adloaded tries to move it.
Finding out whether the script has loaded means adding an onreadystatechange handler, used by Internet Explorer, and an onload handler, used by all other browsers. Finding out whether the <div> has loaded simply means adding a JavaScript block right after the <div>. If both the variables scriptloaded and divloaded are true, the <div> is ready to be moved.
Finally, implement the adloaded function:
function adloaded() {
if ((!scriptloaded) || (!divloaded)) { return; }
var et = document.getElementById("tempad");
et.parentNode.removeChild(et);
var d = document.getElementById("ad");
d.appendChild(et);
et.style.display = "block";
}
This finds the <div> holding the script (with the ID tempad) and detaches it from its parent, so that it won't show up at the bottom of the page. It then finds the <div> marking the ad slot where you want to actually show the ad (with the ID ad). Finally, it appends the <div> with your ad as a child to the empty ad slot and changes its display property to block to make it visible. Done!
Improving CSS Loading
Just like JavaScript files, CSS files block rendering of the page. This way, the visitor isn't confronted with the page in its unstyled form.
You can see this behavior by running the website in folder CssBlocksRendering in the downloaded code bundle. This test has a single page that loads a single CSS file. Generation of the CSS file on the server is delayed by five seconds, using the HTTP Module DelayModule in its App_Code folder. When you open the page, you'll find that the window stays blank for five seconds.
CSS files also block the execution of JavaScript files. This is because the JavaScript may refer to definitions in the CSS files.
One way to reduce the impact is to load your CSS files as quickly as possible. You can achieve that in the following ways:
- Using techniques used with images such as caching, parallel downloads, and using a CDN.
- Using GZIP compression.
- Combining or breaking up CSS files.
- Minifying the CSS.
- Removing unused CSS lines.
A sixth option is to load your CSS without blocking rendering.
Minifying CSS
Minifying CSS is very similar to minifying JavaScript. It involves removing unnecessary white space and comments. It also includes replacing long property values by equivalent shorter ones. For example, the color #ff0000 can be replaced by the shorter red.
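As an illustration, consider the following hypothetical rule (the selector and values are made up for this example). Before minification:

```css
/* Promotional banner */
.banner {
    color: #ff0000;
    margin: 10px 10px 10px 10px;
}
```

After minification, the comment and white space are gone, and the values are shortened:

```css
.banner{color:red;margin:10px}
```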
The impact of minification on CSS files tends to be fairly modest if compression is enabled on the server. Here are the results for a typical 4-KB CSS file (style1.css in folder Minify in the downloaded code bundle):
There are many tools that will minify a CSS file for you, including the following:
- Microsoft Ajax Minifier
This tool minifies JavaScript and CSS files.
- YUI Compressor
It is a standalone Java-based JavaScript and CSS minifier.
- Yahoo! UI Library: YUI Compressor for .NET
In addition to minifying JavaScript, this .NET port of the YUI Compressor Java project also allows you to minify CSS, like so:
using Yahoo.Yui.Compressor;
...
string compressedCSS =
CssCompressor.Compress (uncompressedCSS);
You can use this to create an HTTP Handler for CSS files, in the same way as you saw earlier for JavaScript files. See folder Minify in the downloaded code bundle for an example.
- CSSTidy
This tool is an open source CSS parser and optimizer. Its source code is available in PHP and C++.
Removing unused CSS selectors
Because CSS styles are separate from the pages where they are used, when bits of HTML are removed, the styles used with that HTML often remain in the CSS. After a while, you may wind up with a lot of unused styles, causing confusion and file bloat.
A good tool that helps you identify unused CSS selectors is the Firefox add-on Dust-Me Selectors. Download it at.
After you've installed the add-on, open your site in Firefox. Then, click on Tools | Dust-Me Selectors | Find unused selectors, or press Ctrl + Alt + F. This reads all CSS files used by the page and finds the unused selectors. You could also right-click on the pink broom at the bottom of the Firefox window.
Now, click on Tools | Dust-Me Selectors | View saved data to see the used and unused selectors for each CSS file. There, you also find a button to export the used and unused selectors to CSV files.
CSS files are often shared among multiple pages, so they may have selectors that are used on some pages but not all. To make sure you catch all used selectors, navigate through the pages of your site. Dust-Me Selectors will go through each page and move all the used selectors to the used list. That will greatly whittle down your unused list. After you've visited all your pages, consider each selector in the unused list for removal from your CSS files.
If your site has lots of pages, visiting each page would take too much time. However, Dust-Me Selectors can read a site map – click Tools | Dust-Me Selectors | Automation | Spider Sitemap. Site maps tell Google and other search engines where to find your pages, so they are good for SEO. Visit for more information.
Loading CSS without blocking rendering
Getting the browser to load your CSS without blocking rendering is not hard. Consider the following line of code:
<link rel="Stylesheet" type="text/css" href="css/style1.css" />
Replace it with JavaScript that creates the link element and inserts it into the DOM (folder LoadCssWithoutBlocking in downloaded code bundle):
<script type="text/javascript">
var scriptElem = document.createElement('link');
scriptElem.type = "text/css";
scriptElem.rel = "Stylesheet";
scriptElem.href = "css/style1.css";
document.getElementsByTagName('head')[0].appendChild(scriptElem);
</script>
When you run this page, you'll see that the page becomes visible to the visitor before the CSS is loaded, and then changes appearance when the CSS finishes loading. This may not be the sort of behavior you want, even if it means a page that renders sooner.
Find out more
Here are some more online resources:
- What ASP.NET Developers Should Know About JavaScript
- Function.apply and Function.call in JavaScript
- JavaScript and HTML DOM Reference
- Ecmascript reference
- JavaScript Kit - JavaScript Tutorials
- Object Oriented Programming in JavaScript
Summary
In this article we discussed loading JavaScript code on demand, and techniques specifically aimed at loading JavaScript without blocking page rendering. We also saw how to load ads from ad networks without allowing them to slow down rendering of your page, and improving the way your page loads CSS stylesheets, including minification and removing unused selectors.
Further resources on this subject:
- ASP.NET Site Performance: Reducing Long Wait Times [Article]
- Fixing Bottlenecks for Better Database Access in ASP.Net [Article]
- ASP.Net Site Performance: Improving JavaScript Loading [Article] | https://www.packtpub.com/books/content/aspnet-site-performance-reducing-page-load-time | CC-MAIN-2018-13 | refinedweb | 5,532 | 63.9 |
Creative coding in Python
Project description
p5
p5 is a Python library that provides high level drawing functionality to help you quickly create simulations and interactive art using Python. It combines the core ideas of Processing — learning to code in a visual context — with Python’s readability to make programming more accessible to beginners, educators, and artists.
Example
p5 programs are called “sketches” and are run as any other Python program. The sketch above, for instance, draws a circle at the mouse location that gets a random reddish color when the mouse is pressed and is white otherwise; the size of the circle is chosen randomly. The Python code for the sketch looks like:
from p5 import *

def setup():
    size(640, 360)
    no_stroke()
    background(204)

def draw():
    if mouse_is_pressed:
        fill(random_uniform(255), random_uniform(127), random_uniform(51), 127)
    else:
        fill(255, 15)
    circle_size = random_uniform(low=10, high=80)
    circle((mouse_x, mouse_y), circle_size)

def key_pressed(event):
    background(204)

run()
Installation
p5 requires Python 3 to run. Once you have the correct version of Python installed, you can run:
$ pip install numpy
$ pip install p5 --user
to install p5.
Features Roadmap
Our end goal is to create a Processing-like API for Python. However, instead of being a strict port of the original Processing API, we will also try to extend it and use Python’s goodness whenever we can.
For now, though, we plan to focus on the following features:
- Support most 2D drawing primitives and related utility functions from the Processing API (as of the latest release, this is almost done).
- Support other parts of the Processing API: images, fonts, etc.
- Port relevant tutorials and reference material from Processing’s documentation.
- Support live coding of sketches in the Python REPL (here’s a screencast from an earlier prototype).
License
p5 is licensed under the GPLv3. See LICENSE for more details. p5 also includes the following components from other open source projects:
- OpenGL shaders from the Processing project. Licensed under LGPL v2.1. See LICENSES/lgpl-2.1.txt for the full license text.
- Code from the Glumpy project. See LICENSES/glumpy.txt for the full license text.
All licenses for these external components are available in the LICENSES folder.
Programmatic Navigation in React using react-router
Aniket
I have seen my fair share of React tutorials, but every time they come across navigation using react-router, they only show the way using the Link component. As soon as someone starts working on their own project, one of the first problems they come across is how to route programmatically, which basically means routing by means other than clicking on a component wrapped inside <Link>.
This blog mainly aims to be a refuge for those people who come here looking for answers to this problem.
1. Using the <Redirect> Component
This method requires us to render <Redirect> component whenever we want to navigate to any other route. It already comes loaded in the react-router-dom library.
import { Redirect } from "react-router-dom";
The easiest way to use this method is by maintaining a redirect property inside the state of the component.
state = { redirect: null };

render() {
  if (this.state.redirect) {
    return <Redirect to={this.state.redirect} />;
  }
  return (
    // Your Code goes here
  );
}
Whenever you want to redirect to another path, you can simply change the state to re-render the component, thus rendering the <Redirect> component.
this.setState({ redirect: "/someRoute" });
Note
This is the recommended way to navigate other than the <Link> method.
Discussed in detail towards the end of the post.
The downside of this method is that we cannot redirect directly from inside something like a Redux action.
The next method that we'll discuss allows us to do so.
2. Using history.push()
Every component that is an immediate child of the <Route> component receives the history object as a prop. This is the same history library that React Router uses to keep the session history. We can thus use its properties to navigate to the required paths.
this.props.history.push("/first");
A common problem that we can encounter here is that components which are not immediate children of the <Route> component have no history prop. This can be easily solved using the withRouter function.
2.1. Using withRouter
withRouter is a function provided by the react-router-dom library that helps us access the history prop in components which are not immediate children of the <Route> component.
To import withRouter
import { withRouter } from "react-router-dom";
Now, to get the history prop inside our component, we need to wrap our component with withRouter while exporting it.
export default withRouter(yourComponent);
Now we can access the history prop same as above to do our required navigations.
3. Using createHistory
The second method we discussed above can cover most of the cases that we'll ever encounter while building a react app, so why this third method?
Every time we need to redirect from inside a Redux action, we have to pass the history prop to the action, unnecessarily increasing the number of arguments. This method can thus be used to get neater code.
In this method, we make our custom history instance which we can import in other files to redirect.
// Inside /utils/history.js
import createHistory from "history/createBrowserHistory";

export default createHistory();
As <BrowserRouter> uses its own history and does not accept an external history prop, we have to use <Router> instead.
import { Router } from "react-router-dom";
import history from "./utils/history";

function App() {
  return (
    <Router history={history}>
      {/* Your Routes go here */}
    </Router>
  );
}
After this, we can import this history instance in whichever file we want to redirect from.
import history from "./utils/history"; history.push("/somePath");
4. Using useHistory Hook
As of release 5.1.2, react-router ships with some new hooks that can help us access the state of the router.
For this topic, we only need to talk about the useHistory hook.
import { useHistory } from "react-router-dom";

function App() {
  let history = useHistory();
}
After this, we can use it the same way we were using it before by pushing the new route that we want to navigate to.
history.push('/someRoute')
NOTE
At its core, React is a Declarative approach to building UIs.
A declarative approach is one where we express the logic of a computation without describing its control flow, or without describing what's actually happening behind the scenes.
For this reason, the recommended way to navigate, other than <Link>, is the <Redirect> component.
There is no harm in using the other methods mentioned here, just that they don't exactly align with React's vision.
Repository
A fully working implementation of the above methods is available on my GitHub profile. Feel free to explore it if you want to see these methods actually working in a project.
projectescape/blogs-reference: a repository which contains the source complementing all the blogs I write. The code for this blog can be accessed there.
The article completely misses the hooks way (useRouter).
You should stick to the declarative way whenever you can. I think the article should endorse this to be complete.
I've updated the post, would really appreciate if you'd check it out!
Good job!
Thanks for the comment.
I'll try to update the article as soon as I can.
Would checking for a Redux state and redirect on state change/value be a good way?
I updated my comment as it was bs :) For some reason I had reducers in my mind all along. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/projectescape/programmatic-navigation-in-react-3p1l | CC-MAIN-2020-05 | refinedweb | 918 | 53.21 |
public class Order {
    public static void main(String[] args) {
        System.out.println(consecutive(5, 6, 8));
        System.out.println(consecutive(9, 7, 8));
        System.out.println(consecutive(7, 8, 9));
    }

    public static boolean consecutive(int x, int y, int z) {
        if (x + 1 == y && y + 1 == z) {
            return true;
        } else {
            return false;
        }
    }
}
I have it so that if the numbers are entered in order it reads true, but I can't get it to read true when the numbers are there but not arranged in the correct order. In other words, if you enter the numbers in the right order it reads true, but if all the same numbers are there and you don't enter them in order it reads false. I'm having trouble figuring out what I'm missing to make it work both ways: the first call should read false and the next two should be true. Any help is appreciated.
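Not an official solution, just one way to make the check order-independent: find the smallest and largest of the three values first and derive the middle one (the class and method names below are made up for illustration):

```java
public class OrderAnyOrder {

    // True if x, y and z can be rearranged into three consecutive integers.
    public static boolean consecutiveAnyOrder(int x, int y, int z) {
        int min = Math.min(x, Math.min(y, z));
        int max = Math.max(x, Math.max(y, z));
        int mid = x + y + z - min - max; // the remaining value
        return mid == min + 1 && max == mid + 1;
    }

    public static void main(String[] args) {
        System.out.println(consecutiveAnyOrder(5, 6, 8)); // false
        System.out.println(consecutiveAnyOrder(9, 7, 8)); // true
        System.out.println(consecutiveAnyOrder(7, 8, 9)); // true
    }
}
```

With duplicates (for example 4, 4, 5) the derived middle value fails the min + 1 check, so the method correctly returns false.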
Reconstructing Brain MRI Images Using Deep Learning (Convolutional Autoencoder)
You will use a 3T brain MRI dataset to train your network. To observe the effectiveness of your model, you will be testing it on noisy versions of the test images.
This tutorial will not be addressing the intricacies of medical imaging but will be focused on the deep learning side! Note: This tutorial will mostly cover the practical implementation of convolutional autoencoders, so if you are not yet familiar with convolutional neural networks (CNNs) and autoencoders, you might want to look at the CNN and Autoencoder tutorials. The best part about this tutorial is that you will be loading the 3D volumes as 2D images and feeding them into the model. In a nutshell, you will address the following topics in today's tutorial:
- In the beginning you will be briefed about Magnetic Resonance Imaging (MRI),
- Then you will learn about the brain MRI dataset: what kind of images it contains, how to import the modules, read the images, create an array of the images, preprocess the brain MRI images so they can be fed to the model, and finally explore the brain MRI images.
- In the implementation of the convolutional autoencoder, you will fit the preprocessed data into the model, visualize the training and validation loss plot, save the trained model and finally predict on the test set.
- Next, you'll test the robustness of your pre-trained model by adding noise to the test images and see how well the model performs quantitatively.
- Finally, you will test your predictions using a quantitative metric, the peak signal-to-noise ratio (PSNR), to measure the performance of your model.
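Since PSNR will be used as the evaluation metric later on, here is a minimal sketch of how it can be computed with NumPy (the function name and the use of 1.0 as the peak value for rescaled images are assumptions for illustration, not code from the tutorial):

```python
import numpy as np

def psnr(ground_truth, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio between two images on the same scale."""
    mse = np.mean((ground_truth - reconstruction) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

# Toy example: a 4 x 4 image with a single pixel changed by 0.4
a = np.zeros((4, 4))
b = a.copy()
b[0, 0] = 0.4
print(psnr(a, b))  # → 20.0
```

The higher the PSNR, the closer the reconstruction is to the ground truth.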
Brief Introduction on MR images
A variety of systems are used in medical imaging, ranging from open MRI units with a magnetic field strength of 0.3 Tesla (T) to extremity MRI systems with field strengths up to 1.0 T, and whole-body scanners with field strengths up to 3.0 T (in clinical use). Tesla is the unit used to measure the quantitative strength of the magnetic field in MR imaging. High-field MR scanners (7T, 11.5T) yield higher SNR (signal-to-noise ratio) even with smaller voxel (a 3-dimensional patch or grid) sizes and are thus preferred for more accurate diagnosis.
A smaller voxel size leads to a better resolution, which can in turn aid clinical diagnosis. However, the strength of magnetic field being used in MR scanner puts a lower bound on voxel size to maintain a good signal to noise ratio (SNR), in order to preserve the MR image details.
Despite the superior image quality of 7T and 11.5T, these scanners are rarely deployed in production due to their cost.
According to recent papers, the reported number of 3T scanners is ~20,000, compared to just ~40 7T scanners.
Understanding the Brain MRI 3T Dataset
The brain MRI dataset consists of 3D volumes, each volume having a total of 207 slices/images of the brain taken at different cross-sections. Each slice is of dimension 173 x 173. The images are single-channel grayscale images. There are in total 30 subjects, each subject being the MRI scan of one patient. The image format is not jpeg, png, etc. but rather the NIfTI format. You will see in a later section how to read NIfTI-format images.
The dataset consists of T1 modality MR images; T1 sequences are traditionally considered good for the evaluation of anatomic structures. The dataset you will be working with today consists of 3T brain MRIs.
The dataset is public and is available for download at this source.
Tip: if you want to learn how to implement an Multi-Layer Perceptron (MLP) for classification tasks with the MNIST dataset, check out this tutorial.
Note: Before you begin, please note that the model will be trained on a system with an Nvidia GeForce 1080 Ti GPU and a Xeon E5 processor with 32 GB RAM. If you are using a Jupyter Notebook, you will need to add three more lines of code where you specify the CUDA device order and CUDA visible devices, using a module called os.
In the code below, you basically set environment variables in the notebook using os.environ. It's good to do the following before initializing Keras to limit Keras backend TensorFlow to use first GPU. If the machine on which you train on has a GPU on 0, make sure to use 0 instead of 1. You can check that by running a simple command on your terminal: for example, nvidia-smi
import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="1" #model will be trained on GPU 1
Importing the modules
First, you import all the required modules like cv2, numpy, matplotlib and most importantly keras, since you'll be using that framework in today's tutorial!
In order to read the nifti format images, you also have to import a module called nibabel.
import os import cv2 from keras.layers import Input,Dense,Flatten,Dropout,merge,Reshape,Conv2D,MaxPooling2D,UpSampling2D,Conv2DTranspose from keras.layers.normalization import BatchNormalization from keras.models import Model,Sequential from keras.callbacks import ModelCheckpoint from keras.optimizers import Adadelta, RMSprop,SGD,Adam from keras import regularizers from keras import backend as K
Using TensorFlow backend.
import numpy as np import scipy.misc import numpy.random as rng from PIL import Image, ImageDraw, ImageFont from sklearn.utils import shuffle import nibabel as nib #reading MR images from sklearn.cross_validation import train_test_split import math import glob from matplotlib import pyplot as plt %matplotlib inline
/usr/local/lib/python3.5/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
You will use glob module which will return a list comprising of all the volumes in the folder that you specify!
ff = glob.glob('ground3T/*')
Let's print the first element of the list and also check the length of the list: which in our case should be 30.
ff[0]
'ground3T/181232.nii.gz'
len(ff)
30
Now you are all set to load the 3D volumes using nibabel. Note that when you load a Nifti format volume, Nibabel does not load the image array. It waits until you ask for the array data. The normal way to ask for the array data is to call the get_data() method.
Since you want 2D slices instead of 3D volumes, you will initialise a list; every time you read a volume, you will iterate over its slices and append each slice one by one to the list.
images = []
Let's also print the shape of one of the 3D volumes; the full volume is of shape 173 x 207 x 173 (x, y, z coordinates).
Note : You will be using only the middle 51 slices of the brain and not all the 207 slices. So, let's also see how to use only the center slices and load them.
for f in range(len(ff)): a = nib.load(ff[f]) a = a.get_data() a = a[:,78:129,:] for i in range(a.shape[1]): images.append((a[:,i,:])) print (a.shape)
(173, 51, 173)
Let's check the shape of a single slice.
a[:,0,:].shape
(173, 173)
Data Preprocessing
Since images is a list, you will use the numpy module to convert it into a numpy array.
images = np.asarray(images)
It's time to check the shape of the numpy array. The first dimension of the array should be 51 x 30 = 1530, and the remaining two dimensions will be 173 x 173.
images.shape
(1530, 173, 173)
The images of the dataset are indeed grayscale images with a dimension of 173 x 173, so before you feed the data into the model, it is very important to preprocess it. You'll first convert each 173 x 173 image into a matrix of size 173 x 173 x 1, which you can feed into the network:
images = images.reshape(-1, 173,173,1)
images.shape
(1530, 173, 173, 1)
Next, rescale the data using the min-max normalisation technique:
m = np.max(images) mi = np.min(images)
m, mi
(3599.0959, -341.83853)
images = (images - mi) / (m - mi)
Let's verify the minimum and maximum value of the data which should be 0.0 and 1.0 after rescaling it!
np.min(images), np.max(images)
(0.0, 1.0)
This is an important step: here you will pad the images with zeros at the boundaries so that the dimensions of the images are even and it is easier to downsample them by two while passing them through the model. Let's add zeros in three rows and three columns to make the dimension 176 x 176.
temp = np.zeros([1530,176,176,1])
temp[:,3:,3:,:] = images
images = temp
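To see why the padded size of 176 (rather than the original 173) is convenient, note that each 2 x 2 max-pooling layer halves the spatial dimensions, and 176 halves cleanly twice. The little helper below is purely illustrative and not part of the tutorial's code:

```python
def pooled_sizes(n, num_pools):
    """Spatial sizes after repeatedly applying 2 x 2 max-pooling."""
    sizes = [n]
    for _ in range(num_pools):
        n = n / 2
        sizes.append(n)
    return sizes

print(pooled_sizes(176, 2))  # → [176, 88.0, 44.0], whole numbers at every stage
print(pooled_sizes(173, 2))  # → [173, 86.5, 43.25], fractional sizes
```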
After all of this, it's important to partition the data. In order for your model to generalize well, you split the data into two parts: a training and a validation set. You will train your model on 80% of the data and validate it on 20% of the remaining training data.
This will also help you reduce the chances of overfitting, as you will be validating your model on data it has not seen during the training phase.
You can use the train_test_split module of scikit-learn to divide the data properly:
from sklearn.model_selection import train_test_split train_X,valid_X,train_ground,valid_ground = train_test_split(images, images, test_size=0.2, random_state=13)
Note that for this task, you don't need training and testing labels. That's why you will pass the training images twice. Your training images will both act as the input as well as the ground truth similar to the labels you have in classification task.
Data Exploration
Let's now analyze what the images in the dataset look like, and also check the dimension of the images once again, since you have added three extra rows and columns, with the help of the NumPy array attribute .shape:
# Shapes of training set print("Dataset (images) shape: {shape}".format(shape=images.shape))
Dataset (images) shape: (1530, 176, 176, 1)
From the above output, you can see that the data has a shape of 1530 x 176 x 176 x 1, since there are 1530 samples, each a 176 x 176 x 1 dimensional matrix.
Now, let's take a look at a couple of the training and validation images in your dataset:
plt.figure(figsize=[5,5]) # Display the first image in training data plt.subplot(121) curr_img = np.reshape(train_X[0], (176,176)) plt.imshow(curr_img, cmap='gray') # Display the first image in testing data plt.subplot(122) curr_img = np.reshape(valid_X[0], (176,176)) plt.imshow(curr_img, cmap='gray')
<matplotlib.image.AxesImage at 0x7ff67d5d59b0>
The output of above two plots are from the training and validation set. You can see that both the training and validation images are different, it will be interesting to see if the convolutional autoencoder is able to learn the features and is reconstructing these images properly.
Now you are all set to define the network and feed the data into the network. So without any further ado, let's jump to the next step!
The Convolutional Autoencoder!
The images are of size 176 x 176 x 1 or a 30976-dimensional vector. You convert the image matrix to an array, rescale it between 0 and 1, reshape it so that it's of size 176 x 176 x 1, and feed this as an input to the network.
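As a minimal sketch of that preprocessing step (NumPy only; the raw array below is a made-up stand-in for the loaded MRI slices, not the actual data):

```python
import numpy as np

# Hypothetical stand-in for the loaded slices: 10 grayscale images of
# 176 x 176 with 8-bit intensities (the real data comes from NIfTI files).
raw = np.random.randint(0, 256, size=(10, 176, 176)).astype('float32')

images = raw / 255.0                      # rescale intensities to [0, 1]
images = images.reshape(-1, 176, 176, 1)  # add the single channel axis

print(images.shape)  # (10, 176, 176, 1)
```

The same two operations, applied to the full stack of 1530 slices, produce the (1530, 176, 176, 1) array used throughout this tutorial.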
Also, you will use a batch size of 128; using a higher batch size of 256 or 512 is also preferable, depending on the system you train your model on. The batch size contributes heavily in determining the learning parameters and affects the prediction accuracy. You will train your network for 300 epochs.
batch_size = 128
epochs = 300
inChannel = 1
x, y = 176, 176
input_img = Input(shape=(x, y, inChannel))
As you may well know by now, the autoencoder is divided into two parts: there's an encoder and a decoder.
Encoder: it has 3 convolution blocks, each of which has a convolution layer followed by a batch normalization layer. A max-pooling layer is used after the first and second convolution blocks.
- The first convolution block will have 32 filters of size 3 x 3, followed by a downsampling (max-pooling) layer,
- The second block will have 64 filters of size 3 x 3, followed by another downsampling layer,
- The final block of encoder will have 128 filters of size 3 x 3.
Decoder: it has 2 convolution blocks, each of which has a convolution layer followed by a batch normalization layer. An upsampling layer is used after the first and second convolution blocks.
- The first block will have 128 filters of size 3 x 3, followed by an upsampling layer,
- The second block will have 64 filters of size 3 x 3 followed by another upsampling layer,
- The final layer of the decoder will have 1 filter of size 3 x 3, which will reconstruct the input with a single channel.
The max-pooling layer downsamples its input by a factor of two each time it is used, while the upsampling layer upsamples its input by a factor of two each time it is used.
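To make the factor-of-two bookkeeping concrete, here is a small NumPy sketch (independent of Keras; the helper names are mine, not a library API) of what a 2 x 2 max-pooling and a 2x nearest-neighbor upsampling do to an array:

```python
import numpy as np

def max_pool_2x(x):
    """Downsample an (H, W) array by 2 using 2 x 2 max-pooling (H, W even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample_2x(x):
    """Upsample an (H, W) array by 2 using nearest-neighbor repetition."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

a = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x(a)         # each 2 x 2 block collapses to its maximum
restored = upsample_2x(pooled)  # each value is repeated into a 2 x 2 block

print(pooled.shape, restored.shape)  # (2, 2) (4, 4)
print(pooled[0, 0])                  # 5.0, the max of the top-left block {0, 1, 4, 5}
```

In the model below, the same bookkeeping applies per channel: 176 -> 88 -> 44 through the encoder, then 44 -> 88 -> 176 back up through the decoder.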
Note: the number of filters, the filter size, the number of layers, and the number of epochs you train your model for are all hyperparameters, and should be decided based on your own intuition. You are free to try new experiments by tweaking these hyperparameters and measuring the performance of your model. That is how you will slowly learn the art of deep learning!
def autoencoder(input_img):
    # encoder
    # input: 176 x 176 x 1 (wide and thin)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)  # 176 x 176 x 32
    conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)  # 88 x 88 x 32
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)  # 88 x 88 x 64
    conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)  # 44 x 44 x 64
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)  # 44 x 44 x 128 (small and thick)
    conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)

    # decoder
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv3)  # 44 x 44 x 64
    conv4 = BatchNormalization()(conv4)
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv4)
    conv4 = BatchNormalization()(conv4)
    up1 = UpSampling2D((2, 2))(conv4)  # 88 x 88 x 64
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(up1)  # 88 x 88 x 32
    conv5 = BatchNormalization()(conv5)
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv5)
    conv5 = BatchNormalization()(conv5)
    up2 = UpSampling2D((2, 2))(conv5)  # 176 x 176 x 32
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(up2)  # 176 x 176 x 1
    return decoded
After the model is created, you have to compile it, using RMSProp as the optimizer.
Note that you also have to specify the loss type via the loss argument. In this case, that's the mean squared error, since the loss after every batch will be computed between the batch of predicted outputs and the ground truth, pixel by pixel, using mean squared error:
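What "mean squared error pixel by pixel" amounts to can be sketched in plain NumPy (the tiny 2 x 2 arrays here are hypothetical stand-ins for a predicted image and its ground truth):

```python
import numpy as np

truth = np.array([[0.0, 1.0], [1.0, 0.0]])  # ground-truth "image"
pred  = np.array([[0.1, 0.9], [0.8, 0.2]])  # model output

# Squared difference at every pixel, averaged over all pixels:
mse = np.mean((truth - pred) ** 2)
print(round(float(mse), 6))  # 0.025, i.e. (0.01 + 0.01 + 0.04 + 0.04) / 4
```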
autoencoder = Model(input_img, autoencoder(input_img))
autoencoder.compile(loss='mean_squared_error', optimizer=RMSprop())
Let's visualize the layers that you created in the above step by using the summary function. This will show the number of parameters (weights and biases) in each layer, as well as the total number of parameters in your model.
autoencoder.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 176, 176, 1)       0
_________________________________________________________________
conv2d_18 (Conv2D)           (None, 176, 176, 32)      320
_________________________________________________________________
batch_normalization_11 (Batc (None, 176, 176, 32)      128
_________________________________________________________________
conv2d_19 (Conv2D)           (None, 176, 176, 32)      9248
_________________________________________________________________
batch_normalization_12 (Batc (None, 176, 176, 32)      128
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 88, 88, 32)        0
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 88, 88, 64)        18496
_________________________________________________________________
batch_normalization_13 (Batc (None, 88, 88, 64)        256
_________________________________________________________________
conv2d_21 (Conv2D)           (None, 88, 88, 64)        36928
_________________________________________________________________
batch_normalization_14 (Batc (None, 88, 88, 64)        256
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 44, 44, 64)        0
_________________________________________________________________
conv2d_22 (Conv2D)           (None, 44, 44, 128)       73856
_________________________________________________________________
batch_normalization_15 (Batc (None, 44, 44, 128)       512
_________________________________________________________________
conv2d_23 (Conv2D)           (None, 44, 44, 128)       147584
_________________________________________________________________
batch_normalization_16 (Batc (None, 44, 44, 128)       512
_________________________________________________________________
conv2d_24 (Conv2D)           (None, 44, 44, 64)        73792
_________________________________________________________________
batch_normalization_17 (Batc (None, 44, 44, 64)        256
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 44, 44, 64)        36928
_________________________________________________________________
batch_normalization_18 (Batc (None, 44, 44, 64)        256
_________________________________________________________________
up_sampling2d_5 (UpSampling2 (None, 88, 88, 64)        0
_________________________________________________________________
conv2d_26 (Conv2D)           (None, 88, 88, 32)        18464
_________________________________________________________________
batch_normalization_19 (Batc (None, 88, 88, 32)        128
_________________________________________________________________
conv2d_27 (Conv2D)           (None, 88, 88, 32)        9248
_________________________________________________________________
batch_normalization_20 (Batc (None, 88, 88, 32)        128
_________________________________________________________________
up_sampling2d_6 (UpSampling2 (None, 176, 176, 32)      0
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 176, 176, 1)       289
=================================================================
Total params: 427,713
Trainable params: 426,433
Non-trainable params: 1,280
_________________________________________________________________
It's finally time to train the model with Keras' fit() function! The model trains for 300 epochs. The fit() function will return a history object; by storing the result of this function in autoencoder_train, you can use it later to plot the loss curves for training and validation, which will help you analyze your model's performance visually.
Train the model
autoencoder_train = autoencoder.fit(train_X, train_ground,
                                    batch_size=batch_size,
                                    epochs=epochs,
                                    verbose=1,
                                    validation_data=(valid_X, valid_ground))
Train on 1224 samples, validate on 306 samples
Epoch 1/300
1224/1224 [==============================] - 7s - loss: 0.1201 - val_loss: 0.0838
Epoch 2/300
1224/1224 [==============================] - 7s - loss: 0.0492 - val_loss: 0.0534
...
Epoch 299/300
1224/1224 [==============================] - 7s - loss: 1.3101e-04 - val_loss: 6.1086e-04
Epoch 300/300
1224/1224 [==============================] - 7s - loss: 1.0711e-04 - val_loss: 3.9641e-04
Finally! You trained the model on the brain MRI dataset for 300 epochs. Now, let's plot the loss curves for the training and validation data to visualize the model's performance.
loss = autoencoder_train.history['loss']
val_loss = autoencoder_train.history['val_loss']
epochs = range(300)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Finally, you can see that the validation loss and the training loss are in sync. It shows that your model is not overfitting: the validation loss is decreasing, not increasing, and there is hardly any gap between training and validation loss throughout the training phase.
Therefore, you can say that your model's generalization capability is good.
Finally, it's time to reconstruct images using the predict() function of Keras and see how well your model is able to reconstruct data it has not been trained on.
Save the Model
Let's now save the trained model. This is an important step when you are working with deep learning, since the weights are the heart of the solution to the problem you are tackling!
You can load the saved weights back into the same model at any time and train it from where your training stopped. For example: if the above model is trained again after loading the weights, parameters such as the weights, biases and the loss function will not start from the beginning, and it will no longer be a fresh training run.
With just one line of code each, you can save the weights and load them back into the model.
autoencoder.save_weights('autoencoder_mri.h5')
autoencoder = Model(input_img, autoencoder(input_img))
autoencoder.load_weights('autoencoder_mri.h5')
Predicting on Validation Data
Since you do not have test data here, let's use the validation data for predicting with the model that you just trained.
You will run the trained model on the remaining 306 validation images and plot a few of the reconstructed images to visualize how well your model is able to reconstruct the validation images.
pred = autoencoder.predict(valid_X)
plt.figure(figsize=(20, 4))
print("Test Images")
for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.imshow(valid_ground[i, ..., 0], cmap='gray')
plt.show()

plt.figure(figsize=(20, 4))
print("Reconstruction of Test Images")
for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.imshow(pred[i, ..., 0], cmap='gray')
plt.show()
Test Images
Reconstruction of Test Images
From the above figures, you can observe that your model did a fantastic job of reconstructing the validation images. At least qualitatively, the originals and the reconstructed images look almost identical.
Maybe it could do a slightly better job of reconstructing some of the local details present in the original 3T images.
Predicting on Noisy 3T images
First, let's add some noise to the validation images, with zero mean and a standard deviation of 0.03.
a, b, c, d = np.shape(valid_X)
mean = 0
sigma = 0.03
gauss = np.random.normal(mean, sigma, (a, b, c, d))
noisy_images = valid_X + gauss
It's time to run predictions on the noisy validation images. Let's see how well your model does on noisy images, despite not being trained on them.
pred_noisy = autoencoder.predict(noisy_images)
plt.figure(figsize=(20, 4))
print("Noisy Test Images")
for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.imshow(noisy_images[i, ..., 0], cmap='gray')
plt.show()

plt.figure(figsize=(20, 4))
print("Reconstruction of Noisy Test Images")
for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.imshow(pred_noisy[i, ..., 0], cmap='gray')
plt.show()
Noisy Test Images
Reconstruction of Noisy Test Images
It seems the model did a pretty good job, doesn't it? After all, the reconstructed images look better than the noisy images. Moreover, the model never actually saw noisy images while being trained.
Quantitative Metric: Peak Signal-to-Noise Ratio (PSNR)
The PSNR metric computes the peak signal-to-noise ratio, in decibels (dB), between two images. This ratio is often used as a quality measure between an original and a reconstructed image. The higher the PSNR, the better the quality of the reconstructed image.
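As a reusable sketch (the function name is mine, not a library API), PSNR for images scaled to [0, 1] is just 20 * log10(MAX / sqrt(MSE)):

```python
import math
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 20 * math.log10(max_val / math.sqrt(mse))

clean = np.zeros((4, 4))
noisy = clean + 0.1          # a constant error of 0.1 at every pixel
print(round(psnr(clean, noisy), 2))  # 20.0 -- MSE = 0.01, so 20 * log10(1 / 0.1)
```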
So, first let's compute the PSNR between the validation images and their reconstructions.
valid_pred = autoencoder.predict(valid_X)
mse = np.mean((valid_X - valid_pred) ** 2)
psnr = 20 * math.log10(1.0 / math.sqrt(mse))
print('PSNR of reconstructed validation images: {psnr}dB'.format(psnr=np.round(psnr,2)))
PSNR of reconstructed validation images: 34.02dB
Next, let us compute the PSNR between the original validation images and the reconstructions of the noisy images:
noisy_pred = autoencoder.predict(noisy_images)
mse = np.mean((valid_X - noisy_pred) ** 2)
psnr_noisy = 20 * math.log10(1.0 / math.sqrt(mse))
print('PSNR of reconstructed noisy validation images: {psnr}dB'.format(psnr=np.round(psnr_noisy, 2)))

PSNR of reconstructed noisy validation images: 32.48dB
Well, it looks like, quantitatively, there is merely a difference of 1.54 dB between the PSNR of the noise-free reconstructions and the reconstructions from noisy inputs. Moreover, you have not trained your model to handle noise. Isn't this amazing?
Go Further!
This tutorial was a good start towards understanding how to read MRI NIfTI-format images, and how to analyze, preprocess and feed them into a model, using a brain MRI 3T dataset. It also showed you one of the nice practical applications of autoencoders. If you were able to follow along easily, or even with a little more effort, well done!
You can play around with the architecture and try improving the predictions both quantitatively and qualitatively. Maybe by adding more layers, or maybe by training for longer? Try these combinations and see if that helps.
There is still a lot to cover, so why not take DataCamp’s Deep Learning in Python course, if you haven’t done so already? You will learn from the basics and slowly move towards mastering the deep learning domain. It will undoubtedly be an indispensable resource when you’re learning how to work with convolutional neural networks in Python, how to detect faces and objects, and more.
I have a list of questions regarding pointers and strings. I was doing K&R section 5.8. In that section a function month_name is given, which returns the name of the month when the number of the month is passed as an argument. Here is the code I made from the function.
Code:
#include <stdio.h>

char *month_name(int);

int main(void)
{
    int n;
    printf("Enter");
    scanf("%d", &n);
    printf("%s", month_name(n));
}

char *month_name(int n)
{
    char *name[] = {"illegal month", "jan", "feb", "march", "april", "may", "june",
                    "july", "august", "september", "oct", "nov", "dec"};
    return (n < 1 || n > 12) ? name[0] : name[n];
}

Now my questions are:
1. Is name a one-dimensional or two-dimensional array? This is what K&R have written:

Quote:
Since the size of the array name is not specified, the compiler counts the initializers and fills in the correct number.

What will the compiler fill in: 13 or something else?
2. Another quote from K&R in the same section:

Quote:
This is an ideal application for an internal static array.

Why is this an ideal application of static? I did the program without static and it runs fine.
3. Although the program didn't give any errors, it gave a warning:

Code:
Warning 1 warning C4996: 'scanf' was declared deprecated

What's wrong with the compiler?
4. I'm fed up of asking this question and now feel awkward to ask it, but what is the difference between

char *s

and

char s[10]

I'm not able to visualize the storage of s in the former. I've checked it on many sites but still I'm not able to get it.
Thanks for reading this huge post. | http://cboard.cprogramming.com/c-programming/116496-list-questions.html | CC-MAIN-2015-27 | refinedweb | 320 | 67.04 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- METHODS
- ARRAY METHODS
- ATTRIBUTES
- SEE ALSO
- AUTHOR
NAME
HackaMol::Roles::AtomGroupRole - Role for a group of atoms
VERSION
version 0.051
SYNOPSIS
use HackaMol::AtomGroup;
use HackaMol::Atom;

# ... construct $atom1, $atom2, $atom3 here (elided) ...

$atom1->push_charges(-0.834);
$_->push_charges(0.417) foreach ($atom1, $atom2);

# instance of class that consumes the AtomGroupRole
my $group = HackaMol::AtomGroup->new(atoms => [$atom1, $atom2, $atom3]);

print $group->count_atoms . "\n";    # 3
print $group->total_charge . "\n";   # 0
print $group->total_mass . "\n";

my @atoms = $group->all_atoms;
print $group->dipole_moment . "\n";

$group->do_forall('push_charges', 0);
$group->do_forall('push_coords', $group->COM);
$group->gt(1); # same as $group->do_forall('t',1);
print $group->dipole_moment . "\n";
print $group->bin_atoms_name . "\n";
print $group->unique_atoms . "\n";

$group->translate(V(10,0,0));
$group->rotate( V(1,0,0), 180, V(0,0,0));

$group->print_xyz;  # STDOUT
my $fh = $group->print_xyz("hackagroup.xyz");  # returns filehandle
$group->print_xyz($fh) foreach (1 .. 9);       # boring VMD movie with 10 frames
DESCRIPTION
The HackaMol AtomGroupRole class provides core methods and attributes for consuming classes that use groups of atoms. The original implementation of this role relied heavily on attributes, builders, and clearers. Such an approach naturally gives fast lookup tables, but the ability to change atoms and coordinates made the role to difficult. Such an approach may be pursued again (without changing the API) in the future after the API has matured. The AtomGroupRole calculates all values for atoms using their own t attributes.
METHODS
do_for_all
pass method and arguments down to atoms in group
$group->do_for_all('t',1); #sets t to 1 for all atoms
gt
integer argument. wraps do_for_all for setting time within group
$group->gt(1);
dipole
no arguments. return dipole calculated from charges and coordinates as Math::Vector::Real object
COM
no arguments. return center of mass calculated from masses and coordinates as Math::Vector::Real object
COZ
no arguments. return center of nuclear charge calculated from Zs and coordinates as Math::Vector::Real object
total_charge
no arguments. return sum of atom charges.
total_mass
no arguments. return sum of atom masses.
total_Z
no arguments. return sum of Zs.
dipole_moment
no arguments. returns the norm of the dipole in debye (assuming charges in electrons, AKMA)
bin_atoms
Called with no arguments. Returns a hash with a count for each unique atom symbol.
count_unique_atoms
no arguments. returns the number of unique atoms
bin_atoms_name
no arguments. returns a string summary of the atoms in the group sorted by decreasing atomic number. For example; OH2 for water or O2H2 for peroxide.
tmax
return (count_coords-1) if > 0; return 0 otherwise; croaks if not all atoms share the same tmax.
translate
requires Math::Vector::Real vector argument. Optional argument: integer tf.
Translates all atoms in group by the MVR vector. Pass tf to the translate method to store new coordinates in tf rather than atom->t.
rotate
requires Math::Vector::Real vector, an angle (in degrees), and a MVR vector origin as arguments. Optional argument: integer tf.
Rotates all atoms in the group around the MVR vector. Pass tf to the rotate method to store new coordinates in tf rather than atom->t.
print_xyz
optional argument: filename or filehandle. with no argument, prints xyz formatted output to STDOUT. pass a filename and an xyz file with that name will be written or overwritten (with warning). pass filehandle for continuous writing to an open filehandle.
print_xyz_ts
argument: array_ref containing the values of t to be used for printing. optional argument: filename or filehandle for writing out to file. For example,
$mol->print_xyz_ts([0 .. 3, 8, 4], 'fun.xyz');
will write the coordinates for all group atoms at t=0,1,2,3,8,4 to a file, in that order.
print_pdb
same as print_xyz, but for pdb formatted output
print_pdb_ts
same as print_xyz_ts, but for pdb formatted output
bin_this
argument: Str , return hash_ref of binned $self->Str.
$hash_ref{$_}++ foreach ( map {$_->$Str} $self->all_atoms );
what_time
returns the current setting of t by checking against all members of group.
fix_serial
argument, optional: Int, offset for resetting the serial number of atoms. Returns the offset.
$group->fix_serial(0); # serial starts from zero
centered_vector
calculates least squares fitted vector for the AtomGroup. Returns normalized Math::Vector::Real object with origin V(0,0,0).
$mvr = $group->centered_vector; # unit vector origin 0,0,0 # place two mercury atoms along the vector to visualize the fit my $hg_1 = HackaMol::Atom->new(Z => 80, coords => [$group->center]); my $hg_2 = HackaMol::Atom->new(Z => 80, coords => [$group->center + $mvr]);
ARRAY METHODS
push_atoms, get_atoms, set_atoms, all_atoms, count_atoms, clear_atoms
ARRAY traits for the atoms attribute, respectively: push, get, set, elements, count, clear
push_atoms, unshift_atoms
push atom on to the end of the atoms array or unshift_atoms on to the front of the array
$group->push_atoms($atom1, $atom2, @otheratoms); $group->unshift_atoms($atom1, $atom2, @otheratoms); # maybe in reverse
all_atoms
returns array of all elements in atoms array
print $_->symbol, "\n" foreach $group->all_atoms;
get_atoms
return element by index from atoms array
print $group->get_atoms(1); # returns $atom2 from above
set_atoms
set atoms array by index
$group->set_atoms(1, $atom1);
count_atoms
return number of atoms in group
print $group->count_atoms;
clear_atoms
clears atoms array
ATTRIBUTES
atoms
isa ArrayRef[Atom] that is lazy with public ARRAY traits described in ARRAY_METHODS
qcat_print
isa Bool that has a lazy default value of 0. if qcat_print, print all atoms coordinates in one go (no model breaks). | http://web-stage.metacpan.org/pod/HackaMol::Roles::AtomGroupRole | CC-MAIN-2019-43 | refinedweb | 889 | 58.18 |
[Part 1, Part 2, Part 3, Part 4, Part 5]
Hello everyone,
With the recent release of our cloud-powered controls for Windows Phone 8, now is a great time to begin using them. These components can help you create an application that stores its data in the cloud, where it can be easily accessed by multiple users on multiple devices. In this blog post, I will walk you through several of the cloud-powered controls currently available and teach you how to use them in your app. We’ll create an app called “AppointmentTracker”, which allows users to store information about upcoming appointments and so forth.
Before we can get started, make sure that you have the latest bits installed. The Q3 2013 release contains both our traditional components and our cloud components.
After you have installed our controls, you will find several new templates. The one we are interested in working with today is called “Telerik Cloud Powered Windows Phone App” as shown in Figure 1.
Figure 1: The new cloud template installed with RadControls for Windows Phone 8.
The “Telerik Cloud Powered Windows Phone App” template is where we are going to begin, so select it and give it a name and press OK.
After doing so, you will be presented with a Cloud Services Configuration dialog, where you will need to enter your username and password and accept the terms and conditions. From this screen, you can either select an existing project or create a new one. Let's create a new one, as shown in Figure 2. This information will carry over to our Everlive account, which we will take a look at shortly.
Figure 2: Creating a new project with our Cloud Controls.
From here, we can simply click the “Create” button, then finish and our project will be loaded in Visual Studio 2012 or 2013.
Launch the Configuration Manager and change the "Active Solution Platform" to x86 instead of ARM, so that we can test this on an emulator instead of a device, as shown in Figure 3.
Figure 3: Changing the Configuration Manager.
Now that the app is created and connected to the cloud service, let's add three Windows Phone portrait pages. First add a folder called "Views", then add three pages, Login.xaml, Registration.xaml and Appointment.xaml, inside this folder. You can go ahead and change the TitlePanel in Registration.xaml and Appointment.xaml to something more appropriate than the default text.
Next, you can delete the MainPage.xaml file and edit WMAppManifest.xml to set the start navigation page to /Views/Login.xaml.
We are going to begin by editing our Login.xaml page, replacing the ContentPanel with the following XAML.
<!--ContentPanel - place additional content here-->
<!-- attribute values marked "..." below were elided in the original post
     (page URIs, client IDs and secrets) -->
<Grid x:Name="ContentPanel" ...>
    <telerikCloudControls:RadCloudLogin x:Name="..." ...>
        <telerikCloudControls:RadCloudLogin.LoginProviders>
            <telerikCloudControls:FacebookLoginProvider ... />
            <telerikCloudControls:GoogleLoginProvider ... />
            <telerikCloudControls:LiveIDLoginProvider ... />
        </telerikCloudControls:RadCloudLogin.LoginProviders>
    </telerikCloudControls:RadCloudLogin>
</Grid>
While we are here, let’s map the telerikCloudControls prefix to the Telerik.Windows.Controls.Cloud namespace on all pages:
xmlns:telerikCloudControls="clr-namespace:Telerik.Windows.Controls.Cloud;assembly=Telerik.Windows.Controls.Cloud"
Let’s examine what we’ve done so far, we’ve used one cloud-control called RadCloudLogin and provided the path to two pages if the registration succeeds or if they need to register. After that we’ve used our new Social LoginProviders to make registration even simpler by adding in support for FaceBook, Google and LiveID. You will need to create an app on the social network in order to get the ClientID and ClientSecret. A page in our help documentation describes this in more detail. In my example, I’m using Facebook, so I headed to and created a new app as shown below. Now I have my ClientId and ClientSecret.
If we look at our designer, we should have the following screen as shown in Figure 4.
Figure 4: RadCloudLogin with the various social providers.
Please note that for Google and LiveID you will need to set up an app similar to the one we did for Facebook.
Let’s switch over to everlive.com and log into our AppointmentTracker instance and go to settings, then user authentication. Make sure you have a check mark in Facebook, Google and Windows Live as shown in Figure 5. This will give our app more flexibility in the future if we chose to add Google and the the Live ID providers.
Figure 5: Adding Everlive permissions to the various social providers.
If we run our app with a valid Client ID and Client Secret for Facebook, we will see the following screen after the user touches the Facebook provider, as shown in Figure 6.
Figure 6: Using Facebook instead of creating another username and password.
As you can see, we can now log in using Facebook, and after a successful login it will take us to our stubbed-out "Appointment.xaml" page, as shown in Figure 7.
Figure 7: Stubbed out Appointment.xaml page.
If we look at our Users table, which was automatically generated for us in Everlive, we will see that our user has been added, as shown in Figure 8.
Figure 8: The User Table in Everlive showing the user we just added and Facebook being a provider.
So far, we’ve started an app that uses Everlive as its backend storage mechanism, modified the user permissions to use social logins, created our first Facebook app and used our first cloud component control called RadCloudLogin. In the next part of the series, we will fill in the gaps, by adding in the Login.xaml page as well as begin to use other cloud-powered component such as RadCloudRegistration (in case they choose not to use a social provider) as well as tie in our RadCloudCalendar control. All of this is coming in part 2, so stay tuned!. | http://www.telerik.com/blogs/building-an-appointment-tracking-app-by-using-telerik-s-wp-cloud-components---part-1 | CC-MAIN-2015-35 | refinedweb | 977 | 64.3 |
I have been staring at JavaScript in Windows 8, and it just seems weird: that anonymous thing with the empty set of parentheses at the end of the file. Why is it anonymous? Is it a criminal on the lookout for the police?
Here is the code from one of the templates, called "Blank App" (I don't show the code above it):
(function () {
//Code removed for brevity sake
};
app.start();
})();
Why the empty parentheses? This makes the function "anonymous". Why is the function "anonymous"? Because it has no name! It's true. This also means that we are able to do some cool things with the function. For instance, we are able to control controls. Really, I didn't just repeat the word "control" twice: one was the verb, and the other was the noun for objects that the user or system can interact with. Sometimes we need to know the exact order in which controls are used.
This can be done using anonymous functions. More in later blogs.
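A minimal sketch (the variable names are mine, not from the template) of what those empty parentheses buy you: the function runs immediately, and everything declared inside it stays out of the global namespace:

```javascript
var counter = (function () {
    var count = 0;          // private: visible only inside the function
    return function () {    // only this returned closure escapes
        count += 1;
        return count;
    };
})();                       // the trailing () invokes the function right away

console.log(counter());     // 1
console.log(counter());     // 2
console.log(typeof count);  // "undefined" -- nothing leaked into global scope
```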
You do this so that your code inside the outer function doesn't pollute the global namespace when you declare functions and variables. This is not to make it "anonymous".
Emmm.. your sample code contains unbalanced curly brackets.
Btw, when you give the function a name and take it out, then replace the original part with the function name, it's really not that weird.
cheong00,
Oops, unbalanced are my curly brackets and so is my brain. Thank you for the feedback.
As to being weird, again, you are right. But when I think about it, all of software is weird. So JavaScript anonymous functions are part of the set of weird. See set theory in use!
Jesse,
Thank you for the comment, and you are right.
This is the mail archive of the cygwin mailing list for the Cygwin project.
While Andrew's tone is a bit strong/rude, I do agree. The keyboard should just work. PC keyboards have been for a very long time now.

One of the "obvious problems" though is that there is never any instant switch-over from "old days" to "modern times". It is a continuum. It is gradual. Looking backward from our "current distant future", many things look stupid, but they only ever get there gradually. The teletype surely died off gradually, not instantly. So how/when do you phase out support for it? How do you have new code interoperate with old code that is "sensitive" to teletypes? Code lives a very long time, of course. How do you know removing support for something won't break someone? Do you actively collect "telemetry" data as to the usage of everything? How close to zero is close enough? When do you go back and remove workarounds for bugs at other layers that eventually got fixed? Nobody is still using your code along with the other? Check versions and exit rudely? For "old" stuff? For anything you haven't tested? For anything you haven't tested recently?

There are pluses and minuses. Blindly running while ignoring the surroundings is bad. You can't test every combination. You must be willing to run on stuff you haven't tested.

Now, you know, an ironic thing here is that Windows is the one with a teletype compatibility that Unix folks/code don't like. As I understand... there was a control code for a typewriter to move the head (the carriage) to the start of the line, and another for advancing the paper one line. As well, moving the head took some time, so advancing the paper elegantly filled the gap. Therefore you get carriage return followed by line feed. Once you have electronic terminals, this separation is less useful, but not actually useless (I'll come back to this). So Unix folks arbitrarily chose line feed. Apple arbitrarily chose carriage return (Apple II, pre-Mac OS X Mac). Windows via MS-DOS, presumably via CP/M, didn't change.
A byte is wasted for every line, darn. Now, really, carriage return without line feed can be useful. It is a way to implement "spinners" and other "fancy" UI involving overwriting text on the same line. I think line feed might also be useful on its own, to go down one line without moving left or right. Not sure. Like maybe it is an optimization used by "free form" text UIs such as vi, over a slow serial line?? I could be wrong here on the cr/lf story. Check Wikipedia... > That's the price of using stuff without understanding it. Oh man. Let he who has not sinned cast the first stone. Everyone uses stuff they don't understand all the time. I use my brain, lungs, cars, airplanes, compilers, linkers, kernels, interpreters, Cygwin, etc... I imagine I understand a lot, but... - Jay
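The carriage-return-without-line-feed trick described above can be sketched like this (a hypothetical helper of my own, not from the original mail; the glyph set is an arbitrary choice):

```cpp
#include <string>

// One frame of a text-mode spinner: emit the glyph, then a bare carriage
// return (no line feed), so the NEXT frame overwrites it on the same line.
std::string spinner_frame(int i) {
    const char glyphs[] = "|/-\\";            // four rotation glyphs (assumption)
    return std::string(1, glyphs[i % 4]) + '\r';
}
```

Writing `spinner_frame(i)` to a terminal in a loop redraws in place, which is exactly the CR-only behavior the post describes as still useful.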
sBibi
Members
Content count: 521
Community Reputation: 241 Neutral
About sBibi
- Rank: Advanced Member
- Quote:Could a clump of mass the size of the earth or the moon become a black hole? no, they aren't massive enough to collapse under their own gravity into a blackhole. but theoretically, with enough energy, any lump of matter could be forced into a blackhole... except that, like with the tiny artificial "blackholes", they won't be stable, and as soon as you stop artificially holding them together (assuming you can do so in the first place), they will just blow apart... (though I might be wrong here...) Quote:What would you need to know to calculate a black hole's gravitational force? Probably mass, right? er... yes... like with any other body... Quote:Can you calculate mass from size of event horizon? you have the answer to this in the 7th post: "sr = 2.0 * G * mass / (c * c);" and you have the answers to the rest in the page dv linked:
- ah, nice.. thanks, I'll google on that =)
- mh, something else: about the singularity thing, I'm not sure a blackhole necessarily means a singularity. if you take a neutron star, what stops it from collapsing further is the strong nuclear force repelling neutrons from one another (which gives a density around 1x10^18 kg/m^3 if I'm not mistaken), and even then, when the star's mass becomes large enough that the neutron-repelling force can't stop it from collapsing, it collapses again to a quark star... (it seems "logical" (although here, what seems logical might lead to very wrong deductions, of course))... so, why systematically throw in a singularity? why couldn't there be blackholes with quark stars inside? (and how can we know there isn't yet another collapsing level below that?) if anyone could answer, or has any links answering these questions, or explaining why not, I'd be very interested! thanks :) EDIT: perhaps it has something to do with the fact that neutron stars can have quark cores, and that a full quark star would necessarily have a core collapsed one or more levels further, and that it is assumed there are no further steps, and no sub-particles below quarks, so the whole thing actually collapses to a singularity? Oo EDIT2: Quote. to give a more concrete example: if you consider the earth, and the sun, and you make the earth orbit so close to the sun that it almost touches it, the earth, although totally roasted, will stay together. if you want it to actually be ripped apart by tidal forces, you'll have to move it to an orbit that's actually below the sun's surface (so that's not possible), however, if the sun had a smaller radius, you could experience such effects... it's the same with blackholes. as they have a smaller radius, orbiting objects can be tidally disrupted much more often, as they may come closer. [Edited by - sBibi on October 31, 2006 12:58:41 AM]
- There are various types of black holes, among which: non-rotating (Schwarzschild) blackholes, and rotating (Kerr) blackholes. (and maybe some other types/subtypes, I don't know) the easiest one to simulate would be a non-rotating one. Quote:Also, does a black hole's size affect its gravitational pull in any meaningful way? actually, it is the black hole's mass (thus gravitational pull) that defines its size (more specifically, the size of its event horizon, more on that later...). you can define a black hole with various parameters in your game: mass, angular momentum, and electric charge. (although you probably don't care about the last one) for a non-rotating blackhole, if you know its mass, you can get its visible size by computing the Schwarzschild radius: sr = 2.0 * G * mass / (c * c); (where G is the gravitational constant, and c the speed of light in vacuum) basically, this radius is the distance from the blackhole below which the blackhole's gravitational pull is so strong that even particles travelling at the speed of light cannot escape. what this means graphically is that you "see" a pitch black sphere of radius == sr, and this is your blackhole. you can go even further with this, and have intermediate states. you can see a blackhole as a regular star/object whose event horizon has risen above the object's physical surface, so actually simulate a transition from a regular star that collapses into a blackhole, and render the transition accordingly. (although you would need raytracing to "accurately" render this, you might be able to fake it with regular rendering; a star that's in between star and blackhole would probably (just speculating, didn't check it on paper, and even then, I don't know enough about this stuff.. perhaps someone here will be able to give you more details) look like a black sphere with a small circular "window" on its center where you would still see the star's surface.)
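The formula quoted above translates directly into code; a minimal sketch (constants in SI units; the solar-mass figure in the note below is approximate):

```cpp
// Schwarzschild radius of a non-rotating black hole: sr = 2 * G * m / c^2
double schwarzschild_radius(double mass_kg) {
    const double G = 6.674e-11;      // gravitational constant, m^3 kg^-1 s^-2
    const double c = 299792458.0;    // speed of light in vacuum, m/s
    return 2.0 * G * mass_kg / (c * c);
}
```

For one solar mass (~1.989e30 kg) this gives roughly 2.95 km, which is why the pitch-black sphere you would render for a stellar-mass hole is only a few kilometres across.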
another interesting thing to do if you want to render blackholes would be to simulate the bending of light rays in the blackhole's vicinity. you could do that by rendering a cubemap from the blackhole's center, then render a large billboard centered on the blackhole, and for each pixel of the billboard, depending on its position and distance to the blackhole, somehow compute an indexing vector into your cubemap... you'll also probably want to have a look at the articles on Wikipedia, and see what they have on the NASA website... EDIT: ah.. crap... triple cross-post... sorry... :D
sBibi replied to Freakdesign's topic in Math and Physics
"dodekaeder" seems to be the German word for dodecahedron? if that's the case, and unless I misunderstood your post, the left image isn't based on a dodecahedron. It's a subdivided icosahedron. (you could rebuild it starting from a dodecahedron, but it would be much more painful than subdividing an icosahedron). and the faces aren't all exactly the same. they seem to be, but there are small differences/distortions. the icosahedron is actually the largest structure you can create in 3D that's made of identical equilateral triangles. (and that's 20 triangles) you can grab an icosahedron's coordinates from paul bourke's platonic solids page. you can then subdivide it to get the left image (that would actually be subdivided twice with the algorithm below), with something more or less like:

while (subdivisions--)
{
    for (f = 0; each triangle in the mesh; f++)
    {
        // grab original face's vertices
        v0 = cur_mesh.faces[f].vertex[0];
        v1 = cur_mesh.faces[f].vertex[1];
        v2 = cur_mesh.faces[f].vertex[2];

        // compute edge midpoints
        m0 = (v0 + v1) * 0.5f;
        m1 = (v1 + v2) * 0.5f;
        m2 = (v2 + v0) * 0.5f;

        //  v0   m0   v1
        //   x----x----x
        //    \  / \  /
        //     \/___\/
        //   m2 \   / m1
        //       \ /
        //        x
        //        v2

        // build new triangles
        new_mesh.add_face(v0, m0, m2);
        new_mesh.add_face(m2, m0, m1);
        new_mesh.add_face(m1, m0, v1);
        new_mesh.add_face(m1, v2, m2);
    }

    // final step, push the new vertices back onto the sphere's surface
    for (each output vertex)
        vertex = normalise(vertex) * sphere_radius;

    swap(cur_mesh, new_mesh);
}

there are many ways to subdivide this, that's probably one of the most straightforward ones, but clearly not the fastest.
you could also directly subdivide in one go without iterating, and not be limited to power-of-two subdivision counts (as with the above algorithm), although it would be a little bit more complex, as you would have to take the sphere's curvature into account to avoid distortions when projecting vertices back onto the sphere surface. you can also stripify the output triangles quite efficiently if you wish, the subdivided structure being quite tristrip-friendly, etc... [Edited by - sBibi on November 27, 2005 2:04:34 AM]
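One easy sanity check on an implementation of this (my own addition, not from the thread): each pass turns every triangle into four, adds one vertex per edge, and keeps the mesh closed, so the counts evolve predictably and the Euler characteristic v - e + f = 2 must survive every pass:

```cpp
// Per-pass count evolution for midpoint (1-to-4) triangle subdivision of a
// closed mesh: every edge gains a midpoint vertex, every edge splits in two,
// every face contributes three interior edges and becomes four triangles.
struct MeshCounts { long v, e, f; };

MeshCounts subdivide_counts(MeshCounts m) {
    return { m.v + m.e, 2 * m.e + 3 * m.f, 4 * m.f };
}
```

Starting from the icosahedron {12 verts, 30 edges, 20 faces}, one pass gives {42, 120, 80} and two passes give {162, 480, 320}; both keep v - e + f == 2, which is a cheap assertion to leave in debug builds.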
sBibi replied to Kest's topic in Math and Physics
if you are interested in knowing what maximum precision your floats will have at a certain distance, there is a very easy way to do it... something more or less like:

float get_precision(float val)
{
    int prev_val;
    // -1 instead of +1 because you won't go past val anyway, so the actual
    // maximum error you will ever get is the delta between val-1 and val
    prev_val = *((int*)&val) - 1;
    return (val - *((float*)&prev_val));
}

(this code doesn't handle values <= 0.0f, denormals, infinities, etc.. but you don't really need to, and it's just a few simple checks to add anyway if you do need it). it just takes advantage of the fact that, if you take a binary representation of a float, let's call it bin_float, bin_float + 1 will be the next representable float, and bin_float - 1 the previous one. just feed the function with the maximum extent you will ever have in one of your world's partitions, and it will give you the maximum error you will get in this space. (or you can also see this as the maximum resolution of a discrete coordinates grid; you can observe that pretty well in your screenshots: see how the vertices are placed on discrete layers on the Y axis (not the others, of course, as only the Y coordinate went very high), getting further apart as the coordinate grows bigger...) if you plan on partitioning the world in.. say... 2000.0f units sized cubes, and each cube's origin is centered, each coordinate will range from -1000 to 1000. so call the function with 1000, and that's it =)
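The same bit-pattern trick can be written without the pointer cast; this is my restatement of the function above (memcpy sidesteps the strict-aliasing pitfall of the original), not code from the thread:

```cpp
#include <cstdint>
#include <cstring>

// Spacing between val and the previous representable float: adjacent positive
// floats have adjacent integer bit patterns, so stepping the bit pattern down
// by one lands on the next float below val.
float float_spacing(float val) {
    std::uint32_t bits;
    std::memcpy(&bits, &val, sizeof bits);
    --bits;
    float prev;
    std::memcpy(&prev, &bits, sizeof prev);
    return val - prev;
}
```

For example, float_spacing(1000.0f) is 2^-14 (about 6.1e-5), while float_spacing(1000000.0f) is already 2^-4 = 0.0625, which matches the "discrete layers" visible in the screenshots the post mentions.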
sBibi replied to Richy2k's topic in Graphics and GPU Programming
appears to run fine here too. ran at an average of 78 fps. GF7800 GTX + 256Mb vram with 78.01 drivers, P4 3.4GHz & 2048Mb ddr2
sBibi replied to ehmdjii's topic in Graphics and GPU Programming
w00f> catmull-clark subdivision scheme can be applied to ngons, not only quads and tris, you can very well use it to subdivide a mesh made out of 100-gons if you like, (instead of 3 or 4-gons (respectively tris and quads)) Quote:am i correct here: number of new quads = 4 * quads + 3 * tris numver of new tris = 0 yes, and more generally: for each ngon { quad_count += ngon level } (where "ngon level" is the number of vertices in the ngon) Quote:and what would be the number of vertices in the new mesh? (open meshes are possible too) you will need more information than just the initial number of faces and vertices to do this, especially if your mesh isn't closed, or has duplicate vertices (if you want to handle whatever per-vertex data discontinuity, like hard edges on your surface (normal discontinuity), or texcoords discontinuities (almost unavoidable for most closed textured meshes)) the number of output vertices can be computed with: n_verts = vc + ec + fc where vc is your vertex count, ec your edge count, and fc your face count before subdivision. you keep each vertex, you subdivide each edge (each edge will generate a vertex at its midpoint), and you add a vertex at the centroid of each face.
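Those counting rules can be bundled into a tiny helper (my own sketch, covering only the closed-mesh case described above; open meshes need the extra boundary bookkeeping the post mentions):

```cpp
#include <vector>

// Counts after one Catmull-Clark step on a closed mesh: every original vertex
// survives, every edge and every face contributes exactly one new vertex, and
// each n-gon splits into n quads.
struct CCCounts { long verts, quads; };

CCCounts catmull_clark_counts(long v, long e, const std::vector<int>& ngon_sizes) {
    long quads = 0;
    for (int n : ngon_sizes) quads += n;                    // n quads per n-gon
    long f = static_cast<long>(ngon_sizes.size());
    return { v + e + f, quads };                            // n_verts = vc + ec + fc
}
```

A cube (8 verts, 12 edges, six quads) yields 26 vertices and 24 quads, and a tetrahedron (4, 6, four tris) yields 14 vertices and 12 quads, consistent with the "4 * quads + 3 * tris" rule from the question.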
sBibi replied to johnnyBravo's topic in Math and Physics
Quote:if it's for a game, you'd be better of to make it up. Then you can tweak the gameplay as you wish (add more mass since the ship is more powerful than expected, ect...). I second that.. and a ship's density isn't constant, you could have a very small ship that weighs the same or more than a larger ship, so how do you determine the average density? the same for all ships? quite restrictive... :/ by hand? then you'd better just set their weight by hand in the first place, and then you won't need their density... anyway.. that's just my opinion.. if you still want to compute their volume, you can use the method Dmytry described in the other thread. (link's somewhere up on this page) I use that method too and it works fine. just take the AABB height and lower Y bound, place the virtual projection plane at min.y - (height * 0.1) (the relative offset is here just to avoid some imprecisions if the meshes you want to measure vary widely in size), then just ignore the Y coordinate of the triangles, compute the signed 2D area (a 2D cross product), and do area * (v0->y + v1->y + v2->y) / 3.0f, and add these values for each triangle to an accumulator. in the end, you'll get the volume. he gave some visual explanations in the other thread if I recall correctly... might be clearer :) note that if your mesh isn't closed, you will have volume "leaking" in or out, depending if the "hole" is located on the top or bottom of the mesh... anyway, make sure the mesh is completely closed.
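A per-triangle version of that accumulation might look like this (my sketch of the described method; the names and the sign convention are mine, and the sum is only meaningful for a closed, consistently wound mesh):

```cpp
struct V3 { double x, y, z; };

// Signed contribution of one triangle to the prism-sum volume: the signed 2D
// area of its XZ projection (a 2D cross product of the edge vectors) times the
// mean height of its vertices above the projection plane y = y0.
double tri_volume_contribution(const V3& a, const V3& b, const V3& c, double y0) {
    double area = 0.5 * ((b.x - a.x) * (c.z - a.z) - (c.x - a.x) * (b.z - a.z));
    double mean_height = (a.y + b.y + c.y) / 3.0 - y0;
    return area * mean_height;
}
```

Summing this over every triangle of a closed mesh gives the enclosed volume (up to a global sign, depending on winding): upward- and downward-facing triangles get opposite-signed projected areas, so everything outside the solid cancels out.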
sBibi replied to Samurai Jack's topic in Graphics and GPU Programming
yes, this is handy to remove degenerate tris on cards that support it, however, you can exclude all ATI cards AFAIK (except if they implemented these NV extensions in their latest stuff, but I highly doubt it ;)) (although this isn't really a problem if you rebuild the strips at runtime according to the primitive restart extension...) and for dynamic data, unless perhaps on PCIE, I guess the gains from lower upload times due to fewer vertices in tristrips would be quite advantageous compared to potential rendering gains with tris... (and if both tristrips and tris were ordered cache friendly, perfs are pretty much the same anyway (didn't do extensive tests on this though.. the only difference except increased upload rates for tri lists (and it doesn't count if already in vid mem) would be the cost of evaluating degenerate tris, or restart indices (I have no idea of what this costs), so deciding on which one to use depends pretty much on the mesh, and on the usage (dynamic, static..)))
sBibi replied to Stompy9999's topic in Game Design and Theory
Quote:Very interactive storyline is Half Life 2. There are no cutscenes and you basically can choose to follow or not follow the storyline. ?? I guess this was ironic :) HL2 is an example of a game that is _completely_ linear, where you have no freedom at all (except freedom of movement), everything is scripted, and you can fool these scripts quite easily using some tricks like flying, bunnyhopping and grav-jumping... even if you are free to move wherever you want, the storyline is completely static and predefined, you can't do anything else but follow it (or die), you can't even kill these damn "friendly" NPCs that get in your legs all the time and sometimes get you killed because they're so dumb that they block you in some corner and you basically can't move... this is an example of a game as linear as it could be. you have absolutely no choice in the story, it always starts the same, continues the same, and ends the same, as long as you finish the game, no matter how you played...
sBibi replied to littlekid's topic in Graphics and GPU Programming
Quote:If you absolutely need per-vertex collision detection, you could always skin the mesh in software. Then, do your tests, and render with the vertices that you already skinned. However, I don't really recommend this, because it is quite slow. actually, to solve this I use collision meshes for nearby LODs, and simple OBBs/ellipsoids for further away collision LODs. you very probably don't _need_ per-triangle accurate collision detection on the rendered mesh, and collision meshes are almost always totally sufficient. (they are just very low res versions of the model that enclose the high resolution mesh) you just need to skin the collision mesh in software (you can get away with only 1 weight per vertex, or maybe two or three for a high resolution collision mesh), and software skinning with 1/2 weights is pretty damn fast (you can use SSE to speed things up even more...)
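One-weight software skinning as described is just a single matrix transform per vertex; a minimal sketch (the types and memory layout are my own assumptions, not from the post):

```cpp
struct Vec3 { float x, y, z; };

// Row-major 3x4 bone matrix: 3x3 rotation/scale plus a translation column.
struct Mat34 { float m[3][4]; };

// One-weight software skinning: each vertex is carried by exactly one bone,
// so skinning reduces to one matrix-point transform per vertex.
void skin_one_weight(const Vec3* in, const int* bone_index,
                     const Mat34* bones, Vec3* out, int count) {
    for (int i = 0; i < count; ++i) {
        const Mat34& b = bones[bone_index[i]];
        const Vec3& v = in[i];
        out[i].x = b.m[0][0]*v.x + b.m[0][1]*v.y + b.m[0][2]*v.z + b.m[0][3];
        out[i].y = b.m[1][0]*v.x + b.m[1][1]*v.y + b.m[1][2]*v.z + b.m[1][3];
        out[i].z = b.m[2][0]*v.x + b.m[2][1]*v.y + b.m[2][2]*v.z + b.m[2][3];
    }
}
```

The inner loop is branch-free and vectorizes easily, which is the kind of SSE speedup the post alludes to.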
sBibi replied to gazsux's topic in Graphics and GPU Programming
Quote:I think what Sages meant was to render your scene with color writes disabled but not with black. This means your z buffer is written as it should but you save a lot of bandwidth (half?). So: you don't even need to re-render your scene, just render the normal scene, grab the z-buffer, and render the glowing geometry using the z-buffer from the rendered scene. that's all, no need to re-render non-glowing geometry...
- AP> you've got a very valid point here... just need to look at the female dark elf ratio (and their lightweight clothes) in lineage to be convinced ;)
- aw, I knew I should've read everything before posting... my bad :D Vaipa> Quote:Well, a more in-depth discussion on this topic could be found around this site: that's the site I was talking about in my previous post... :) Quote:3) I supose its clear you'd have to tone down the frequency of deaths, its common for an inexperienced mmog player to die more than 5 times in an hour. Thats quite a strain on the bloodline, and if they have to re-equip, travel and gain skills again, seriously boring. you don't need to have permadeath on normal fights, only with specific monsters, or quests, or whatever, which would be pretty rare... Quote:Slightly off-topic, but what about the concept of no death at all. Why does an MMOG have to be about creatures in life-or-death battles? yes, I partially agree with this (not with the no death at all, rather with not necessarily death). what I mean is, in most (if not all?) mmo(rp)gs where there are potentially lethal fights, a fight ends up with the death of one or more players/npcs (mobs). it doesn't necessarily have to be that way. you can have an unconsciousness level, which is basically a life level beyond which you fall unconscious, but do not die (your life level progressively goes down, until you reach another life threshold, and you wake up from the pain. if you don't heal yourself quickly enough, you'll die anyway). a fight (assumed between two players/mobs) is considered won as soon as one of the two falls unconscious (note that you can still die directly without going through the unconscious state if the last damage taken was high enough). then the other player can either choose to finish his opponent (but he doesn't gain anything in doing it, except personal satisfaction), or walk away, leaving the defeated opponent alive (he still gets the experience and all, and can loot the body). frostburn> nice idea about the spirit fight to gain a body...
about this: Quote:Bodily remains, and items: The only logical thing I can think of is that the body drops where he dies, and all the items are lootable. what we do is quite similar, everything is lootable on a corpse, but as this can be _very_ annoying (even if you've got friends around to "protect" your body for the time it takes you to either be resurrected or for your spirit to come back from the cemetery and reacquire the body (a system similar to WoW on this cemetery/soul point of view, except a few differences)), and as players would really be pissed off, we also have an item protection spell. different skill levels on this spell allow you to improve the protection, and there also is a counter spell (much harder to learn/master however) that can deprotect items. basically, you have a protection quota that depends on your level in that spell, that you can freely spread across multiple objects, balancing protection time against protection intensity. for the same protection quota used on an item, you can either protect it very strongly, but for a short period of time, or protect it more weakly, but longer. (or protect one item very strongly for a long time, or lots of items less strongly for a shorter time, etc...) the protection is automatically activated when you die. if you take too long to come back and reacquire your body, and if time goes past the protection spell time, other players/mobs will be able to loot the protected item(s). and the protection intensity is used for the counter spell. if you protected a given item very strongly, only players very skilled with the unprotect spell will be able to loot that item. (it takes quite a long time to get a good unprotection skill, whereas it's pretty quick to get a good protection skill, so it balances things out) Quote:Without a body: When your body has been killed you exist in a non-corporeal form.
You can levitate some distance over the ground, go through some walls and objects (not the ground though), and cover ground at a much larger pace than you could with a body. You can't do anything in this form however, so you have to possess a being to interact with the world. but you can/could interact with other spirits like yourself at that moment, or access certain places inaccessible to material beings. basically be in a parallel spirit world. (have things that could harm/help your spirit, interact with them, etc...) Quote:Keep in mind that the dropping of attributes with age only applies to physical attributes. Mental stats like intelligence and, more importantly, wisdom will grow linearly through a character's lifetime. at some point, mental attributes will also begin to fall inevitably... Quote:The bloodline idea is a complete departure from role playing in the traditional sense. Just because death of a character is permanent doesn't mean that it is catastrophic. Quite the opposite. I'd say it's even necessary at some point... Quote:Intermarriage with other households could be encouraged for political or eugenic reasons, and some sort of aging dynamic would require the uber-characters to pass into legend and let others have a shot. or, as an uber character passes into legend, it can also enchant a magical item (like a statuette that allows you to call the spirit of the deceased hero to do/cast something linked to what that hero was so good at), the more powerful the character, the more powerful the enchantment. Quote:I believe that in order to create a more compelling game world that you HAVE to integrate player property ownership into the cities. there is also the technical problem of such a thing, and efficient management/display of dynamic cities in a large world can become really hard... (I assume 3D worlds, it'll be easier in 2D though...) Quote:I envisioned time passing quite a bit faster than that.
The problem with a system like this is that you have to have a balance. If time passes too slowly then you have to go to great lengths to limit death, as you need enough time for new family members to come of age. However, if time is too fast then it would get ridiculous if it took you 5 game years to get from point A to point B. This is perhaps the most difficult thing to determine in a system like this. about time, you can't just accelerate the time scale. distances are the same, and travel/battle times should be the same too. there isn't any necessity to have a ratio between days/months/years similar to what we have in the real world. or even to have days or weeks or months or years. in your mmo world, a normal lifetime for a human could be 25 cycles, each cycle being 83 days, each day being 6 hours (beyond the hour, imho it's better to keep a coherent time frame with ours, so the player doesn't get _completely_ confused and lost :)). that would make a lifetime last roughly 518 real days, so a bit less than a year and a half. considering that the character gets unplayable when it gets too old or is still too young, if its efficiently playable lifetime is only, in the above example, something like 19 cycles, this will make a playable lifetime of 394 days. now for seasons, if there is a winter and a summer, and in-between seasons, and if we assume a season cycle == 1 cycle, a player sees a full season cycle every 20 days or so... it's not too fast, but not too slow either :) Quote:As for the game evolving, that is a great idea. Instead of simply releasing "expansions" you could progress the game world.. excellent. yes, it's a good idea, but it also brings its whole load of technical problems... Quote:Remember also that not everyone will be on every day, or even every week. Maybe you could have automatic e-mail updates on the status of the bloodline.
A quick status report on each member, and a summary of new business, like formal offers of marriage arrangements or other communiques from other players. Running a bloodline might be more than just inhabiting a character and killing goblins with it; you'd be the head of a social unit. good idea :) Quote:There could perhaps be a web interface to manage your blood line. In the case of a casual player that plays maybe only on the weekends, they can log into a web page for 5 minutes from work and take care of what needs to be done. if I recall correctly, there is something like that in everquest2 (although not about bloodlines, but I think you can access your stats and other player's stats through their website) Quote:Another little nugget. I am totaly against a "con" system. For those of you unfamiliar with a con system, it is a system commonly used in mmorpgs to give the player an exact or ballpark guess as the level of a target. In this way characters can choose only enemies that are close to their level. I think these systems are bad. Instead I would replace them with a system in which enemies can only be judged by physical appearance. Strong characters will be taller and broader than others. They will show more muscle definition. In contrast, a weak character may have poor posture and a beer belly. As characters age they will get taller and more well-built. as they grow old they will shrink and become feeble; perhaps getting an arched back and grey hair. This way players would have to target their opponents based on a gut reaction. Of course it could be possible for a character to be strong but a terrible fighter, or old and a quite good fighter, but these variations would only add to the adventure. I totally agree with you. same thing with mental/spirit/mana/magic/whatever that's not directly visible physically, you can/could "sense" the aura of a character, perhaps through some spell that makes it visible to the player. 
this also adds the possibility to make this aura hideable at will (a skill to learn), adds some strategy ;) Quote:Many of you have added ideas that will prevent or at least slow down this phenomenon (such as the length of an in-game year, etc) but eventually, once a player *did* grow his bloodline to epic proportions- who's going to play all the available characters in the family? other players. if a "family" is seen as a collection of players that can be played by more than one person. if for example instead of making a "clan" with some fellow players whom I'm used to playing with, I make a family, and let my fellow players play the members of my family (that'll be their family too). this would prevent having one family per real player (as it can become quite heavy :D) the only problem with this is that the incoming players would already need to have an existing character. so it would be good to make some free members of your family available to new players (instead of explicitly "giving" them to someone already registered...) although this can be problematic, as you don't know who will take it, and you might not like him at all. dunno if it's good or bad... er, sorry, just saw this: Quote:that if families become to big, you can decide to put certain members of your family into the "new player pool" and a player that starts anew could have the option to take over the name Quote:While I have thought about the idea of intermarriages and sharing bloodlines, I actually think extending this system that far is a bit too ambitious. There are a number of issues that arise from that sort of system that make it extremely complex and I haven't been able to figure out, nor has anyone presented a way to me in which it could be done simply and effectively (in my mind at least :]). I also think the idea of integrating actual sexual reproduction in a game may not go over well (even though there will be no actual "sex" depicted). lol.
that'll make the game 18+, although you'll probably have an increased <18 players population ^^ joke apart.. I don't see the problem with this kind of reproduction. it's much more logical, adds lots of possibilities, and isn't that hard to implement (maybe harder to balance, although I don't really think so...) Quote:I think that intermarriage would be missed. With players manifesting families, an economy of genetic material would be expected. clearly Quote:One additional thing I was thinking of was... say you don't manage your characters very well, and you DO run out of people to play due to getting them all killed. Then, long-lost relatives come and you play them -- at a cost. They will have lesser stats since they weren't directly involved with the rest of the bloodline. mmh, you make it sound as if it was easy to get a character perma-killed. it shouldn't, and if you really managed your family that badly, then you're out. and just have to start again (your family is destroyed and ceased to exist anyway), or find another family that can welcome you (although I'm not sure anyone would want to welcome such a horrible player that got all its bloodline destroyed, except if it was an accident ^^) sorry for the (relatively) long reply... [Edited by - sBibi on January 21, 2005 12:21:12 PM] | https://www.gamedev.net/profile/43425-sbibi/ | CC-MAIN-2017-30 | refinedweb | 5,152 | 65.35 |
Hello all,
I am in the process of teaching myself c++. I wrote one program successfully and I'm trying to write my second. I have run into a problem that I can not seem to figure out at all. I understand that this seems trivial, but I really need to grasp why this isn't working. The idea behind the program is that the user enters 5 baseball players' names and batting averages. The user is then prompted to enter a player's name to receive their average. Here is the code:
#include <iostream>
#include <cstring>
using namespace std;

void displayInstructions(void)
{
    cout << "This program is created to allow you to enter \n"
         << "up to 5 baseball players' names and averages and then \n"
         << "enter a player's name to receive their average. \n" << endl;
    cout << "To stop entering names, enter the word 'end' when asked \n"
         << "for the player's name." << endl;
}

int main()
{
    class ballPlayer {
    public:
        char name[30];
        int avg;
    };

    displayInstructions();

    int i = 0;
    int players = 0;
    int totalPlayers = 0;
    ballPlayer player[5];
    char quit[] = "end";

    for (i = 0; i <= 4; i++)
    {
        cout << "Please enter a player's name:" << endl;
        cin.getline(player[players].name, 30);
        if (player[0].name == "end")
        {
            cout << "Please enter at least one batter's name." << endl;
            cin.getline(player[players].name, 30);
        }
        else if (player[players].name == quit && i != 0)
        {
            break;
        }
        cout << "Please enter this player's average:" << endl;
        cin >> player[players].avg;
        if (player[players].avg > 1000)
        {
            cout << "This is not a valid batter's average. \n" << endl;
            cout << "Please enter a valid bater's average:" << endl;
            cin >> player[players].avg;
            players++;
            totalPlayers++;
        }
        else
        {
            players++;
            totalPlayers++;
        }
    }

    char endBatter[30];
enterName:
    cout << "Enter the player's name to get their average:" << endl;
    cin >> endBatter[0];
    players = 0;
    for (i = 0; i <= totalPlayers; i++)
    {
        int isMatch = strcmp(player[players].name, endBatter);
        if (isMatch == 0)
        {
            cout << player[players].name << "'s average is: " << player[players].avg << endl;
continueDecision:
            cout << "Would you like to enter another player? Y/N" << endl;
            char yesNo[4];
            cin.getline(yesNo, 4);
            if (yesNo == "y" || yesNo == "Y")
            {
                goto enterName;
            }
            else if (yesNo == "n" || yesNo == "N")
            {
                break;
            }
            else
            {
                cout << "You did not enter a valid answer." << endl;
                goto continueDecision;
            }
            if (isMatch != 0 && players == totalPlayers)
            {
                cout << "You did not enter a valid player's name. \n" << endl;
                goto enterName;
            }
        }
    }
    return 0;
}
As of right now, everything goes smoothly up until after the user enters the first player's batting average. It then prompts the following:
"Please enter a player's name:
Please enter this player's average:"
Here are the problems I am experiencing:
1. It doesn't allow users to enter the batter names of the final four players, only their averages. Because of this, I can not test the portion of code that has the user enter the word "end" to stop entering players.
2. When I enter the word "end" for the first batter, the program should prompt that the user needs to enter at least one batter, but instead it just accepts "end" as the player's name.
3. After the user enters the name of the batter that they want to receive the average for, the program quits instead of running the loop to display the correct average.
I understand that this code is most likely severely flawed and poorly written, but any help would be much appreciated and would greatly aid me in better understanding this language.
-D | https://www.daniweb.com/programming/software-development/threads/209125/new-programmer-needs-help-with-second-program | CC-MAIN-2018-05 | refinedweb | 572 | 63.19 |
Modified files:
	CVSToys/bin/loginfo 1.1 1.2
	CVSToys/NEWS 1.27 1.28
	CVSToys/ChangeLog 1.83 1.84

Log message:
security/bugfix: remove '' from sys.path

This is only a "security" fix if your commiters do not have shell access anyway.

ViewCVS links:

Index: CVSToys/ChangeLog
diff -u CVSToys/ChangeLog:1.83 CVSToys/ChangeLog:1.84
--- CVSToys/ChangeLog:1.83	Sun Aug 17 19:32:24 2003
+++ CVSToys/ChangeLog	Sun Aug 17 22:18:56 2003
@@ -1,5 +1,8 @@
 2003-08-17  Kevin Turner  <acapnotic@twistedmatrix.com>
 
+	* bin/loginfo: 1.1 Security fix: remove '' from sys.path before loading
+	any modules.
+
 	* TODO: 1.26 Lots of new things to do, gathered from the mailing list.
 
 2003-04-19  Kevin Turner  <acapnotic@twistedmatrix.com>
Index: CVSToys/bin/loginfo
diff -u CVSToys/bin/loginfo:1.1 CVSToys/bin/loginfo:1.2
--- CVSToys/bin/loginfo:1.1	Mon Sep  9 13:35:31 2002
+++ CVSToys/bin/loginfo	Sun Aug 17 22:18:56 2003
@@ -1,4 +1,25 @@
 #!/usr/bin/env python
-# $Id: loginfo,v 1.1 2002/09/09 20:35:31 acapnotic Exp $
+# $Id: loginfo,v 1.2 2003/08/18 05:18:56 acapnotic Exp $
+
+import sys
+try:
+    # Ok, this is one of those bits of code which needs commenting.  Problem
+    # is, this script may be invoked in such a way that Python adds '' to
+    # the beginning of sys.path, and CVS seems to run commitinfo scripts
+    # from a directory in which there are copies of the files waiting to be
+    # checked in.  The result is that if you are checking in a file whose name
+    # conflicts with something in the top-level module namespace, (i.e. most
+    # anything in the standard library, such as 'token.py'), and that module
+    # is imported by any code used by this process, Python will load that file
+    # instead of the module this code expects.
+    # The effects of this range from not being able to check things in when
+    # Python throws an exception and gives a nonzero exit code to commitinfo,
+    # to a wide security hole if you thought you were giving people "CVS only"
+    # accounts.
+    # Removing '' from the path should solve all that.
+    sys.path.remove('')
+except KeyError:
+    pass
+
 from cvstoys import loginfo
 loginfo.main()
Index: CVSToys/NEWS
diff -u CVSToys/NEWS:1.27 CVSToys/NEWS:1.28
--- CVSToys/NEWS:1.27	Sun Apr 20 00:00:18 2003
+++ CVSToys/NEWS	Sun Aug 17 22:18:56 2003
@@ -1,3 +1,12 @@
+Version 1.0.9, NOTYET
+
+ * Security fix: Remove '' from sys.path before loading any modules for the
+   commitinfo script.  There was a bug here which could prevent you from
+   checking in files with certain names, and could also be exploited by a
+   malicious committer to run arbitrary code in what might otherwise be a
+   "CVS only" account.  (But if your commiters have full shell access, it
+   doesn't give them any permissions they did not have access to already.)
+
 Version 1.0.8, 4/20/2003
 
  * Brown paper bag release, don't crash when action is None.
@@ -114,4 +123,4 @@
  * First release.
 
 -- 
-$Id: NEWS,v 1.27 2003/04/20 07:00:18 acapnotic Exp $
+$Id: NEWS,v 1.28 2003/08/18 05:18:56 acapnotic Exp $
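As an aside, the pattern in this patch (removing the implicit current-directory entry from the import path before importing anything) is still useful. Note that list.remove raises ValueError, not KeyError, when the item is absent, so the except KeyError above would not actually catch the case where '' is missing. A minimal sketch of the same idea:

```python
import sys

def strip_cwd_from_path(path):
    """Return a copy of an import path with the implicit '' entry removed."""
    cleaned = list(path)
    try:
        cleaned.remove('')
    except ValueError:   # '' was not on the path; nothing to remove
        pass
    return cleaned

# Example with a literal path list:
demo = ['', '/usr/lib/python3', '/home/user/project']
print(strip_cwd_from_path(demo))  # → ['/usr/lib/python3', '/home/user/project']

# Applied to the real interpreter path, as the patch does:
sys.path[:] = strip_cwd_from_path(sys.path)
```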
Conversations + Ajax + Single Page - Israel Fonseca, May 21, 2009 4:55 PM
I have a Seam component scoped to conversation that has a list of objects retrieved from the database, and I have the CRUD operations in it. So with the conversation everything can be lazy; I can select items to edit or delete without any problem. (I'm using a Seam-managed persistence context.)
My life would be very easy if I had something like:
@Scope(ScopeType.CONVERSATIONALPAGE)
@Name("backingbean")
public class Backing { ... }
For now I'm using this strategy in pages.xml:
<page view-id="...">
    <action execute="#{conversation.begin}" on-postback="true"/>
    <action execute="#{conversation.end}" on-postback="false"/>
</page>
With this, I clear the conversation when I change a page, and start a new one on the subsequent postbacks (triggered by Ajax requests from RichFaces).
But well.. is there any better idea?
Thanks in advance,
Israel
1. Re: Conversations + Ajax + Single Page - Nikos Paraskevopoulos, May 21, 2009 11:41 PM (in response to Israel Fonseca)
Hello, have you tried the PAGE scope for this? I am not sure how it behaves regarding persistence contexts, but it seems to fit the rest of your requirements like a glove.
2. Re: Conversations + Ajax + Single Page - Jeff, May 22, 2009 4:02 AM (in response to Israel Fonseca)
I encountered a similar problem to the one Israel mentioned. The PAGE scope doesn't help. The scenario is:
Iterate over a list of records from a database table on the same JSF page: the user performs some action (e.g. editing), the result is saved back into the database, and then some other records are pulled out of the same table. This loop continues until all the records in the table have been edited. All of this should happen on the same JSF page. How could this be accomplished?
Any help and/or advice is highly appreciated.
3. Re: Conversations + Ajax + Single Page - Israel Fonseca, May 22, 2009 6:25 PM (in response to Israel Fonseca)
The problem is that the entityManager can't (or shouldn't) be scoped to the PAGE; I think it has problems with serialization. And the PAGE scope is not a very good place to leave entity managers living, since it is only cleaned up after another 14 views have been requested.
Anyway, I really needed something like the ConversationalPage scope, and it looks like I'm not the only one.
Introduction: I was working on a scenario where an IDOC is sent from ECC to CPI via the Cloud Connector. The scenario worked fine except for the fact that when the IDOC failed in CPI, we were getting a standard error text (not even a complete one) in WE02 after opening the status details, like below.
We wanted something more meaningful, like the actual error that was encountered. In our case we knew the exact situation in which this error had to be propagated.
The final outcome was like below.
Solution: This cannot be achieved using the IDOC adapter, as its processing is quite rigid and it cannot be tricked into accepting a SOAP fault.
The error text we see above is nothing but the SOAP fault, which is triggered only when there is a handled error or exception in either the producer or consumer IFLOW.
This is why we need to pass this error through the IFLOW as a cosmetic error rather than a full-blown exception to ECC.
We will use the SOAP adapter like below on the sender side. Nothing changes, not even the final URL after deployment, i.e. /cfx/xyz…
After this we need to modify the body while sending the response back, in a content modifier.
Producer IFLOW
Consumer IFLOW:
In my case the receiver always returns a 204 when a successful request is made, so I need to change that to 200, as any HTTP code other than 200 results in a 02 status in ECC, i.e. failed. Another major thing this Groovy script does is produce the standard response that the IDOC adapter generates automatically when it is used; since we are using the SOAP adapter, we need to produce it manually.
Enclosing the error message in soap:Fault tags is mandatory, as otherwise it will go back as a normal response. Also pay attention to the SOAP namespace, which is mandatory; without it the flow fails with an undeclared-namespace error.
We also need to modify the CamelHttpResponseCode header as below.
Groovy Script
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    //Body
    def body = message.getBody();
    // Idoc_num holds the IDOC message ID; here it is assumed to have been
    // stored earlier in the flow as an exchange property named "Idoc_num".
    def Idoc_num = message.getProperties().get("Idoc_num");
    // The soap namespace declaration below is mandatory; without it the
    // response fails with an undeclared-namespace error.
    def newbody = "<soap:Fault xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
        "<faultcode>soap:Server</faultcode>" +
        "<faultstring>An internal error occurred. Product characteristics could not be saved in C4C. " +
        "Search by Message ID " + Idoc_num + " in SAP CPI</faultstring>" +
        "</soap:Fault>";
    message.setBody(newbody);
    return message;
}
Conclusion: I don't see any disadvantages of using the SOAP adapter over the IDOC adapter for receiving IDOCs from ECC (the same may not be true of the IDOC receiver versus SOAP receiver adapters). IDOCs basically use SOAP over HTTP, and if you look closely, IDOCs coming into CPI via the IDOC adapter are wrapped in a SOAP envelope and SOAP body. The IDOC adapter, just like the SOAP adapter, unwraps the payload before CPI processes it.
Let me know in the comments what you think of preferring the SOAP adapter over the IDOC adapter for such requirements.
I want to output a defined frequency and volume to the speakers. How do I do this without using DirectSound?
You can use the BEEP command
Unfortunately this is really low tech and doesn't utilize the sound card, so use it as a last resort.

Code:
#include <iostream>
#include <windows.h>
using namespace std;

int A = 440;

int main()
{
    cout << "This is a concert A: " << A << "Hz";
    Beep(A, 5000);
    cin.get();
    return 0;
}
-JM
Huh?
I'm well acquainted with the Beep() function. The problem is you don't have control of the volume of the output, only the frequency.
How do you feel about winmm? waveOut[*] Functions?
Why don't you read about them? Was that an off-hand comment suggesting that you would go read about it, or an invitation for some poor fool to explain how to use this library with plenty of existing documentation?
There are some MIDI functions in the Windows API. A few years ago, I was able to write a short MIDI program from the information on this web site.
I'm sure there's some help at MSDN... if you know what to look for... if you can find it... Try searching MSDN for this function:
midiOutShortMsg()
I assume that these "standard" MIDI functions are "obsolete" and replaced by their DirectX equivalents.
Thanks, that is a good link, but I guess DirectSound/DirectX is the way to go; there is really no reason not to go with it.
From: Paul A. Bristow (boost_at_[hidden])
Date: 2003-03-03 18:50:13
This looks most interesting, and there most definitely remains a great need for
a units handling package.
I presume you have looked at W. W. Brown's SI units proposal, and I wonder why you rejected it and how your proposal is different.
Paul
PS It seems that time gets into global namespace. I agree it should not be
there but don't know how to fix it. I hit the same problem in my previous
(rudimentary) attempt with units.
> -----Original Message-----
> From: boost-bounces_at_[hidden]
> [mailto:boost-bounces_at_[hidden]]On Behalf Of Eric Ford
> Sent: Friday, February 28, 2003 8:46 AM
> To: boost_at_[hidden]
> Subject: [boost] (no subject)
>
>
> I decided that I needed a workable units library, so I wrote one. It
> allows for weakly typed dimensioned quantities (so a length divided by
> a time is automatically converted to a velocity). It also allows
> users to use strong typing for quantities of the same dimension which
> shouldn't be confused (so you don't set a mass of apples equal to a
> mass of oranges).
>
> I attempted to make it pretty general, allowing for all the standard
> SI dimensions, a dimension for money (since that is something lots of
> people care about), and I've left one dimension avaliable for users.
> Fractional powers of dimnensions are allowed. (It also includes a
> compile-time fractions header file that might be useful for other
> purposes.) Power users could setup units using their own classes
> for the internal numeric type or even provide their own systems of units.
>
> I uploaded the first draft version of it to the vault a ebf_units.zip.
> I've included several demonstration programs to show how it can be used.
>
> example1.cpp demonstrates the use of simple SI units, including arithmetic
> on such variable and automatic conversion when multiplying or dividing
> dimensioned quantities.
>
> example2.cpp demonstrates how you could use multiple representations
> (e.g., float and double) in the same program. I've included extremely crude
> numeric type promtion mainly for demonstration purposes.
>
> example3.cpp demonstrates the use of multiple systems of units in a
> single program (e.g., including both standard/si/mks units and
> "relativistic units where the speed of light is set to unity).
> SIUnits only allows one in a program. Any conversions
> between systems must be made explicitly.
>
> example4.cpp demonstrates basic use of "qualifiers". These allow
> users to make strongly typed units, so that quantities of the same
> dimension, but different meaning can't be confused. (e.g., a mass
> of apples and a mass of oranges)
>
> example5.cpp demonstrates how a user can extend this to allow for
> some automatic conversions (e.g., automatically convert apples to
> fruits, but not vice versa, or add apples and oranges and assign
> the result to fruits).
>
> Known bugs/limitations:
>
> - In the mks_double and mks_single namespaces, time is non-standard in
> that it is capitalized unlike all the other dimenions. This is due to
> my compiler (g++ 3.0.4 on rh) having conflicts with time (which I
> believe should be in the std namespace). Help on solving this would
> be appreciated.
>
> - Many more dimensions could be predefined in the standard system
> (basically SI). However, I'd prefer not to define every possible unit
> (like SIunits), particularly those that are not frequently used
> (e.g., furlong) and/or those which can be easily constructed by the
> user (e.g., meters per second).
>
> - Headers for other internal numeric representations (e.g., mks_int
> ???) could be included.
>
> - Other systems (quantum, natural, planetary, ...) could be included.
>
> - The system tag classes (e.g., mks_tag, which provides for
> identifying which system of units is being used and labeling the
> dimension of quantities) could be made more intelligent. For example,
> SIUnits allows users to set the default unit that they'd like a
> quantity with a certain dimension to be displayed as. Since one can
> simply divide by whatever unit they want their result in, I don't see
> much point. However, if someone wanted this, they should be able to
> add such features by replacing the sytem tag class class without touching
> the rest.
>
> Bugfixes, improvements, encouragement, and other feedback would be welcome.
>
>
>
>
>
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2003/03/45195.php | CC-MAIN-2019-18 | refinedweb | 736 | 57.57 |
We have looked at generic methods, but in C# generic classes are also widely used.
Generic classes allow operations that are not specific to a particular data type. Usually, those data types are of type integer or string.
If you look at programming literature related to WPF, you will come across this symbol <> quite frequently.
The symbol <> actually denotes a generic method, class, or interface.
Generic classes are commonly used in WPF; for example, the ObservableCollection class is used to track items in a list or combo box.
using System;

class Mix<T, U>
{
    T val1;
    U val2;

    public Mix(T t, U u)
    {
        // The field has the same type as the parameter.
        this.val1 = t;
        this.val2 = u;
    }

    public void Write()
    {
        Console.WriteLine(this.val1);
        Console.WriteLine(this.val2);
    }

    public void Write1()
    {
        Console.WriteLine("I have " + this.val1 + " " + this.val2);
    }
}

class Program
{
    static void Main()
    {
        Mix<int, int> m1 = new Mix<int, int>(5, 6);
        m1.Write();
        Console.WriteLine();

        Mix<int, string> m2 = new Mix<int, string>(5, "cats");
        m2.Write1();
        Console.ReadKey();
    }
}
In this example, the class Mix is able to take in either integers or strings. In the class declaration we use <T,U>; in fact, the type parameters can be any letter or combination of letters.
UNLINK(2) BSD Programmer's Manual UNLINK(2)
NAME
     unlink - remove directory entry
SYNOPSIS
     #include <unistd.h>

     int unlink(const char *path);

DESCRIPTION
     The unlink() function removes the link named by path from its directory
     and decrements the link count of the file which was referenced by the
     link.  If that decrement reduces the link count of the file to zero, and
     no process has the file open, then all resources associated with the
     file are reclaimed.  If one or more processes have the file open when
     the last link is removed, the link is removed, but the removal of the
     file is delayed until all references to it have been closed.
RETURN VALUES
     Upon successful completion, a value of 0 is returned.  Otherwise, a
     value of -1 is returned and errno is set to indicate the error.
ERRORS
     The unlink() succeeds unless:

     [EPERM]  The named file is a directory and the effective user ID of the
              process is not the superuser, or the file system containing the
              file does not permit the use of unlink() on a directory.

     [EPERM]  The directory containing the file is marked sticky, and neither
              the containing directory nor the file to be removed are owned
              by the effective user ID.

     [EPERM]  The named file has its immutable or append-only flag set.
SEE ALSO
     close(2), link(2), rmdir(2), symlink(7)
HISTORY
     An unlink() function call appeared in Version 2.
NAME
       calloc, malloc, free, realloc - Allocate and free dynamic memory
SYNOPSIS
       #include <stdlib.h>

       void *calloc(size_t nmemb, size_t size);
       void *malloc(size_t size);
       void free(void *ptr);
       void *realloc(void *ptr, size_t size);

DESCRIPTION
       calloc() allocates memory for an array of nmemb elements of size bytes
       each and returns a pointer to the allocated memory.  The memory is set
       to zero.

       malloc() allocates size bytes and returns a pointer to the allocated
       memory.  The memory is not cleared.

       free() frees the memory space pointed to by ptr, which must have been
       returned by a previous call to malloc(), calloc() or realloc().
       Otherwise, or if free(ptr) has already been called before, undefined
       behaviour occurs.  If ptr is NULL, no operation is performed.

       realloc() changes the size of the memory block pointed to by ptr to
       size bytes.  The contents will be unchanged to the minimum of the old
       and new sizes; newly allocated memory will be uninitialized.  If ptr
       is NULL, the call is equivalent to malloc(size); if size is equal to
       zero, the call is equivalent to free(ptr).  Unless ptr is NULL, it
       must have been returned by an earlier call to malloc(), calloc() or
       realloc().
RETURN VALUE
       For calloc() and malloc(), the value returned is a pointer to the
       allocated memory, which is suitably aligned for any kind of variable,
       or NULL if the request failed.

       free() returns no value.

       realloc() returns a pointer to the newly allocated memory, which is
       suitably aligned for any kind of variable and may be different from
       ptr, or NULL if the request failed.
CONFORMING TO
       ANSI-C
SEE ALSO
       brk(2), posix_memalign(3)

NOTES
       Recent versions of Linux libc and GNU libc include a malloc() imple-
       mentation that is tunable via the MALLOC_CHECK_ environment variable;
       when heap corruption is detected, a diagnostic can be printed on
       stderr or the program aborted immediately.  This can be useful because
       otherwise a crash may happen much later, and the true cause for the
       problem is then very hard to track down.

       Linux follows an optimistic memory allocation strategy.  This means
       that when malloc() returns non-NULL there is no guarantee that the
       memory really is available.  In case it turns out that the system is
       out of memory, one or more processes will be killed by the infamous
       OOM killer.

GNU                              1993-04-04                          malloc(3)
XXTEA is a fast and secure encryption algorithm. This is a XXTEA library for Dart.
It is different from the original XXTEA encryption algorithm. It encrypts and decrypts String/Uint8List instead of uint32 array, and the key is also String/Uint8List.
import 'package:xxtea/xxtea.dart';(str == encrypt_data)
example/main.dart
library xxtea_test; import 'package:xxtea/xxtea.dart'; void main() {(decrypt_data); }
Add this to your package's pubspec.yaml file:
dependencies: xxtea: ^2:xxtea/xxtea.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:xxtea/xxtea.dart.
Fix
lib/xxtea.dart. (-0.50 points)
Analysis of
lib/xxtea.dart reported 1 hint:
line 1 col 1: Prefer using /// for doc comments. | https://pub.dev/packages/xxtea | CC-MAIN-2019-30 | refinedweb | 139 | 53.27 |
Archives from memory - libarchive
This blog post is about a Python wrapper around libarchive and how to use it to generate an archive from memory.
Libarchive & python-libarchive-c
If you have ever looked into how to create archives in various formats like tar, iso, or zip, I bet you have heard about libarchive. It is a widely used archive library written in C.
To use it from Python you can choose from a few libraries, but the one that is currently maintained is called python-libarchive-c. When I had to implement a feature at work for adding entries to an archive from memory, I decided to use the existing module and give something back to the community in the form of an open source contribution.
Add entry from memory
To build this feature I had to carefully reread the code examples in libarchive itself. I also got familiar with a few archive formats and their limitations. But enough talking, let's jump to the code:
import requests
import libarchive


def create_archive_from_memory_file():
    response = requests.get('link', stream=True)
    with libarchive.file_writer('archive.zip', 'zip') as archive:
        archive.add_file_from_memory(
            entry_path='filename',
            entry_size=int(response.headers['Content-Length']),
            entry_data=response.iter_content(chunk_size=1024)
        )


if __name__ == '__main__':
    create_archive_from_memory_file()
My changes have not been released yet, so make sure that you install python-libarchive-c from GitHub like this (to run the script you also need the requests library):
$ pip install git+
In this snippet I use a requests feature that avoids loading the whole content of the response into memory: I pass the argument stream=True and then consume the body with response.iter_content(chunk_size=1024). The rest of the code calls add_file_from_memory with a path (entry_path) and the size of the entry in the archive (entry_size).
Under the hood, python-libarchive-c uses ctypes with FFI to call the libarchive functions. It first sets the path of the entry, then its size, file type, and the permissions the file will be saved with in the archive. It then writes the header and iterates through entry_data in chunks, writing each one. At the end the header is finalized and the archive is ready for the user.
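As a point of comparison, the standard library can do the in-memory part on its own for zip archives: zipfile accepts bytes that never touch the disk. A minimal stdlib-only sketch of the same idea (file names and payloads are made up):

```python
import io
import zipfile

def archive_from_memory(entries):
    """Build a zip archive in memory from a {name: bytes} mapping."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in entries.items():
            zf.writestr(name, data)       # entry content never touches disk
    return buf.getvalue()

payload = archive_from_memory({"filename": b"hello from memory"})
print(len(payload) > 0)  # → True
```

The libarchive route is still what you want for tar, iso, and the other formats zipfile cannot produce.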
To see it in action, save the snippet above as example.py and run it:
$ python example.py
$ ls -la
-rw-r--r--. 1 kzuraw kzuraw 11M 09-24 13:04 archive.zip
-rw-rw-r--. 1 kzuraw kzuraw 511 09-24 12:59 example.py
That's all for this week.
Special thanks to Kasia for editing this post. Thank you.
Simple Note
Code
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/**
 * Tested with Dev-C++ 4.9.9.2
 *
 * Practice of string compare.
 *
 * int strcmp ( const char * str1, const char * str2 );
 * Compares the C string str1 to the C string str2.
 * Returns an integral value indicating the relationship between the strings:
 * A zero value indicates that both strings are equal.
 * A value greater than zero indicates that the first character that does
 * not match has a greater value in str1 than in str2; and a value less than
 * zero indicates the opposite.
 *
 * size_t strspn ( const char * str1, const char * str2 );
 * Returns the length of the initial portion of str1 which consists only of
 * characters that are part of str2.
 * Therefore, if all of the characters in str1 are in str2, the function
 * returns the length of the entire str1 string; if the first character in
 * str1 is not in str2, the function returns zero.
 * size_t is an unsigned integral type.
 *
 * References:
 */
int main()
{
    /* string */
    char* chPtr = "abcdwxyz";
    int len;

    /* Compare whether two strings are equal, greater than or less than */
    printf("chPtr equals to 'abcdwxyz'? \n%s\n",
           (strcmp(chPtr, "abcdwxyz") == 0 ? "true" : "false"));
    printf("chPtr larger than 'abcd'? \n%s\n",
           (strcmp(chPtr, "abcd") > 0 ? "true" : "false"));
    printf("chPtr less than or equal to 'abcd'? \n%s\n\n",
           (strcmp(chPtr, "abcd") <= 0 ? "true" : "false"));

    /* Use strspn to compare and find the different position */
    len = strspn(chPtr, "abcdwxyz");
    printf("Are chPtr and 'abcdwxyz' equal? \n%s\n",
           (len == strlen(chPtr) ? "true" : "false"));
    len = strspn(chPtr, "abcd");
    printf("Are chPtr and 'abcd' equal? \n%s\n",
           (len == strlen(chPtr) ? "true" : "false"));
    printf("What is the first index (start from 0) that 'abcd' different with chPtr? \n%d\n\n", len);

    system("PAUSE");
    return 0;
}
Result
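The program prints the following to the console (the final system("PAUSE") prompt is omitted):

```
chPtr equals to 'abcdwxyz'?
true
chPtr larger than 'abcd'?
true
chPtr less than or equal to 'abcd'?
false

Are chPtr and 'abcdwxyz' equal?
true
Are chPtr and 'abcd' equal?
false
What is the first index (start from 0) that 'abcd' different with chPtr?
4
```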
References
strcmp
strspn
Download
Source code at github | http://ben-bai.blogspot.com/2013/03/c-string-string-compare-simple-note.html | CC-MAIN-2018-22 | refinedweb | 316 | 67.45 |
A PeerDist discovery segment.
#include <peerdisc.h>
A PeerDist discovery segment.
Definition at line 42 of file peerdisc.h.
Reference count.
Definition at line 44 of file peerdisc.h.
List of segments.
Definition at line 46 of file peerdisc.h.
Referenced by peerdisc_find().
Segment identifier string.
This is MS-PCCRC's "HoHoDk", transcribed as an upper-case Base16-encoded string.
Definition at line 52 of file peerdisc.h.
Message UUID string.
Definition at line 54 of file peerdisc.h.
List of discovered peers.
The list of peers may be appended to during the lifetime of the discovery segment. Discovered peers will not be removed from the list until the last discovery has been closed; this allows users to safely maintain a pointer to a current position within the list.
Definition at line 63 of file peerdisc.h.
Referenced by peerblk_open().
List of active clients.
Definition at line 65 of file peerdisc.h.
Transmission timer.
Definition at line 67 of file peerdisc.h. | https://dox.ipxe.org/structpeerdisc__segment.html | CC-MAIN-2020-24 | refinedweb | 163 | 72.32 |
In Flutter, we have widgets and properties that help create custom shapes. So, in this article, we will see what a custom clipper is in Flutter.
What is a custom clipper in Flutter?
A custom clipper lets us clip a widget into any shape we want; it clips away the unused areas of the container to produce the desired shape. Now we will create an app bar with a curved bottom using a custom clipper. Consider the code below for reference:
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';

class Customshape extends CustomClipper<Path> {
  @override
  Path getClip(Size size) {
    double height = size.height;
    double width = size.width;
    var path = Path();
    path.lineTo(0, height - 50);
    path.quadraticBezierTo(width / 2, height, width, height - 50);
    path.lineTo(width, 0);
    path.close();
    return path;
  }

  @override
  bool shouldReclip(covariant CustomClipper<Path> oldClipper) {
    return true;
  }
}
In this file, our Customshape class extends CustomClipper<Path>. A custom clipper uses two override methods:
- getClip(): defines how we want to clip the path.
- shouldReclip(): returns a bool indicating whether the widget should be reclipped.
The getClip() method is used to customize the shape. To give it a curve we have used path.quadraticBezierTo, and we have also used path.lineTo with certain heights and widths. Returning true from shouldReclip() tells Flutter to recompute the clip whenever a new instance of the clipper is provided.
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'custom_shape.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        toolbarHeight: 130,
        backgroundColor: Colors.transparent,
        elevation: 0.0,
        flexibleSpace: ClipPath(
          clipper: Customshape(),
          child: Container(
            height: 250,
            width: MediaQuery.of(context).size.width,
            color: Colors.green,
            child: Center(
              child: Text(
                "GeeksforGeeks",
                style: TextStyle(fontSize: 40, color: Colors.white),
              ),
            ),
          ),
        ),
      ),
      body: Container(),
    );
  }
}
In this main.dart file, we have first created a stateful widget. Afterward, we have used the app bar property called flexibleSpace. In this flexible space we have a container with a height, width, color, and text. We have wrapped this container with a ClipPath, which has a property called clipper. In the clipper property we have passed the Customshape class, so that it can access the clipper from the custom_shape.dart file and give us the desired shape.
Conclusion:
Thanks for being with us on a Flutter Journey!
So, in this article, we have seen what a custom clipper is in Flutter.
I need to pass a word to a function and have the first letter in that word "displayed" back to the user as a capital letter, the rest lower case.
I say "displayed" because I DO NOT want to change the original.
I have the following function, which works, but I think there's a memory leak, and, this looks more complicated than what it needs to be.
I DO NOT want the CAP_WORD function to alter the original.

Code:
// This will CAP the first letter of a single word
char *CAP_WORD(char *txt)
{
    std::string rt = "";
    char *txt2 = "";

    rt = txt;
    txt2 = const_cast<char*>(rt.c_str());
    _strlwr(txt2);
    *txt2 = (((*txt2) >= 'a' && (*txt2) <= 'z') ? ((*txt2) + ('A' - 'a')) : (*txt2));
    return (_strdup(txt2));
}
As the function sits right now, it does not affect the original. However, I feel there is memory being allocated that I cannot free.
I'm ready to answer any questions needed.
Thank you. | http://cboard.cprogramming.com/cplusplus-programming/141749-capitalize-first-letter-word-function.html | CC-MAIN-2014-35 | refinedweb | 167 | 73.07 |
Using the Xbase For Linux Library
Getting Started
Install the Xbase source code
Execute the following commands to install the Xbase code:
su (if not signed on as root)
cd /usr
mkdir xbase
Set up security for the new directories. The next two commands are set up to define a default level of access to the xbase directories. Actual mileage may vary. Your site may have specific prorocols on directory security which should be addressed here. If you just want to get it going, then the next two commands will work.
chown YourUserId.users xbase
chmod 775 xbase
cp /home/of/xbase.tar.gz /usr/xbase
cd xbase
gunzip xbase.tar.gz
tar -xvf xbase.tar
ls -l
There should be three directories at this point,
src, samples and html
chown -Rv YourUserId.users *
exit (if su was used)
Building the Xbase Library
Execute the following commands to build the xbase library:
cd /usr/xbase/src
make all
If there are any errors in this step it is probably due to either the compiler not being set up on the machine, or some of the include (.h) files are not where the Makefile expects to find them. After this step completes, a file called
xbase.a
will be in the
xbase/src
directory. This is the Xbase library.
Building a program with the Xbase library
Create a directory for your project:
cd /usr/xbase
mkdir MyProject
cd MyProject
vi MySource.cc
To use the Xbase classes, include the following header file in the program:
#include "xbase.h"
Compiling an Application Program
Execute the following command to compile a program:
g++ -c -I/usr/include -I/usr/src/linux/include/asm-i386 -I../src myprog.cc
Execute the following command to link edit the program:
g++ -o myprog myprog.o xbase.a
Note: The -I/usr/src/linux/include/asm-i386 option in the above compile command is for Linux running on the Intel platform. Other platforms require the correct include directory.
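The two commands above can be collected into a small Makefile so they do not have to be retyped (the paths are the ones used in this text; adjust the -I options for your platform):

```make
# Sketch of a Makefile wrapping the compile and link commands shown above.
CXX      = g++
INCLUDES = -I/usr/include -I/usr/src/linux/include/asm-i386 -I../src

myprog: myprog.o
	$(CXX) -o myprog myprog.o xbase.a

myprog.o: myprog.cc
	$(CXX) -c $(INCLUDES) myprog.cc
```

Running make then rebuilds myprog only when myprog.cc has changed.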
The current NDX indexes only support single field, character keys. Future versions of Xbase will support numeric keys as well as full index key expressions.
Directory Structure
The recommended directory structure is automatically created when the X-Base source is installed.
Recommended Xbase Directory Structure
Directory               Description
/usr/xbase              Base Directory for X-Base
/usr/xbase/src          X-Base source code
/usr/xbase/samples      X-Base samples
/usr/xbase/MyProject1   Your first project
/usr/xbase/MyProject2   Your second project
Type Defs and Structures
To effectively make a library as portable across platforms as possible, fields must be consistantly defined in each environment. The
types.h
file defines the xbase data types. To modify the Xbase code base to function in a different (non Linux) environment, start by modifying the
types.h
file for your site and recompile.
Field Types
Type
Description
ULONG
Unsigned Long
UCHAR
Unsigned Char
USHORT
Unsigned Short Integer
LONG
Long
CHAR
Char
VOID
Void
struct SCHEMA
Used for defining record structures
Xbase Compile Options
The OPTIONS.H header file contains compile options which impact the Xbase library.
Options
Option
Description
DEBUG
This define compiles in support for various debugging routines and more extensive error message checking. Use DEBUG mode during application development and to generate smaller production executable programs, comment this define out before building the Xbase library.
UI_HTML_ON
Tuns on the option for the HTML generation class.
INDEX_NDX
This define compiles in support for NDX indexes. Comment this line out to drop support for NDX indexes.
LOCKING_ON
Needed for multi user configuration. It is safe to remove this option for single user environments.
UNIX
Leave this one on for now.
MEMO_FIELDS
This define compiles in support for memo (variable length) fields.
DBT_BLOCK_SIZE
This defines the default block size to use when creating memo .DBT files. It must be an increment of 512 bytes. The maximum block size is 32256.
Send me mail - xbase@startech.keller.tx.us
(c)1997 StarTech | http://www.drclue.net/source/F1xbase_c/html/xbase_c1.html | crawl-002 | refinedweb | 655 | 57.37 |
Mar 17, 2008 01:54 AM|RN5A|LINK
strSQL = "INSERT INTO MyTable (Col1, Col2) VALUES........."
sqlConn = New SqlConnection("........")
sqlCmd = New SqlCommand(strSQL, sqlConn)
sqlConn.Open()
sqlCmd.ExecuteNonQuery()
sqlConn.Close()
But if the SqlDataAdapter object is used to insert/update/delete records in the data source, is it ALWAYS necessary to first make the necessary changes in the DataSet for the data source to reflect the changes?
Mar 17, 2008 02:16 AM|vinz|LINK
Im confused.. what exactly do you mean? Well for me i only use SqlDataAdapter to Fetch or Retrieved data from database and fill in my DataSet/DataTable through adapter..
Mar 17, 2008 07:56 AM|RN5A|LINK
When I run the above script, a new row gets added to the data source, correct? But if the blue lines in the above code are commented, then the INSERT SQL command doesn't add a new row in the data source.
So is it ALWAYS necessary to first make the changes in the DataTable for the data source to reflect the change when SqlDataAdapter is used?
Mar 17, 2008 08:56 AM|vinz|LINK
Not necessarily.. what i mean is that you can INSERT data to your database without using sqlDataAdapter like
using System.Text;
using System.Data;
using System.Data.SqlClient;
private void InsertData()
{
using (SqlConnection conn = new SqlConnection("YOUR CONNECTION STRING");
{
StringBuilder sb = new StringBuilder(string.Empty);
sb.AppendFormat("'{0}','{1}','{2}'",TextBox1.Text, TextBox2.Text); // But I would Suggest to use SP for security purposes
sql = string.Format("INSERT INTO Table1 (FirstName,FamilyName) VALUES ({0});", sb.ToString());
// or you can use this line below or you may use your Parameterized queries
// sql = "INSERT INTO Table1 (Col1,Col2) VALUES ('" + TextBox1.Text + "', '" + TextBox2.Text + "')";
conn.Open();
SqlCommand cmd = new SqlCommand(sql, conn);
cmd.CommandType = CommandType.Text;
cmd.ExecuteNonQuery();
conn.Close();
}
}
protected void Button1_Click(object sender, EventArgs e)
{
InsertData();
}
Mar 18, 2008 03:12 PM|RN5A|LINK
As I said in my previous post, if the blue lines in the code in that post are commented, then the INSERT SQL statement doesn't insert a new row in the data source. This is the reason why I am asking whether it is mandatory to update the DataTable first before updating the data source?
Mar 19, 2008 01:26 AM|vinz|LINK
My Apology.. I got it now... Well im not really sure about that because im not doing that before.. As what i have observed in your code you are just Updating the values from your Database and not in the DataTable so thats why its mandatory to have that line you have commented out.. So if you would try to omit that line the changes doesn't reflect in your datasource as what you have noticed.. If you want to update your DataSource without that line you have commnted then try to fill again your DataSet with the Updated values from the DataBase...
Mar 19, 2008 03:27 AM|RN5A|LINK
My Apology..It's OK...happens sometimes...after all, to err is human... object.
Mar 19, 2008 03:35 AM|vinz|LINK objec
Yes Exactly.. :)
Jul 18, 2011 10:26 PM|Jelgab|LINK
Since this post comes up among the first ones on google when looking for "sqldataadapter/insert/data", I wanted to reply to it. If the question is: Why is it needed to specify the values twice (when the new DataRow is created and also when the parameters are defined):
First, while adding a new DataRow:
dRow(1) = "peter" ...
Second while defining parameters:
.Parameters.Add("@Val2", SqlDbType.VarChar, 50).Value = "peter"
Then the answer is "No, you don't need to indicate this twice". Specifying the values twice is not only a minor annoyance. It might also mean a high risk for a Data Integrity bug like in this example where the data is set directly. But the "duplicated" one is not when you add the DataRow. That is a standard way to insert a new row of data when using a Data Adapter and it is easier this way when adding multiple rows. The duplicated code that should be removed is when you define the parameter. The parameter value does not need to be specified here. To avoid this the SourceColumn property needs to be specified. It is the way to indicate that the data will not be provided when the parameters are defined but somewhere else (when creating the new DataRow). Example with VB2010:
.Parameters.Add(New SqlParameter With {.ParameterName = "@Val2", .SqlDbType = SqlDbType.VarChar, .Size = 50, .SourceColumn = "FirstName" } )
8 replies
Last post Jul 18, 2011 10:26 PM by Jelgab | http://forums.asp.net/t/1234353.aspx?Using+SqlDataAdapter+To+Insert+Update+Delete+Rows | CC-MAIN-2014-52 | refinedweb | 768 | 64.41 |
This repository is part of a series. For the full list check out Design Patterns in Swift
For a cheat-sheet of design patterns implemented in Swift check out Design Patterns implemented in Swift: A cheat-sheet
The problem:
YourMechanic is expanding to Canada. Now our friends to the north can have their cars fixed at their home or office by one of our professional mechanics. To finalize our expansion there are major changes we need to make.
The solution:
We will define an adapter that will implement the same set of functions as our original API. This new adapter will take an instance of our original API as a parameter and will act as a middleman between calls. If a function call in our original API needs to be manipulated in anyway to deal with our new Canadian requirements, the adapter will take care of it. If not, the adapter will simply call the original API function with the same parameters passed to it.
Link to the repo for the completed project: Swift - Adapter
To demonstrate the full scope of an adapter, we will define our originalApi as pseudo-builder class for our quotes. This object will be stateful. This means each instance of it is responsible for a specific quote and that every function call will affect the state of the underlying object. I’m mentioning this since APIs have traditionally be associated with stateless RESTful architecture and seeing it this way might seem a bit foreign.
Lets define our QuoteAPI which is basically an interface for building a quote
protocol QuoteAPI { var tax: Double {get} var laborRatePerHour: Double {get} var partCost: Double {get} var totalCost: Double {get} var laborCost: Double {get} var laborInMinutes: Int {get set} var carMileage: Int {get set} func addPart(part: Part) func removePart(part: Part) }
We begin by defining our readonly values. We do not have write access to our tax and labor hourly rate as mentioned in the requirements. Total cost and labor cost are also values that are derived from other properties. This leaves us with four functions that we can use to set stuff: laborInMinute which is the total amount of minutes required to complete this quote’s appointment, car mileage which is simply the number of miles the car has traveled and a simple add and remove function for adding and removing parts to our quote.
Let see how these functions look in our original quote API.
class OriginalQuoteAPI: QuoteAPI { let tax: Double = 0.20 let laborRatePerHour: Double = 50.00 var laborInMinutes: Int = 0 var carMileage: Int = 0 var parts: Set<Part> init() { self.parts = Set<Part>() } var laborCost: Double { get { return (Double(self.laborInMinutes) / 60.0) * self.laborRatePerHour } } var partCost: Double { get { return parts.reduce(0.0, combine: {$0 + $1.price}) } } var totalCost: Double { get { return (laborCost + partCost) * (1.0 + tax) } } func addPart(part: Part) { parts.insert(part) } func removePart(part: Part) { parts.remove(part) }
We define our tax and hourly labor rates. These values as mentioned are set in stone as far as we are concerned so we’ll hardcode 20% and $50 respectively. We define our laborInMinutes and carMileage along with a set for our Parts ( we will look at our Part class in a bit). We then proceed to initialize these properties by setting our labor time and car mileage to zero. This is fairly standard Swift stuff up to this point.
Next we begin to define our derived variables. Our labor cost is our labor in minutes divided by 60 (to give us duration in hour) multiplied by our hourly rate.
Our parts cost is an aggregate of all our parts prices, summed up. We use Swift’s higher order function reduce to calculate this value. If you are not familiar with higher order functions or this seems odd, I suggest you take a look at this article. Learning higher order functions and closures in general can save you a lot of “for loops”.
Finally we calculate our total cost by adding our parts cost to our labor cost and adding the required tax to the final price.
We also have two functions for adding and removing parts which simply add and remove items from our Set of parts.
This will give us our original API. We will write an adapter for this so it can work with Canadian currency, the metric system and Canadian tax and labor rates. But before we get to that let’s quickly go over our Parts class which we have used in our original QuoteAPI.
class Part: Hashable, Equatable { var partId: Int var name: String var price: Double init(partId: Int, name: String, price: Double) { self.partId = partId self.name = name self.price = price } var hashValue: Int { return partId } } func == (lhs: Part, rhs: Part) -> Bool { return lhs.partId == rhs.partId }
We define our Parts class to implement Hashable and Equatable protocols. Implementing these protocol makes it possible for us to use instances of Part in a Set collection. What these protocols do is that they provide Swift a way to derive a hash value and test for equality between different instances. Once we implement these protocol we need to define a hashValue of type Int and a == function for our Parts class. We can assume that our Part’s class will have a unique partId for each part which we can use to both provide a hash value and check for equality between different instances of Parts.
We also define a name and price for our Parts which we set in our initializer.
Alright let’s look at our Adapter.
import Foundation } var carMileage: Int { get { return Int(Double(target.carMileage) * 1.60934) } set(newValue) { target.carMileage = Int(Double(newValue) * 0.621371) } } var laborInMinutes: Int { get { return target.laborInMinutes } set(newValue) { target.laborInMinutes = newValue } } var laborCost: Double { get { return ((Double(target.laborInMinutes) / 60.0) * self.laborRatePerHour) * usdToCad } } var partCost: Double { get { return target.partCost * usdToCad } } var totalCost: Double { get { return (self.laborCost + self.partCost) * (1.0 + tax) } } func addPart(part: Part) { part.price = part.price * cadToUsd target.addPart(part) } func removePart(part: Part) { target.removePart(part) } }
Let’s break it down and look at it step by step. Before getting into our derived variables and QuoteAPI functions let’s look at what properties we need for our adapter. self.laborInMinutes = 0 }
First off we define our CanadianQuoteAPI as a class that implements our QuoteAPI protocol. Our adapter needs to be able to do everything our original quote API does. We then define a target variable which will be of type QuoteAPI. In this case, this will be an instance of OriginalQuoteAPI. Beside tax, laborRatePerHour which were hardcoded in OrigianlQuoteAPI we are defining two new properties: usdToCad and cadToUsd. These two doubles will be our exchange rates from CAD to USD and vise-versa.
OriginalQuoteAPI may have its tax rates and labor rates hardcored, but our adapter doesn’t have to. Since we are writing this adapter we can make it so it uses a different tax and labor rates, ones that we will pass when we instantiate it.
Our initializer takes in an instance of QuoteAPI which will be our target. It will grab a tax rate, labor rate and our exchange rates as well.
But, wait a second. What happened to our Parts? or car mileage? laborInMinutes? don’t we need to have these properties as well if we want to conform to QuoteAPI? Shouldn’t our CanadianQuoteAPI save these values as well?
Yes we do and no it doesn’t.
var carMileage: Int { get { return Int(Double(target.carMileage) * 1.60934) } set(newValue) { target.carMileage = Int(Double(newValue) * 0.621371) } }
This is a property that is defined in our protocol with a setter and getter. This means our API needs to be able to set and get its value. Since our target API is in miles we can translate its value to kilometers by simply multiplying its value by 1.609 when it is requested by our CanadianQuoteAPI. We then use the same process when we want to set its value. We take the input we receive from CanadianQuoteAPI which will be in kilometers and translate it into miles. Note that we do not store the car mileage in our CanadianQuoteAPI. We translate it and save it in an instance of the OriginalQuoteAPI (our target). Our CanadianQuoteAPI is not a replacement of our original API, it is an adapter. It simply works as a middleman for converting our data from one standard to another.
In the case of carMileage we need to translate our data from one standard to another, before we save or retrieve its value, for something like labor time (laborInMinute) we don’t have to do this.
var laborInMinutes: Int { get { return target.laborInMinutes } set(newValue) { target.laborInMinutes = newValue } }
The definition of time or minute to be more specific is the same in US as it is in Canada. Therefor we simply take the value and pass it as-is to our target. Again note that we do not save any of these values locally and simply act as the middleman (adapter) between original API (target) and the outside world.
Let’s look out the rest of our definition.
var laborCost: Double { get { return ((Double(target.laborInMinutes) / 60.0) * self.laborRatePerHour) } } var partCost: Double { get { return target.partCost * usdToCad } } var totalCost: Double { get { return (self.laborCost + self.partCost) * (1.0 + tax) } }
We have three readonly definitions that we need to cover. Labor cost, part cost and total cost. Labor cost for our CanadianQuoteAPI will be our labor time which is saved in our target, divided by 60, multiplied by our canadian labor rate. Since our Canadian labor rate is already in CAD we do not need to worry about currency conversion.
Our parts cost however will be in USD since we will retrieve it directly from our OriginalAPI. We will convert it to CAD by multiplying with the exchange rate we set initially for our USD to CAD.
Our total cost will be our CanadianQuoteAPI’s labor cost plus our parts cost summed with our Canadian tax rate. All the values we retrieve for these three properties will be in Canadian dollars. We do not need to save any of these values as they are derived directly from our original API.
Finally let’s look at how we add and remove parts.
func addPart(part: Part) { part.price = part.price * cadToUsd target.addPart(part) } func removePart(part: Part) { target.removePart(part) }
Since parts being added through our CanadianQuoteAPI will be in CAD we need to change their price to USD before adding them to the OrigianlAPI. We have to ensure that everything in our original API remains in USD. Remember we are building this adapter so our old system will work under these new condition, as such we have to ensure the new conditions do not alter the data we save in our original API.
We do not need to worry about part’s prices when we are removing them from our part’s set so we simply call the target’s removePart with the part being passed from our CanadianQuoteAPI.
And this is it. We are done with our adapter. Lets test it out.
var originalAPI = OriginalQuoteAPI() // We add two parts, set how long it takes to do the job and add the car's mileage originalAPI.addPart(Part(partId: 15, name: "Brake Fluid", price: 20.00)) originalAPI.addPart(Part(partId: 8, name: "Filters", price: 10.00)) originalAPI.laborInMinutes = 60 originalAPI.carMileage = 11000 print("original API total cost:") print(originalAPI.totalCost) var canadianAPI = CanadianQuoteAPI(target: originalAPI, tax: 0.20, laborRatePerHour: 50.00, cadToUsd: 0.75, usdToCad: 1.2) print("Canadian API total cost with a 1.2 USD to CAD exchange rate:") //Print total cost in CAD print(canadianAPI.totalCost) //Add part through Canadian API, price will be in CAD canadianAPI.addPart(Part(partId: 63, name: "Regular Oil", price: 5.00)) print("Original API total cost after a $5 CAD part is added:") //Print total cost in USD print(originalAPI.totalCost) print("Canadian API total cost after a $5 CAD part is added:") //Print total cost in CAD print(canadianAPI.totalCost) print("Original API part cost after a $5 CAD part is added:") //Print total cost of parts in USD print(originalAPI.partCost) //Print car mileage in miles and km print("Mileage of the car is \(originalAPI.carMileage) Miles") print("Mileage of the car is \(canadianAPI.carMileage) Kilometers") //Change cars mileage through Canadian api, new value is in KM canadianAPI.carMileage = 10000 //Print car mileage in miles and km print("Mileage of the car is \(originalAPI.carMileage) Miles") print("Mileage of the car is \(canadianAPI.carMileage) Kilometers")
Running the test case mentioned above gives us the following output:
original API total cost: 96.0 Canadian API total cost with a 1.2 USD to CAD exchange rate: 103.2 Original API total cost after a $5 CAD part is added: 100.5 Canadian API total cost after a $5 CAD part is added: 108.6 Original API part cost after a $5 CAD part is added: 33.75 Mileage of the car is 11000 Miles Mileage of the car is 17702 Kilometers Mileage of the car is 6213 Miles Mileage of the car is 9998 Kilometers Program ended with exit code: 0
Let’s break it down step by step and verify that our adapter is working correctly.
First off we create and instance of our origianlAPI. We then add two parts, one costing $20.00 and another costing $10.00. We set our labor time for this quote to be 60 minutes. The tax rate and hourly labor rate in our original API was hardcoded at %20 and $50.00 respectluvly. Therefor the total price for this quote would be:
(Filters + Brake Fluid + Labor Cost) x Tax Rate = Total (10.00 + 20.00 + 50.00) x 1.2 = $96.00
And that is what we got.
Now lets instantiate our Canadian adapter. We will set our tax rate and hourly labor rate to be the same, however we set our exchange rate to be 0.75 USD for every CAD and 1.20 CAD for every USD. We check the price against our Canadian adapter and get 103.20. But wait a second that doesn’t look right. Should our price be
((Filters + Brake Fluid + Labor Cost) x Exchange Rate) x Tax Rate = Total ((10.00 + 20.00 + 50.00) x 1.2) x 1.2 = 96 x 1.2 = $115.2?
The answer is no. The labor rate we set in our adapter is an input it received independently from the original API. The $50.00 laborRatePerHour that is passed into our CanadianQuoteAPI is in Canadian dollars. Therefor the labor cost calculated by our adapter considers that as already in CAD and does not multiply it by the 1.2 exchange rate. So the correct formula working behind our adapter is actually this:
(((Filters + Brake Fluid) x Exchange Rate + Labor Cost) x Tax Rate = Total (((10.00 + 20.00) x 1.2) + 50) x 1.2 = $103.20
Next we’ll add a part to our Quote through our Canadian API. Because our part is being added through the Adapter, it will assume that the price is in CAD. And since all our data is saved in our original API it is converted to USD. However when we ask for total price in CAD we are not getting the original CAD price rather the CAD -> USD -> CAD. This is technically correct but it definitely shows a possible flaw in our adapter for this problem. In our case there is a certain denigration of the price since our CAD => USD and USD => CAD are not reflective.
Because of this, when we add a $5 CAD part to our quote it is saved as
CAD 5.00 x 0.75 = 3.75
So when we get our total price in USD through our original API we get:
(Filters + Brake Fluid + Oil + Labor Cost) x Tax Rate = Total
(10.00 + 20.00 + 3.75 + 50.00) x 1.2 = $100.5
Which is fine however when we request for our canadian total, although the formula is still
(((Filters + Brake Fluid) x Exchange Rate + Labor Cost + Oil) x Tax Rate = Total ((((10.00 + 20.00) x 1.2) + 50 + 5.0) x 1.2) = $109.20?
We get $108.60
That’s because, although we marked our part as $5 CAD because it is being converted to USD and back to CAD using our exchange rates, it’s being returned at 90% its original valuation. Which is
((((10.00 + 20.00) x 1.2) + 50 + 4.5) x 1.2) = $108.60
What’s the best way to deal with data that is not completely adaptable. Or when it is mathematically impossible to convert one value to another and have a corresponding inverse function that can convert it back. Thankfully converting Miles to KM and vice-versa is rather trivial. I’ll leave confirming that to you.
Congratulations you have just implemented the Adapter Design Pattern to solve a nontrivial problem.
The repo for the complete project can be found here: Swift - Adapter.
Download a copy of it and play around with it. See if you can find ways to improve its design, Add more complex functionalities. Here are some suggestions on how to expand or improve on the project:
- We need to be able to provide receipts in French as well as in English, assume the original API has an English receipt generator, expand our adapter to provide a french version of it
- How can we deal with prices changing when we go from CAD => USD => CAD? | https://reza.codes/2016-07-08/design-patterns-in-swift-adapter/ | CC-MAIN-2020-10 | refinedweb | 2,954 | 66.54 |
Delegates
Create a new scene. Add three GameObjects: “Controller”, “Foo”, and “Bar”. Parent Foo and Bar to Controller and then create and assign a new C# script for each object according to its name. The contents of each script are provided below:
using UnityEngine;
using System.Collections;

public class Controller : MonoBehaviour
{
    void Start ()
    {
        Foo foo = GetComponentInChildren<Foo>();
        Bar bar = GetComponentInChildren<Bar>();

        foo.doStuff = bar.OnDoStuff;
        foo.TriggerStuffToDo();
    }
}
using UnityEngine;
using System.Collections;

public delegate void MyDelegate ();

public class Foo : MonoBehaviour
{
    public MyDelegate doStuff;

    public void TriggerStuffToDo ()
    {
        if (doStuff != null) doStuff();
    }
}
using UnityEngine;
using System.Collections;

public class Bar : MonoBehaviour
{
    public void OnDoStuff ()
    {
        Debug.Log("I did stuff");
    }
}
The delegate was globally defined above the declaration of the Foo class within the "Foo.cs" script. You can tell because it uses the "delegate" keyword. Within that line we show a few important things. We are essentially defining a method signature that will be implemented elsewhere, much like the method declaration you might put inside of an interface. The return type and parameters of the delegate declaration must be matched exactly by any method that tries to become an observer for this delegate, although the observer does not need to use the same method name. You can see this for yourself because the "OnDoStuff" method of Bar was able to be assigned to the "MyDelegate" definition – this is because they both returned void and did not take any parameters.
The name assigned to the delegate definition is still important. It is used inside the Foo class as a Type from which to declare a property. You are basically saying you want a pointer to a method, and since it is public, it can be assigned at a later point. The delegate can be assigned like any other property, by referencing only the name of a method to assign – don't use the parentheses. To actually invoke the method, you just treat the delegate property as if it was a method in your class – you do use the parentheses and pass along any required parameters.
Run the sample, and Bar should do some work, logging “I did stuff” to the console. At the moment this sample seems like a lot of extra work to do something we could have accomplished in the Foo script alone. However, the beauty of this system is that we now have more options. By delegating work that needs to be done, the way that work is fulfilled can change – even at run time. It’s also possible we don’t want any work to be done, in which case we simply don’t assign the delegate.
The other benefit of this system is that it is loosely coupled. The scripts Foo and Bar are completely ignorant of each other, making them very reusable, and yet they work together as efficiently as if they had direct references to each other. The only script which is not loosely coupled is our controller script, but controller scripts are almost never reusable anyway and this is to be expected.
There are a few gotchas when working with delegates. The first issue to point out is that they keep strong references to objects – this means that they can keep an object alive that you thought would have gone out of scope. To demonstrate why this is a problem, modify the Controller script so that you destroy the Bar object after assigning it as a delegate but before triggering the delegate call. Note that I had to change the Start method into a coroutine (returning IEnumerator) so that I could wait a frame – Unity waits one frame before actually destroying GameObjects.
IEnumerator Start ()
{
    Foo foo = GetComponentInChildren<Foo>();
    Bar bar = GetComponentInChildren<Bar>();

    foo.doStuff = bar.OnDoStuff;
    GameObject.Destroy(bar.gameObject);
    yield return null;

    foo.TriggerStuffToDo();
}
Run the example now, and you will see that the Bar object still performs its work even though the GameObject has already been destroyed. This isn't necessarily an issue in the demo right now, but if the Bar script tried to reference its GameObject or any Components that had been on it, such as Transform, then you would get a "MissingReferenceException: The object of type 'Bar' has been destroyed but you are still trying to access it."
Revert all of your changes to the original version of the script. Next, modify line 12 of Foo.cs so that it calls doStuff() without first comparing it to null, and comment out line 11 of Controller.cs where we actually assign it. Run the scene again. This demonstrates that you should always check that a delegate exists before calling it, or else you risk a NullReferenceException.
Revert your changes. Now modify line 11 of Controller.cs to:
foo.doStuff += bar.OnDoStuff;
Notice that the line looks almost identical, we merely added a "+" in front of the assignment operator. With this method of assignment you are essentially stacking methods onto the single delegate property's invocation list, and when it is invoked, every method on the list will be called. To see this in action, you can duplicate line 11 so that there are several additions of bar.OnDoStuff. If you run the scene now you will see a log for each time you added the delegate.
After your last delegate assignment, add another line:
foo.doStuff -= bar.OnDoStuff;
This time we added a “-” before the assignment operator which tells the delegate to remove an object from its delegate stack. If you run the scene now you will see that there is one less log than there had previously been, although if you added more than you removed it should still be logging something. This is important to note, because if you ever get out of balance on adding and removing delegates, you may find yourself executing code more frequently than you anticipated.
Note that there are no negative consequences to attempting to unregister a delegate beyond the number of times that you had registered it. This is helpful because you could, for example, unregister any delegate that might have been registered whenever you prepare to dispose of an object, without needing to check whether you actually had registered it. In the case of a MonoBehaviour, the OnDisable or OnDestroy methods would be great opportunities to unregister all of your listeners. Note also that you cannot rely on the destructor of a native C# object to remove a delegate, because the delegate itself is keeping the object alive.
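As a sketch of that cleanup pattern, here is a hypothetical self-registering variant of our Bar script. The FindObjectOfType lookup is just one assumed way to locate Foo and is not part of the original sample:

```csharp
using UnityEngine;

public class Bar : MonoBehaviour
{
    Foo foo;

    void OnEnable ()
    {
        // Hypothetical lookup - any reference to a Foo instance would do
        foo = FindObjectOfType<Foo>();
        if (foo != null)
            foo.doStuff += OnDoStuff;
    }

    void OnDisable ()
    {
        // Safe to run unconditionally: removing a listener that was never
        // added (or was already removed) has no negative consequences
        if (foo != null)
            foo.doStuff -= OnDoStuff;
    }

    public void OnDoStuff ()
    {
        Debug.Log("I did stuff");
    }
}
```

With this version the Controller no longer needs to wire the two scripts together, and the delegate stops keeping Bar alive once it is disabled or destroyed.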
If you assign a delegate using only the assignment operator "=" without a plus or minus, the entire stack of delegate objects will be replaced with whatever you assign the new value to. For example, you can add "+=" multiple copies of bar.OnDoStuff and then assign "=" a single copy of bar.OnDoStuff, and now only the newly assigned handler will do any work. You can also assign null to the delegate, which is an easy way of saying, "Hey, remove all of the listeners from this". The assignment of a delegate can be a double-edged sword – it is nice to be able to easily remove all listeners; however, you risk other scripts removing or replacing a delegate which you intended to keep registered. The solution to this issue is to turn your delegate into an event.
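All of these add, remove, and replace semantics can be verified outside of Unity with a few lines of plain C# in a console project. This sketch uses the built-in Action delegate, which behaves the same way as our MyDelegate:

```csharp
using System;

public class MulticastDemo
{
    static void First ()  { Console.WriteLine("First"); }
    static void Second () { Console.WriteLine("Second"); }

    static void Main ()
    {
        Action doStuff = null;

        doStuff += First;
        doStuff += Second;
        doStuff += Second;  // Second is now registered twice

        doStuff();          // prints: First, Second, Second

        doStuff -= Second;  // removes only one copy
        doStuff();          // prints: First, Second

        doStuff = null;     // plain assignment wipes the entire invocation list
        if (doStuff != null) doStuff();  // nothing happens
    }
}
```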
Before I finish discussing delegates, there is one last bit of information I want to share. Because it is very common to need to define delegates, C# has predefined several to save you the effort. To use them you will need to reference the System namespace – add a “using System;” line to your script. See below for several use cases of “Action”, a generic delegate with a return type of void, and “Func” which is another delegate with a non void return type (the last parameter type in its generic definition is the type to be returned):
public Action doStuff;
public Action<int> doStuffWithIntParameter;
public Action<int, string> doStuffWithIntAndStringParameters;
public Func<bool> doStuffAndReturnABool;
public Func<bool, int> doStuffWithABoolAndReturnAnInt;
and here are sample methods that could observe them:
public void OnDoStuff () {}
public void OnDoStuffWithIntParameter (int value) {}
public void OnDoStuffWithIntAndStringParameters (int age, string name) {}
public bool OnDoStuffAndReturnABool () { return true; }
public int OnDoStuffWithABoolAndReturnAnInt (bool isOn) { return 1; }
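Wiring these predefined delegates up and invoking them works just like our hand-rolled MyDelegate. A quick sketch, using made-up argument values:

```csharp
doStuff = OnDoStuff;
doStuffWithIntAndStringParameters = OnDoStuffWithIntAndStringParameters;
doStuffAndReturnABool = OnDoStuffAndReturnABool;

doStuffWithIntAndStringParameters(30, "Sam");  // arguments must match the generic types
bool result = doStuffAndReturnABool();         // result is true
```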
Events
All events are delegates, but not all delegates are events. To make a delegate an event you must add the word “event” to its property declaration:
public delegate void MyDelegate ();

public class Foo : MonoBehaviour
{
    public event MyDelegate doStuff;

    public void TriggerStuffToDo ()
    {
        if (doStuff != null) doStuff();
    }
}
When the delegate is declared as an event in this way, code outside of the owning class can no longer use the assignment operator directly. It can only add "+=" and remove "-=" listeners. This forces the scripts which add listeners to be responsible for themselves and to stop listening to the event when they are done.
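A sketch of what the event keyword buys us. From outside the Foo class, only the registration operators compile (hypothetical listener code, assuming an OnDoStuff method matching MyDelegate):

```csharp
Foo foo = GetComponentInChildren<Foo>();

foo.doStuff += OnDoStuff;   // allowed: add a listener
foo.doStuff -= OnDoStuff;   // allowed: remove a listener

// foo.doStuff = OnDoStuff; // compile error: direct assignment is restricted to Foo
// foo.doStuff = null;      // compile error: outsiders can no longer clear all listeners
// foo.doStuff();           // compile error: only Foo itself may raise its own event
```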
When using events it is common to use a particular pre-defined delegate called an “EventHandler”. This delegate is also generic but not in the same way as “Action” and “Func”. The EventHandler always passes exactly two parameters, the first being an “object” representing the sender of the event and the second being an “EventArgs” which will hold any information relevant to the event. If you reference the generic version of the EventHandler you are defining what subclass of EventArgs is going to be passed along. Following are some examples of their use:
using UnityEngine; using System; using System.Collections; public class MyEventArgs : EventArgs {} public class Foo : MonoBehaviour { // Define EventHandlers public event EventHandler doStuff; public event EventHandler<MyEventArgs> doStuff2; // These methods can be added as observers public void OnDoStuff (object sender, EventArgs e) {} public void OnDoStuff2 (object sender, MyEventArgs e) {} public void Start () { // Here we add the method as an observer doStuff += OnDoStuff; doStuff2 += OnDoStuff2; // Here we invoke the event if (doStuff != null) doStuff( this, EventArgs.Empty ); if (doStuff2 != null) doStuff2( this, new MyEventArgs() ); } }
One last note is that your events can be made static. Static events are ones which do not exist within the instance of a class, but within the class itself. You might consider this pattern when there are multiple objects you wish to listen to events from, without wanting to get a reference to each one. For example, you could have a game controller listen to a static enemy event called “diedEvent”. Each event would pass the enemy which died along as the sender and the game controller could know about each one even though it only had to register to listen to this event a single time. See below for an example:
using UnityEngine; using System; public class Controller : MonoBehaviour { void OnEnable () { Enemy.diedEvent += OnDiedEvent; } void OnDisable () { Enemy.diedEvent -= OnDiedEvent; } void OnDiedEvent (object sender, EventArgs e) { // TODO: Award experience, gold, etc. } }
using UnityEngine; using System; public class Enemy : MonoBehaviour { public static event EventHandler diedEvent; void OnDestroy () { if (diedEvent != null) diedEvent(this, EventArgs.Empty); } }
Pros and Cons of Events
Events are a powerful design pattern, one which makes it very easy to write flexible and reusable code which is also very efficient. They do require a certain level of responsibility to use correctly, or you may suffer some unexpected consequences, but most of these can be easily anticipated by making sure you have a corresponding unregister statement to clean up each of your register statements.
Delegates and Events are a great solution to solve scenarios where you want one-to-one communication and one-to-many communication scenarios. They do not offer efficient solutions to many-to-many or many-to-one scenarios (where the “many” is implemented as many different classes). For example, in the TextBased RPG example I have been working on, it would be nice to allow any script, whether it is based on MonoBehaviour or native C# script to post an event that some text should be logged to the interface. This could happen anywhere in my program as a result of any kind of object and action.
There are two obvious solutions to this problem, but I don’t recommend you use either. The first solution, using events, would be to make the controller which knows to listen for messages to be posted subscribe to the event of each and every class that will actually post a message. What a nightmare. The larger your project grows the harder that would be to maintain.
Another approach would not use events, and would have the objects that wish to post a message acquire a reference to the object which displays the message (perhaps you would make it a singleton) and invoke a method on it directly. This is slightly better than the first solution, but now all of your scripts are tightly coupled to the object displaying a message. We might want to reuse these scripts later with a full-blown visual RPG that doesn’t have a text window, and in that case we would have to manually disconnect that bit of logic in a potentially large number of scripts across your project.
The solution I would pick is a custom NotificationCenter, which I will present in part 3, but since we haven’t gotten that far, I will show one last option.
using System; public class TextEventArgs : EventArgs { public readonly string text; public TextEventArgs (string text) { this.text = text; } } public static class ObjectExtensions { public static event EventHandler<TextEventArgs> displayEvent; public static void Display (this object sender, string text) { if (displayEvent != null) displayEvent(sender, new TextEventArgs(text)); } }
This demonstration uses something called “extensions” which is a way to add functionality to a class that hadn’t previously been there. Extensions must be defined in a static class. The functionality you are adding will be a static method, and the first parameter (begins with “this”) determines what class you are adding functionality to. In my example I added the functionality to System.object which means that ANYTHING whether inheriting from MonoBehaviour or not, can now trigger a display text event. See below for an example of a script posting the event, and another script listening to it.
public class Enemy : MonoBehaviour { void OnDestroy () { this.Display(string.Format("The {0} died.", this.GetType().Name)); } }
public class UIController : MonoBehaviour { void OnEnable () { ObjectExtensions.displayEvent += OnDisplayEvent; } void OnDisable () { ObjectExtensions.displayEvent -= OnDisplayEvent; } void OnDisplayEvent (object sender, TextEventArgs e) { // TODO: a more complete sample would have a reference // to an interface object and append the text there Debug.Log(e.text); } }
And now we have an event based solution for many-to-one or many-to-many communication! The solution I will probably be using for the rest of this project is the Notification Center which will be presented in Part 3, although the event based architecture presented here is capable of completing any professional level project. It is really a matter of personal preference.
2 thoughts on “Social Scripting Part 2”
In the Controller / Enemy code sample, did you make typo by using the “+=” in the OnDisable event? 🙂
Good catch, the correct line should be a “-=” here. Thanks! | http://theliquidfire.com/2014/12/10/social-scripting-part-2/ | CC-MAIN-2017-51 | refinedweb | 2,477 | 60.04 |
I have a need to aggregate based on wall-clock time. For example every night at midnight or every hour on the hour. What is the best way to do this in Streams? Perhaps I am missing something simple. Thanks, Brian
Answer by DanielFarrell (72) | Jan 15, 2014 at 02:33 PM
Just to offer another viewpoint-
Everything you ask for can be done in pure SPL.
You have 2 primary elements to your application requirements; an aggregate calculation, which can imply the Aggregate operator with its associated window, and the need to do something at midnight, or every hour, which should affect the output of the aggregate window. Easy peezy.
--
As stated or implied above, Streams doesn't really have 'events'. The only real event Streams has, is the arrival of a new tuple (or similar) to an operator. Fortunately, you can use a Beacon operator to generate tuples, and respond to the arrival of tuples in a Custom or other operator. Or, instead of passing data (tuples), you can pass 'punctuation', which is more elegant, perhaps cleaner.
So far we have Beacon to generate data, passing to a Custom to tell if its wall clock time to actually create/respond to a special event. Use a Punctor operator if you wish to do this via punctuation, and not data itself.
Aggregates and their associated windows [ are not required ]. You can do the same thing as an Aggregate operator in a Custom operator that contains an (array); a set, map or list. [ And ] the Custom operator is fully programmable, make it do whatever you wish.
How do you know its midnight ? Use getTimestamp(). Its a built in SPL function, time function.
--
Source code-
I didn't write your complete solution. Here are 2 Streams application that demo 80% or more of what you need. The first is small, and shows how to use a Punctor. The second is larger, so be patient, and is meant to show how you can do your own (window) inside a Custom operator.
Oops, the second source code entry makes this answer too long. Here's the first file and I'll see about adding the second ..
----Cut File Here-------------------------------------------------------------------
// // This Streams application demonstrates a 'file sprayer', // which is .. // // . Write (n) lines of output to a given file, then // close, and write to a new file, .. rinse repeat. // . You would do this when writing a log or reject // file, allowing for a number of manageable files. // // Our input has 100 lines, and we write output files // of 10 lines each. // // // FileSink has many simple ways to accomplish the above // design goal; tuplesPerFile, timePerFile, bytesPerFile // All are offered below, although they may be commented // out. // // Here we demonstrate the most advanced means to change // files, which is to change files upon the receipt of // 'punctuation'. To add punctuation to our otherwise dull // data stream, we use a punctor operator. //
// // A shell(C) script aids in the demonstration of this // Streams application: // // . 79_ResetForProgram03.sh deletes the contents of // the output directory. //
namespace Namespace22_FilesAndDirectories;
composite FD03_FileSprayer { graph
stream<int64 my_int64,="" rstring="" my_string=""> My_FileRead as MyO = FileSource() { param file : "/My_Stuff/My_Files/22_FilesAndDirectories/" + "03.InputFile.WebLog.100Lines.txt"; format : line; // // This next line is just to create an easy data element // in the output stream that the next operator, a punctor, // can evaluate. // output MyO : My_Int64 = TupleNumber(); // // My_String defaults to the input data line. // }
stream<rstring my_string=""> My_Punctor = Punctor(My_FileRead) { param // // Below we punctuate on every 10'th line. // // My_Int64 is the incoming tuple number (assigned // above). And we do a mod-10 ( % 10l )to produce // an equality every 10'th row. // punctuate : My_Int64 % 10l == 0l ; position : after; }
() as My_FileSink = FileSink(My_Punctor) { param // // 'id' is a variable maintained by FileSink. id // will start at zero, and increment by 1. As // configured, id will increment upon the receipt // of 'punctuation' in the input data stream. // // There are also variables for time and other, // should you wish for a more unique filename. // file : "/My_Stuff/My_Files/22_FilesAndDirectories/" + "03_OutputDirectory/03.{id}.OutputFile.out"; format : line; // // This next parameter is only for testing, to flush // tuples immediately, not in blocks. // flush : 1u; closeMode : punct; // // closeMode : count; // 'count' or 'time' or 'size' // tuplesPerFile : 2u; // timePerFile : 30.0; // bytesPerFile : 512u; }
}
Answer by hnasgaard (1441) | Jan 14, 2014 at 02:12 PM
There isn't anything built into streams that will trigger an aggregation based on wall clock time. I think what you might have to do is use an Aggregate operator with a punctuated window, and write a small C++ or Java operator that sits upstream which has the following behavior: - passes all incoming tuples through to its output port - generates a punctuation on its output port at the time you want
No one has followed this question yet. | https://developer.ibm.com/answers/questions/4923/$%7B$value.user.profileUrl%7D/ | CC-MAIN-2019-35 | refinedweb | 793 | 63.49 |
I would like to get the "C++ cast like" adaption to work with the code from
zope.interface. In my real use case, I'm using a registry from
Pyramid but it derives from
zope.interface.registry.Components, which according to the changes.txt was introduced to be able to use this stuff without any dependency on
zope.components. And the following example is complete and self contained:
from zope.interface import Interface, implements
from zope.interface.registry import Components
registry = Components()
class IA(Interface):
pass
class IB(Interface):
pass
class A(object):
implements(IA)
class B(object):
implements(IB)
def __init__(self,other):
pass
registry.registerAdapter(
factory=B,
required=[IA]
)
a = A()
b = registry.getAdapter(a,IB) # why instance of B and not B?
b = IB(A()) # how to make it work?
I wonder why
registry.getAdapter already returns the adapted object, which is an instance of
B in my case. I would have expected to get back the class
B, but perhaps my understanding of the term adapter is wrong. As this line works and obviously the adapting code is registered correctly, I would also expect the last line to work. But it fails with an error like this:
TypeError: ('Could not adapt', <....A object at 0x4d1c3d0>,
< InterfaceClass ....IB>)
Any idea how to get this working?
To make
IB(A()) work, you need to add a hook to the the
zope.interface.adapter_hooks list; the
IAdapterRegistry interface has a dedicated
IAdapterRegistry.adapter_hook method we can use for this:
from zope.interface.interface import adapter_hooks adapter_hooks.append(registry.adapters.adapter_hook)
See Adaptation in the
zope.interface README.
You can use the
IAdapterRegistry.lookup1() method to do single-adapter lookups without invoking the factory:
from zope.interface import providedBy adapter_factory = registry.adapters.lookup1(providedBy(a), IB)
Building on your sample:
>>> from zope.interface.interface import adapter_hooks >>> adapter_hooks.append(registry.adapters.adapter_hook) >>> a = A() >>> IB(a) <__main__.B object at 0x100721110> >>> from zope.interface import providedBy >>> registry.adapters.lookup1(providedBy(a), IB) <class '__main__.B'> | http://www.dlxedu.com/askdetail/3/9fd8d1cf0ebf2edcd571ee4836005d2e.html | CC-MAIN-2018-22 | refinedweb | 334 | 62.44 |
Hi,
I am currently working on a project in which both an Ethernet Shield 2 (Ethernet Shield 2) and an Arduino TFT Screen (Arduino TFT Screen) are needed. While the Ethernet Shield is stacked on top of an Arduino Uno Rev 3, the screen is attached to a breadboard and wired to the pins of the Ethernet shield (for which see below) - such since it isn’t built to be stacked on top of any shield directly.
Both the Ethernet Shield as the TFT Screen work without problem when used separately, but the problem for which I need to find a solution is that the shield and the screen default both rely on pin 10. I have been trying to find myself a way around by manually declaring all pins needed by the TFT Screen, instead of using the default hardware pins - such by following the approach described at TFTLibrary. However, when doing so the compiler returns the following error message:
no matching function for call to 'TFT::TFT(int, int, int, int, int)'
My interpretation of the error log (for which see) is that I am providing more arguments (5) than the relevant function in the libraries I rely on in my code (which is the native TFT library that is part of IDE 1.8.0 and the ones it refers to, for which see the attached files) can handle (3), and it is here that I am completely lost. I am certainly not educated enough on libraries to meddle with them, and therefore I would much welcome any ideas that help me solve this problem and have the Ethernet Shield and the TFT Screen work nicely together.
To give you a better understanding of what I am doing, the TFT Screen is wired as follows:
+5V: +5V
MISO: pin 12
SCK: pin 13
MOSI: pin 11
LCD CS: pin 7 // pin 10 in use by ethernet shield 2
SD CS: pin 4
D/C: pin 9
RESET: pin 8
BL: +5V
GND: GND
This scheme is mirrored in the following bits of code that are part of my sketch:
#include <Ethernet2.h> #include <EthernetUdp2.h> #include <SPI.h> #include <TFT.h> #define sd_cs 4 #define mosi 11 #define lcd_cs 7 #define dc 9 #define rst 8 // Initiating an instance of the TFT library named TFTscreen TFT TFTscreen = TFT(lcd_cs, dc, mosi, sd_cs, rst); <snip> void setup() { Serial.begin(9600); Ethernet.begin(mac, ip); TFTscreen.begin(); <snip> }
Thanks in advance for your time and attention!
Adafruit_GFX.cpp (16.8 KB)
Adafruit_GFX.h (11.4 KB)
Adafruit_ST7735.cpp (21.6 KB)
Adafruit_ST7735.h (4.53 KB)
Error_log.txt (32.1 KB) | https://forum.arduino.cc/t/solved-pin-conflict-ethernet-shield-2-with-arduino-tft-screen/436588 | CC-MAIN-2021-43 | refinedweb | 442 | 64.95 |
User Name:
Published: 27 Oct 2010
By: Xianzhong Zhu
Download Sample Code
In this last article of this series, we will learn what to do with reflection. But before making the topic more interesting, we'll first look at how to dynamically create an object.
In the previous articles, we first introduced what is the reflection, and then view the information of target types using reflection, and learn how to create custom attributes, as well as use reflection to traverse them. It can be concluded that in these three sections, we studied what is the reflection. In this last article of this series, we will learn what to do with reflection. But before making the topic more interesting, we'll first look at how to dynamically create an object.
Launch Visual Studio 2010 and create a new Console project, called Reflection4. Then, add a general C# class named Calculator.
In the first half of this article, we'll mainly use this class for illustration of most of the reflection related operations.
As is seen, the above class is very simple. It contains two constructors: one is with one parameter; the other is with no argument. We first take a look at how to create objects using the non-argument constructor through the reflection. There are two ways to create objects, one of which is to use the CreateInstance method of the Assembly:
CreateInstance
Well, the first parameter of the CreateInstance method represents the string name of type instance to create, while the second parameter indicates whether the string name is case sensitive or not. Note the CreateInstance method returns an Object, which means if you want to use this object you need to make a type conversion.
Another way to create objects is to call the static method CreateInstance of the Activator class:
Here, the first argument of the CreateInstance method represents the name of the assembly (null means the current assembly); the second parameter corresponds to the type name you want to create. Also note that the Activator.CreateInstance method returns an object of ObjectHandle, which has to make an Unwrap() invocation to return Object type. And thus, it can be cast into the type we need (in this case is Calculator). ObjectHandle is included in the System.Runtime.Remoting namespace, which is relevant to the Remoting operation. In fact, the ObjectHandle class is just a type which encapsulates the original type for ease of marshalling. More info about it can be found in MSDN.
null
Activator.CreateInstance
Unwrap()
If we want to create objects via constructor having arguments, we can use the overloaded CreateInstance() method of Assembly:
CreateInstance()
Now, let's take a good look at the parameters the CreateInstance method needs to provide:
Next, let's look at how to dynamically call a method. Note here it is not meant to cast the above dynamically-created object from the Object type into the Calculator type and then call its methods. If so, there will be no difference with the common call. Let's start the method invocation using the .Net reflection approach. Before proceeding, let's first add two methods to the Calculator class: one is an instance method; the other is a static method:
On the whole, dynamically calling the methods via .NET reflection there are two ways:
1. Call the InvokeMember method of the Type object, passing in the object of which you want to call the methods (i.e. just the dynamically created instance of the Calculator class), as well as specifying BindingFlags as InvokeMethod. According to the method signature, you may also need to pass associated parameters.
InvokeMember
InvokeMethod
2. First obtain the method object (which is the MethodInfo object) to call through the GetMethod method of the Type object. Then, call the Invoke method on the method object. According to the method signature, you may also need to pass parameters.
GetMethod
Invoke
It should be noted that the use of InvokeMember is not limited to call the object's method. It can also be used to obtain the object's fields, properties, etc., all of which take the similar means. This article only delves into the most commonly-seen method calling.
Let's first check out the first method. The required code is very simple, only two lines.
Note that the obj has been created in the previous section, which is an instance of type Calculator.
Now, if you run the above sample you will get the following output:
Invoke Instance Method:
[Add]: 3 plus 5 equals to 8
The result is 8
In the above InvokeMember method, the first parameter describes the name of the method you want to call. The second parameter shows it is make the method invocation (because InvokeMember is very powerful, which not only can be used to call the method but also can get/set properties, fields. and so on. Related details can be found in MSDN). The third parameter is Binder, with null meaning using the default Binder. The fourth argument specifies making call upon this object (obj is an instance of type Calculator). The final parameter is an array type, specifying the method accepted parameters.
Next, let's look at the static method related method calling.
Now, if you again run the preceding sample you will get the following output:
Invoke Static Method:
[Add]: 6 plus 9 equals to 15
Let's make some comparison with above. First, the fourth parameter is typeof(Calculator), rather than a Calculator instance. This is very easy to understand because we are calling a static method which is not based on an instance of a specific type, but on the type itself. Secondly, because our static method needs to provide two parameters we pass these two parameters in the form of an array.
typeof(Calculator
Now, let's explore the second way to dynamically call a method. First, we should get an instance of MethodInfo. Then, call the Invoke method of that instance. Let's check out the concrete operation:
Now, if you run the preceding sample you will get the following output:
Press any key to continue...
Note in the second line in the above code, we first use the method GetMethod to get an object MethodInfo, specifying the BindingFlags flags being Instance and Public. Because there are two methods named "Add" here we must specify the searching conditions. Then, we use the Invoke method to call the Add method, with the first parameter obj being the instance of Calculator created previously (meaning creating methods upon this instance), with the second parameter being null, that means the method does not need parameters.
Add
null
Next, let' continue to look at how to use this way to call a static method:
Similar with the above, in the GetMethod method, we specify one of the searching criteria being BindingFlags.Static, rather than BindingFlags.Instance, because the Add method we are to call is static. In the Invoke method, note that the first argument can not be an instance of Calculator but the Type type of Calculator or just null, because static methods do not belong to a specific instance.
Through the above examples, it can be concluded that we can use reflection to achieve the maximum degree of polymorphism. For example, you can place a DropDownList control on an ASP.NET page, and then specify the value of its Items property as the methods of some class, and finally in the SelectedIndexChanged event handler of the DropDownList control use the value of Value to call the selected method of the class. In the past, you have to write some if-else (even nested) statements to determine the value returned by the DropDownList control, and then decide which method to call according to the value. Using this method, before the code is running (or before the user selects an option) the compiler does not know which method will be called. This is often the so-called late binding.
Items
SelectedIndexChanged
Till now, we have brought to you too much about the theory. Let's build an interesting example. We all know that in the ASP.NET environments, color settings of the controls, such as ForeColor, BackColor, etc., are all supported via a System.Draw.Color structure type. In some cases, however, we need to use a custom color. For example, we can use something like Color.FromRgb (125, 25, 13) to create a specified color value. But sometimes we feel a bit troublesome because this group of figures is not intuitive enough that we even need to paste this value into PhotoShop to see which color on earth it corresponds to.
ForeColor
BackColor
Color.FromRgb (125, 25, 13)
At this time, we may want to use the default color the Color structure provides. That is, we can use the 141 static properties related to color. But, these values are still based on the color name, such as DarkGreen. This is not intuitive, too. If there is some way to render these named colors onto the page in the form of color block it will be great. Next, let's try to achieve this dream with the help of .NET reflection.
Now we look at the implementation process.
Start up Visual Studio 2010, create a basic ASP.NET Web Application, and name it CustomColorTest. Then, add a new Web Form named Color.aspx and add some style definition in the head part to control the final rendering. Then, drag a Panel control onto the page. The final markup code looks like the following.
In the above cascade style sheet definition, what should be noted is the #pnColors div block, which defines the style in which the color on the page will be displayed. The Panel control whose Id property is pnHolder is used to load our dynamically generated divs.
#pnColors
Id
pnHolder
Our idea is like this: we will add a series of div elements in the Panel control, each of which corresponds to the color block that we are going to show on the page. To do this, we can set the div's text to the color names and RGB values, while we set its background color to the corresponding color (the other styles of the color blocks, such as width, border, etc. have been defined in the <head/>.)
<head
We know that in Asp.Net there is not a Div control, but only an HtmlGenericControl control. Next, we will define a custom Div class in the behind code and let it inherited from HtmlGenericControl.
As we described earlier, the Div class accepts a Color type as the only constructor parameter. And then, in the constructor, first set its InnerHtml property to the various color names and color values (through the R, G, and B properties of the Color structure). Finally, set the div's background color to the corresponding RGB color.
InnerHtml
R
G
B
Note in the above case there may be some color that is very dark. In this case, if we continue to use the dark foreground color, then the text will look dizzy. So I added an if statement: if the background is dark, then we set foreground color brighter.
if
OK, till now what left to do is simple. We only need to invoke the above class with the help of .NET reflection.
The above code is very straightforward. First, create a Div list, to save the color blocks to be created. Then, get the Type instance of the Color type. Then, we use the GetProperties method together with the BindingFlags bit flags to obtain all the static public properties. And then traverse the properties, and use the InvokeMember method to get the property value. Because the returned value is an Object type, we need to cast it into a Color type. Note here the BindingFlags bit flag of the InvokeMember method is specified as GetProperty, meaning getting the property value. The fourth parameter is set to typeof(Color), because the color attribute (such as DarkGreen) is static, not specific to a particular instance - if it is an instance, then you need to pass in the instance of the type that calls this property. Next, we create the div elements based on the related color, add it to the list. Finally, we traverse the list and add the div elements to the Panel control with Id being pnColors.
GetProperties
GetProperty
pnColors
Well now, let's build the above CustomColorTest and preview the web page CustomColorTestPage.aspx. You should see results like Figure 1 below.
The above page CustomColorTestPage.aspx looks a bit chaotic, doesn't it? This is because the list is roughly sorted by color name (except for Transparnet). In fact, we can better sort the list based on color. For brevity, here I will give the detailed implementation process, rather than dwell upon the abstract sort theory. Note this section has nothing to do with the reflection, if you are already familiar with sorting, you can skip.
Now, open again page CustomColorTestPage.aspx and add a control RadioButtonList onto the page, setting its AutoPostBack property to true. Our target is to sort the color blocks not only according to the name but according to the color values.
AutoPostBack
The RadioButtonList control related markup code is as follows:
In the behind code, add an enum type as the criteria for sorting:
Next, let's rewrite the Div class to add a ColorValue field, which represents the value of the color, as well as create a nested class ColorComparer, and two methods named GetComparer:
GetComparer
Now, in the Page_Load event handler, we should accordingly add the related statements to get the current sort criteria:
Page_Load
Before outputting the list to the page, we should call the Sort method of the list:
Sort
Well, all the work is complete. Rebuild the web project and preview the page again and you will see something similar to the following Figure 2.
Note that we can now sort the color blocks according to the name or color value.
In this series of articles, we've discussed many basic concepts and related operations supported by .Net reflection functionality. In this last article, we've first learned the two most common ways to dynamically create an object, and then discussed the use of the two important methods, Type.InvokeMember and MethodInfo.Invoke, with which to call the instance methods and static methods of a given type. Finally, we studied a sample application - traverse the System.Drawing.Color via .NET reflection support, and output the color values. Anyway, as mentioned in the first article, interesting things related to .NET reflection have just begun...
Type.InvokeMember
MethodInfo.Invoke | http://dotnetslackers.com/articles/csharp/C-Sharp-4-0-Reflection-Programming-Part4.aspx | crawl-003 | refinedweb | 2,447 | 63.19 |
Minimum number of operations to make XOR of array equal to zero
You are given an array of n non-negative integers. Your task is to find the minimum number of operations needed to make the XOR of all array elements equal to zero, where the operations are defined as follows:
- Select the element on which you will perform the operations. Note that all the operations must be performed only on the selected element.
- The operations that can be performed are incrementing and decrementing the selected element by one.
Examples
Input: No. of elements in array = 3
Elements = 2 4 7
Output: 1
Explanation: We will decrement 7 to 6, and then the XOR of all the elements is zero.
Input: No. of elements in array = 4
Elements = 5 5 4 4
Output: 0
Explanation: XOR of an element with itself is zero.
Explanation of solution

Before discussing the various approaches to the problem, let us briefly review XOR. XOR (also known as exclusive-or) of two one-bit numbers is defined by the following table:

A  B  A XOR B
0  0     0
0  1     1
1  0     1
1  1     0

XOR of two multi-bit numbers is simply the XOR of the corresponding individual bits.
Example: The XOR of 5 (101 in binary) and 6 (110 in binary) will be 011 in binary or 3 in decimal. Property of XOR: XOR of two numbers can be zero only when both the numbers are equal.
We can solve this problem by two approaches:
- Brute force approach: Let the selected element be the ith element. Now let us find the XOR of the all the elements excluding the ith element ( n operations ). Now, if the obtained XOR is equal to the ith element, then we don't need to perform any operation as XOR of whole array is already zero; otherwise the no. of operations ( or cost ) required on the ith element will be equal to the absolute difference of obtained XOR and array[i]. Also, we need to calculate the calculate the cost for each element and the minimum of these will be the answer. Hence, the complexity of the solution will be Θ(n2).EXAMPLE: Let us find the answer when the elements of array are 2, 4 ,7 and 8. Now we need to find the minimum number of changes to one element so that XOR of the array is zero.
- Optimized solution: This solution is an optimzed veersion of the solution above. In this solution we will utilise three important facts about XOR:
- It is associative.
- XOR of an element with itself is zero.
- XOR of a number with zero is the number itself.
Now with this property in hand, we will make the process of finding the XOR of all elements excluding the ith element O(1). First, we will find XOR of all the elements of the array and store it in a variable ( lets say A ). Then to find the XOR of all the elements excluding the ith element we will do XOR(A, array[i]). Thus the cost for the ith element will be absolute(array[i]-XOR(A, array[i])). We will calculate the cost for each element of the array and the minimum of these costs will be the answer.
Let us assume that selected number is 2. Then we must change 2 to become equal to 4 XOR 7 XOR 8 i.e. 11. Operations for 2 are equal to 9. Similarly, operations for 4, 7 and 8 are equal to 9, 7 and 7 respectively. Answer is equal to minimum(9,9,7,7) i.e. 7
Let us assume that selected number is 2. Then we must change 2 to become equal to M XOR 2 (XOR of all numbers excluding two) i.e. 11. So operations for 2 are equal to 9. Similarly, operations for 4, 7 and 8 are equal to 9, 7 and 7 respectively (We will run a loop over the array for this). Answer is equal to minimum(9,9,7,7) i.e. 7.
In this way we have reduced the complexity from n2 to n where n is the number of elements in the array.
Solution
- C++
- Java
- Python
C
#include <iostream> #include <climits> #include <cmath> using namespace std;
//function definition int minxor(int matrix[], int length){ int answer=INT_MAX,obtainedxor=0; for(int i=0;i<length;++i){ obtainedxor^=matrix[i]; }
int cost; for(int i=0;i<length;++i){ cost=abs((obtainedxor^matrix[i])-matrix[i]); if(answer>cost){ answer=cost; } }
return answer; }
int main(){ //input int length; cout<<"No. of elements in array = "; cin>>length; int matrix[length]; cout<<"Elements = "; for(int i=0;i<length;++i){ cin>>matrix[i]; }
//output cout<<minxor(matrix, length)<<"\n"; return 0; }
C++
import java.lang.*; import java.util.*;
class OPENGENUS{ //function definition static int minxor(int matrix[], int length){ int answer=Integer.MAX_VALUE,obtainedxor=0; for(int i=0;i<length;++i){ obtainedxor^=matrix[i]; }
int cost; for(int i=0;i<length;++i){ cost=Math.abs((obtainedxor^matrix[i])-matrix[i]); if(answer>cost){ answer=cost; } }
return answer; }
public static void main(String [] args){ //input int length; Scanner sc = new Scanner(System.in); System.out.print("No. of elements in array = "); length = sc.nextInt(); int matrix[] = new int[length]; System.out.print("Elements = "); for(int i=0;i<length;++i){ matrix[i]=sc.nextInt(); }
//output System.out.println(minxor(matrix,length)); } }
Python
#function definition def minxor(matrix,length):
obtainedxor = 0 answer = 9999999999 for i in range(length): obtainedxor^=matrix[i]
for i in range(length): cost = abs((obtainedxor^matrix[i])-matrix[i]) if answer>cost: answer=cost
return answer
#input length = int(input("No of elements in array = ")) matrix = []
print("Elements = ", end="") for i in range(length): matrix.append(int(input()))
#output print(minxor(matrix,length))
Time Complexity
- Worst case time complexity:
O(N)
- Average case time complexity:
Θ(N)
- Best case time complexity:
Ω(N)
- Auxiliary space:
O(1)
- Space complexity:
Θ(N)
Feel free to share your approach in the discussion thread below. Happy Coding :) | https://iq.opengenus.org/minimum-operations-xor-zero/ | CC-MAIN-2021-43 | refinedweb | 1,034 | 53.81 |
import image.jpg
Today I was wanting to screenshot some work I had done on a vector image inside of the window. Now, I have a pretty minimalistic install on my box. Due to this I didn’t have a screenshot application aside from The Gimp… or so I though.
Like almost everything else in Linux, it turns out you can take screenshots from the command line. To do this you use the import command.
import image.jpg
This will change your cursor to a plus symbol. Click the window you want to screenshot and it’ll save it to the current directory.
You may notice however that if your window isn’t in the foreground, it may require two or more clicks to get the window you want up so you can screenshot it. To do this, we simply need a delay.
import -pause 4 image.jpg
The -pause switch will delay the screenshot by the duration specified. In the example, we delay it for four seconds. Once the delay is up, again you will see the mouse cursor change to a plus symbol. Select the window you want to screenshot and it will save it to the current directory, unless you have specified a different one to save to.
Category:Linux | https://oper.io/?p=Screenshots_from_Command_Line | CC-MAIN-2017-26 | refinedweb | 213 | 73.47 |
Mixing Java and Kotlin in one projectThis tutorials walks us through the process of using Java and Kotlin in a single IntelliJ IDEA project.
We'll be using IntelliJ IDEA (Ultimate or Community edition). If using build tools, please see the corresponding entry under Build Tools. To understand how to start a new Kotlin project using IntelliJ IDEA, please see the Getting-Started tutorial.
Adding Java source code to an existing Kotlin project
To add a new Java class to a Kotlin project is very straightforward. All we need to do is create a new Java file (Ctrl+N/Cmd+N) in the correct folder/package.
We can now consume the Java Class from Kotlin or vice versa without any further actions. For instance, adding the following Java class:
public class Customer { private String name; public Customer(String s){ name = s; } public String getName() { return name; } public void setName(String name) { this.name = name; } }
allows us to call it from Kotlin like any other type in Kotlin.
val customer = Customer("Phase") println(customer.getName())
Adding Kotlin source code to an existing Java project
Adding a Kotlin file to an existing Java project is pretty much the same process. The only difference here is that depending on how we do this, slightly different actions need to be taken:
Creating a new Kotlin file
To create a new Kotlin file we simply decide on the location in the project folder and create it.
If this is the first time we're adding a Kotlin file, IntelliJ IDEA will prompt us to add the required Kotlin runtime.
As we're working with a Java project, we'd most likely want to configure it as a Kotlin Java Module. The next step is to decide which modules to configure (if our project has more than one module) and whether we want to add the runtime library to the project or use those provided by the current Kotlin plugin.
Adding an existing Kotlin file
If instead of creating a new file, we want to add an existing Kotlin file to the project, IntelliJ IDEA won't prompt us to configure the Kotlin runtime. We have to invoke this action manually. This can be done via the Tools|Kotlin menu option
which then prompts the same dialog and process as when we create a new Kotlin file.
Converting an existing Java file to Kotlin with J2K
The Kotlin plugin also bundles a Java to Kotlin compiler which is located under the Code menu in IntelliJ IDEA.
Selecting an existing Java file, we can use this option to convert it automatically into Kotlin. While the converter is not full-proof, it does a pretty decent job of converting most boiler-plate code from Java to Kotlin. Some manual tweaking however is sometimes required. | https://dogwood008.github.io/kotlin-web-site-ja/docs/tutorials/mixing-java-kotlin-intellij.html | CC-MAIN-2020-10 | refinedweb | 466 | 67.79 |
Posted 07 Mar 2018
Link to this post
hi ..
i have a dto class called GroupListDto with just three fields
public class GroupListDto
{
public int Id { get; set; }
public string Title { get; set; }
public string Description { get; set; }
}
i use this class as kendo grid model ..
when i write my actionresult in below manner . my model field is empty
public ActionResult Create([DataSourceRequest] DataSourceRequest request, GroupListDto group)
but when is use GroupListDto class fields instead , the fields are filled with correct datas
public ActionResult Create([DataSourceRequest] DataSourceRequest request, string Title , string Description)
does anyone know what is the problem ?
why i cant get data in my dto model , but i get data in its fields !?
Posted 08 Mar 2018
Link to this post
also when i enable batch edit in kendo grid
.Batch(true)
and change the GroupListDto to IEnumerable<GroupListDto> , everything gets correct and i get the list of the data
public ActionResult Create([DataSourceRequest] DataSourceRequest request,
[Bind(Prefix = "models")]IEnumerable<
GroupListDto
> groups)
but as i mentioned before , i dont need batch edit , i just want to get one row of GroupListDto in my controller..
does anyone know what is wrong with my code !?
after spending a lot of time on this issue i found this way .
public ActionResult CreateLite([DataSourceRequest] DataSourceRequest request, [Bind(Prefix = "models[0]")]GroupListDto group)
by adding models[0] , i was able to get the edited recored data ..
but i think [Bind(Prefix = "models[0]") is something extra and must be removed from my actionresult ..
who knows the answer of my problem? :(((
Posted 09 Mar 2018
Link to this post
[AcceptVerbs(HttpVerbs.Post)]
Posted 03 Aug 2020
in reply to
MohamadReza
Link to this post
Posted 03 Aug 2020
in reply to
Development Team
Link to this post
Posted 06 Aug 2020
Link to this post
Hi,
Thank you for sharing what has resolved the issue in the application you are working on! Your response will surely help someone who experience the same issue in the future.
Regards,
Petar
Progress Telerik | https://www.telerik.com/forums/model-is-null-when-using-kendo-grid-crud-operations | CC-MAIN-2021-10 | refinedweb | 336 | 59.64 |
One of the methods of exchanging data between processes with the multiprocessing module is directly shared memory via multiprocessing.Value. As any method that's very general, it can sometimes be tricky to use. I've seen a variation of this question asked a couple of times on StackOverflow:
I have some processes that do work, and I want them to increment some shared counter because [... some irrelevant reason ...] - how can this be done?
The wrong way
And surprisingly enough, some answers given to this question are wrong, since they use multiprocessing.Value incorrectly, as follows:
import time from multiprocessing import Process, Value def func(val): for i in range(50): time.sleep(0.01) val.value += 1 if __name__ == '__main__': v = Value('i', 0) procs = [Process(target=func, args=(v,)) for i in range(10)] for p in procs: p.start() for p in procs: p.join() print v.value
This code is a demonstration of the problem, distilling only the usage of the shared counter. A "pool" of 10 processes is created to run the func function. All processes share a Value and increment it 50 times. You would expect this code to eventually print 500, but in all likeness it won't. Here's some output taken from 10 runs of that code:
> for i in {1..10}; do python sync_nolock_wrong.py; done 435 464 484 448 491 481 490 471 497 494
Why does this happen?
I must admit that the documentation of multiprocessing.Value can be a bit confusing here, especially for beginners. It states that by default, a lock is created to synchronize access to the value, so one may be falsely led to believe that it would be OK to modify this value in any way imaginable from multiple processes. But it's not.
Explanation - the default locking done by Value
This section is advanced and isn't strictly required for the overall flow of the post. If you just want to understand how to synchronize the counter correctly, feel free to skip it.
The locking done by multiprocessing.Value is very fine-grained. Value is a wrapper around a ctypes object, which has an underlying value attribute representing the actual object in memory. All Value does is ensure that only a single process or thread may read or write this value attribute simultaneously. This is important, since (for some types, on some architectures) writes and reads may not be atomic. I.e. to actually fill up the object's memory, the CPU may need several instructions, and another process reading the same (shared) memory at the same time could see some intermediate, invalid state. The built-in lock of Value prevents this from happening.
However, when we do this:
val.value +=1
What Python actually performs is the following (disassembled bytecode with the dis module). I've annotated the locking done by Value in #<-- comments:
0 LOAD_FAST 0 (val) 3 DUP_TOP #<--- Value lock acquired 4 LOAD_ATTR 0 (value) #<--- Value lock released 7 LOAD_CONST 1 (1) 10 INPLACE_ADD 11 ROT_TWO #<--- Value lock acquired 12 STORE_ATTR 0 (value) #<--- Value lock released
So it's obvious that while process #1 is now at instruction 7 (LOAD_CONST), nothing prevents process #2 from also loading the (old) value attribute and be on instruction 7 too. Both processes will proceed incrementing their private copy and writing it back. The result: the actual value got incremented only once, not twice.
The right way
Fortunately, this problem is very easy to fix. A separate Lock is needed to guarantee the atomicity of modifications to the Value:
import time from multiprocessing import Process, Value, Lock def func(val, lock): for i in range(50): time.sleep(0.01) with lock: val.value += 1 if __name__ == '__main__': v = Value('i', 0) lock = Lock() procs = [Process(target=func, args=(v, lock)) for i in range(10)] for p in procs: p.start() for p in procs: p.join() print v.value
Now we get the expected result:
> for i in {1..10}; do python sync_lock_right.py; done 500 500 500 500 500 500 500 500 500 500
A value and a lock may appear like too much baggage to carry around at all times. So, we can create a simple "synchronized shared counter" object to encapsulate this functionality:
import time from multiprocessing import Process, Value, Lock class Counter(object): def __init__(self, initval=0): self.val = Value('i', initval) self.lock = Lock() def increment(self): with self.lock: self.val.value += 1 def value(self): with self.lock: return self.val.value def func(counter): for i in range(50): time.sleep(0.01) counter.increment() if __name__ == '__main__': counter = Counter(0) procs = [Process(target=func, args=(counter,)) for i in range(10)] for p in procs: p.start() for p in procs: p.join() print counter.value()
Bonus: since we've now placed a more coarse-grained lock on the modification of the value, we may throw away Value with its fine-grained lock altogether, and just use multiprocessing.RawValue, that simply wraps a shared object without any locking.
Update (2019-01-23): a reader (Jeremy Cohn) points out that Value provides a way to store a lock on itself with the lock keyword argument, which could then be accessed with get_lock() method. This could simplify the code in this post a bit. | https://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing | CC-MAIN-2019-22 | refinedweb | 891 | 65.52 |
Today I posted the final version of the Store Functions for Entity Framework Code First convention to NuGet. The instructions for downloading and installing the latest version of the package to your project are as described in my earlier blog post only you no longer have to select the “Include Pre-release” option when using UI or use the
–Pre option when installing the package with the Package Manager Console. If you installed a pre-release version of this package to your project and would like to update to this version just run the
Update-Package EntityFramework.CodeFirstStoreFunctions command from the Package Manager Console.
What’s new in this version?
This new version contains only one addition comparing to the beta-2 version – the ability to specify the name of the store type for parameters. This is needed in cases where a CLR type can be mapped to more than one store type. In case of the Sql Server provider there is only one type like this – the
xml type. If you look at the Sql Server provider code (SqlProviderManifest.cs ln. 409) you will see that the store
xml type is mapped to the EDM
String type. This mapping is unambiguous when going from the store side. However the type inference in the Code First Functions convention works from the other end. First we have a CLR type (e.g.
string) which maps to the EDM
String type which is then used to find the corresponding store type by asking the provider. For the EDM
String type the Sql Server the provider will return (depending on the facets) one of the
nchar,
nvarchar,
nvarchar(max),
char,
varchar,
varchar(max) types but it will never return the
xml type. This makes it basically impossible to use the
xml type when mapping store functions using the Code First Functions convention even though this is possible when using Database First EDMX based models.
Because, in general case, the type inference will not always work if multiple store types are mapped to one EDM Type I made it possible to specify the store type of a parameter using the new
StoreType property of the
ParameterTypeAttribute. For instance if you had a stored procedure called
GetXmlInfo that takes an
xml typed in/out parameter and returns some data (kind of a more advanced (spaghetti?) scenario but came from a real world application where the customer wanted to replace EDMX with Code First so they decided to use Code First Functions to map store functions and this was the only stored procedure they had problems with) you would use the following method to invoke this stored procedure:
[DbFunctionDetails(ResultColumnName = "Number")] [DbFunction("MyContext", "GetXmlInfo")] public virtual ObjectResult<int> GetXmlInfo( [ParameterType(typeof(string), StoreType = "XML")] ObjectParameter xml) { return ((IObjectContextAdapter)this).ObjectContext .ExecuteFunction("GetXmlInfo", xml); }
Because the parameter is in/out I had to use the
ObjectParameter to pass the value and to read the value returned by the stored procedure. Because I used
ObjectParameter I had to use the
ParameterTypeAttribute to tell the convention what is the Clr type of the parameter. Finally, I also used the
StoreType parameter which results in skipping asking the provider for the store type and using the type I passed.
That would be it. See my other blog posts here and here if you would like to see other supported scenarios. The code and issue tracking is on codeplex. Use and enjoy.
Great work!
Is it also possible to support IEnumerable as an input parameter where T is a primitive type?
For example I am trying to implement the “GROUP_CONCAT” function for MSSQL (To simulate the functionality of GROUP_CONCAT in MYSQL) as a custom stored procedure. (See:)
I have the stored procedure up and running but I cannot call it via a DbFunction because IEnumreable is currently not supported as a paramter.
The signature for this method would look look like this:
[DbFunction("MyContext", "GROUP_CONCAT")]
public static string GroupConcat(IEnumerable<string> collection, string separator = ", ")
{}
An the usage scenario would be this.
from blog context.Set()
select new BlogDto
{
Name = blog.Title,
Tags = CustomDbFunctions.GroupConcat(blog .Tags.Select(x => x.Name))
}
This should return the following:
Name, | Tags
My first blog, | Csharp, Linq, Ef
My second blog | PHP, Ruby
This would basically be the same like DBFunctions.StandardDeviation(IEnumerable ..) where I also have a sequence of inputs and the functions returns an aggregated value.
I don’t think the convention allows using collections as input parameters at the moment. As you pointed out it is possible in case of StandardDeviation so it might also be possible for users’ stored procedures. I would have to investigate to see how this can be done.
I created a workitem for this but I am not sure when I will be able to get to it. I do accept contributions though.
Thanks,
Pawel
Thanks Pawel for your quick response!
Good to hear you also think it should be possible. Would be great If you could find a solution to enable this scenario.
If I find some time, I will probably also dig into it – but I am not sure when I am able to get to it either 😉
Thanks,
David
This is the world we live in – everyone is busy all the time these days…
Thank you for all your work on this project. It’s been very useful in getting TVFs into my project using CodeFirst.
One correction to your post: your NuGet command needs to be updated to
Update-Package EntityFramework.CodeFirstStoreFunctions (you left out Store).
Thanks for pointing this out. I will update the post to reflect this.
I am glad the project works for you!
Thanks,
Pawel
So, is 6.1.1 officially released and can be used in production? We do want to have scalar functions support in our project.
Yes, 6.1.1 was shipped a while ago. Actually, 6.1.2 shipped like a week ago.
I have tried the Code First Store function dll to support oracle stored procedure in my application which developed using EF6. But while calling the stored procedure i m getting the below error. how can i solve);
}
}
}
Regards Dharma
error is
{“ORA-06550: line 1, column 8:\nPLS-00201: identifier ‘C##TEST.ibp_country_getlist’ must be declared\nORA-06550: line 1, column 8:\nPL/SQL: Statement ignored”}
This error comes from Oracle. Unfortunately, I don’t know Oracle and have no idea what could cause this error. On the other hand a couple of folks confirmed that they were able to make the convention work on Oracle. Maybe you could ask them? I am not sure how helpful it is but I found this thread on Oracle’s forums where someone hit an issue that looks very similar. Not that I understand the discussion but apparently there is a difference between local and remote stored procedures.
Thanks,
Pawel
Reblogged this on ITelite. | https://blog.3d-logic.com/2014/10/18/the-final-version-of-the-store-functions-for-entityframework-6-1-1-code-first-convention-released/ | CC-MAIN-2018-17 | refinedweb | 1,148 | 62.88 |
>
Hello,
I'm starting to develop an interactive application that has a lot of content, images and texts (interactive story basically).
I would like to know what the best practice is to support multiple languages within the app (I'm not referring to code - I mean human written words).
The app will be using a lot of 3D Text Meshes, so my first thought would be to attach a script onto all the 3D Text Meshes.
This script will consist of a Text Field for English, German, Spanish, etc ... And whatever the language the user has selected, the scripts will simply swap in and out the requested language within the TextMesh.text.
The only problem with is, is that I may start to lose track of all my 3D Text Meshes throughout the App as there may be loads between different windows, and scenes (Dialog Boxes, Prompts, Buttons, Static Text, etc...).
I would like to know if anyone has a preferable method to overcome this? Thanks.
Answer by chariot
·
Jun 02, 2016 at 01:55 PM
Why not use some scripts like:
public class Texts {
public static string Language = "en";
public const string Russian = "ru";
public const string English = "en";
public const string Deutsch = "de";
public const string Portugal = "pt";
public const string Turkish = "tr";
public static string STRING_001 { get {
switch (LocalTexts.Language) {
case LocalTexts.Russian:
return "из";
case LocalTexts.English:
return "off";
case LocalTexts.Deutsch:
return "von";
case LocalTexts.Portugal:
return "de";
case LocalTexts.Turkish:
return "/";
default:
return "of";
} return ""; } }
}
Then in any scripts (at start point) do this
void Awake() {
switch (Application.systemLanguage) {
case SystemLanguage.Russian:
Texts.Language = Texts.Russian;
break;
case SystemLanguage.German:
Texts.Language = Texts.Deutsch;
break;
case SystemLanguage.Portuguese:
Texts.Language = Texts.Portugal;
break;
case SystemLanguage.Turkish:
Texts.Language = Texts.Turkish;
break;
default:
Texts.Language = Texts.English;
break;
}
}
Then in at any script use it like that
void Awake() {
Text textField = gameObject.GetComponent<Text>();
textField.text = LocalTexts.STRING_001;
}
Answer by Dave-Carlile
·
Jun 02, 2016 at 01:55 PM
A common method for doing this is to wrap every string in a function call. The function uses the passed in string as a key to look up the translated string based on the current language.
For example:
field.text = Translate("How are you?");
The function would return How are you? for English, but Hur mår du? for Swedish. Once you deal with all of your strings that way it's a matter of building up your translation dictionaries. There are tools and APIs available - you can probably find something in the app store or even a more general C# library.
Edit: There is a common GNU interface for doing translations. The Unity Wiki has some code that will read the dictionary files for a language and allow using them for translating.
Answer by honor0102
·
yesterday
the used method uses XML file for language translations
also check sGlorz reply for C#.
Stopping 3D text from showing through game objects in free unity.
1
Answer
How to upload files for use in built game from user's computer?
0
Answers
Text Entry Widgits
1
Answer
Is there a way to make 3D text not visible through another object?
1
Answer
How to split up a text into substrings of single chars at the exact same position
0
Answers | https://answers.unity.com/questions/1196793/best-practice-for-multiple-languages.html | CC-MAIN-2019-26 | refinedweb | 551 | 65.01 |
make generate-plist
Deleted ports which required this port:
Affects: mail/dovecot
Author: pi@FreeBSD.org
Reason:
The VPOPMAIL option was removed, because it was dropped upstream,
so please check your config before upgrading.
Number of commits found: 259 (showing only 100 on this page)
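The VPOPMAIL warning above suggests checking an existing configuration before upgrading. One quick way to look for the removed driver is a grep over the active config; the sketch below runs against a throwaway sample file rather than a real install, and the path and config contents are illustrative only:

```shell
# Create a throwaway sample config standing in for a real dovecot config (illustrative)
conf=$(mktemp)
printf 'passdb {\n  driver = vpopmail\n  args = cache_key=%%u\n}\n' > "$conf"

# Flag any passdb/userdb still using the removed driver
if grep -q 'driver *= *vpopmail' "$conf"; then
  result="vpopmail driver still configured"
else
  result="config is clean"
fi
echo "$result"
rm -f "$conf"
```

On a live system the same grep can be pointed at the output of `doveconf -n` instead of a sample file.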
mail/dovecot: update to 2.3.19.1
Due to a severe bug in doveadm deduplicate, we are releasing patch
release 2.3.19.1.
mail/dovecot, mail/dovecot-pigeonhole: Upgrade to 2.3.19, 0.5.19
Dovecot Changelog:
+ Added mail_user_session_finished event, which is emitted when the mail
user session is finished (e.g. imap, pop3, lmtp). It also includes
fields with some process statistics information.
See for more
information.
+ Added process_shutdown_filter setting. When an event matches the filter,
the process will be shutdown after the current connection(s) have
finished. This is intended to reduce memory usage of long-running imap
processes that keep a lot of memory allocated instead of freeing it to
the OS.
+ auth: Add cache hit indicator to auth passdb/userdb finished events.
See for more
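A minimal dovecot.conf sketch of the new process_shutdown_filter setting; the filter expression and threshold here are illustrative assumptions, so check the 2.3.19 documentation for the exact syntax supported by your build:

```
# Restart imap processes once their sessions finish and RSS has grown (values illustrative)
service imap {
  process_shutdown_filter = "event=mail_user_session_finished AND rss > 10MB"
}
```

This pairs with the mail_user_session_finished event described above: the process serves its current connections, then exits so the memory is returned to the OS.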
mail/dovecot: add mail/dovecot-coi to the warning
mail/dovecot-fts-elastic: New FTS plugin for dovecot
PR: 263382
Reported By: bgupta@kde.org
Revert "mail/dovecot: Add FLAVORs for CDB, LDAP, MYSQL, PGSQL, and SQLITE3"
Flavors currently breaks mail/dovecot-pigeonhole,
mail/dovecot-fts-xapian, mail/dovecot-fts-flatcurve.
nc & I (ler) will work to see if we can come to a better way to do this
This reverts commit 0dd69d0adfd2ef48dc949bb2325c2c534117fc29.
mail/dovecot: Add FLAVORs for CDB, LDAP, MYSQL, PGSQL, and SQLITE3
PR: 254164
Approved by: maintainer timeout (>1 year)
devel/icu: update to 71.1
Changes:
Reported by: GitHub (watch releases)
PR: 262654
Exp-run by: antoine
Approved by: fluffy
mail/dovecot, mail/dovecot-pigeonhole: update to 2.3.18, 0.5.18 respectively
Dovecot ChangeLog:
* Removed mail_cache_lookup_finished event. This event wasn't especially
useful, but it increased CPU usage significantly.
* fts: Don't index inline base64 encoded content in FTS indexes using
the generic tokenizer. This reduces the FTS index sizes by removing
input that is very unlikely to be searched for. See for
details on how base64 is detected. Only applies when using libfts.
* lmtp: Session IDs are now preserved through proxied connections, so
LMTP sessions can be tracked. This slightly changes the LMTP session
ID format by appending ":Tn" (transaction), ":Pn" (proxy connection)
and ":Rn" (recipient) counters after the session ID prefix.
+ Events now have "reason_code" field, which can provide a list of
devel/icu: update to 70.1
Changes:
Reported by: GitHub (watch releases)
PR: 258794
Exp-run by: antoine
mail/dovecot: mail/dovecot-pigeonhole: upgrade to 2.3.17, 0.5.17
ChangeLogs:
dovecot:
* Dovecot now logs a warning if time seems to jump forward at least
100 milliseconds.
* dict: Lines logged by the dict process now contain the dict name as
the prefix.
* lib-index: mail_cache_fields, mail_always_cache_fields and
mail_never_cache_fields now verifies that the listed header names are
valid. Especially the UTF8 "–" character has sometimes been wrongly
used instead of the ASCII "-".
+ *-login: Added login_proxy_rawlog_dir setting to capture
rawlogs between proxy and backend.
+ dict: The server process now keeps the last 10 idle dict backends
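For the new login_proxy_rawlog_dir knob mentioned above, a dovecot.conf sketch (path hypothetical; the directory must exist and be writable by the login process):

```
# Capture traffic between the proxy and the backend for debugging (path illustrative)
login_proxy_rawlog_dir = /var/log/dovecot/login-proxy-rawlog
```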
mail/dovecot: update to 2.3.16
mail/dovecot-pigeonhole: update to 0.5.16
ChangeLogs:
devel/icu: update to 69.1
Changes:
Reported by: GitHub (watch releases)
all: Remove all other $FreeBSD keywords.
Remove # $FreeBSD$ from Makefiles.
Remove occurrences of %%LUA_LIBDIR%%.
Differential Revision:
Remove redundant option descriptions that match the default ones
(ignoring case)
Reported by: danfe (for net/mosquitto), portscan
mail/dovecot: unbreak build with lua54
Reported by: poudriere failure
Approved by: portmgr blanket (fix build)
MFH: 2021Q1
devel/icu: update to 68.1
Changes:
ABI:
Reported by: GitHub (watch releases)
mail/dovecot: fix example config *.conf.ext REINPLACE missed in r537587.
PR: 246963
Submitted by: kfv@irbug.org
MFH: 2020Q2
mail/dovecot: restore the REINPLACE_CMD for the example config.
Overzealous removal.
PR: 246947
Submitted by: gwbr0601@yahoo.de
Pointy Hat To: ler
mail/dovecot: Upgrade to 2.3.10.1, fixing multiple vulnerabilities.
Clean up some REINPLACE warnings whilst we're here.
MFH: 2020Q2
Security: 37d106a8-15a4-483e-8247-fcb68b16eaf8
Security: CVE-2020-10957
Security: CVE-2020-10958
Security: CVE-2020-10967
devel/icu: update to 67.1
Changes:
ABI:
Reported by: GitHub (watch releases)
mail/dovecot: use libexttextcat for lucene.
PR: 244932
Submitted by: igorz@yandex.ru
devel/icu: update to 66.1
Changes:
ABI:
mail/dovecot: update to 2.3.9.3
Changelog:
* CVE-2020-7046: Truncated UTF-8 can be used to DoS
submission-login and lmtp processes.
* CVE-2020-7957: Specially crafted mail can crash snippet generation.
MFH: 2020Q1
Security: CVE-2020-7046
Security: CVE-2020-7957
Security: 74db0d02-b140-4c32-aac6-1f1e81e1ad30
* Changed several event field names for consistency and to avoid
conflicts in parent-child event relationships:
* SMTP server command events: Renamed "name" to "cmd_name"
* Events inheriting from a mailbox: Renamed "name" to "mailbox"
* Server connection events have only "remote_ip", "remote_port",
"local_ip" and "local_port".
* Removed duplicate "client_ip", "ip" and "port".
* Mail storage events: Removed "service" field.
Use "service:<name>" category instead.
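These renames matter for anyone with event filters or metrics keyed on the old field names; a dovecot.conf sketch of the adjustment (metric names, event names, and filter values are illustrative):

```
# Before the rename: filter = (event=smtp_server_command_finished AND name=MAIL)
# SMTP server command events now expose cmd_name instead:
metric smtp_mail_cmd {
  filter = (event=smtp_server_command_finished AND cmd_name=MAIL)
}

# Events inheriting from a mailbox now use "mailbox" rather than "name":
metric trash_expunges {
  filter = (event=mail_expunged AND mailbox=Trash)
}
```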
mail/dovecot: include mention of security.bsd.hardlink_check_{g,u}id in
pkg-message.
PR: 242223
Submitted by: tphilipp@potion-studios.com
mail/dovecot: revert removing patch that is still needed.
PR: 240607
Revert changes that crept in by accident
Reported by: fluffy
Pointy hat: bapt
mail/dovecot: really fix LUA=off.
Pointy Hat To: ler
mail/dovecot: fix breakage when LUA is NOT selected.
PR: 241144
Submitted by: matthias.pfaller@familie-pfaller.de
Reported by: many
Pointy Hat To: ler
Drop the ipv6 virtual category for m* category as it is not relevant anymore
dovecot-fts-xapian: Bump portrevision after dovecot upgrade
Add a note to the dovecot port about the requirement to bump the portrevision
each time dovecot is updated
PR: 241147
Reported by: Matthias Pfaller <matthias.pfaller@familie-pfaller.de>.
devel/icu: update to 65.1
Changes:
ABI:
mail/dovecot: remove no longer needed patch file.
PR: 240607
Submitted by: paul.le.gauret@gmail.com00
onvert to UCL & cleanup pkg-message (categories l-m): [PATCH] lib-storage: Namespace prefix shouldn't be included in all
mailbox name validity checks
Obtained from: upstream github.
mail/dovecot: One should actually TEST their patches.
Fix previous commit.
Pointy Hat To: ler
mail/dovecot: stop whining about TCP_NODELAY errors.
[PATCH] lib: ostream-file: Don't log any errors when setting
TCP_NODELAY
It's likely never useful to log the error, and it seems more and more
unexpected errors just keep popping up.
Obtained from: upstream git.
mail/dovecot: stop spamming the log with EINVAL.
PR: 239172
Submitted by: zillion1@o2.pl
Obtained from: dovecot mailing list.
mail/dovecot, mail/dovecot-pigeonhole: Update to 2.3.7 and 0.5.7 respectively.
dovecot changelog:
*.
mail/dovecot: remove obsolete patch.
no PORTREVISION bump, as it doesn't change the package.
Reported by: herbert@gojira.at: upgrade to 2.
MFH: 2019Q2
Security: CVE-2019-10691
mail/dovecot: upgrade to 2.3.5.1.
* CVE-2019-7524: Missing input buffer size validation leads into
arbitrary buffer overflow when reading fts or pop3 uidl header
from Dovecot index. Exploiting this requires direct write access to
the index files.
MFH: 2019Q1
Security: CVE-2019-7524
devel/icu: update to 64.1
Changes:
ABI:
PR: 236325
Exp-run by: antoine
Differential Revision:
mail/dovecot: upgrade to 2.3.4.
PR: 235523
Submitted by: pascal.christen@hostpoint.ch
MFH: 2019Q1
Security: 1340fcc1-2953-11e9-bc44-a4badb296695
Security: CVE-2019-3814
mail/dovecot: Fix previous commit.
I missed a character typing the patch.
Pointy Hat: ler
mail/dovecot: Pick up mailing list patch for imap-preauth vs. stats-writer.
see the dovecot mailing list thread on imap-preauth and stats-writer between
Stephan Bosch and a FreeBSD user
Obtained from: upstream mailing list.
mail/dovecot: Pick up a mailinglist patch for solr/tika separation.
solr and tika currently use the same http client connection. Upstream
made the attached patches in response to my (ler@) bug report.
Obtained from: upstream mailing list.
mail/dovecot: Add upstream patch to fix a double free in MySQL.
Obtained
from:
mail/dovecot: add option to support libsodium
- libsodium option to support security/libsodium based crypts
- pet portlint
- fix LUA option pkg-plist issues
mail/dovecot: pick up patch from upstream to quiet format warnings.
Obtained
from:
mail/dovecot update to 2.3.4, mail/dovecot-pigeonhole to 0.5.4
dovecot change log:
* The default postmaster_address is now "postmaster@<user domain or
server hostname>". If username contains the @domain part, that's
used. If not, then the server's hostname is used.
* "doveadm stats dump" now returns two decimals for the "avg" field.
+ Added push notification driver that uses a Lua script
+ Added new SQL, DNS and connection events.
See
+ Added "doveadm mailbox cache purge" command.
+ Added events API support for Lua scripts
+ doveadm force-resync -f parameter performs "index fsck" while opening
the index. This may be useful to fix some types of broken index files.
mail/dovcecot: fix thinko in previous update. Don't print config always
PR: 232803
Submitted by: oleg@pcbtech.ru
mail/dovecot: give better error message(s) when there are configuration errors.
PR: 232785
Submitted by: prj@rootwyrm.com
devel/icu: update to 63.1
Changes:
ABI:
PR: 232300
Exp-run by: antoine
mail/dovecot: don't pick up libsodium if installed.
PR: 232236
Submitted by: d8zNeCFG@aon.at
mail/dovecot upgrade to 2.3.3, mail/dovecot-pigeonhole upgrade to 0.5.3.
dovecot changel.
mail/dovecot: upgrade to 2.3.2.1.
v2.3.2 still had a few unexpected bugs:
- SSL/TLS servers may have crashed during client disconnection
- lmtp: With lmtp_rcpt_check_quota=yes mail deliveries may have
sometimes assert-crashed.
- v2.3.2: "make check" may have crashed with 32bit systems
devel/icu: update to 62.1
Changes:
ABI:
PR: 229359
Exp-run by: antoine (only 10.4)
mail/dovecot{,22} add BEFORE: mail to RC script
PR: 228998
Submitted by: ohauer@FreeBSD.org
mail/dovecot: fix "2.3.1 Replication is throwing scary errors"
make makepatch for cleanliness
Submitted by: remko
Reported by: remko
Obtained from: upstream
Scale back my portfolio
I'm releasing maintainership on a number of ports that I no longer have
time to maintain effectively.
Add an upstream patch to fix a panic when a malformed address line
is fed to dovecot, as OpenSMTPd can do.
Submitted by: gahr
Reported by: brnrd
Obtained
from:)
devel/icu: update to 61.1
Changes:
ABI:
PR: 227042
Exp-run by: antoine
MFH: 2018Q2 (required by Firefox 61).
Improve clarity of dovecot's pkg-message
Change an ambigious "enable" to the actual value that causes a problem,
and fix spelling of "gid".
No PORTREVISION bump---there's a major update coming shortly, and this
change will get picked up then.
PR: 218392
Submitted by: Jeremy Chadwick
Update dovecot to 2.2.34, and bump pigeonhole.
*..
mail/dovecot: update to 2.2.33.2.
One more patch release with some fixes:
- doveadm: Fix crash in proxying (or dsync replication) if remote is
running older than v2.2.33
- auth: Fix memory leak in %{ldap_dn}
- dict-sql: Fix data types to work correctly with Cassandra
bump dovecot-pigeonhole PORTREVISION as well.
mail/dovecot: fix a parallel build issue.
Reported by: leres
Obtained
from:
(part)
mail/dovecot: upgrade to 2.2.33.1.
- dovecot-lda was logging to stderr instead of to the log file.
Update dovecot to 2.2.33, and bump pigeonhole.
* doveadm director commands wait for the changes to be visible in the
whole ring before they return. This is especially useful in testing.
* Environments listed in import_environment setting are now set or
preserved when executing standalone commands (e.g. doveadm)
+ doveadm proxy: Support proxying logs. Previously the logs were
visible only in the backend's logs.
+ Added %{if}, see
+ Added a new notify_status plugin, which can be used to update dict
with current status of a mailbox when it changes. See
+ Mailbox list index can be disabled for a namespace by appending
":LISTINDEX=" to location setting.
devel/icu: update to 59.1
- Temporarily keep C++98 working in consumers for Clang's default -std=
Changes:
PR: 218788
Submitted by: takefu@airport.fm, dcarmich@dcarmichael.net (early version)
Exp-run by: antoine
Update dovecot to 2.2.32, and bump pigeonhole.
*. This tells Dovecot
Approved by: portmgr blanket
Servers and bandwidth provided by New York Internet, iXsystems, and RootBSD
7 vulnerabilities affecting 73 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities
Last updated:2022-08-05 19:03:39 | https://aws-1.freshports.org/mail/dovecot/ | CC-MAIN-2022-33 | refinedweb | 2,089 | 59.8 |
Hi,
> I think (but I'm not 100% sure about this) that the "library:" is just a convention for
a namespace associated with a SWC.
Which is inconsistent with the mx namespace but I'll assume that's for historic reasons.
> But this convention allows the sole "http:" namespace to stand out as the language namespace
that indicates the use of MXML 2006 vs. MXML 2009, which I think is nice.
If we were thinking of adding new apache namespace should we keep the distinction between
2006 and 2009? I'd prefer to drop the year from the URI if we can.
> In general, XML namespaces are not expected to map to web pages.
Sure but it a nice to have I think and also in common usage. The W3C maps namespace URIs
to real URLs (eg). Perhaps we should follow a similar format
as described here (and elsewhere)?
But I like the idea as if a user of the SDK goes to that link it take them to the Apache Flex
site - where they can find out more info etc.
Thanks,
Justin | http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201208.mbox/%3C26817905-74C4-4F62-952B-BF566039275D@classsoftware.com%3E | CC-MAIN-2017-22 | refinedweb | 183 | 80.31 |
Managed Extensions: Using the Microsoft Word Spell Checker via Automation
Welcome to this week's installment of .NET Tips & Techniques! Each week, award-winning Architect and Lead Programmer Tom Archer demonstrates how to perform a practical .NET programming task using either C# or Managed C++ Extensions.Many times in our programming lives we find that some features from another application would be extremely beneficial within our own software. One great example of that is the Microsoft Word spell checker. While many C# and VB.NET examples illustrate Automation from a .NET application, I couldn't find one that showed how to automate Word from Managed C++. In addition, since I ran into several "gotchas", I thought that this task would make a nice addition to the .NET Tips & Techniques series.
The following is a step-by-step demo of how to use Automation to access the Microsoft Word spell checker from a Managed C++ application. (The accompanying demo application allows you to quickly test these steps.)
- Create a new C++ Windows Forms application. I named mine OfficeWord.
- From the Solution Explorer, right-click References and then click Add References.
- When the Add References dialog box appears, click the COM tab.
- Locate the entry for "Microsoft Word 11.0 Object Library" and click the Select button, followed by the OK button. This will add the necessary references to your .NET project.
- Add the following
usingstatement to the top of your form code:
using namespace Microsoft::Office::Interop;
Note: The class that you use to automate Microsoft Word is actually in the Microsoft::Office::Interop::Word namespace. However, that namespace includes an interface called System that will conflict with the .NET System namespace. As a result, I include the Microsoft::Office::Interop namespace and qualify the objects within that namespace with the Word namespace name (e.g., Word::ApplicationClass).
- Add a text box (edit control) to the form that will contain the value to be spell checked.
- Add a button to perform the spell check.
- Now you need to figure out how to use the Word objects. The easiest way to determine which classes, method, and properties are available from a reference is to use the Visual Studio ObjectBrowser. From the Visual Studio View menu, select Object Browser.
- Expand the Microsoft.Office.Interop.Word entry as shown in the following figure:
- You'll notice that the Application class (selected) has a COM class name of ApplicationClass. Knowing this will come in handy shortly. Also note the members of the Application class in the right pane, including the CheckSpelling method. As you can see, the Object Browser is a great way to spelunk through COM objects to determine their functionality. This is especially useful in situations where the objects are not documented very well.
- Now that you know the name of the COM object and the method you want to call, add a handler for the spell check button and code it as follows (You must use the COM class name of ApplicationClass in order to avoid compiler errors.):
private: System::Void button1_Click(System::Object * sender, System::EventArgs * e) { try { Word::ApplicationClass* winword = new Word::ApplicationClass(); Object* o = System::Reflection::Missing::Value; bool success = winword->CheckSpelling(textBox1->Text, &o, &o, &o, &o, &o,&o, &o, &o, &o, &o, &o, &o); System::Text::StringBuilder* str = new System::Text::StringBuilder(); str->AppendFormat(S"Your text has {0} errors", success == true ? S"NO" : S"spelling"); MessageBox::Show(str->ToString(), S"Spell Check Results"); } catch(Exception* e) { MessageBox::Show(e->Message); } }
Note: The Word object's CheckSpelling method takes a large number of arguments. In order to indicate that you don't wish to pass values for those arguments, you can simply use the System::Reflection::Missing object.
- Build and run your application to see results similar to those shown in the following figure:
Final NotesThe focus of this article was to illustrate how to use Automation from Managed C++. In that vain, I used one parameter of one method of the incredibly rich and robust Word object. To read more about the different parameters you can use with the CheckSpelling method (such as specifying a custom dictionary, ignoring case, etc.) and to view what else you can automate from the various Office products, refer to the MSDN Web site._0<< | http://www.developer.com/net/cplus/article.php/3428561/Managed-Extensions-Using-the-Microsoft-Word-Spell-Checker-via-Automation.htm | CC-MAIN-2014-52 | refinedweb | 714 | 55.95 |
0.
- Declare a vector named result of Point type
- Traverse the points object array until the foremost left point is found
- Add that point to the result vector
- Find the next point “q” such that it is the most counterclockwise point off all other points
- Set p to point “q” for the next iteration.
- Keep adding up while “p” is not equal to leftmost.
Description
So our main idea to solve the convex hull is to use orientation. We are going to find and start with the leftmost or maybe the lowest X coordinate. And we can take it until all our points are found in which a set of some points can accumulate.
We are going to pass the Object array points of user class Point, which we already define it at the start of the code. Our arguments of points and lengths of the integer are passed into the convex hull function, where we will declare the vector named result in which we going to store our output.
Now initialize the leftmost point to 0. we are going to start it from 0, if we get the point which has the lowest x coordinate or the leftmost point we are going to change it.
Now traverse all the points and find out the lowest one. Store the position of leftmost to “p” and declare a point
Now, start a do-while loop in which the first thing we gonna does is adding up the first point as an output.
Now we have to find the most counterclockwise point to all other points, for this, we are going to use orientation. We made a separate function for this, which checks if the orientation of triplets is 2 or not if it is found to be 2 then update the value of point “q”.
This should be continued up until the “p” is not equal to leftmost.
Example
Given points are:
{ { 0, 3 }, { 2, 2 }, { 1, 1 }, { 2, 1 }, { 3, 0 }, { 0, 0 }, { 3, 3 } };
leftmost=0;
After traversing all the points, our first lowest x co-ordinate will be (0,3) it will store as a result.
Now it is going to check which x,y pair has most counterclockwise as it will give orientation as 2 and update the value of point “q”.
Pair to be found as (0,0).
Now, copy the value of point “q” into p as a next point for again finding out the most counterclockwise point.
Until the value of p is not equal to leftmost we are gonna use this loop.
Our output will be: (0,3), (0,0), (3,0), (3,3)
Implementation
C++ program for Convex Hull Algorithm
#include <iostream> using namespace std; #define INF 10000 struct Point { int x; int y; }; int orientation(Point p, Point q, Point r) { int val = (q.y - p.y) * (r.x - q.x) - (q.x - p.x) * (r.y - q.y); if (val == 0) return 0; return (val > 0) ? 1 : 2; } void convexHull(Point points[], int n) { if (n < 3) return; int next[n]; for (int i = 0; i < n; i++) next[i] = -1; int l = 0; for (int i = 1; i < n; i++) if (points[i].x < points[l].x) l = i; int p = l, q; do { q = (p + 1) % n; for (int i = 0; i < n; i++) if (orientation(points[p], points[i], points[q]) == 2) q = i; next[p] = q; p = q; } while (p != l); for (int i = 0; i < n; i++) { if (next[i] != -1) cout << "(" << points[i].x << ", " << points[i].y << ")\n"; } } int main() { Point points[] = { { 0, 3 }, { 2, 2 }, { 1, 1 }, { 2, 1 }, { 3, 0 }, { 0, 0 }, { 3, 3 } }; cout << "The points in the convex hull are: "; int n = sizeof(points) / sizeof(points[0]); convexHull(points, n); return 0; }
The points in the convex hull are: (0, 3) (3, 0) (0, 0) (3, 3)
Java program for Convex Hull Algorithm
import java.util.*; class Point { int x, y; Point(int x, int y) { this.x = x; this.y = y; } } class ConvexHull{ public static int OrientationMatch(Point check1, Point check2, Point check3) { int val = (check2.y - check1.y) * (check3.x - check2.x) - (check2.x - check1.x) * (check3.y - check2.y); if (val == 0) return 0; return (val > 0) ? 1 : 2; } public static void convex_hull(Point points[], int lengths) { if (lengths<3) return; Vector<Point> result = new Vector<Point> (); int leftmost = 0; for (int i = 1; i<lengths; i++) if (points[i].x<points[leftmost].x) leftmost = i; int p = leftmost, pointq; do { result.add(points[p]); pointq = (p + 1) % lengths; for (int i = 0; i<lengths; i++) { if (OrientationMatch(points[p], points[i], points[pointq]) == 2) { pointq = i; } } p = pointq; } while (p != leftmost); System.out.print("The points in the convex hull are: "); for (Point temp: result) System.out.println("(" + temp.x + ", " + temp.y + ")"); } public static void main(String[] args) { Point points[] = new Point[7]; points[0] = new Point(0, 3); points[1] = new Point(2, 3); points[2] = new Point(1, 1); points[3] = new Point(2, 1); points[4] = new Point(3, 0); points[5] = new Point(0, 0); points[6] = new Point(3, 3); int lengths = points.length; convex_hull(points, lengths); } }
The points in the convex hull are: (0, 3) (0, 0) (3, 0) (3, 3)
Complexity Analysis for Convex Hull Algorithm
Time Complexity
O(m*n) where n is the number of input points and m is the number of output points.
Space Complexity
O(n) where n is the number of input points. Here we use an array of size N to find the next value. | https://www.tutorialcup.com/interview/algorithm/convex-hull-algorithm.htm | CC-MAIN-2021-31 | refinedweb | 933 | 74.63 |
idea:
add, search, annotate, link, view, overview, recent, by name, random
meta:
news, help, about, links, report a problem
account:
browse anonymously,
or get an account
and write.
user:
pass: register,
Let's say I want to take the minimum of two numbers, whether int or float.
In C:
#define MIN(x,y) ((x < y) ? x : y)
In C++ :
template <class T>
T min(T x, T y)
{ return
(x < y) ? x : y; }
In 'weak C' you can return a type 'unknown', and you can leave out type specifiers on function parameter declarations.
I.e. :
unknown min(x, y)
{ return (x < y) ? x : y; }
when you do something like :
int i, j;
int k = min(i, j);
The compiler knows the result and input types, so it compiles the appropriate function
int min(int, int);
for you, and reports any errors that result.
This has two advantages :
(a) weak-C is simpler: no distinction between macros, functions and templates.
(b) can easily mix types, unlike C++ templates; but can be debugged properly, unlike C macros.
shoot holes in, please
back:
main index | http://www.halfbakery.com/idea/weakly_20typed_20functions | CC-MAIN-2017-04 | refinedweb | 182 | 70.84 |
Welcome To Snipplr
Everyone's Recent ActionScript 3 Snippets
- All /
- JavaScript /
- HTML /
- PHP /
- CSS /
- Ruby /
- Objective C
This code shows how not only to create a text file onto the local computer desktop, but also how to post the contents of a 'text box' object into the file.0 480 posted 6 years ago by SinVerguenzaGames
In this script you also removing unused listener.0 445 posted 7 years ago by TheRabbitFlash
This script allow you copy to the clipboard current layer properties in format: &selectionBounds.x = &layerx; &selectionBounds.y = &layery; // &selectionBounds.width = &layerwidth; // &selectionBounds.height = &layerheight;0 767 posted 7 years ago by TheRabbitFlash
This script allow you copy to the clipboard current layer properties in format: &layer_name.x = &layer_x; &layer_name.y = &layer_y; // &layer_name.width = &layer_width; // &layer_name.height = &layer_height;0 536 posted 7 years ago by TheRabbitFlash
Allows for EXIT function on mobile device via mobile device BACK key button.0 567 posted 7 years ago by RiveraEraGames
File > Settings... > File and Code Templates0 441 posted 8 years ago by hejaaa
A simple utility class that will help you convert a color value (from numbers) to hexadecimal color value.0 571 posted 8 years ago by vamapaull
Just a simple example script for populating a bunch of MapMarker objects on a Map using the distriqt NativeMaps ANE0 463 posted 9 years ago by koriner
A complete example of how we can create a Preloader in actionscript3 to load an external .swf file. STRUCTURE ------------------ Source Swf |_ Main.fla |_ Main.as |_ Main.swf Preloader Swf |_0 582 posted 9 years ago by jquery404
Credit for this goes to Adobe, Ric Ewing, Kevin Williams, Aden Forshaw and Sidney de Koning.0 931 posted 9 years ago by adrianparr
This function returns the trimmed bounding box of a bitmap which contains all non-transparent pixel which color values are higher than the specified treshold (default 0xFF333333 in ARGB notation).0 470 posted 9 years ago by tinytiger
Algunas veces se quiere comprobar si un numero esta entre un rango de otro número menor y otro mayor.0 390 posted 9 years ago by fenixkim
Credit goes to Bruno Imbrizi.0 571 posted 9 years ago by adrianparr
Credit goes to Bruno Imbrizi.0 731 posted 9 years ago by adrianparr
Removes duplicating elements from Array, containing only objects of Point class0 414 posted 10 years ago by romech
Easily breaks a string into an array of words. *If your string has special characters such as latin, add those characters to the r var. Just like var r:RegExp = /[^\wçà áâãèéêìÃîòóôõùúûüÀÃÇÂÃÈÉÊÌÃÎÒÓÔÕÙÚÛÜ¡]+/...0 672 posted 10 years ago by izaiasdotcom
Sorts an array of objects using the native sortOn method0 618 posted 10 years ago by Narayon 629 posted 10 years ago by vamapaull
Recently I needed to capture a JSON feed of the top artists from Last.fm. The script makes a request for a JSON response which I later treat as a dictionary object. You'll probably need to sign up for an API Key before using this script0 511 posted 10 years ago by chrisaiv
a handy class to manage facebook connections using as30 573 posted 10 years ago by Matias
Hi; Sometimes you need to "trim" textifield for specific height... This solves just this, perfect for news, news-readers, rss etc1 540 posted 10 years ago by burnandbass
If you want to tween an object relatively, but the new value changes and needs to be a variable, just cast it as a string.0 559 posted 10 years ago by yannxou
Validate Birth day.1 431 posted 10 years ago by steppannws
TimedText (TT) XML captions files can have namespaces that cause problems when parsing them in AS3. To get around this you can use this code to remove the namespace from the root XML node using Regex. This example uses0 519 posted 10 years ago by adrianparr
ActionScript 3 AS3 Convert TextField LineBreaks to CRLF for Display as Plain Text (Notepad) on Windows
Linebreaks differ between Flash TextField and a plain text file like Notepad. In this example we convert the html linebreaks in Flash to \r\n0 682 posted 10 years ago by adrianparr
It's very useful if you build a FLV player for example, and want to convert the time into minutes:seconds (example: 6:13) //apply it to your project like this (and don't forget to import the class): time.text = TimeUtil.getTimecode(timeValue);0 485 posted 10 years ago by vamapaull
simple as3 class that help you to avoid repeat add MouseEvent each time0 465 posted 10 years ago by mgraph
Retrieve remote image from an URL and store it locally. Use this class for managing offline mode.2 692 posted 10 years ago by spawnrider
An easy way to detect shakes on mobile devices with equipped accelerometer.0 563 posted 10 years ago by vamapaull
Class version of TimelineManipulation.0 437 posted 10 years ago by okhy | https://snipplr.com/all?language=actionscript-3 | CC-MAIN-2022-21 | refinedweb | 847 | 59.53 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
I want to control the on/off of a node in expresso.
and i found the "on" attribute can do it.
but I can't add the input port in python.
anyone can help me ?
thank you very much!
Hi @milkliu
Please provides us more information about the initial setup and the expected behavior.
Are you trying to control the enable state of the current python node, or any other nodes?
Should these nodes be linked together?
Cheers,
Maxime.
I guess you want to create a node of the node.
@merkvilson
sorry,I should put more information about my question.
I want to add the "on" port in the gif below use python.
Do you want to add the port with a script from the script manager or a Python GvNode?
@m_adam I want to add it with a script from the script manager
Hi @milkliu,
Unfortunately, I do have bad news, after some research it's currently not possible to add the "On" port in Python due to a bug.
With that's said here is an example of how to creates Node and Port in Xpresso. (As I said it does not work for the On port, and some generics ports so this example will currently not work)
import c4d
def main():
# Checks if selected object is valid
if op is None:
raise ValueError("op is none, please select one object.")
# Retrieves the xpresso Tag
xpressoTag = op.GetTag(c4d.Texpresso)
if xpressoTag is None:
raise ValueError("Make sure the selected object get an Xpresso Tag.")
# Retrieves the node master
gvNodeMaster = xpressoTag.GetNodeMaster()
if gvNodeMaster is None:
raise RuntimeError("Failed to retrieves the Node Master.")
# Retrieves the Root node (the Main XGroup) that holds all others nodes
gvRoot = gvNodeMaster.GetRoot()
if gvRoot is None:
raise RuntimeError("Failed to retrieves the Root Node.")
# Creates a Link node in the GvRoot
linkNode = gvNodeMaster.CreateNode(gvRoot, c4d.ID_OPERATOR_OBJECT, x=100, y=100)
if linkNode is None:
raise RuntimeError("Failed to creates the link Node.")
# Defines the target of the link node to the host object of the Xpresso (in this case objHost, is the same as op)
objHost = linkNode.GetNodeMaster().GetOwner().GetObject()
linkNode[c4d.GV_OBJECT_OBJECT_ID] = objHost
# Creates the "On" Port
onPort = None
if linkNode.AddPortIsOK(c4d.GV_PORT_INPUT, c4d.GV_OBJECT_OPERATOR_ON):
onPort = linkNode.AddPort(c4d.GV_PORT_INPUT, c4d.DescID(c4d.DescLevel(c4d.GV_OBJECT_OPERATOR_ON)), c4d.GV_PORT_FLAG_IS_VISIBLE, True)
if onPort is None:
raise RuntimeError("Failed to creates the On Port.")
# Updates Cinema 4D
c4d.EventAdd()
if __name__ == '__main__':
main()
Hopefully, it will be fixed in the next release, but I can't promise anything.
In any case, I will bump this thread when the fix is available.
Cheers,
Maxime.
@m_adam Although I didn't get a solution to this problem, I found a solution to my own problem. Thank you very much.
This issue is now fixed in R21. | https://plugincafe.maxon.net/topic/11386/how-to-control-the-on-off-of-a-node-in-expresso | CC-MAIN-2021-31 | refinedweb | 515 | 66.64 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Does Manager ID change between different c4d versions such as R24, R25 and R26 If Manager ID is based on the options in "Restrict to" in Command Manager.
The Manager ID is explained here..
Hi,
Those IDs should not change but there are not guaranties. There are no symbols to define those IDs.
You can use the following code to displays all the registered Managers.
import c4d
from c4d import gui
def main():
pluginList = c4d.plugins.FilterPluginList(c4d.PLUGINTYPE_MANAGERINFORMATION, True)
for plugin in pluginList:
print (plugin, plugin.GetID())
# Execute main()
if __name__=='__main__':
main()
Cheers,
Manuel | https://plugincafe.maxon.net/topic/14060/is-there-a-fixed-manager-id-like-c4d-constant | CC-MAIN-2022-27 | refinedweb | 139 | 59.5 |
#include <FXWizard.h>
#include <FXWizard.h>
Inheritance diagram for FX::FXWizard:
For example, a Wizard may be used to install software components, and ask various questions at each step in the installation.
DECOR_TITLE|DECOR_BORDER|DECOR_RESIZE
0
10
Construct free-floating Wizard.
Construct Wizard which will always float over the owner window.
[inline]
Return a pointer to the button frame.
Return a pointer to the "Advance" button.
Return a pointer to the "Retreat" button.
Return a pointer to the "Finish" button.
Return a pointer to the "Cancel" button.
Return the container used as parent for the subpanels.
Change the image being displayed.
Return the current image.
Return number of panels.
Bring the child window at index to the top.
Return the index of the child window currently on top.
[virtual]
Save to stream.
Reimplemented from FX::FXTopWindow.
Load from stream. | https://fox-toolkit.org/ref12/classFX_1_1FXWizard.html | CC-MAIN-2021-25 | refinedweb | 139 | 64.07 |
Created on 2012-09-07 18:56 by mcdonc, last changed 2013-02-02 16:18 by benjamin.peterson. This issue is now closed.
The symptom is an exact duplicate of the symptom reported in the following (closed) issue:
The issue is also related to the following other issues:
To reproduce the symptom driving the patches that will be attached to this issue:
git clone git://github.com/pypa/pip.git
cd pip
/any/python setup.py dev
/any/python setup.py test
You can either wait for the entire test suite to finish or you can press ctrl-C at any time (the tests take a long time). In either case, a traceback like the following will be printed to the console.
Error in atexit._run_exitfuncs:
Error in sys.exitfunc:
From what I understand in other issues, multiprocessing.util._exit_function shouldn't actually be called *after* the containing module's globals are destroyed (it should be called before), but this does indeed happen.
Patches will be attached that anticipate the symptom and prevent a shutdown error. One will be attached for the Python 2.7 branch, the other for the Python tip. Each causes functions that are called at shutdown time to keep references to the other functions and globals they use, and each checks for the insane state and prevents an error from being raised when it is detected.
Patch for tip.
2.7 branch patch.
The patch makes sense. I'll take another look over the weekend, but it seems to be ready to be applied.
+ # NB: we hold on to references to functions in the arglist due to the
This is a nit, but I think adding "NB:", "Note:", etc. to the beginning of a comment is redundant because by being a comment it is already implicit that it should be noted.
New changeset 27d410dd5431 by Alexander Belopolsky in branch '3.2':
Issue #15881: Fixed atexit hook in multiprocessing.
New changeset 08c680918ff8 by Alexander Belopolsky in branch 'default':
Issue #15881: Fixed atexit hook in multiprocessing.
New changeset db67b848ddc3 by Alexander Belopolsky in branch '3.2':
Issue #15881: Fixed 3.2 backport.
Applied to 3.2 and 3.3. Thanks for the patch!
Leaving it open pending 2.7 commit.
New changeset b05547e8ff92 by Alexander Belopolsky in branch '3.2':
Issue #15881: Added NEWS entry and proper credit.
I see the same error on Windows (when pressing ^C), but on Linux I get
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 28, in _run_exitfuncs
import traceback
File "/usr/lib/python2.7/traceback.py", line 3, in <module>
import linecache
File "/usr/lib/python2.7/linecache.py", line 9, in <module>
import os
File "/usr/lib/python2.7/os.py", line 119, in <module>
sys.modules['os.path'] = path
AttributeError: 'module' object has no attribute 'modules'
This also suggests that module teardown has begun before/while sys.exitfunc is running.
I suspect the problem is caused by nose's isolate plugin.
With this enabled, a copy of sys.modules is saved before each test and then restored after the test. This causes garbage collection of newly imported modules. The destructor for the module type causes all globals to be replaced by None.
This will break the atexit function registered by multiprocessing since it depends on globals.
PS. A simple work-around (which does not require people to upgrade to a bugfixed version of Python) is to put
try:
import multiprocessing
except ImportError:
pass
near the beginning of setup.py. After this change I don't get the error when running "python setup.py test".
Actually, I am not so sure it is the isolate plugin. But I do think that sys.modules is being manipulated somewhere before shutdown.
Actually it is test.with_project_on_sys_path() in setuptools/commands/test.py that does the save/restore of sys.modules. See
New changeset 2b79b4848f44 by Richard Oudkerk in branch 'default':
Issue #15881: Clarify comment in exit function
I can reproduce the bug against the 2.7 tip.
Reviewing the 2.7 patch:
- The 2.7 tip has 'Misc/ACKS' instead of 'Doc/ACKS.txt'.
- In 'Misc/ACKS', the line adding 'Chris McDonough' should add it in alpha order.
- The remainder of the patch looks correct, and applies cleanly to the tip of the 2.7 branch.
- After applying the patch to the 2.7 tip, I can no longer reproduce the bug.
- After applying the patch, the full test suite ('./python -m test.testall -j3') passes all tests.
Targeting this for 2.7.4. If Alexander doesn't get to it, ping me and I'll do it
Philip, if you could backport, that'd be great.
New changeset 0a58fa8e9bac by Benjamin Peterson in branch '2.7':
Issue #15881: Fixed atexit hook in multiprocessing. | https://bugs.python.org/issue15881 | CC-MAIN-2021-21 | refinedweb | 804 | 69.38 |
GLi 2010, in stock condition: 12.5 km/l; route is Islamabad Expressway.
Can anyone share the average of the 1.6 Altis MT and AT?
Let me add one thing too.
What if I drive this same Corolla with 5 passengers, in 2nd gear at 80 km/h? What would my average be then?
When asking a question that has variable answers, it's mandatory that you put down the testing conditions first.
I don't know about the XLi or GLi, but I am here to share the Altis Cruisetronic 1.6L average.
1) Average in city with AC = 11 km/l
2) Average in city without AC = not checked
3) Average on highway with AC = at 80 km/h it's 19+; at 120 km/h it's 16; at 100 km/h it's 18, driving with cruise control
4) Average on highway without AC = not checked
My experience with an XLi on the highway/motorway with AC on 1-2:
speed 85-90 km/h = 17-17.5 km/l
90-100 = 16 km/l
100-110 = 15 km/l
110-120 = 14-14.5 km/l
150-160 = 11-12 km/l
Is there no one who knows the 1.6 M/T average?
@vikingbro, I have a GLi, not an Altis.
Altis Cruisetronic 1.6:
1) Average in city with AC = 11
2) Average in city without AC = 13
3) Average on highway with AC = 15.3 on motorway @ 120 km/h
4) Average on highway without AC = 18+ on motorway @ 120 km/h
Thanks, Rizwan400 and sgo7, for sharing this information. I am planning to buy a new Altis and am looking for the differences between the Altis AT and MT versions: comfort, fuel consumption, and the pros and cons of the sunroof.
Dear All
It's a nice fuel average. OK, you guys are already using the new Corolla 1.6. I want to buy two cars in the near future: one is a Honda City 2011 manual, and the other I haven't decided. I need your advice, so please help me out. I don't like the Honda Civic; primarily, it is not suitable for our bumpy roads and doesn't have adequate trunk capacity. I need to know the maximum speed any of you has achieved on the motorway in a Corolla 1.6 GLi automatic or Altis manual. How is the overall build quality, and how are the other features? Looking at the price, the GLi 1.6 suits me, but I have never liked automatic gears. Does the Altis manual 6-speed touch the same maximum speed that the 1.8L Altis did?
The new shape of the Civic is coming in November 2011 with a price tag of 22 lakh, and that becomes too expensive for me. The Corolla suits me due to its comfort on bumpy roads: Islamabad to Sargodha, Sialkot, Taxila, Gadoon Amazai, etc. The Civic no doubt has better build quality, better seats and more modern equipment, but that is not what I require. I have driven a Corolla XLi 1300 from Islamabad to Sargodha and back after a 30-minute stay; it was good.
Even as a workhorse I like Toyota more than the City.
Gentlemen, those who have experience of the Corolla GLi 1600cc or Altis 1600cc manual, please advise me. You can call me on my mobile: 0315-3067897.
I have driven a 1999 Nissan Sunny EX Saloon 1.4L; believe me, this 1.6L average is better than the Nissan's. Please tell me: with Dual VVT-i, is there any further noise reduction in the cabin of the Corolla GLi or Altis? I hate cabin noise, which the Honda Civic has. Is the new Corolla GLi/Altis 1.6 smooth at high speeds, that is, 160 km/h? How are the brakes? Thank you for your time. Best regards
Tashfeen
Can anyone post detailed pictures of the new Corolla GLi 1.6, interior and exterior, and the Altis manual 1.6L? I am a bit afraid of the fuel consumption of the automatic. If anybody is using the Altis manual 1.6L, please tell us in detail about its fuel consumption.
As far as I know, the engine is very quiet now and cabin noise is low.
Altis 2011:
A/T with AC, local = 7-8 km/l
A/T without AC, local = 9 km/l
A/T with AC, highway (2-3k RPM) = 12-13 km/l
A/T without AC, highway (2-3k RPM) = 14-15 km/l, which is the maximum it can deliver
It is quite strange that the Cruisetronic, which is automatic, gives better fuel consumption than the manual Altis... anyhow, thanks.
Being a mechanical engineer with experience working as a test driver at Nissan (Yakashima, Tokyo, Japan), I believe all automatic cars have higher idle revs than manuals. Please re-check the data provided.
Thank you and best regards
driving style.
An M/T will always be more fuel efficient than an A/T, as a torque converter always has losses compared to a clutch-type system... You're right... One way to reduce the economy of an M/T is to drive it in low gears at high RPM, depending on driving habits... At high revs in low gears, the fuel efficiency drops drastically!
I agree. If we drive family sedans like drag cars, then we lose a lot more on fuel, plus engine life, CV joints and much else, plus we prove to be show-offs. Anyway, I have been searching a lot for a better car, but right now the Corolla 1.6 Dual VVT-i seems to be the better option. I tried to find spares for the Belta, but they are not available. Secondly, the City may be good on fuel with a manual transmission, but I haven't heard a good reputation for this Honda. What most people said is that it thuds over bumps, unlike the Corolla, which one can keep driving at high speed on rough roads.
I don't know, but Honda cars are not as strong as Toyotas, although Honda seats are far, far better than the Corolla's.
Can anyone post pictures of GLi automatic interior, or ALTIS manual 1.6 2011?
Can somebody tell me, one who has driven GLi or altis 1.6 Dual VVTi about its maximum speed? What is the maximum speed this one can go, or has touched? I am a fast driver.
[QUOTE=tashfeen67;2941795]Can somebody tell me, one who has driven GLi or altis 1.6 Dual VVTi about its maximum speed? What is the maximum speed this one can go, or has touched? I am a fast driver.[/QUOTE]
"DEW NA KIA TO PHIR KIA JIYA"
"Dil hona chahida jawan, umran ich ki rakhya"
Seems like you've been living your life in the FAST LANE, so fast that you couldn't notice ripe old age coming.
The wise say that old men who go astray at this age never mend their ways.
Just kidding sir, take it easy, I am just envious of you
good going | https://www.pakwheels.com/forums/t/corolla-2011-petrol-average/169379?page=2 | CC-MAIN-2019-04 | refinedweb | 1,161 | 73.88 |
Shimul, my friend, suggested that I add some advanced features of creating a setup project, like registry keys. This blog contains the basic steps of adding a registry key from a setup project, as well as reading a registry key value from a program.
How to see the registry keys:
Click on Start Menu > Run > regedit [enter]. Here you will see the following.
You can find a short description of registry keys here.
How to Add Registry Keys from Setup Project:
- In Solution Explorer, right click on Setup Project.
- Click on View > Registry.
- Expand the tree view of the registry key (setup) on the left side where you want to add a key. Here we will add a key under the following directory.
- Right click on [Manufacturer] > New > String Value.
- Rename the newly added key. Set its value from its properties.
How to modify Manufacturer:
You can see the Registry value after installing which is like below:
Sample Code to Read Registry Key:
using Microsoft.Win32;
string v = getRegistryKeyValue(@"SOFTWARE\BUET", "MyKey");
MessageBox.Show("Key value : " + v);
private string getRegistryKeyValue(string keyPath, string keyName)
{
try
{
// OpenSubKey returns null if the key does not exist
RegistryKey rk = Registry.CurrentUser.OpenSubKey(keyPath);
if (rk == null)
return null;
string s = (string)(rk.GetValue(keyName));
rk.Close();
return s;
}
catch (Exception)
{
return null;
}
}
3 Comments:
thanks buddy :)
i want to add an uninstall link in the User's Program Menu, can you help me with this?
This is an awesome post. Figured it out. | http://tipsntricksbd.blogspot.com/2008/07/registry-key-add-read-in-vs-2005.html | CC-MAIN-2018-05 | refinedweb | 234 | 67.55 |
Because apparently I don't have one. This code is from Teach Yourself C++ in 21 Days, listing 5.7, which is supposed to be written with the ASCII standard. I get the error message ISO C++ forbids declaration of ' from Dev C++ on line 25 (an opening brace for the function VolumeCube). The code is exactly as it appears in the book, save for "using namespace std". Thanks in advance.
Code:
#include <iostream>
using namespace std;

int VolumeCube(int length, int width = 25, int height = 1);

int main()
{
    int length = 100;
    int width = 50;
    int height = 2;
    int volume;

    volume = VolumeCube(length, width, height);
    cout << "First volume equals: " << volume << "\n";
    volume = VolumeCube(length, width);
    cout << "Second time volume equals: " << volume << "\n";
    volume = VolumeCube(length);
    cout << "Third time volume equals: " << volume << "\n";
    return 0;
}

VolumeCube(int length, int width, int height)
{
    return (length * width * height);
}
The.
Those are the bloggers who wrote their solutions in their blogs:
The commentators who got it right are:
David Poll, Michael Mrozek, Mark R, OJ, Edward Shen, ken, Thaison Lam, Shailendra, Simon Ask Ulsnes, pavan kumar, Kirill Sorokin, Florian K., Stephen Burton, Poul Foged Nielsen, Michał Bendowski, Ivan Grasso, Marius Klimantavicius, thebol, steve_barham, Shams Mahmood, Samuel Williams, Sebastian U, Tim C, Hegi, NathanLaan, Guy Gervais, anton, Mori, Stephen Goldbaum, kranthi, ZagNut, Angelo vd Sijpt, krzysztof kreja, AS, Luke, Richard Vasquez, Christof Jans, Pavel, Randoom, Luca Giurina, Josh Bjornson, leppie, Kris Warkentin, cleg, grigri, Austinian, John, Martin, Jesus DeLaTorre, Hugh Brown, Michael, Tim Kington, Chris Jobson, JJ (it should be ((n+1)*(n+2)/2)), st0le, Guido Domenici, Michael Dikman and Frank D.
This Week's Question: You have a list of n integers and a number m. Determine whether the list contains two numbers whose sum equals m. Don't forget to mention the complexity of your solution!
You can get updates by RSS so you catch up with the next posts in this series and get the correct answer to today’s question. As always you may post the solution in your blog or comment. Comments will be approved next week.
James Curran Said on Jul 7, 2008 :
Are the numbers in the list unique? Or could we have (2,4,7,3,7) & 14 with the match being 7 & 7?
Shahar Y Said on Jul 7, 2008 :
Hi James,
To make things less complicated, you can assume that the numbers in the list are unique.
Alex Said on Jul 7, 2008 :
The most time-efficient way would be to utilize a hashtable. (Or, simply a set if your language supports it)
Let’s take the example of searching for a sum 14
in the list 2,6,4,9,1,12,7.
Iterate through the list. For each value x in the list, add it to the hashtable, then check if 14-x exists in the hashtable. If not, go to the next item. if it does, return true (you found your pair). That “add to hashtable before checking” thing is necessary in order to prevent a 14-7=7 situation.
If you reach the end of the list without finding a matching pair, return false.
This algorithm is O(n) time, but also O(n) space.
The most space-efficient way is O(n^2) time, O(1) space:
For indexes 0,1,2,3,4,5, check 0&1…0&5,1&2…1&5, etc. A simple (14-list[i] == list[j]) comparison against every pair of elements evaluated will yield the results. Again, return when you find true, and if you make it all the way through the list without a match, return false.
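For concreteness, the hash-set approach described above can be sketched in Python (added for illustration, not part of the original comment; note this version checks before inserting, so a lone element cannot pair with itself):

```python
def has_pair_sum(nums, m):
    seen = set()
    for x in nums:
        if m - x in seen:   # a previously seen partner completes the sum
            return True
        seen.add(x)
    return False

print(has_pair_sum([2, 6, 4, 9, 1, 12, 7], 14))  # -> True (2 + 12)
print(has_pair_sum([2, 6, 4], 3))                # -> False
```

This is O(n) time and O(n) extra space, matching the analysis above.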
Shams Mahmood Said on Jul 8, 2008 :
Here’s my solution, assuming all integers in the input list are positive:
private boolean isMPresentInInput( final int[] input, final int m ) {
final boolean[] present = new boolean[m + 1];
for( int i = 0; i < input.length; i++ ) {
if( input[i] <= m ) {
present[input[i]] = true;
}
}
for( int i = 0; i < m / 2; i++ ) {
if( present[i] && present[m - i] ) {
return true;
}
}
return false;
}
Time: O(n + m)
Space: O(m)
Morgan Cheng Said on Jul 8, 2008 :
Here is a time O(n*log(n)) and space O(1) solution:
The first step: sort the N-element list which takes O(n*log(n)) time complexity.
The second step: search for two elements whose sum is M. Since the list is already sorted (smaller to the left and larger to the right), we look for one number from left to right and the other from right to left. The index of the left number is i, and the index of the right number is j. For each step, we calculate delta = M - Array[i] and compare it to Array[j]. If delta equals Array[j], we can return true; if delta is less than Array[j], we decrement j and continue; if delta is greater than Array[j], we increment i and continue. When i >= j, we break and return false.
The C# code is like below:
static bool CheckSum(int[] numArray, int sum)
{
Array.Sort(numArray);
for (int i = 0, j = numArray.Length - 1; i < j; ++i)
{
do
{
int delta = sum - numArray[i];
if (delta > numArray[j])
{
break;
}
else if (delta < numArray[j])
{
--j;
}
else // (delta == numArray[j])
{
return true;
}
} while (i < j);
}
return false;
}
Mark R Said on Jul 8, 2008 :
The naive approach, O(n^2): for each number in the list, test each number further down the list; return true if they sum to m. When you reach the end of the list, return false.
A faster approach, O(n log n): sort the list using your favorite O(n log n) sorting algorithm. For each number list[i], subtract from m and return true if a binary search of the sublist list[i..n] finds this value. Return false when you have iterated to i>n, or list[i]*2 > m.
I hope this is the optimal solution, because it’s the best I would have done in the time allotted for an interview.
Kimmen Said on Jul 8, 2008 :
The [first] solution I came up with is actually using a look-up table ;P, a hash table storing numbers. For each number, check if m - list[i] is in the look-up table. Return true if it is, else add the list[i] to the hash table.
This will end up with O(n) in both memory and time complexity. But I have no doubt there's an algorithm with better memory complexity!
Tim Said on Jul 8, 2008 :
My answer is here:
In summary:.
Kirill Sorokin Said on Jul 8, 2008 :
I believe the below one is good enough both performance and memory-wise.
for (i = 0; i < n; i++) {
for (j = i + 1; j < n; j++) {
if (m - array[i] == array[j]) {
return true;
}
}
}
return false;
Maneesh Said on Jul 8, 2008 :
A simple way to do this is to take the first element of the list (n[0]) and subtract it from the result (m); suppose we get x = m - n[0]. If x is positive, we check whether x exists in the list; otherwise we move to the next element. If x exists in the list, return true. Repeat this process for all elements. The worst case is when the last element is a match or there is no match, in which case the complete list needs traversal. To optimize the search, we can sort the list and work only with the elements that are smaller than the expected result; this lowers the number of traversals, although the sorting has its own cost, which can be minimized with a good sorting algorithm. I will post an implementation on my blog in a day or so.
Luca Giurina Said on Jul 8, 2008 :
int find_copple_max(list_numbers *l, int m)
// return 0 if no couple of numbers > m, else 1
{
int max_first, max_second;
max_first = l->value;
l=l->next;
max_second = l->value;
l=l->next;
while ((l != NULL) || (max_first + max_second <= m))
{
if (l->value > max_first)
max_first = l->value;
else if (l->value > max_second)
max_second = l->value;
l = l->next;
}
if (max_first + max_second > m)
{
printf("%d + %d > %d", max_first, max_second, m);
// Print result
return 1;
}
return 0;
}
Complexity is O(n) max (visit the complete list elements). Memory used: two int;
Heiko Hatzfeld Said on Jul 8, 2008 :
You can find a blogpost here:
Password is “foo”
I will unprotect it on monday.
Paul Said on Jul 8, 2008 :
It seems to me that the answer I’d look for in a job applicant would be to subtract each number in n from m and check to see if the result was in n:
public bool DoesSumExist(List<int> n, int m)
{
foreach (var number in n)
{
if (n.Contains(m - number)) return true;
}
return false;
}
That’s probably not the answer you’re looking for, but it’s applicable to an interview question, since I’d be looking for C# coders to work in a business environment where use of the tools (List) and keeping maintainability are important.
Dejan Dimitrovski Said on Jul 8, 2008 :
Assuming we already have a binary search tree implemented, first insert all the nodes in the tree. Then start traversing through the nodes… If the node has a value greater then the sum that we are looking for then move to the next node on the left and ignore all the nodes on the right.
If the value is smaller then the sum we are looking for, start traversing further through the child nodes, check the sum of the current node value and the child node value, if smaller continue traversing, if not ignore the rest on the right.
Hard to express myself correctly in this small text-box but maybe i’ll blog about it later on if i have time
James Curran Said on Jul 8, 2008 :
My solution (er..um… solutions)
Christof Jans Said on Jul 8, 2008 :
static bool CanSum(int[] inputList, int m)
{
inputList = inputList.OrderBy(i => i).ToArray();
int minIdx = 0;
int maxIdx = inputList.Length - 1;
while (minIdx < maxIdx)
{
int sum = inputList[minIdx] + inputList[maxIdx];
if (sum == m) return true;
if (sum > m) maxIdx--;
else minIdx++;
}
return false;
}
//complexity O(n) for the scan (O(n log n) overall including the sort)
Dan Sydner Said on Jul 8, 2008 :
Assuming that m>n.
Fast version O(n) time, O(m) space:
Call the elements of the list x1…xn; for each xi, insert a 1 into an array, a, at index xi. (Time O(n))
Iterate over the elements and see if a[m-xi] == 1 and, if so, return true. (O(n) lookups that take O(1) time)
Slow version O(nlogn) time, O(n) space:
Sort the list (time O(nlogn)). For each element xi in the list, do a binary search in the sorted list to see if m-xi exists. (O(n) lookups that take O(logn) time)
Xerxes Said on Jul 8, 2008 :
Hi - i’ve posted my (probably quite wrong) solution and my (possibly even more wrong) proof on my blog:
appreciate any feedback you could provide.
Alessio Spadaro AS Said on Jul 8, 2008 :
Re-post, as today i got a wp-install page as a result, so i’m not sure it worked.
The first solution uses a HashSet to record the elements as it scans the array as follows:
hs = new HashSet
for each elem in array:
if hs.contains(m-elem) return true
hs.add(elem)
return false
contains and add are both constant-time operations on a hashset, so we get O(n) for both execution time and memory.
The second approach sorts the array with an N*logN method (e.g. HeapSort) and then uses two indexes to scan the sorted array like this:
sort(a)
h = 0
t = n-1
while (h < t)
sum = a[h] + a[t]
if sum == m return true
if sum > m t -= 1
if sum < m h += 1
return false
Doing so will result in a O(1) memory and O(NLogN) execution
Michael Mrozek Said on Jul 8, 2008 :
I assume we’re not supposed to provide two algorithms, one for complexity and one for memory; the instructions are somewhat unclear.
Sort the list; there are many simple O(n log n) sorts including the one used by C++ in this code. Put pointers at the first and last nodes and sum them. If the sum is less than m, increase the low pointer; if it’s higher than m, decrease the high pointer; if it equals m, return true. Keep trying until the low and high pointers touch, then return false. This is O(n), so the overall complexity is still O(n log n). Memory usage is O(1) assuming the sort is in-place like this code. I’m suspicious that the problem asks for true/false instead of the actual numbers, but I can’t think of a way to lessen the complexity to take advantage of it.
bool check(int l [], const unsigned int lSize, const int m) {
std::sort(l, l + lSize);
int* start = l;
int* end = l + lSize - 1;
while(start != end) {
const int sum = *start + *end;
if(sum < m) {
start++;
} else if(sum > m) {
end--;
} else {
return true;
}
}
return false;
}
Jon Said on Jul 8, 2008 :
private static bool Question(List<int> n, int m)
{
bool result = false;
foreach (var i in n)
{
var list = new List<int>(n);
list.Remove(i);
if (list.Contains(m - i))
{
result = true;
break;
}
}
return result;
}
C#: this should handle both a list that has a unique value of 7 (would return false when it sees only one 7), and a list that has 2 values of 7 (would return true because there are two values of 7).
Luca Giurina Said on Jul 9, 2008 :
Sorry… little logic patch…
-while ((l != NULL) || (max_first + max_second <= m))
+while ((l != NULL) && (max_first + max_second <= m))
Luca!
Ronnie Hoogland Said on Jul 9, 2008 :
IList<int> numbers = new List<int>() { 2, 6, 4, 9, 1, 12, 7 };
int m = 14;
var q = from a in numbers
from b in numbers
where (a + b) == m && a < b
select new { Number1 = a, Number2 = b };
foreach (var numberSet in q)
{
Console.WriteLine(”{0} {1}”, numberSet.Number1, numberSet.Number2);
}
Console.ReadLine();
Guido Domenici Said on Jul 9, 2008 :
Here’s a solution that uses O(n) for storage (by using a hashtable) and has a complexity O(n). The idea is that you loop on the array and, on each iteration, if the hashtable contains the number that, added to the current one, yields the target number, then you got your solution:
bool PairExists (int[] nums, int target)
{
Dictionary<int, bool> foundNums = new Dictionary<int, bool>();
for (int i = 0; i < nums.Length; i++)
{
int addendumToFind = target - nums[i];
if (foundNums.ContainsKey(addendumToFind))
{
return true;
}
foundNums.Add(nums[i], true);
}
return false;
}
Michael Said on Jul 9, 2008 :
Did not find anything better than a BitArray. Instead of a BitArray, a hashtable can be used, but that is overhead.
Required memory is m/8 bytes.
Just one loop.
c#
using System;
using System.Collections;
public class MyClass
{
public static void Main()
{
Console.WriteLine(checkSum(new int[] {9, 2}, 5));
Console.WriteLine(checkSum(new int[] {5, 0}, 5));
Console.WriteLine(checkSum(new int[] {2, 2}, 4));
Console.WriteLine(checkSum(new int[] {5, 0, 2, 7 , 1, 9}, 5));
Console.WriteLine(checkSum(new int[] {7, 8, 9, 7 , 8, 9}, 14));
Console.WriteLine(checkSum(new int[] {}, 2));
Console.WriteLine(checkSum(new int[] {1, 2, 3, 4, 5, 7, 8}, 12));
Console.WriteLine(checkSum(new int[] {1, 2, 3, 4, 5, 7, 8}, 4));
Console.WriteLine(checkSum(new int[] {1, 2, 3, 4, 5, 7, 8}, 8));
Console.WriteLine(checkSum(new int[] {9, 2, 5, 4, 3, 2, 1}, 13));
}
public static bool checkSum(int[] numbers, int m)
{
BitArray myBits = new BitArray(m+1);
foreach (int i in numbers)
{
if (i <= m)
{
myBits[m - i] = true;
if (myBits[i])
{
return true;
}
}
}
return false;
}
}
Michael Dikman Said on Jul 9, 2008 :
I will sort the n integers first in O(nlogn) complexity.
Then i will move with 2 pointers - one from the start (p1) and one from the end (p2) over these integers, with these rules:
If arr[p1] + arr[p2] < m, advance p1
If arr[p1] + arr[p2] > m, advance p2
Do this until arr[p1] + arr[p2] = m, then return true
Or until p1 and p2 meet then return false;
(this can be done in recursion)
Space complexity is only 2 pointers.
lucas Said on Jul 10, 2008 :
One silly solution.
Store the numbers in a Hashtable, then traverse the list; calc x = m - n[i]; if hashtable[x] == true, then return true.
time complexity: o(n)
space complexity:o(n)
ZagNut Said on Jul 10, 2008 :
In C#:
// O(n^2)
static bool TwoNumsInList(List<int> numbers, int sum)
{
for (int i = 0; i < numbers.Count; i++)
for (int j = i + 1; j < numbers.Count; j++)
if ((numbers[i] + numbers[j]) == sum)
return true;
return false;
}
Firefly Said on Jul 13, 2008 :
This is a very easy problem if the list is unique
bool check(List<int> list, int m)
{
//walk the list
foreach(var item in list)
{
if( list.Contains( m-item) )
return true;
}
return false;
}
Firefly Said on Jul 13, 2008 :
Since we are looking for i1 + i2 = m, where i1 and i2 are numbers in the list, i2 = m - i1.
So all we need to do is check whether the list contains i2.
It’s worth pointing out that the efficiency of my solution depend on the efficiency of the list.Contains method which also greatly depend on the type of the list.
Morgan Cheng Said on Jul 14, 2008 :
switch (reader.NodeType)
{
case XmlNodeType.Element:
Console.Write("<{0}>", reader.Name);
break;
case XmlNodeType.Text:
Console.Write(reader.Value);
break;
case XmlNodeType.CDATA:
Console.Write("<![CDATA[{0}]]>", reader.Value);
break;
case XmlNodeType.ProcessingInstruction:
Console.Write("<?{0} {1}?>", reader.Name, reader.Value);
break;
case XmlNodeType.Comment:
Console.Write("<!--{0}-->", reader.Value);
break;
case XmlNodeType.XmlDeclaration:
Console.Write("<?xml version='1.0'?>");
break;
case XmlNodeType.Document:
break;
case XmlNodeType.DocumentType:
Console.Write("<!DOCTYPE {0} [{1}]", reader.Name, reader.Value);
break;
case XmlNodeType.EntityReference:
Console.Write(reader.Name);
break;
case XmlNodeType.EndElement:
Console.Write("</{0}>", reader.Name);
break;
}
Stephen Goldbaum Said on Jul 14, 2008 :
Assuming non-unique and entries are added to themselves.
Stephen Goldbaum Said on Jul 14, 2008 :
Simplified my solution since the first post.
SS Said on Jul 18, 2008 :
Please take care of integer overflow. I have improved method_2 over method_1.
Notifications and observers
One key feature of the framework, in addition to its small memory footprint and type validation, is its implementation of the observer pattern. In Anatomy, we introduced the notion of static observers. Here we will discuss them in more detail, along with dynamic observers.
Note
The point at which notifications are fired has been discussed in Introducing the members.
Static and dynamic observers
An observer is a callable that is called each time a member changes. For most members this happens:
- when the member gets a value for the first time, either through an assignment or through a first access when the default value is used. We will refer to this as a 'create' event.
- whenever a different value is assigned to the member. We will refer to this as an 'update' event.
- when the value of a member is deleted. We will refer to this as a 'delete' event.
Note
The ContainerList member is a special case, since it can also emit notifications when elements are added to or removed from the list. These will be referred to as 'container' events.
The distinction between static and dynamic observers comes from the moment at which the binding of the observer to the member is defined. In the case of static observers, this is done at the time of the class definition and hence affects all instances of the class. On the contrary, dynamic observers are bound to a specific instance at a later time.
The next two sections focus on how to manage static and dynamic observer bindings, while the following sections focus on the signature of the handlers and the content of the notification dictionary passed to the handlers in different situations.
Static observers
Static observers can be bound to a member in three ways:
- declaring a method matching the name of the member to observe, but whose name starts with _observe_
- using the observe decorator on a method. The decorator can take an arbitrary number of arguments, which allows tying the same observer to multiple members. In addition, observe accepts as an argument a dot-separated name, to automatically observe a member of an atom object stored in a member. Note that this mechanism is limited to a single depth (hence a single dot in the name).
- manually managing static observers using the following methods defined on the base class of all members:
  - add_static_observer, which takes a callable and an optional flag indicating which change types to emit
  - remove_static_observer, which takes a single callable as argument
Dynamic observers
Dynamic observers are managed using the observe and unobserve methods of the Atom class. To observe, one passes the name of the member to observe and the callback function. When unobserving, you can either pass just the member name to remove all observers at once, or a name and a callback to remove a specific observer.
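The binding and unbinding semantics can be pictured with a small stand-in written in plain Python (a toy written for illustration only, NOT the real atom implementation; the method names mirror Atom.observe/unobserve and the change dictionary mirrors the keys documented below):

```python
# Toy stand-in for Atom's observe/unobserve semantics (illustration only).
class ToyAtom:
    def __init__(self):
        self._observers = {}   # member name -> list of callbacks
        self._values = {}

    def observe(self, name, callback):
        self._observers.setdefault(name, []).append(callback)

    def unobserve(self, name, callback=None):
        if callback is None:
            self._observers.pop(name, None)          # remove all observers
        else:
            self._observers.get(name, []).remove(callback)

    def set_member(self, name, value):
        old = self._values.get(name)
        self._values[name] = value
        change = {
            "type": "create" if old is None else "update",
            "object": self,
            "name": name,
            "value": value,
            "oldvalue": old,
        }
        for cb in list(self._observers.get(name, [])):
            cb(change)

events = []
a = ToyAtom()
a.observe("count", events.append)
a.set_member("count", 1)   # fires a 'create' change
a.set_member("count", 2)   # fires an 'update' change
a.unobserve("count")       # name only: drops every observer of 'count'
a.set_member("count", 3)   # no notification anymore
print([c["type"] for c in events])  # -> ['create', 'update']
```

The real library does this in C for speed, but the observable behavior is the same shape.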
Notification handlers
Now that we have discussed all kinds of observers and how to manage them, it is time to discuss the expected signature of the callbacks and what information a callback is passed when called.

For observers connected to all members except Signal, the callback should accept a single argument, usually called change. This argument is a dictionary with str keys, which are described below:
- 'type': A string describing the event that triggered the notification:
  - 'created': when accessing or assigning to a member that has no previous value.
  - 'update': when assigning a new value to a member with a previous value.
  - 'delete': when deleting a member (using del or delattr).
  - 'container': when doing an in-place modification on the content of a ContainerList.
- 'object': The Atom instance that triggered the notification.
- 'name': Name of the member from which the notification originates.
- 'value': New value of the member (or old value of the member in the case of a delete event).
- 'oldvalue': Old value of the member in the case of an update.
Note
As of 0.8.0, observe and add_static_observer also accept an optional ChangeType flag, which can be used to selectively enable or disable which change type events are generated.
from atom.api import Atom, Int, ChangeType

class Widget(Atom):
    count = Int()

def on_change(change):
    print(change["type"])

Widget.count.add_static_observer(on_change, ChangeType.UPDATE | ChangeType.DELETE)

w = Widget()
w.count       # Will not emit a "create" event since it was disabled
w.count += 1  # Will trigger an "update" event
del w.count   # Will trigger a "delete" event
Warning
If you attach the same callback function to a member twice, the second call will override the change type flag of the observer.
In the case of 'container' events emitted by ContainerList, the change dictionary can contain additional information (note that 'value' and 'oldvalue' are also present):
'operation': a str describing the operation that took place (append, extend, __setitem__, insert, __delitem__, pop, remove, reverse, sort, __imul__, __iadd__)
'item': the item that was modified if the modification affected a single item.
'items': the items that were modified if the modification affected multiple items.
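Putting the keys together, a hypothetical change dictionary for an append might look like this (the values below are invented for illustration, not taken from the library):

```python
# Hypothetical 'container' change for items.append(3) on a ContainerList
# member named 'items'.
obj = object()  # stands in for the Atom instance
change = {
    "type": "container",
    "object": obj,
    "name": "items",
    "value": [1, 2, 3],      # the list after the modification
    "operation": "append",
    "item": 3,               # the single item that was added
}
print(change["operation"], change["item"])  # -> append 3
```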
Note
As mentioned previously,
Signalemits notifications in a different format. When calling (emitting) the signal, it will pass whatever arguments and keyword arguments it was passed as is to the observers as illustrated below.
class MyAtom(Atom): s = Signal() def print_pair(name, value): print(name, value a = MyAtom() a.s.connect(print_pair) a.s('a', 1)
Suppressing notifications
If for any reason you need to prevent notifications to be propagated you can use
the
suppress_notifications context manager. Inside this context manager,
notifications will not be propagated. | https://atom.readthedocs.io/en/latest/basis/observation.html | CC-MAIN-2022-21 | refinedweb | 932 | 52.29 |
Ring Display: Building Custom ZK Components
Ring Display: Building Custom ZK Components
Learn how to dynamically show off your site's products using custom components built with CSS, Java, and JavaScript.
Join the DZone community and get the full member experience.Join For Free
Hello.
From a top view, it will look like our content frames are moving in a circle, but they should still always face the viewer.
Got it, So Which Library Are We Integrating This Time?
Well, there are a bunch of JavaScript animation frameworks with 3D effects. But, this time, we are going to do it all with CSS and math.
Mathematics? Argh… I’m Already Getting a Headache
Settle down, it’s easy as pi.
I Hate You So Much.
Jokes aside we are just going to calculate angles, you may even enjoy it. The first thing you need to know is that in css, we can chain transformations.
For example, let’s consider the following steps:
Rotate 45 degrees to the left.
Translate 100px in front of you.
Rotate 45 degrees to the right.
The result is that you are facing the same direction as you started, after a translation to of 100px in the direction of 45 degrees to the left. From a top-down view, it would look a little like this:
Alright, So We Are Moving Stuff Around a Bit, Then What?
Well, if we do this with, let’s say, 8 panels, with an angle of 1/8 of the total circle between each item, they would land in a circle of radius 100px. And it would look like this:
I see… but When it Turns, We Need to Recalculate Every Angle. That’s Going to Be a Nightmare!
Not quite. We can make it simpler but just rotating the whole frame. This way, we don’t need to transform the individual panels any further. We only need to make sure they stay facing toward the user.
Alright, so let’s list the calculations required.
radius - Circle radius: This will be the translation distance from the center. Items will be moved from the center by this distance. We can either define a static value or calculate one on the fly.
nbItems - Number of Items: That one is easy.
Theta - The angle of the “slice” of the circle between two items: A full circle is 360 degrees, so theta is 360/nbItems
Item angles – the angle between each item position and the origin position (first item). We will use this angle to perform the first rotation, which will align the items for translation. This angle will arbitrarily be 0 degrees for the first item, since we use it as a reference. For each other item, it will be equal to theta times their index.
Frame angle – the angle between the frame starting position and its current position. It will be equal to the item angle of the item to show in the center of the view.
Item inverse angle – the angle applied to each panel after the translation to make them face the viewer. For each item, this angle is calculated as the item angle (theta * index) minus the frame angle. It will be updated each time the frame rotates when choosing a new item to display in the center.
How Are They Going to Move? What About the Animation?
What about it? We will be using CSS properties to position all these items. And you know what? The transformation property in CSS can be animated just by adding a transition. Writing the whole animation process will be done in a single line of CSS.
Alright, Code Time!
First order of business will be to test this idea in a sandbox page to make sure it works. Once we are satisfied with that, we can go on to package everything nicely.
Since we are going to make all of this into a component eventually, we can work from a zul page. It will give us access to the ZK client environment. We are going low-tech with mostly CSS, but we will still enjoy having ZK shortcuts available, as well as a jQuery JavaScript framework bundled in.
First Let's Have a Look at the Structure
We want the component itself to act as a container. Since we won’t use any advanced features, the ZK Div component is a natural choice.
We want our moving items to be containers as well. While we are prototyping with text content only, we want to be able to nest any ZK component in the final product.
ZK Div components are a good choice here. They will render as regular HTML div elements, but with hooks for ZK events and behaviors. More complicated containers such as ZK window, or panel would just add more complexity to the building process but could have been chosen for this job if we wanted their specialized features such as title, or close button.
<div sclass="frame"> <div sclass="item">test 1</div> <div sclass="item">test 2</div> <div sclass="item">test 3</div> <div sclass="item">test 4</div> <div sclass="item">test 5</div> <div sclass="item">test 6</div> <div sclass="item">test 7</div> <div sclass="item">test 8</div> </div>
That’s it? That’s Surprisingly Short.
Well yeah, did you expect a 500-line sample? We just want to make sure it works, remember?
Now that the items are ready, we just need to make them move a little. We have two cases here. First, the initial rendering. Then, repositioning the frame when one of the items is clicked.
Let’s define our objects first:
Frame: the container that we are going to rotate to make our circle turn. We are using a query selector to find it by its class name.
Items: the collection of moving items turning on our circle. We are using a query selector to find them all by their shared class name.
Let’s translate our calculations into individual JavaScript variables and functions, this will make assembly easier. For this test, we are not trying to play with dynamic width or perfect centering, this will come later. We just want to make sure our idea works.
Radius: let’s just go with a flat 500px. We will worry about resizing and dynamic size later. For now we just want to make sure the general idea is working.
nbItems: For this one, we will just count the number of objects in the items collection. This will let us add or remove children to test larger or smaller circles simply by adding or removing elements with the item class.
Theta: the slice size, as mentioned previously, will be equal to 360 degrees divided by the number of items.
selectedIndex: the index of the item to be displayed in the center of the screen. We can use this index to track the position of the circle and change it to move to a different item.
Now for the dynamic stuff.
rotateItems: we will call this method whenever the selected index changes. It will set the new rotation values for the items and the frame. We calculate the
frameAngle as described above, by multiplying the current index and the theta value. We are also making this value negative to achieve a counter-clockwise numbering, which is more natural to read from the side of the circle.
In the same way, we are calculating an
itemAngle for each individual item based on their index and the theta value.
With all these values, we can apply the following transformations.
On each item, starting at the center of the circle: rotate by itemAngle, move horizontally by radius, and then counter rotate by the negative of frameAngle plus itemAngle (item inverse angle).
On the frame, starting at the center of the screen: move toward the back of the view by radius (to reposition the center of the circle away from the viewing plane) and rotate by the current frameAngle based on the selected item’s angle.
var frame = document.querySelector('.frame'); var items = frame.querySelectorAll('.item'); var radius = 500; var nbItems = items.length; var theta = 360 / nbItems; var selectedIndex = 0; function rotateItems() { var frameAngle = theta * selectedIndex * -1; for ( var i=0; i < items.length; i++ ) { var item = items[i]; var itemAngle = theta * i; item.style.transform = 'rotateY(' + itemAngle + 'deg) translateZ(' + radius + 'px) rotateY('+ ( -1 * (frameAngle + itemAngle)) +'deg)'; } frame.style.transform = 'translateZ(' + -radius + 'px) rotateY(' + frameAngle + 'deg)';} jq(".item").each(function(index,item){ jq(item).on('click',function(evt){ selectedIndex = jq(item).index(); console.log(selectedIndex); rotateItems(); }); }); //initialize item posisition on page load rotateItems();
Right. Let’s Declare Some CSS. Let’s Animate This!
.frame { transform-style: preserve-3d; transition: transform 1s; } .item { transition: transform 1s; }
Huh... Somehow, I’m Underwhelmed.
I told you so. We just tell the browser to animate the transition, and since we are not just moving the items from point to point, but rotating them, it will follow the circle path.
Also, don’t forget the
transform-style: preserve-3d rule in CSS. We tell the frame that it should be the point of reference for the transformation of its children. Without it, the items would rotate relative to the page itself.
Let’s test this! Here's the full page, with a bit more CSS to make it pretty.
Ok That’s Fun and All, but We Can’t Reuse That.
You are absolutely right. We want to package it in a way that is reusable, easy to maintain, and easy to plug into any ZK application we develop.
We are using custom structures, custom styles, and custom scripts to create this effect. This would make deploying this type of script one by one horrible, so let’s package it as a component instead.
A Component?
Yes. Components are building blocks of a ZK page. The goal here is to just declare a <ringdisplay> component in your zul file, rather than include custom scripts manually, or load all our resources using script and style tags.
As a packaged component, all our resources and scripts will be stored in a jar file. We will include custom classes, component configuration files, and default options. Then, we only have to drop this package in a ZK application to start using the <ringdisplay> tag in a zul page.
Let’s Make That! What’s the Package Structure?
Well, if you've never heard of components, you can find a good primer here.
First things first. We don’t need to start from zero here. There is a maven archetype for component creation, which already contains the structure and testing tools. Have a look here if you need some details on how to use it to initialize your project.
The project will initialize with a single component, but I already added the files to handle two.
Slow Down, Partner. Two Components?
Yes, the ringdisplay component will be the container, and each panel will be a separate ringitem component. We can package multiple components in the same add-on jar file, but it’s better to stay consistent. You always want granularity, so, as a rule of thumb, you can consider creating separate packages if you don’t have strong ties between every component in your project.
I will only describe the ringdisplay component files, since the ringitem follows the same structure.
So, What’s in the Archetype?
Many things. We have Java classes to represent the component on the server side, JavaScript files containing the client-side code and CSS files containing the styles.
Just Wait a Second, There Is More in There Than in Our Test!
There is a bit more. Since we are implementing the final version of the component, I thought I might as well add some quality of life features, such as infinite scrolling and optional flags for different usages. I already added the following:
Infinite scroll. The circle will use the shortest path instead of just resetting the rotation when going from the last to the first item, or from the first to the last.
Optional attributes. perspective, depth, and itemWidth can be set on the ringdisplay component to adjust these parameters on each instance.
Resizing support. The circle radius is calculated based on the width of the ringdisplay. If the size of the ringdisplay is dynamic, the circle radius will grow and shrink when the container does.
Event listening on index selection and on item click.
Click to turn: Boolean parameters to choose if clicking on an item that will make the ringdisplay turn to the item’s position or just stand still and send a click event.
Display improvements such as opacity fading in the background and item vertical centering.
API methods to call from Java. The next and previous pictures can be tied to other ZK behaviors and events.
Since we have full control over the component code, we can add or remove any feature we like!
Alright, Let’s Have a Look at the Java Classes Then.
First, the Java class for ringdisplay. It is located in src/main/java/[package name]/Ringdisplay.java. This class will provide the server-side behavior for our component. It will handle events, getters, and setters. It will also contain the private and public methods available on our ringdisplay.
So, We Must Write Every Single Behavior Ourselves? That Will Take Forever!
Not forever, but close enough that I’m not willing to do it. Fortunately for us, we can just extend a ZK component that already has defined behaviors. In this case, let’s extend a ZK div. It already has all of the container traits necessary for our component to act as such, and we will just need to add our customizations as new methods, or by overriding default methods of the Div component.
Let’s Have a Look at the Class Content.
First, we extend Div here.
... import org.zkoss.zul.Div; public class Ringdisplay extends Div { ...
Then, we define our internal values. Those values will be unique for each instance of the component. We can use them to store component states. Of course, if we want to use or modify them directly, we will need to give them getters and setters.
/* properties of the component. * Values declared here will act as default if the user doesn't specify it */ private String _perspective ="1000px"; private int _itemWidth = 200; private String _depth = "100px"; private int _selectedIndex = 0; private boolean _clickToTurn = true;
In the setter, we can use the
smartUpdate method inherited from the ZK
abstractComponent to send updates to the client engine when necessary. Since the component class is a server-side only object, these updates are the trigger to notify the client when a value has changed.
/* properties setters * smartUpdate() will be called to automatically propagate the value to the client * if the value is not equal to the previous value */ public void setClickToTurn(boolean clickToTurn) { if (!Objects.equals(_clickToTurn, clickToTurn)) { _clickToTurn = clickToTurn; smartUpdate("selectedIndex", _clickToTurn); } }
There is a special method called service in all ZK components. This method is invoked whenever the client sends an update to the server. It receives events and dispatches them as necessary. In our class, we override the default service to handle events specific to our new component or pass them to the super implementation otherwise.
Lastly, we define default client events listeners, which will authorize the client to send events with the declared name(s) to our component, even if we don’t manually declare them for each instance. We can declare these in a static bloc to make sure that they are executed for each new instance.
static { addClientEvent(Ringdisplay.class, "onItemSelect", 0); }
What About the JavaScript Files? What Are They Good for?
Well there are two we need to write: the widget and the mold. The widget is the JavaScript client-side object mirroring the server-side Java instance of the component class. It will contain both the property values and the behavior used by the UI on the client-side. The other is the mold. Simply put, the mold is used to generate the HTML elements used in the page.
Let’s Look at the Widget.
Alright, the first thing you may notice is that the widget properties are significantly like the Java class. After all, we want to keep track of the same variables.
What’s That “$define” Business?
You mean these lines? It’s a ZK coding shortcut. Instead of writing a getter and a setter, properties declared with
$define automatically generate a transparent getter, and will execute operations after the setter is invoked.
For example, this:
$define: { depth: function(){ if(this.desktop) { updatePosition(); } } }
is equivalent to that:
getDepth(){ return this._depth; }, setDepth(depth){ this._depth = depth; if(this.desktop) { updatePosition(); } }
And the Mold?
The mold will generate the DOM elements representing our widgets in the html page. For the ring display, we want to output an outer div (fixed) and an inner div (rotating when updating the selected index).
In the mold file, you will find a single JavaScript function with an out argument. The out argument must be passed to every child of the component. They will add their output to it. We also have access to our widget instance, and therefore we can use its functions to fill in attributes as we need.
In this case, we first output the outer div.
out.push('<div ', this.domAttrs_(),'>');
this.domAttrs() is a default method which will automatically fill in the class and id attributes based on our object’s properties.
Then, we add the inner div.
out.push('<div id="',uuid,'-ring" class="',zcls,'-ring">');
Since this node is also part of the same object, we give it an ID based on the UUID of the main node, but with a suffix (
-ring). Same for the class, which is the main class with the suffix
-ring.
Finaly, we output every child of this component:
for (var w = this.firstChild; w; w = w.nextSibling) w.redraw(out);
Then we close our tags.
out.push('</div>'); out.push('</div>');
Completed, it will look like:
out.push('<div ', this.domAttrs_(),'>'); out.push('<div id="',uuid,'-ring" class="',zcls,'-ring">'); for (var w = this.firstChild; w; w = w.nextSibling) w.redraw(out); out.push('</div>'); out.push('</div>');
Got That. And Now CSS? Any Difference With Regular CSS?
None. Just write CSS selectors matching our widget classes and fill in the rules. These stylesheets will be compiled into a .wcs file automatically served by ZK to the client. This ensures that our CSS rules are deployed to the page.
So, There’s Also Some XML-Looking Files…
Right on. There are two essential configuration files needed to let another ZK application use our plugin.
-/src/main/resources/web/js/ringdisplay/zk.wpd.
Zk.wpd is the package descriptor. It’s a catalog of resources that need to be bundled and served to the clients when adding our widgets to a page. If we declare a widget class in this file, the packaging process will look for the JavaScript widget file (in the same folder as the zk.wpd file), the mold file(s) in the /mold folder and the stylesheets in the /css folder. If we must load external scripts, we would declare them in here too.
In this case, we are building our widgets using only basic Div components, so we don’t need to add a dependence on other packages. If we were using classes from a different package, we would ensure that they are loaded first by adding a depends instruction to our package.
-/src/main/resources/metainfo/zk/lang-addon.xml
Lang-addon is a language definition file that defines a component set. In this case, we want to add more components to the default zul langage. We do this by declaring the language name for our addon as zul/html, which will make our ringdisplay and ringitem available in zul pages.
In lang-addon, we tell ZK which client-side widget and server-side Java class should be used when parsing a zul file containing our component.
For example, in the following snippet, we define the name to be used in a zul page, the target (Java) class to represent the component, the widget (JavaScript) class to represent the widget, the mold(s) available for this component and the stylesheets to load when this component is used.
<component> <component-name>ringdisplay</component-name> <!-- required --> <component-class>org.potix.ringdisplay.Ringdisplay</component-class> <!-- required --> <widget-class>ringdisplay.Ringdisplay</widget-class> <!-- required --> <mold> <mold-name>default</mold-name> <mold-uri>mold/ringdisplay.js</mold-uri> <css-uri>css/ringdisplay.css.dsp</css-uri> </mold> </component>
We could also define default values or other advanced language features. For more details, see the lang-addon documentation page.
And with that, the component is complete. The only thing left is to package and use it.
Let’s Go, How Do We Put it in Our Main Application?
First, we build it. We are using maven in this project, so packaging the component is quite easy. We only need to run a maven command to generate the jar package:
mvn clean package
The resulting package will be found under the /target/ folder as /target/ringdisplay-0.0.1.jar
And Then?
Well, you don’t need me to tell how to add a jar to a Java application. You can do everything you would do with a jar: add it to your build path, class path, or lib folder; or install it into your local maven repository and declare it in your main application’s POM file.
Let’s go with the last one. First, we run the maven install command on our component project’s POM file location:
mvn clean install
Then we declare it in the main application’s POM with:
<!-- require local mvn install --> <dependency> <groupId>org.potix</groupId> <artifactId>ringdisplay</artifactId> <version>0.0.1</version> </dependency>
Install Complete! What Now?
Just use it in a zul file. I made a Step 2 page to illustrate the kind of design we could create using this component.
<ringdisplay selectedIndex="@load(vm.selectedIndex)" hflex="1" height="500px" itemWidth="250" perspective="1000px" depth="300px" clickToTurn="false"> <forEach items="@load(vm.chairModel)"> <ringitem> <image src='${each.imageUrl}' height="300px" /> </ringitem> </forEach> </ringdisplay>
Have a look at the full projects here:
In Summary
We have seen how to use CSS to create a simple but visually interesting presentation, and how to integrate ZK behaviors and events to this display. We also went through the whole component creation process, and the critical files to use when building a ZK component from the ground up.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/ringdisplay-building-custom-zk-components | CC-MAIN-2019-35 | refinedweb | 3,803 | 66.03 |
Integrate with Facebook to help you build engaging social apps and get more installs.
Allow people using your app to publish content to Facebook. When people on Facebook engage with these posts they are directed to your app or your app's App Store page (if they don't have your app installed).
The Graph API is the primary way to get data in and out of Facebook's social graph. You can use it to query data, post new stories, upload photos and a lot of other use cases.
Allow your users to publish from your app to Facebook. When people on Facebook engage with these posts they are directed to your app or your app's App Store page (if they don't have your app installed).
You can use the Facebook item to retrieve information about a logged in user and to send read and publish graph requests.
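As a sketch, a read request for the logged-in user's profile could look as follows. The fields value and the result-handling signal name (onGetGraphRequestFinished) are assumptions for illustration; check the Facebook item reference for the exact signal name and signature.

```qml
Facebook {
  id: facebook
  appId: "YOUR_APP_ID"
  readPermissions: [ "public_profile", "email" ]

  onSessionStateChanged: {
    if (sessionState === Facebook.SessionOpened) {
      // Query basic profile fields once a valid session exists
      facebook.getGraphRequest("me", { "fields": "first_name,email" })
    }
  }

  // Hypothetical handler name - verify against the Facebook item reference
  onGetGraphRequestFinished: console.debug("Graph result:", result)

  Component.onCompleted: facebook.openSession()
}
```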
Facebook can help your apps & games become more popular and engaging for your users.
Retrieve information about your users and their friends
Integrating the Facebook plugin allows your app to get additional information about your users. You can greet them by their first name or save time during a registration process by prefilling the gender or birth date from Facebook.
Reach other people by posting to a User's or App Wall
You can post a message directly to a user's feed, to one of the user's friends or to your page's feed by using Facebook::postGraphRequest(). See the Direct Wall Post Example for implementation details.
Add a Log In to your App
The Facebook plugin integration allows people to log into your app quickly without having to remember another username and password and therefore improves your conversion. A Facebook login also allows to login to the same account across different devices and platforms.
Like a Facebook Page or App
There is no default interaction for liking Facebook app pages, as initially opening a session with openSession() already registers the user with your app. Instead of liking an app, you can let users like specific objects in your app or game, e.g. different levels, achievements, etc. These are called stories and follow the "actor verb noun" syntax, for example "Karen likes MyGame Level 1".
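One way to publish such a story is the Graph API's built-in like action. The og.likes path and the object URL below are illustrative assumptions - the exact story type depends on your Open Graph configuration:

```qml
// Sketch: publish a "likes" story for an object in your game.
// Requires the publish_actions permission; the object URL is a placeholder.
facebook.postGraphRequest("me/og.likes", {
  "object": "https://example.com/mygame/level1"
})
```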
In addition to the use cases for your apps, you can also take advantage of the Facebook plugin when integrating it in your games. In the following sections, the most popular use-cases are explained:
Brag About Game Progress
It's common practice in mobile games that whenever a player reaches a goal or makes some progress, they can post about it in their news feed (also referred to as their "wall"). The user will then enter a personal message or opinion about something noteworthy that happened in the game, which makes it more interesting for others compared to pre-filled text. The message can either be posted on the logged-in user's wall, to a friend's wall or to the wall of the Facebook page of your game.
Scores
In addition to other scoring services like Felgo Game Network, Facebook can also be used for storing scores online. This has the advantage that friends of your users can compare their results and thus get motivated to beat each other's highscores. Have a look at the Score API to see how to integrate scores by using Facebook::postGraphRequest() and getGraphRequest() with the "me/scores" request type.
Invite Friends
Allowing players to invite their friends to your game can be a major driver of game downloads. You can filter for friends who have not downloaded your game yet to avoid sending duplicate requests.
Match-Making with other Facebook Users
If you are using any kind of multiplayer functionality, like in turn-based games, players can select other players who are already using your game as gaming partners. These do not necessarily need to be friends with the logged-in player.
Further Game-Related Functionality Available with Facebook
For more information on gaming-related functionality that Facebook offers, have a look at the Facebook developer documentation.
The Facebook plugin supports single sign-on (SSO), which makes it convenient for your users to log in to Facebook.
The SSO feature allows your users to log in to your app with their existing Facebook account and therefore simplifies the login process.
There are 3 different scenarios for SSO:
In scenario 1 and 2 the user is asked to give your app the requested permissions defined in Facebook::readPermissions and Facebook::publishPermissions without the need to enter their Facebook credentials beforehand. These are the most convenient methods for your users.
In all other cases the plugin opens a web view as a fallback, which asks your users to enter their login credentials and afterwards to grant the app permissions. Once the users are logged in, the credentials are stored as cookies in the web view and it's not required to enter the credentials again for subsequent openSession() calls.
So in both cases, with or without native Facebook integration, the user only needs to log in once for the lifetime of your application.
Since the login credentials are stored as explained before, changing to another Facebook user, which is often needed during development, requires some additional steps. If the native Facebook app is installed, you need to log out in the native app to be able to log in with another user on the next call of openSession(). If you are testing on iOS and do not have the native Facebook app installed, open Safari and also log out there, because the login credentials are stored as cookies. Also make sure to log out your Facebook account in the iOS Settings in the Facebook section, where the application-wide login credentials are stored.
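You can also drop the current token from code before reopening the session. This sketch assumes the Facebook item offers a closeSession() method (verify this against the Facebook item reference); note that it does not clear the cookies stored in the web view or native app, so the logout steps described above may still be needed:

```qml
function switchTestUser() {
  facebook.closeSession()  // assumed API - drops the current session token
  facebook.openSession()   // triggers the login flow again
}
```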
The following sections describe the steps required for adding Facebook connection to your game.
Here is a quick overview of the steps needed:
Go to developers.facebook.com/apps and create a new Facebook app. On the dashboard, add Facebook Login by clicking "Set Up". You can skip the quickstart guide, as the Facebook item already handles these steps.
In the app settings you now see your App ID and App Secret, which you will need in the next step. You should also create a Facebook canvas page, which is shown when users click on the Facebook graph stories in their web browser. If this HTML page is for example hosted on felgo.com, add felgo.com to the App Domains. The following screenshot shows the settings of a test application, where we set the canvas url to the one of ChickenOutbreak Demo.
To test your app or game on iOS & Android, add these platforms in the Facebook settings. Enter the app identifier you have set in the config.json file as the iOS Bundle ID and Android Package Name. The Class Name should be set to net.vplay.helper.VPlayActivity, which is also set in your android/AndroidManifest.xml configuration. For Android, also add the Key Hashes for each signing certificate you use. See Facebook::printAndroidKeyHash() on how to get your key hashes. The following screenshot shows example settings to configure your Facebook app for iOS & Android.
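A minimal way to obtain the key hashes is to call the helper mentioned above from a debug build running on an Android device:

```qml
Facebook {
  id: facebook
  appId: "YOUR_APP_ID"

  Component.onCompleted: {
    // Logs the key hash of the certificate this build was signed with;
    // copy it from the debug output into the Facebook app settings.
    facebook.printAndroidKeyHash()
  }
}
```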
Note: The Deep Linking setting will launch your app or game when users click on a message in their timeline. We recommend to enable it, as it brings players back to your app or game.
After creating the Facebook app, add the Facebook component to your main qml file. The following example shows the Facebook additions: add the import Felgo 3.0 statement and the Facebook component with your Facebook::appId, and set the facebookItem property to the id of your Facebook item.
import Felgo 3.0
import QtQuick 2.0

GameWindow {
  Facebook {
    id: facebook
    // this is the Facebook App Id received from the settings in developers.facebook.com/apps
    appId: "569422866476284"

    // the permissions define what your app is allowed to access from the user
    readPermissions: [ "public_profile", "email", "user_friends" ]
    publishPermissions: [ "publish_actions" ]
  }

  // the Scene will follow here
} // GameWindow
Facebook requires you to submit your app for review. They need to approve each type of user data item your app wants to access. These items are, for example, the user's friends, posts, or liked pages. In the developer dashboard, click "App Review".
Then click "Start a Submission". Here you can add the privacy concerning permissions your app or game uses.
For example, to allow interacting with the user's friends, you need the friends list permission. For this, check the
user_friends item from the list.
Facebook requires a description of how your app or game uses the permissions. Click "details" and describe step by step how your app uses Facebook login and the requested permission.
You will also need to upload a video how the Facebook login works in your app or game. You can use a screen recording tool to create a video similar to this:
Then you can submit your app for review. It usually takes between 24 hours and 14 days until your app gets approved. Once the app is approved, you can use the Facebook item, for example using Facebook::openSession().
To try the plugin or see an integration example, have a look at the Felgo Plugin Demo app.
Please also have a look at our plugin example project on GitHub:.
The Facebook item can be used like any other QML item. Here is a simple example of how to add a simple Facebook integration to your project:
import Felgo 3.0

Facebook {
  appId: "xxxxxxxxxxxxxxxx"
  readPermissions: [ "public_profile", "email", "user_friends" ]
  publishPermissions: [ "publish_actions" ]

  Component.onCompleted: {
    openSession()
  }
}
Note: The user_friends permission only allows getting a list of friends that also use your app and are connected to Facebook.
Before any Facebook interaction you have to open a valid session first. The following example opens a session at app startup and prints information about session state changes to the console:
Facebook {
  id: facebook
  appId: "YOUR_APP_ID"

  onSessionStateChanged: {
    if (sessionState === Facebook.SessionOpened) {
      console.debug("Session opened.");
      // Session opened, you can now perform any Facebook actions with the plugin!
    } else if (sessionState === Facebook.SessionOpening) {
      console.debug("Session opening...");
    } else if (sessionState === Facebook.SessionClosed) {
      console.debug("Session closed.");
    } else if (sessionState === Facebook.SessionFailed) {
      console.debug("Session failed.");
    } else if (sessionState === Facebook.SessionPermissionDenied) {
      console.debug("User denied requested permissions.");
    }
  }

  Component.onCompleted: {
    facebook.openSession();
  }
}
You can send a pre-defined message with a defined message text with the Facebook::postGraphRequest() method. The following example sends a message to the user's own wall, without any further parameters like a link, description or caption. These parameters are also available for this request though, so you can set them if required.
facebook.postGraphRequest("me/feed", { "message" : "Hello me!" })
You can also provide a to parameter to post on a friend's wall or on a Facebook page. Please keep in mind that posting to a Facebook page is only possible when the logged in user liked the page before.
Posting directly to walls usually has a weaker growth effect than adding a personal message, because friends' engagement and interest in personal messages is higher. However, you could still open a native input dialog and send the message with the above call afterwards. This has the advantage that the user need not leave the app and no Facebook dialog is shown. Also, the native Facebook app is not launched and the user stays in your game.
Note: Posting on the user's wall requires the publish_actions permission. At the first post request, the user must allow the publish permission for your app. Facebook apps that use this permission also require a review by Facebook before the app can be used by people other than you as the app developer.

The plugin is available with a Startup or Business license. To activate plugins and enable their full functionality it is required to create a license key. You can create such a key for your application using the license creation page.
This is how it works:
To use the Facebook plugin you need to add the platform-specific native libraries to your project, described here:
Add the following lines of code to your .pro file:
FELGO_PLUGINS += facebook
Copy FBSDKCoreKit.framework and FBSDKLoginKit.framework from the ios subfolder to a sub-folder called ios within your project directory.
Then add the following lines of code to your .pro file:

ios {
  LIBS += -framework Accelerate
}
Note: Adding the Accelerate framework manually is a hotfix that only applies to Felgo 3.3.0.
Open the Project-Info.plist file within the ios subfolder of your project and add the following lines of code:
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>fb{your-app-id}</string>
    </array>
  </dict>
</array>
<key>LSApplicationQueriesSchemes</key>
<array>
  <string>fbapi</string>
  <string>fb-messenger-api</string>
  <string>fbauth2</string>
  <string>fbshareextension</string>
</array>
right before the closing tags:
</dict>
</plist>
Make sure to replace {your-app-id} with your Facebook app's app ID.
Open your build.gradle file and add the following lines to the dependencies block:
dependencies {
  implementation 'net.vplay.plugins:plugin-facebook:3.+'
}
Note: If you did not create your project from any of our latest wizards, make sure that your project uses the Gradle Build System, as described here.
In order to use the Facebook plugin within your app you need to create a Facebook app at first.
The Google Play Package Name of your app, found in the AndroidManifest.xml file.
The Class Name as net.vplay.helper.VPlayActivity (also set in your project's AndroidManifest.xml file).
The Key Hashes for each of your used signing certificates. See Facebook::printAndroidKeyHash() on how to get your key hashes (the hash in the screenshot above is only a sample value!).
Note: Other SDK versions higher than the stated ones might also work, but are not actively tested as of now.
See also Facebook::openSession(), Facebook::sessionState, Facebook::publishPermissions, and Facebook::postGraphRequest(). | https://felgo.com/doc/plugin-facebook/ | CC-MAIN-2021-04 | refinedweb | 2,359 | 62.48 |
- Problem
- Current Situation
- Final implementation
- Idea 1: The CacheManager
- Idea 2: Cache control
The problem of cache invalidation
This was the design discussion behind the trac.cache module (trunk, 1.0.x). See also apidoc:api/trac_cache.
Problem
Trac uses various caches at the component level, in order to speed-up costly tasks. Some examples are the recent addition of ticket fields cache (#6436), others are the InterMapTxt cache, the user permission cache, the oldest example being the Wiki page cache.
Those caches are held at the level of Component instances. For a given class, there's one such instance per environment in any given server process. The first thing to take into account here is that those caches must be safely accessed and modified when accessed by concurrent threads (in multi-threaded web front ends, that is). That's not a big deal, and I think it's already handled appropriately.
But due to the always possible concurrent access at the underlying database level by multiple processes, the cached data can get out of sync with the actual content of the database. When that happens, you end up confused at best, filing a bug on t.e.o at worst.
This doesn't even have to imply a multi-process server setup, as all what is needed is e.g. a modification of the database done using trac-admin.
This proposal was previously logged as wiki:TracDev/Proposals/Journaling.
Current Situation
So the current solution to the above problem is to use some kind of global reset mechanism, which will not only invalidate the caches, but simply "throw away" all the Component instances of the environment that has been globally reset. That reset happens by the way of a simulated change on the TracIni file, triggered by a call to self.config.touch() from a component instance. The next time an environment instance is retrieved, the old environment instance is found to be out of date and a new one will be created (see trac.env.open_environment). Consequently, new Component instances will be created as well, and the caches will be repopulated as needed.
Pros:
- it works well
Cons:
- it's a bit costly - though I've no numbers on that, it's easy to imagine that if this full reset happens too frequently, then the benefits from the caches will simply disappear. In the past, when the reset rate was abnormally high due to some bug, the performance impact was very perceptible.
- it's all or nothing - the more we rely on this mechanism for different caches, the more we'll aggravate the above situation. Ideally, invalidating one cache should not force all the other caches to be reset.
Final implementation
The final implementation, committed in [8071], is a refinement of the CacheManager idea. The documentation for this feature (starting with the content of this section) now lives in TracDev/CacheManager.
- Creating a cached attribute is done by defining a retrieval function and decorating it with the @cached_value decorator (for example, this is how the wiki page names are cached).
- If the cache must be invalidated explicitly, instead of @cached_value, the attribute should be decorated with the @cached decorator. Accessing the attribute then yields a proxy object with two methods, get() and invalidate(), taking an optional db argument. For example, this is used in the case of ticket fields to invalidate them in the same transaction as e.g. an enum modification.
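As a rough, self-contained sketch of this decorator-based API (my own simplification for illustration; Trac's real implementation lives in trac.cache and differs in detail):

```python
# Rough sketch of a @cached-style attribute, modeled on the design above.
# Illustrative only -- Trac's real implementation lives in trac.cache.

_generations = {}   # stands in for the 'cache' table: key -> generation

def invalidate(key):
    """Bump the stored generation, forcing the next read to re-retrieve."""
    _generations[key] = _generations.get(key, 0) + 1

class cached:
    """Descriptor: decorate a retriever; attribute access returns cached data."""
    def __init__(self, retriever):
        self.retriever = retriever
        self.key = retriever.__name__
        self.generation = -1      # generation of the locally cached value
        self.value = None

    def __get__(self, instance, owner):
        current = _generations.get(self.key, 0)
        if self.generation != current:        # stale, or never fetched
            self.value = self.retriever(instance)
            self.generation = current
        return self.value

class WikiSystem:
    calls = 0

    @cached
    def pages(self):
        WikiSystem.calls += 1
        return {'WikiStart', 'TracGuide'}

ws = WikiSystem()
first = ws.pages      # retrieval happens here
second = ws.pages     # served from the cache, retriever not called again
invalidate('pages')
third = ws.pages      # generation changed, so the retriever runs again
```

From the component author's point of view, only the decorated retriever is written; all the generation bookkeeping stays behind the attribute access.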
The sections below are kept as documentation of the implementation process.
Idea 1: The CacheManager
This idea introduces a centralized cache manager component that manages cached data, retrieval from the database and invalidation of cached data. The assumption is that it doesn't make sense to retrieve data to be cached from the database more than once per HTTP request.
Every cache is identified by a unique identifier string, for example 'ticket.fields' and 'wiki.InterMapTxt'. It has an associated retrieval function that populates the cache if required, and a generation number that starts at 0 and is incremented at every cache invalidation.
A new table in the database stores the cache identifiers, along with the current generation number and possibly the time of the last invalidation (for timed cache invalidation). The schema would be something like:
Table('cache', key='id')[
    Column('id'),
    Column('generation', type='int'),
    Column('time', type='int'),
]
So how is the cache used?
- HTTP request: At the beginning of every HTTP request, the complete cache table is read into memory. This provides the CacheManager with the current state of the database data. Timed invalidation could also be done at this point, by dropping cached data that is too old.
- Retrieval of cached data: The CacheManager can be queried for a reference to a cache. At this point, it checks if the generation number of the cached data matches the number read at the start of the HTTP request. If it does, the cached data is simply returned. Otherwise, the cached data is discarded, the retrieval function is called to populate the cache with fresh data, and the data is returned.
- Invalidation of cached data: Invalidation of cached data is done explicitly after updating the database by incrementing the generation number for the cache in the cache table, in the same transaction as the data update, and invalidating the currently cached data in the CacheManager.
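This retrieval/invalidation protocol can be made concrete with a small runnable sketch. The class below is an illustrative stand-in of mine (using an in-memory SQLite table in place of the real cache table), not the actual CacheManager:

```python
import sqlite3

# Illustrative stand-in for the CacheManager described above, backed by an
# in-memory SQLite 'cache' table holding (id, generation) rows.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE cache (id TEXT PRIMARY KEY, generation INT)")
db.execute("INSERT INTO cache VALUES ('ticket.fields', 0)")
db.commit()

class CacheManager:
    def __init__(self, db):
        self.db = db
        self.data = {}   # id -> (generation, cached value)

    def get(self, key, retriever):
        row = self.db.execute(
            "SELECT generation FROM cache WHERE id=?", (key,)).fetchone()
        current = row[0] if row else 0
        gen, value = self.data.get(key, (None, None))
        if gen != current:        # never fetched, or invalidated since
            value = retriever()
            self.data[key] = (current, value)
        return value

    def invalidate(self, key):
        # Bump the generation in the same transaction as the data update,
        # then drop the locally cached value.
        self.db.execute(
            "UPDATE cache SET generation=generation+1 WHERE id=?", (key,))
        self.db.commit()
        self.data.pop(key, None)

fetches = []
def retrieve_fields():
    fetches.append(1)
    return ['summary', 'priority']

mgr = CacheManager(db)
a = mgr.get('ticket.fields', retrieve_fields)   # hits the retriever
b = mgr.get('ticket.fields', retrieve_fields)   # served from memory
mgr.invalidate('ticket.fields')
c = mgr.get('ticket.fields', retrieve_fields)   # retrieved again
```

A second process invalidating the same key would bump the shared generation row, and this process's next get() would notice the mismatch and re-retrieve, which is exactly the cross-process coherency the design aims at.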
Pros:
- Caches are managed in a single place, and the cache logic is implemented once and for all. This should avoid bugs due to re-implementing cache logic for every individual cache.
- Cached data is consistent for the duration of an HTTP request.
- Caches can be made fine-grained. For example, it may be possible to use separate caches for the values of every ticket field (not sure we want that, though). Invalidation is fine-grained as well.
Cons:
- One additional database query per HTTP request. I don't know how much impact this can have, but I would expect this to be negligible, as the cache table should never grow past a few dozen rows.
- Caches must be invalidated explicitly. The same drawback applies to the current situation, so nothing is lost there.
Open questions:
- This strategy should work well in a multi-process scenario. In a multi-thread scenario, proper locking must ensure that cached data is not modified during a request. It may be possible to use thread-local storage to ensure that a single request has a consistent view of the cache, even if a second thread invalidates the cache.
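One common answer to that locking question, sketched here under my own assumptions rather than taken from the committed code, is double-checked retrieval: re-test validity inside a lock so that concurrent threads trigger at most one retrieval:

```python
import threading
import time

# Sketch of lock-protected retrieval: many threads ask for the same missing
# entry, but the retriever runs only once (double-checked inside the lock).
class LockedCache:
    def __init__(self, retriever):
        self.retriever = retriever
        self.lock = threading.Lock()
        self.value = None
        self.valid = False

    def get(self):
        if self.valid:              # fast path, no lock needed
            return self.value
        with self.lock:
            if not self.valid:      # re-check: another thread may have
                self.value = self.retriever()   # filled it while we waited
                self.valid = True
            return self.value

calls = []
def slow_retriever():
    calls.append(1)
    time.sleep(0.05)                # widen the race window on purpose
    return 42

cache = LockedCache(slow_retriever)
results = []
threads = [threading.Thread(target=lambda: results.append(cache.get()))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```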
Comments and improvements are welcome. If this approach sounds reasonable, I'd like to do a prototype implementation and apply it to a few locations (the wiki page cache and ticket fields).
Prototype implementation
A prototype implementation of this approach is available in cache-manager-r7941.patch. It implements the cache manager and has been applied to the wiki page, InterMapTxt and ticket field caches.
- Creating a cached attribute is extremely simple: declare a retrieval function with the desired name of the attribute and apply the @cached_value or @cached decorator (for example, @cached_value('wiki.WikiSystem.pages') for the wiki pages).
- To invalidate the cache explicitly, create the attribute with the @cached decorator. Accessing the attribute then yields a proxy object with two methods, get() and invalidate(), taking an optional db argument. This is used in the case of ticket fields, to invalidate them in the same transaction as e.g. an enum modification.
- The cache is consistent within a request. That is, a cached attribute will always have the same value during a single transaction. Obviously, cached values should be treated as immutable.
- The cache table is read the first time a cached attribute is accessed during a request. This avoids slowing down requests that don't touch cached attributes, like requests for static content for example.
- To test the patch, the table cache must be created by hand in the database, as follows (for SQLite):

CREATE TABLE cache (
    key TEXT PRIMARY KEY,
    generation INT
);
- Two new trac-admin commands allow listing the cache table and invalidating one or more caches. They are intended for debugging purposes, and should probably be removed if the proposal is applied to trunk.
Discussion
Comments and testing are very welcome. The implementation is quite complete, except for the missing database upgrade module. I have only tested with several concurrent tracd instances so far.
cboos: feedback
It's really nice, I like it a lot!
When reviewing the code, I think I've detected some possible issues in CacheManager.get.
- in case there are multiple "out-of-date" threads, each might trigger a retrieval. An improvement would be to check if the CacheManager already has a "newer" generation.
- in the locked section, if the generation increases after the cached value retrieval and before the fetch of the latest generation, the CacheManager may think it is up to date yet have old data.
Those are admittedly corner cases, I hope I have not missed more important issues while focusing on that ;-) See the first patch attachment:cache-manager_get-corner-cases.patch.
Another little improvement I'd like to propose is to not have to bother with key names and let the decorators figure out the key from the method itself. See attachment:cache-manager-automatic-key.patch. I've also added more documentation to the decorators and changed the order of the definitions in a top-down way (first the decorators, then the descriptors, ending with the proxy class), as I think it's easier to understand that way.
rblank:
Thanks for the improvement ideas, I'll integrate them shortly.
- I'm not sure the DB race condition you describe can actually happen. At least with SQLite, issuing a SELECT sets the lock state to SHARED, which disallows writes, so it should not be possible to increase the generation between the data retrieval and fetching the generation. I don't know how this behaves with other databases, though. Maybe it's just safer to fetch the generation first.
- You're right about automating the cache key generation. I didn't want to do it first, because renaming a module or class would have changed the key. But we're not going to rename them, and even if we do, it will only leave an orphaned row in the cache table, so it's no big deal. Your patch proposed {module}.{function} as the key, I'd like to make it {module}.{class}.{function}.
- If the keys are auto-generated, the decorators don't need any arguments anymore. This allows simplifying them even more by dropping the cached() and cached_value() functions, and calling the descriptor classes cached and cached_value directly.
cboos:
- SELECT statements don't start a transaction in PySqlite, as opposed to the other DML statements. So in my understanding, each retrieval is "atomic" and I think there can indeed be a race condition between the SELECT(s) done for data retrieval and the SELECT for fetching the generation. As this piqued my curiosity, I tried to see how multiple SELECTs could be done within a single transaction, and this is indeed possible, but a bit heavyweight: see e.g. pysqlite:IdRange, look for def get_id_range. So I think it's better to simply cope with the race.
- simple oversight on my part; sure, make the class name part of the key.
- great, I didn't know one can do that
cboos: CacheManager used in non-Component classes
When thinking about how to use the CacheManager for getting the youngest_rev information from CachedRepository, I saw two additional problems:
- we can't use the decorators here, as the CachedRepository is not a Component (and shouldn't be, as we may have many instances per environment)
- so far we have avoided propagating the env to the CachedRepository. I think we can no longer afford to do this, if we want to access the CacheManager conveniently. Having the env available would also simplify the getdb stuff.
So we need to access the CacheManager directly, using something like:
@property
def metadata(self):
    return CacheManager(self.env).get('CachedRepository.metadata:' + self.name,
                                      self.get_metadata)

def get_metadata(self, db):
    # SELECT * FROM repository WHERE id=self.name
    ...
Do you see a better way to do it?
rblank:
Yes, by instantiating CacheProxy in the constructor and storing it as an instance attribute. This gives it the same interface as if @cached was used.
self.metadata = CacheProxy('CachedRepository.metadata:' + self.name, self.get_metadata, env)
This does indeed require env, and changing that will make the CachedRepository unit tests a bit more complicated :-/
rblank: Update with feedback
The attachment:cache-manager-r7989.patch is an updated patch which should take into account all corner cases described above. Cache keys are now auto-generated from the module, class and attribute name. I have also added the database upgrade code, so the db_version is now 22.
Are there any other issues that should be considered? If not, the next step would be to plan the integration into trunk. Are there any special considerations when upgrading the database version? What else (other than committing) must be done?
cboos: feedback
- Last round of feedback:
- The API documentation should also mention that cached and cached_value must be used within Component sub-classes, and what the retriever method should look like.
- In CacheManager.invalidate, the "if fetchone: UPDATE, else: INSERT" sequence is not thread-safe (again for the same reason that a SELECT doesn't start a transaction), so we should rather do "try: INSERT, except: UPDATE".
Both points are minor and could be done on trunk.
- What to do next?
- maybe send a mail on Trac-dev (in the same thread you started a while ago) saying the topic work is done and ask if anyone has some extra feedback to give
- after the commit, warn loudly on the milestone:0.12, on the TracDev/ReleaseNotes/0.12 and 0.12/TracUpgrade pages that the DB version has increased. It's not that it's problematic to do the upgrade, it's rather because it's inconvenient to downgrade. As long as we keep the DB version compatible, users can eventually go back and forth between trunk and 0.11-stable. Once they did an upgrade, it's not that convenient anymore (but still relatively easy to do in this specific case, of course).
- We could also think about adding some tests for this, though that might be more involved.
rblank:
- Replies to last round of feedback:
- Will do.
- That's what I tried first, but the error due to the INSERT rolled back the whole transaction. I'll have to find a way to do this in a single statement.
cboos:
Hm right, that can be problematic. So what about this:
cursor.execute("SELECT generation FROM cache WHERE key=%s", (key,))
do_update = cursor.fetchone()
if not do_update:
    try:
        cursor.execute("INSERT INTO cache VALUES (%s, %s)", (key, 0))
    except Exception:
        do_update = True
if do_update:
    cursor.execute("UPDATE cache SET generation=generation+1 "
                   "WHERE key=%s", (key,))
If we were in a transaction, then I suppose the SELECT/INSERT sequence can't fail. Conversely, if it fails, then we were not in a transaction, and we can follow-up with an UPDATE to recover from the failed INSERT.
rblank: Alternative for atomic UPSERT
That could work, yes. How about this:
cursor.execute("UPDATE cache SET generation=generation+1 "
               "WHERE key=%s", (key,))
cursor.execute("SELECT generation FROM cache WHERE key=%s", (key,))
if not cursor.fetchone():
    cursor.execute("INSERT INTO cache VALUES (%s, %s)", (key, 0))
If the row already exists, it is updated, the SELECT returns a row and we're done. If not, the UPDATE does nothing except starting a transaction (or we may already be in a transaction), the SELECT doesn't return any rows, and we do the INSERT in the same transaction. Doesn't the UPDATE even return the number of altered rows? That would void the need for a separate SELECT. I'm not sure though that the UPDATE starts a transaction if no rows are altered. We may have to use a dummy row that is always updated in addition to the desired row.
cboos:
Looks great! I don't think we can have something any simpler, in particular the UPDATE doesn't seem to return the number of modified rows for all backends (at least, that doesn't seem to be possible with SQLite3 and Pysqlite).
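This UPDATE / SELECT / INSERT sequence can be exercised end-to-end against SQLite. The snippet below is a standalone check of mine, not part of Trac's test suite:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, generation INT)")

def invalidate(db, key):
    cursor = db.cursor()
    # UPDATE first: if the row exists, its generation is bumped; the SELECT
    # then tells us whether an INSERT is still needed for a brand-new key.
    cursor.execute("UPDATE cache SET generation=generation+1 WHERE key=?",
                   (key,))
    cursor.execute("SELECT generation FROM cache WHERE key=?", (key,))
    if not cursor.fetchone():
        cursor.execute("INSERT INTO cache VALUES (?, 0)", (key,))
    db.commit()

def generation(db, key):
    row = db.execute("SELECT generation FROM cache WHERE key=?",
                     (key,)).fetchone()
    return row[0] if row else None

invalidate(db, 'wiki.pages')   # row absent: INSERT with generation 0
first = generation(db, 'wiki.pages')
invalidate(db, 'wiki.pages')   # row present: UPDATE bumps the generation
invalidate(db, 'wiki.pages')
third = generation(db, 'wiki.pages')
```

Both code paths (fresh key and existing key) end with the row present and a monotonically increasing generation, which is the invariant the cache readers rely on.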
rblank: Updated patch
The attachment:cache-manager-r7992.patch improves the docstrings for the decorators and the proxy, and makes invalidation atomic. I'll now ask for feedback on trac-dev.
Idea 2: Cache control
I'm currently thinking about the following solution.
Each time a cache needs to be invalidated (i.e. in the current situations where we call config.touch()), we would instead call env.cache_invalidate(cache_key), where cache_key is some unique key identifying that cache (e.g. "InterMapTxt" or "repository-reponame" for the MultipleRepositorySupport/Cache). This call will atomically increment some generation value associated to the key, in the db (that might be tricky - select for update for Pgsql, explicit transaction for Pysqlite). A simple create table cachecontrol (key text, generation int) should be enough.
At periodic times, e.g. in open_environment, we would call env.cache_update(). That will do a select * from cachecontrol. The results are stored alongside the previously known latest values, so we can quickly see which caches need a refresh.
Whenever a Component has to fetch a value from the cache, it will first call env.cache_is_valid(cache_key). If the result is true, it can retrieve values from the cache. If not, the cache has to be updated first. Once the cache is refreshed, the component calls env.cache_validate(cache_key).
Example: InterMapTxt cache
For convenience, if a Component only manages one cache (the common case), it can pass self instead of a string key and its class name will be used.
Only the code changes for trac/env.py and trac/wiki/interwiki.py are roughly implemented. Not tested yet and just to illustrate the above.
See attachment:cache_control-r7933.diff. For testing, manual creation of a cache_control table is needed:

CREATE TABLE cache_control (
    key text PRIMARY KEY,
    generation int
);
The method names and the API have evolved a bit; now I have:
- env.update_cache_control(), called in open_environment
- env.invalidate_cache(key), called by the Component in place of config.touch()
- env.is_cache_valid(key), called by the Component when checking for cache validity
- env.validate_cache(key), called once the cache has been updated
That concludes my initial approach to the problem. Now let's take into account what was proposed in idea 1…
Discussion
While the two approaches are quite close in spirit, there are a few differences.
I initially thought that having the cache control managed at the level of the environment was more natural than having a specialized Component (it's a "service" offered by the environment to all its Components, like providing them with a db connection).
But I see your point in having the cache logic handled "once for all", without the need to re-implement it in various places. If that's doable in a not too complicated way, it may be worth doing it.
I've not yet added time-based invalidation, but if really needed, that can be added as well.
The open problem I see as well is about maintaining a coherent view from the cache during the lifetime of a given request. That might indeed be another argument in favor of a dedicated component with a more advanced cache logic. Anyway, the patch above is at least a first step that seems to work fine in my testing.
Indeed, the basic idea is the same. My goal was to push as much of the logic into the CacheManager as possible, so that cache users would only have two functionalities: get the data (this could even be hidden by using a property-like descriptor) and invalidate the cache. There should be no need for cache users to "remember to first check if the cache is valid, then …": this logic is common to all cache users, and can be integrated into the cache manager.
Attachments (6)
- cache_control-r7933.diff (5.6 KB) - added 12 years ago. Proof-of-concept for solution 2 - env cache control and sample usage by InterWikiMap
- cache-manager-r7941.patch (25.4 KB) - added 12 years ago. Prototype implementation of the CacheManager from idea 1
- cache-manager_get-corner-cases.patch (1.7 KB) - added 12 years ago. Applies on top of attachment:cache-manager-r7941.patch
- cache-manager-automatic-key.patch (5.2 KB) - added 12 years ago. Applies on top of attachment:cache-manager_get-corner-cases.patch
- cache-manager-r7989.patch (27.0 KB) - added 12 years ago. Updated patch taking feedback into account
- cache-manager-r7992.patch (28.2 KB) - added 12 years ago. Updated patch.
Improving the Design and Implementation of Object-Oriented Code: The Ongoing Quest for Data Integrity
An Object-Oriented Implementation Using C++
Object-oriented languages really gained momentum in the late 1990s, but they've been around for a lot longer. Smalltalk and C++ are good examples. Whereas Smalltalk is considered a pure object-oriented language, C++ is actually an object-based language. This distinction exists because Smalltalk enforces object-oriented concepts and C++ doesn't. This point is important because modern object-oriented languages do enforce object-oriented concepts (at least mostly).
C++ is a powerful programming language that was developed to be backwardly compatible with C. You can design and implement object-oriented code in C++, but you can also use a C++ compiler to write straight C code (which is obviously not object-oriented). I've interviewed many people who insist that they're C++ programmers, but in fact they're simply using a C++ compiler to write C code.
Working from the original application in Listing 1, let's design an object-oriented implementation in C++ (by designing a class). We might as well just jump right in to see what the accessor methods would look like. In this case, the setter is setAge() and the getter is getAge(). The code is in Listing 3.
Listing 3: C++ implementation designed with a class.
// CPPLicense01.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <stdio.h>
#include <iostream>

// License.h
//
class License
{
private:
    int age;
public:
    void setAge(int);
    int getAge();
};

int main()
{
    License joe;
    joe.setAge(16);
    printf("age = %d\n", joe.getAge());
    system("pause");
    return 0;
}

// License.cpp
//
#include "stdafx.h"
#include <stdio.h>

void License::setAge(int a)
{
    age = a;
}

int License::getAge()
{
    return (age);
}
In this design, you can see immediately that the data and the code are defined (encapsulated) in a single class called License—a true expression of encapsulation. Also, as is normally the case, the data is declared as private and the methods as public. Thus, the attribute, age, has private access, while the methods, setAge() and getAge(), have public access.
This practice illustrates one of my primary object-oriented design guidelines: When you design an application, initially make all attributes private. This practice is sound because the default access for all data will be private.
The syntax of accessor methods sheds a lot of light on the direction that data access has evolved. We once used the equal sign (=) to set variables, as in this line of code from our previous C program:
age = 16;
Now we use methods to set the data values, as in this line from the OO approach:
joe.setAge(16);
Later we'll see how the pendulum may be starting to swing back in the direction of the equal sign. | http://www.informit.com/articles/article.aspx?p=2231455&seqNum=4 | CC-MAIN-2019-09 | refinedweb | 468 | 56.45 |
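As an aside (this example is mine, not the author's): languages such as C# and Python already let plain assignment route through accessor methods again, which is one reading of that "swing back". In Python, for instance:

```python
# Illustration of accessor methods hiding behind assignment syntax.
# This is a Python analogy to the article's point, not part of its C++ code.
class License:
    def __init__(self):
        self._age = 0            # private by convention

    @property
    def age(self):               # the "getter"
        return self._age

    @age.setter
    def age(self, value):        # the "setter" -- runs on plain assignment
        if value < 0:
            raise ValueError("age cannot be negative")
        self._age = value

joe = License()
joe.age = 16                     # looks like 'age = 16', but calls the setter
current = joe.age
```

The syntax reads like direct field access, yet the setter still guards data integrity, which is the whole point of accessor methods.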
10 May 2010 23:59 [Source: ICIS news]
LONDON (ICIS news)--European recycled polyethylene terephthalate (R-PET) prices are still mostly at record highs with values continuing to rise due to supply shortages, sources said on Monday.
“The situation is still not under control. It’s a wild market, prices are changing day-by-day,” one flake buyer said.
Regional pricing was becoming more apparent, according to players, with the highest figures seen in
“The [R-PET] market is highly regional. This has become more and more evident in the last few weeks,” a flake and pellet buyer said.
Colourless bottle prices increased by €40/tonne ($51/tonne) from two weeks ago at the low end of the range, to €400-500/tonne FD (free delivered) NWE (northwest Europe), according to global chemical intelligence service ICIS pricing.
The record high of €500/tonne was first reported on 26 April 2010. The previous high was €460/tonne, established from 16 June to 3 November 2008.
Mixed coloured bottle prices rose by €80/tonne from two weeks ago at the bottom end of the range, to €260-330/tonne FD NWE, according to ICIS pricing. The new record price of €330/tonne was first seen on 26 April 2010.
The previous high was €250/tonne, recorded from 4 June 2007 to 9 June 2008.
Colourless flake prices established a new high of €950/tonne FD NWE, rising €50/tonne at the top end of the range from two weeks ago, bringing prices to €840-950/tonne FD NWE, according to ICIS. Prices first hit record levels two weeks ago when colourless flakes were trading at €840-900/tonne.
The previous high was €885/tonne, reported from 16 June 2008 to 3 November 2008. Some sources said that colourless flake was now trading as high as €1,000/tonne due to low availability and strong demand, but this was not widely confirmed.
“We’ve heard €1,000/tonne for flake to the [downstream] strapping industry. This is out of the question for [downstream] fibre producers,” a flake buyer said.
Mixed coloured flakes continued to trade at a record high of €600-750/tonne FD NWE, buyers and sellers said. Some sources saw prices as low as €550/tonne, but this was not widely confirmed.
In line with the general market trend, the highest prices were reported in
The previous high was €650/tonne, seen from 16 June 2008 until 3 November 2008.
Food grade pellet prices increased by €10/tonne at the top end of the range, to €1,060/tonne FD NWE, because of tight supply. Some sources said that food grade pellets could climb further in the coming weeks because of the high price of virgin polyethylene terephthalate (PET), which they must trade below in order to remain competitive.
One food grade pellet producer reported prices as high as €1,200-1,250/tonne FD NWE, but this was not widely confirmed. Prices at €1,200-1,250/tonne would be above the record high of €1,150/tonne, established from 12 September 2006 until 30 October 2006.
R-PET has been in tight supply since the fourth quarter of 2009. This was initially caused by low collection rates at recycling facilities due to severe winter conditions across Europe.
As weather conditions improved, market players had observed a strong pickup in Asian buying interest, which had kept supply short. According to sources, however, colder than expected temperature conditions were once again leading to low collection rates at post-consumer recycling facilities.
Some players continued to report R-PET bottles, which are the top of the R-PET supply chain, being sold into Asia from
Along with this, virgin PET bottle weight was reduced by 20% in 2009, according to market estimates, meaning that more bottles needed to be collected for each kilogramme of R-PET produced.
Further shortening the market, the global economic weakness and concerns over the environmental impact of plastic bottle usage caused a reduction of PET bottle consumption of around 20% in 2009 compared to 2008, sources said.
Buyers and sellers said that the R-PET flake market was suffering the biggest shortages. Some flake buyers said that they were now investigating switching to granulates instead of flakes, sourced from
“We’re trying to source non-flake alternatives for production, so granulates and things, we’re checking
It was unclear how long the market would remain tight, with some sources estimating that supply shortages would stay in place until at least the end of June.
“The imbalance of supply and demand brings a lot of stress,” a pellet manufacturer said.
($1 = €0.78) | http://www.icis.com/Articles/2010/05/10/9358030/europe-r-pet-continues-to-hit-record-highs-on-supply-shortages.html | CC-MAIN-2015-22 | refinedweb | 775 | 57.91 |
This site uses strictly necessary cookies. More Information
Hi!
In a Java program, when the program is executing and there is an error, the compiler stops the program and throws an exception. Is there something similar to this in Unity? I mean, in case my player reference is null or something similar, I want to some how now that it is null and to quit the game. I do not want the user to be able to continue playing if something is broken in the game. Anything already implemented in Unity or should I have a method that is called every time there is an error to quit the application?
Thanks!
Answer by Kiwasi
·
Jul 16, 2014 at 01:45 AM
C# uses the try-catch-finally and throw methods
I tried using a try catch block. I did this:
using System;
try {
//something that breaks
}
catch(Exception e) {
Debug.LogException(e, this);
}
This, however, does not stop or quit the game. I can continue playing. Should I do Application.quit()?
Application.quit()
Thanks!
If you want it to quit then yes, you should call Application.quit
However in most cases you will want to attempt to recover from the error first, or at the least display a dialog box with the error details.
So recovering is just trying to fixing what is wrong. A dialog box will just be a window on the screen? Also, is this good practice for a final release? I mean, these tips might work when debugging, but what about a game that is already released? Should I also display a window to the user with the error? (They probably won't understand anything). Should I quit and somehow send a log file to my account or something.
Prevent editor-prev.log file from getting too big
1
Answer
C# script failing silently?
0
Answers
`UnityEngine.Debug' does not contain a definition for `LogException'
3
Answers
Debug.Log Override?
3
Answers
Is there a way to set Unity to write message log when the calculation has Infinity or NaN?
3
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/749492/what-to-do-when-there-is-an-error-at-runtime.html | CC-MAIN-2021-43 | refinedweb | 349 | 66.94 |
By Keith Gladstien
Created
26 July 2010
26 July 2010
Requirements
Prerequisite knowledge
Previous experience working with Flash Professional is recommended.
User level
All
Required products
Sample files
In this article, you'll find strategies to optimize performance of applications made with Flash Professional. The process of optimization involves editing your FLA project files to ensure that the published application's realized (or actual) frame rate is sufficient to make animations play back fluidly.
If you've ever run a Flash project and seen stuttering animation, that is the behavior you want to avoid. If you'd like to replicate a test with stuttering animation, create a project with a simple animation and assign a frame rate less than 10 (such as 5). When you test the movie by publishing the SWF file, you'll see an example of stuttering animation.
There are two main components that determine Flash performance: CPU/GPU usage and memory usage. These components are not independent of each other. Some optimization adjustments applied to improve one aspect will have a negative impact on the other. In the sections below, I'll explain how that works and provide reasons why you might judiciously decide to, for example, increase memory use in order to decrease the CPU/GPU load.
If you develop Flash games for mobile devices, it's likely you'll need to use some of the techniques discussed below to achieve acceptable frame rates. If you are creating non-game apps for the desktop, it's possible to achieve acceptable frame rates with little or no familiarity with the techniques described in this article.
The only way to accurately judge the performance of your app is to run it on the target platform – your development platform can have very different performance characteristics, especially if you're developing for mobile devices. In late 2012, Adobe released a new tool called Adobe Scout (formerly known as "Project Monocle") that lets you do just this.
Scout is a profiler and performance debugging tool for Flash content. It lets you accurately measure the performance of your app – its frame rate, CPU utilization, memory use, rendering performance, and much more. It supports remote profiling, meaning that you can profile your app while it's running on a mobile device. This lets you tune the performance of your app for the specific platform you're targeting.
You can download Scout from here. For more information about what Scout can do, and how to use it, you can read the getting started guide.
Scout can help you to detect memory leaks in your content, but version 1.0 doesn't show you which objects are in memory and causing the leak. More detailed memory features are planned for future releases, coming soon. In the meantime, if you discover memory issues on the target platform, you can use the MT class to debug your app and resolve issues. (In the provided sample files folder, open the ActionScript class located in this directory:
MT/com/kglad/MT.as.)
The code in the MT class is adapted from code provided by Damian Connolly on his site divillysausages.com. The MT class reports frame rate, memory consumption, and lists any objects that are still in memory. To use the MT class, follow these steps:
- Import the MT class:
import com.kglad.MT;
- Initialize it from the document class or the project's main timeline with this line:
MT.init(this,reportFrequency);
In the line above, the this keyword references the movie's main Timeline and reportFrequency is an integer. The main Timeline reference is used to compute the realized frame rate and reportFrequency is the frequency (in seconds) that the trace output reports the frame rate and amount of memory consumed by a Flash application. If you don't want to output periodic frame rate and memory reporting data, pass 0 (or anything less). Even if you choose not to output the frame rate, you can still use the memory tracker part of this class.
- To track objects you create in your app, add this line:
MT.track(whatever_object,any_detail);
In this line of code, the first parameter is the object you want to track (to see if it is removed from memory) and the second parameter is an optional string containing anything you want to test. (Developers typically use this parameter to get details about what, where, and/or when you started tracking a specific object).
- To create a report that reveals whether your tracked objects still exist in memory, add this line:
MT.report();
It is not necessary that you understand the MT class to use it. However, it is a good idea to check out how the Dictionary class is used to store weak references to all the objects passed to
MT.track(). The class includes extensive comments that describe how it works.
Many of the sample file tests provided at the beginning of this article use the MT class. To learn more about working with the MT class, check out the tests to see how the MT class is used.
Similar to the observer effect in physics, the mere fact that we are measuring the frame rate and/or memory and/or tracking memory, changes the frame rate and memory utilization of your app. However, the effect of measurement can be minimized if the trace output is relatively infrequent. Additionally, the absolute numbers are usually not important. It is the change in frame rate and/or memory use over time that is important for debugging and optimization. The MT class does a good job of reporting these types of changes.
The MT class does not allow trace outputs more than once per second to help minimize spuriously low frame rate reports caused by frequent use of the trace method. (The trace method itself can slow the frame rate.) It's important to note that you can always eliminate trace output as a confounder of frame rate determination by using a textfield instead of trace output, if desired.
The MT class is the only tool used by the sample file test projects to check memory usage and pinpoint memory problems. It also indirectly measures CPU/GPU usage (by checking the actual frame rate of the executing app).
In the sections below, I'll begin by discussing memory management guidelines, with sub-topics listed in alphabetical order. Next, I'll provide information on CPU/GPU management with sub-topics related to that goal.
It may seem logical to provide the techniques in two sections. However, as you read through this article, remember that memory management affects CPU/GPU usage, so the recommendations listed in the memory management section work in tandem with the tips listed in the CPU/GPU section.
Before providing the specific best practices you can use, I think it is also helpful to include information about the techniques so that you can learn which are the easiest or hardest to implement. I'll also provide a second list that prioritizes the techniques from greatest to least benefit.
Keep in mind that these lists are subjective. The order depends on personal developer experience and capabilities, as well as the test situation and test environment.
- Do not use filters.
- Always use reverse for-loops. Avoid writing do-loops and while-loops.
- Explicitly stop Timers to ready them for garbage).
- Reuse Objects whenever possible.
- Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
- Pool Objects instead of creating and garbage collecting Objects.
- Use partial blitting.
- Use stage blitting.
- Use Stage3D.
- Use stage blitting (if there is enough system memory).
- Use Stage3D.
- Use partial blitting.
- Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
- Explicitly disable mouse interactivity when mouse interactivity not needed.
- Do not use filters.
- Use the most basic DisplayObject needed.
- Reuse Objects whenever possible.
- Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
- Use reverse for-loops. Avoid writing do-loops and while-loops.
- Pool Objects instead of creating and garbage collecting Objects.
- Strictly type variables whenever possible.
- Use weak event listeners and remove listeners.
- Replace dispatchEvents with callback functions whenever possible.
- Explicitly stop Timers to prepare them for garbage collection.
- Stop Sounds to enable garbage collection for Sounds and SoundChannels.
With these priorities in mind, proceed to the next section to learn how to update your Flash projects to manage memory more efficiently.
The list of suggestions below is not exhaustive but it contains strategies that can significantly improve the performance of Flash content.
There is an increase in memory use when dispatching events because each event must be created and memory is allocated to it. That behavior makes sense: events are objects and therefore require memory.
I tested a handful of events and found each consumed 40 to 128 bytes. I also discovered that using callback functions used less memory and ran more efficiently than using events. (See the test files in the sample files folder:
callback_v_dispatchEvent.)
Memory use increased when you apply a dynamic filter. According to Adobe Help documentation , using a filter doubles memory use. In real world testing with Flash Professional CS6, I've found that while filters do cause an increase in memory use, they do not come close to doubling the amount of memory used. (To review the test examples, review the sample files in the
filtersfolder.)
The Shape, Sprite, and MovieClip objects each use different amounts of memory. A Shape object requires 236 bytes, Sprite requires 412 bytes, and Movieclip requires 448 bytes.
If you are using many thousands of DisplayObjects in a project, you may be able to save substantial memory by using a Shape if interactivity is not required. Or, use a Sprite in cases when a timeline is not needed.
At the start of your app, create all the object references you'll ever need during the entire time your app is open and pool (store) those references in an array. Whenever an object is needed, retrieve it from the array.
Whenever an object is no longer needed, return it to the array.
array_v_vectorfolder.)
While there are performance benefits to using object pooling, the main benefit is that it makes it easy to manage memory. If you have a problem with unlimited increases in memory utilization, object pooling can prevent that problem. It is a technique that generally improves performance and reduces memory use.
I saw a 10% increase in frames per second using pooling and a decrease in memory use of about half when testing a SWF file that contains many objects being garbage collected and recycled on each frame. (Check out the sample files in the folder named
pooling_v_gc.)
Whenever you create objects in a loop, strive to create one object outside the loop and reuse it repeatedly inside the loop. It is not always possible for all projects, but there are many situations where this technique is helpful.
The section that describes blitting includes an example that reuses a number of objects. You can examine that test file to see how that is accomplished.
The issue with sounds in relation to memory usage is relatively minor. When a sound is playing, it cannot be garbage collected (when using Flash Professional CS6 to test the file). When the Sound finishes playing or a SoundChannel instance is used to stop() the sound, the Sound is prepared for garbage collection. (To learn more, see the sample test files in the folder named
sound_test.)
The issue with Timers is more critical. If a Timer has not stopped (because its
currentCount.
A Timer only uses 72 bytes of memory so this is unlikely to become a noticeable problem in a desktop/browser Flash game. However, if you open, play, and then close a Flash game running on a mobile device repeatedly without ever restarting the game, you may see a noticeable problem.
To see the code, open the files in the folder named
gc_timer_test.
Another unexpected result of testing with the MT class is that it makes no difference whether you use weak or strong listeners. They were both treated like weak listeners in my testing with Flash Professional CS6. (See the test files in the
strong_v_weak_listenersfolder.)
Currently, the only way I know how to measure this directly is to use an operating system tool. Windows includes the Windows Task Manager (performance tab) and Mac OS provides the Activity Monitor. Both tools allow you to see CPU usage but, generally, neither is very useful for testing Flash performance.
As a result, you are left measuring CPU/GPU usage indirectly by checking your app's actual frame rate. The MT class enables you to check a project's frame rate, along with memory use reporting and memory tracking.
Enabling the cacheAsBitmap property of a DisplayObject significantly improves performance (and increases memory) as long as the DisplayObject does not undergo frequent changes that require frequent updates to the bitmap. Essentially, this means verifying that the DisplayObject does not change appearance in any way other than changing its location on the stage. If there are frequent bitmap updates, performance will decrease.
How frequently you can update a cached bitmap and still see a performance benefit, depends on several factors. The most important factor is, not surprisingly, how frequently you are updating the bitmap.
In any case, use the MT class to test your specific project, both with and without cacheAsBitmap enabled for DisplayObjects that require bitmap updates. (It is a no-brainer when deciding whether to use cacheAsBitmap for DisplayObjects that require no bitmap updates: Use it!)
If you have a DisplayObject (movie clip) and you want to enable its cacheAsBitmap property, add this line:
mc.cacheAsBitmap = true;
Enabling cacheAsBitmap is always beneficial even when changing the scale, skew, alpha and/or rotation (but not changing a movie clip's frames) of a DisplayObject when publishing for mobile devices.
Specifically, when publishing a project for mobile devices, you can enable the cacheAsBitmap and assign the cacheAsBitmapMatrix property of your DisplayObjects and realize a substantial performance boost, like this:
mc.cacheAsBitmap = true;
mc.cacheAsBitmapMatrix = new Matrix();
You do not have to use the default identity matrix. However you'll find that there are only a few reasons to use something other than the default matrix.
Stage blitting, a term that describes bit block transferring, involves the use of bitmaps to render the final display. Instead of adding DisplayObjects to the display list, pixels are drawn to a stage-sized bitmap which has been added to the Stage. To convey animation, the bitmap's pixels are updated in a loop. Typically, an Event.ENTER_FRAME loop using the BitmapData class's
copyPixel()method is applied to the stage-sized bitmap's bitmapData property using other bitmapData objects created outside the animation loop.
This technique is more complicated than adding objects directly to the display list but it is much more efficient—often making the difference between unacceptable frame rates and excellent frame rates for Flash app. To be sure, there is absolutely no reason to use this strategy unless you need the increased frame rate.
I compared a SWF file with 10,000 squares moving and rotating across the Stage using movie clips (see the sample file titled
blit_test/blit_test_mc.fla). Then I updated the same SWF file with some basic optimization techniques (see the sample file named
blit_test/blit_test_basic_optimizations.fla) and stage blitting (see
blit_test/blit_test2).
The first SWF file ran at about 15 fps, which is unacceptable. However, there were a few basic tweaks that can be easily applied to improve performance before embarking on more difficult to institute techniques like blitting.
First, I reversed the for loops to gain a little performance boost (see the section on loops below) and, more importantly, I used some constants instead of recalculating the same values repeatedly. Those changes provided a significant (~40%) speed boost to almost acceptable frame rates, ~21fps.
Using stage blitting to encode the same display yielded a frame rate of 54 fps, an impressive 350+% boost in frame rate.
However, as I previously mentioned, the process of blitting is more complex. The steps involve:
- Initializing the Stage display bitmap assets (Bitmap instance, BitmapData instance, and Rectangle instance) onto which all the displayed pixels are copied during each Event.ENTER_FRAME event loop.
- Populate a data array with all the data used to update the display. (This step is not always necessary.)
- Populate an array of BitmapData objects. If you had an animation on a movie clip's timeline, this is where you store a BitmapData object of each frame (for example, by using a sprite sheet. In the sample test file I created a BitmapData instance for each angle the rectangles can be rotated using ActionScript.
- Create an Event.ENTER_FRAME event loop.
- Update the data in the Event.ENTER_FRAME loop, copy the appropriate pixels from the array created in step 3 to the appropriate location (determined using the data array from step 2) of the BitmapData instance created in step 1.
For more details, review the file in
blit_test/blit_test2. It contains extensive comments.
The downside to stage blitting, other than the difficulty coding, is that a large amount of memory may be consumed when creating the needed bitmaps. That is a significant factor when creating an app for a device like the iPad that has high screen resolution (1024 x 768 for the first and second generation iPad, and 2048 x 1536 for the third generation iPad) and relatively low memory (RAM) capacity (256MB, 512MB, and 1GB for first, second, and third generation, respectively).
Generally, your game should consume no more than half the available RAM. That includes not just bitmaps but everything else in your game that consumes RAM.
As the name implies, partial blitting combines the use of the Flash display list and copying pixels to BitmapData objects. Typically, each object displayed on Stage is a bitmap that is added to the display list and manipulated as usual with display objects like movie clips. Each object's animation is blitted to an array of BitmapData objects.
For example, using the previous example of squares rotating and moving across the Stage, I blit the squares and their various rotations, store those BitmapData objects in an array, add bitmaps to the display list, and then manipulate the bitmaps just like any display object (like the movie clips described above) in the Event.ENTER_FRAME loop. And then finally, I assign the bitmapData property of the bitmaps to the appropriate array element. (To see how this works, review the
blit_test/partial_blitting_test.flafile.)
The partial blitting test was not nearly as fast as stage blitting when tested on my PC (24-26 fps). But keep an open mind because partial blitting may be faster than stage blitting in other situations. In addition, it's easier to code partial blitting than Stage blitting, so if you can achieve acceptable frame rates with partial blitting, that eliminates the additional work required for stage blitting.
Creating multiple Event.ENTER_FRAME listeners that are applied to one instance calling multiple listener functions was slightly more efficient than creating one Event.ENTER_FRAME listener calling one listener function, which then called other functions.
However, it is a different situation when you have multiple objects each with their own Event.ENTER_FRAME listener, compared with one object with an Event.ENTER_FRAME listener. There is approximately a two-fold performance gain using one object with an Event.ENTER_FRAME listener compared with many objects that each have their own Event.ENTER_FRAME listener. (To review the tests, check out the files in the
enterframe_test_one_v_many_loops_with_different_movieclipsfolder.)
In Flash, reverse for loops are the fastest executing loops. If a stored list of same-type objects is needed in the loop, a reverse for loop using a Vector to reference the list of objects is the fastest way to go.
All three loops execute faster if you use an int for the iteration parameter, rather than using an uint. All three loops execute faster if you decrement the loop variable, rather than increment it. (Note: If you decrement a loop variable i and use i>=0 as the terminal condition, you will trigger an endless loop if i is a uint.)
All three loops execute faster if you use a variable or constant for the terminal condition rather than an expression or object property. Because the initial condition only needs to be evaluated once (and not with each loop iteration), there is no significant difference whether you use an expression or object property for the initial condition in any of these loops.
Anything that can be moved outside a loop without affecting the result should be moved. That includes declaring objects outside the loop (see the section about reusing objects) where using the new constructor inside a loop sometimes can be moved outside the loop and the terminal loop condition, if it is an expression, should be evaluated outside the loop.
I have seen some mention that using objects that each reference the next object is faster than using an array to reference the objects. In my tests, I found that statement to be false.
Using an array was both easier and faster to both initialize and to use. Using a Vector instead of an array, of course, was even faster. (See the sample test file in the
for_loop_v_sequential_loopfolder.)
None of these suggestions is likely to make a major difference under most conditions. However, these tweaks are worth implementing if you are trying to squeeze every bit of efficiency out of your coding or if your project involves iterating through a large number of loops.
Movie clips and sprites can interact with the mouse. Even when you do not code for any mouse interactivity, Flash Player checks for mouse interactions when these objects are present. You can save some CPU cycles by disabling mouse interactivity for objects that do not require mouse interactivity.
This strategy is very helpful in situations when you notice a performance problem (or your computer's fans increase speed) when your mouse moves across the Stage. Disabling mouse interactivity improves performance and can quiet your computer fan.
During testing, I saw the frame rate increase by about 2 1/2 times when disabling all movie clips in a test file. The sample test code is located in the
mouse_interactivityfolder.
Even though more recent Flash Player versions appear to remove listeners when objects are garbage collected and having strong listeners does not appear to delay garbage collecting, you should still explicitly remove all event listeners as soon as possible. The sooner a listener is removed, the less CPU cycles are consumed by the listener. In addition, you may not know which Flash Player version a user has installed. Older versions of Flash Player may not garbage collect objects—even those with weak listeners. Do not rely on the newer capabilities of Flash Player to optimize poor coding.
Stage3D is a GPU-enabled display rendering model that became available with the release of Flash Player 11. This model is especially helpful for 3D rendering but can also be advantageously used for 2D displays using frameworks, such as Starling.
Because display rendering has typically been handled by the CPU, (which also does all the other work needed to run an app), harnessing the power of the GPU for rendering frees the CPU to do all the other work. This significantly improves performance on devices with capable GPUs.
To view Stage3D content, you must use Flash Player version 11 or higher. To use the Stage3D API, you will need to publish SWF files to use Flash Player 11 or future releases. If you are working with Flash Professional CS6, you're all set. If you have Flash Professional CS5 or CS5.5 you can update your installation of Flash to enable publishing to Flash Player 11. For more details, read See the blog post by Rich Galvan titled Adding Flash Player 11 support to Flash Professional CS5 and CS5.5.
Unfortunately, using the Stage3D API is difficult. However, there are several free public frameworks available that handle the low-level code needed to use Stage3D which offer easier to use APIs.
One of these frameworks, Starling, is designed for developing 2D games. It is easy to use and effectively abstracts the complexity of Stage3D. The Starling API can be found on the Starling Framework Reference site.
I tested Starling to see how it compared to blitting and partial blitting. In some situations, Starling performed worse than both blitting options. In fact, it performed much worse than the un-optimized 10,000 square movie clip test.
However, if you deselect the permit debugging option in the Starling test, that simple tweak more than doubled the frame rate and the resulting SWF file was comparable to the un-optimized 10,000 square movie clip test. That is still a disappointment. However, part of the problem is that I use the debug version of Flash Player to test the files and Starling appears to perform much worse in the debug vs. non-debug version of Flash Player.
In addition, the 10,000 square movie clip test does not show Starling at its best. If you are using many movie clips that each contain a timeline with animation, Starling will almost certainly out-perform anything you can build that utilizes simple optimizations.
Only blitting provides the performance needed to exceed the benefits of using Stage3D and Starling. But blitting may not be practical because of the memory required to create the needed bitmaps.
The sample test files are located in the
starling_testfolder.
To use the Starling framework, follow these steps:
- Download the starling.swc file.
- Add it to your Flash project's Library path by following these steps:
- Choose File > Publish Settings > ActionScript Settings.
- Click the Library path tab and then click the Browse to SWC file icon.
- In the Open File dialog box that appears, navigate to select the starling.swc file on your desktop.
- Click Open to add starling.swc to your Library path.
- Click OK to close the Advanced ActionScript 3.0 Settings panel and then click OK again to close the Publish Settings.
- Save the FLA file and you are ready to use Starling.
If you publish a mobile air game that uses Stage3D (which includes the use of frameworks like Starling that use Stage3D), set the Render mode to Direct. If you publish an embedding HTML file, set the Window Mode to Direct in the Publish Settings.
You can learn more about Starling and the Stage3D API on the Adobe Gaming site.
In addition to the optimization techniques described above, there are two other best practices you can adhere to when developing Flash projects to improve playback:
- Specify the class type of every variable you declare. The code runs faster when you take the time to type all variables, and the compiler displays more descriptive, helpful information when encountering errors. Check out the test files in the sample files folder:
variables_typed_v_untyped.
- Rather than using arrays to store data information, use Vectors. To see how this works, review the test files in the sample files folder:
array_v_vector.
Hopefully the recommendations outlined in this article will help you improve the performance of the projects you create in Flash Professional. To learn more about building animations, applications, and games in Flash, visit the Flash Developer Center.
This work is licensed under a Creative Commons Attribution-Noncommercial
- Designing for a multi-device, multi-resolution world
- Automating tasks in Flash Professional CS5
- Using the Adobe Flash Sprite Sheet Generator
- Optimizing performance for mobile AIR applications | https://www.adobe.com/devnet/flash/articles/optimizing-flash-performance.html | CC-MAIN-2018-22 | refinedweb | 4,600 | 54.52 |
ASP.NET 2.0 ships with a number of built-in providers for its application services (Membership, Roles, Profile, and friends), which means you can take advantage of features like secure password storage in ASP.NET 2.0 without having to write any encryption or data-access code of your own. By default these services point at a local SQL Express database; to point them at your own SQL Server database instead, override the default "LocalSqlServer" connection string in your web.config file:
<configuration>
<connectionStrings>
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="Data Source=localhost;Initial Catalog=appservicesdb;Integrated Security=True" providerName="System.Data.SqlClient"/>
</connectionStrings>
</configuration>
Hit save, and all of the built-in application services are now using your newly created and defined SQL Server database.
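If the target database doesn't yet contain the application-services schema (the aspnet_Users, aspnet_Membership, and related tables), you can install it from the command line with the aspnet_regsql.exe tool that ships with the .NET Framework 2.0. A sketch, assuming the default framework install path, a local server, and a database named appservicesdb (running the tool with no arguments launches its graphical wizard instead):

```
"%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe" -S localhost -E -A all -d appservicesdb
```

Here -S names the server, -E uses Windows (trusted) authentication, -A all installs all of the application-services features, and -d names the target database.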
Note: The one downside with the above approach is that the connection string is still named "LocalSqlServer", even though it now points at a full SQL Server database rather than SQL Express. If you want to use a different connection-string name, you'll need to re-register the membership, roles, and profile providers in web.config to reference it (see the comments below).
Hope this helps,
Scott
P.S. In some future blog post I’ll walkthrough actually using some of the above new APIs.
Hi Jules,
If you want to change your connection-string name, you'll want to re-register your membership, roles and profile providers to use a new connection-string. You can then name this whatever you want.
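As a sketch of that re-registration (the connection-string name, server, and database names below are placeholders you'd substitute with your own), web.config might look like this:

```xml
<configuration>
  <connectionStrings>
    <!-- "MyAppServices" is a made-up name; pick whatever you like -->
    <add name="MyAppServices"
         connectionString="Data Source=myserver;Initial Catalog=appservicesdb;Integrated Security=True"
         providerName="System.Data.SqlClient"/>
  </connectionStrings>
  <system.web>
    <membership defaultProvider="MySqlMembership">
      <providers>
        <clear/>
        <add name="MySqlMembership"
             type="System.Web.Security.SqlMembershipProvider"
             connectionStringName="MyAppServices"
             applicationName="/"/>
      </providers>
    </membership>
    <roleManager enabled="true" defaultProvider="MySqlRoles">
      <providers>
        <clear/>
        <add name="MySqlRoles"
             type="System.Web.Security.SqlRoleProvider"
             connectionStringName="MyAppServices"
             applicationName="/"/>
      </providers>
    </roleManager>
  </system.web>
</configuration>
```

The same pattern (a provider `<add>` whose connectionStringName attribute points at your named connection string) applies to the profile and personalization providers as well.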
Hi Mike,
To create the blank database instance you'll want to create a new Database within SQL Server. You should be able to find a "Create Database" menu item somewhere within your SQL admin tool (the location is slightly different between SQL 2000 and SQL 2005). You can then name the database whatever you want, and point the wizard at it.
Hi Brunedito,
It could be that you installed a named instance of SQLExpress on your machine. Could that be the problem?
Also -- have you tried creating a blank SQL Express database and accessing it to see if that works?
Thanks,
Hi Tristan,
Is your application configured to use Forms based authentication or Windows authentication?
Hi Scott,
It's set to use Forms-based authentication. I have tried making it Windows authenticated; it makes no difference, the security setup wizard still throws "An error was encountered".
I've manually created an admin account in code and it appears within the Admin Tool, I can modify the settings of the user.
My current user name on the home screen is my machine login and domain. Would the location of the project have any bearing? I've got it in My Documents/Visual Studio 2005/WebSites/Sharp-Test.
Can you send me an email with your web.config file? I can then help figure out what is going on here.
Thx!
Hi Kenny,
The images are there for me right now (I assume you mean the images in the blog post above). Can you check again -- or maybe you are having a proxy/connection issue?
To use full-blown SQL 2005 (rather than SQL Express), you'll want to use the step in the article above to create a SQL 2005 database and point the app at it. Unfortunately there isn't a way to have SQL 2005 use the SQL Express database in the /app_data folder.
Hi Dusty,
I'm pretty sure the issue you are running into above is the application name isn't set correctly. This blog post describes the issue and how to fix it:
This blog post describes a way you can get more detailed error messages even when remotely:
You could skip the Roles check for the moment just to see what the error is.
Hi Vasu,
If you can send me email (scottgu@microsoft.com) I will try and connect you up with a hoster in NZ.
Do you have the connection strings setup correctly for your membership and roles setup? What you are describing sounds like you might not have these set correctly.
hi scott,
I have made it work
thanks very much
Hi Jeewai,
The issue is almost certainly that your database is using a Beta version of the database schema. You'll want/need to recreate the schema using the final ASP.NET 2.0 release database schemas for it to work.
Hi Geoff,
If you want to send me an email with more details about the error you are seeing (along with your web.config file), I can help debug it with you. My email address is: scottgu@microsoft.com
Hi Angus,
I suspect the problem might be with the connection string you are using, and specifically the security account you are connecting with. Are you using windows integrated security to connect? If so, then you might want to read this article to learn more about how Windows handles multiple-hops in this scenario:
Hi Ian,
It sounds like you have a security configuration mis-set potentially. Can you check to see how you are connecting to the remote machine in the VS 2005 Server Explorer? Does it have the exact same connection string as how you are using it in VS 2003?
Yep -- you can change the connection string name to whatever you want. Simply add a new provider declaration under <membership>, <roles>, etc and point it at your new connection string name.
Hi Har,
Yes -- you should be able to use this just fine against a SQL 7.0 database.
Hi John,
Are you sure you are setting the applicationName in your config file ()?
Also -- have you opened the database and checked to see whether data is stored in it?
If you want to send me email (scottgu@microsoft.com) I can try and connect you up with someone who might be able to help investigate what is going on. It sounds like you might have some type of provider connection issue -- which is the only reason for such a long delay.
Hi JButler76,
Here is a pointer to where you can download the shipping ASP.NET 2.0 SQL Provider source:
Hi Derek,
You could use these tables with a WindowsForms application. In fact, with the next release of Visual Studio we are looking to provide an API that makes doing so easy.
What I would recommend for now is to build a custom set of web-services that expose the membership/roles APIs for you to use remotely from a WinForms client.
I'd recommend not having your WinForms applications even connect to the database directly. Instead, I'd recommend building a web service API that runs on the server and accesses the tables - and have the WinForms clients access those.
That will protect your database better and give you one extra level of control.
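As a rough sketch of that idea (the service name and namespace below are invented for illustration, using the ASMX web-service model; this assumes membership and roles are already configured on the server):

```csharp
using System.Web.Services;
using System.Web.Security;

// Hypothetical ASMX service exposing a narrow slice of the Membership
// and Roles APIs, so WinForms clients never touch the database directly.
[WebService(Namespace = "http://example.com/auth")]
public class AuthService : WebService
{
    // Returns true if the supplied credentials are valid.
    [WebMethod]
    public bool ValidateUser(string userName, string password)
    {
        return Membership.ValidateUser(userName, password);
    }

    // Returns the roles for a user (requires roleManager to be enabled).
    [WebMethod]
    public string[] GetRolesForUser(string userName)
    {
        return Roles.GetRolesForUser(userName);
    }
}
```

Keeping the service surface this small also gives you one place to add auditing or throttling later.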
Hi Eric,
Can you send me email with more details about the issue you are seeing? I can then loop in some folks who should be able to help.
Hi Kumar,
Can you send me an email with your exact error message? I can then help debug it.
Hi Mark,
You don't have to use the LocalSqlServer connectionstring name. You can register any connectionstring name you want, and then simply add providers to <membership> or <roles> or any other application service and point at your new connection string name.
Hi Brian,
The problem you are having is that the worker process that you are running IIS under (which is the "NETWORK SERVICE" account doesn't have security permissions to access the SQL database). You'll want to go into the SQL admin tool and grant access permissions for that account to fix this.
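On SQL Server 2005, granting that access might look roughly like this (the database name is a placeholder; the aspnet_*_FullAccess database roles are created for you by aspnet_regsql):

```sql
-- Create a login for the IIS worker-process account and map it
-- into the application-services database.
CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;

USE appservicesdb;
CREATE USER [NT AUTHORITY\NETWORK SERVICE]
    FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];

-- Grant membership/roles access via the roles aspnet_regsql created:
EXEC sp_addrolemember 'aspnet_Membership_FullAccess', 'NT AUTHORITY\NETWORK SERVICE';
EXEC sp_addrolemember 'aspnet_Roles_FullAccess', 'NT AUTHORITY\NETWORK SERVICE';
```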
Hi Randy,
Network Service is the account that worker processes within Windows use. It doesn't actually allow people on the network to access it (in fact - it can't be logged in remotely).
So you shouldn't need to worry about external clients accessing a file ACL'd with it.
Hi Notso,
Can you send me email repeating this question? I will then loop someone else in on the email thread who might be able to help.
Hi Polly,
The raw .SQL files for generating the schema can be found in this directory:
c:\Windows\Microsoft.NET\Framework\v2.0.50727
You could try using those directly to create the schema in your existing database.
Alternatively, you might also want to checkout this blog post I did a few days ago:
It points to some simplified schema table providers that you can alternatively use.
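For example, the scripts could be run by hand with the osql command-line tool, along these lines (server and database names are placeholders; InstallCommon.sql must be run before the feature-specific scripts):

```
cd %windir%\Microsoft.NET\Framework\v2.0.50727
osql -E -S localhost -d mydb -i InstallCommon.sql
osql -E -S localhost -d mydb -i InstallMembership.sql
osql -E -S localhost -d mydb -i InstallRoles.sql
```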
Hi Jeff,
Apologies for the delay in getting back to you. I got your email as well and will try and get back to you this weekend.
Sorry!
Hi Barry,
Can you send me an email (scottgu@microsoft.com) describing this more? I can then help you get this configured and setup.
Thanks Scott, I was really wondering about keeping the Membership data in SQL Server 2000.
Scott,
Is there a way to set the connection string for the provider at runtime?
Thanks
Hi Scott,
I am having the exact same problem and in the same situation as Barry earlier.
I created the tables, views and stored procs using the scripts located in the Framework folder in the correct order. Everything installed fine. However, when I am trying to create a user or login, it gives me the following error.
There is no way I can run aspnet_regsql.exe on the remote hosting provider's machine. Also, I checked the aspnet_SchemaVersions table to make sure that it has all 6 records, the version is 1, and the LatestVersion field is 'True'. Yet, I get this error message. Am I missing something?
The project would be set back by months if I cannot deploy to this remote server using the providers out of the box. The remote server is running SQL Server 2005. (BTW, I have no problem locally with the membership API. It simply works!)
-Vamsi
Unfortunately I don't think there is a super easy way to change this at runtime, other than by modifying the provider itself. If you want to learn about how to customize the providers you might want to check out these two blog posts:
and
Hi Vamsi,
Can you send me email (scottgu@microsoft.com) with more details about this issue? I can then investigate and help.
I am having a problem trying to get the WebServer to use my remote SQL Database . I am not sure if it has something to do with the web.config file or if it has something to do with the database permissions. Here is the Error: Login failed for user '<domain name>\<webserver name>$'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: Login failed for user '<domain name>\<webserver name>$'.
I am not trying to use Windows Authentication to connect to the database and I have created a connection string using an account I setup called WebAdmin. Below is my web.config file:
<connectionStrings>
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="Data Source=DDMBBJB05; Integrated Security=True;Initial Catalog=DD_InventoryControl;User ID=*******;Passowrd=*********"
providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
<roleManager enabled="true" />
<authentication mode="Forms"/>
<membership defaultProvider="MyMembershipProvider">
<providers>
<add connectionStringName="LocalSqlServer" enablePasswordRetrieval="false"
enablePasswordReset="true" requiresQuestionAndAnswer="false"
applicationName="/" requiresUniqueEmail="true" passwordFormat="Hashed"
maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7"
name="MyMembershipProvider" type="System.Web.Security.SqlMembershipProvider" />
</providers>
</membership>
Also, I do not have Visual Studio, so what is the easiest way to set up users and create roles?
Thanks.
Ben
Hi Ben,
I wonder if you might be running into this gotcha that I blogged about here:
Can you try this suggestion out and see if it fixes in?
Great work on this blog. I am finally connected to my SQL Server 2000 database and all my provider info in the web.config is apparently working.
Question 1. Can I delete the appdata folder now?
Question 2. When I Remote Desktop into my Development machine (Windows XP) across my wireless network, there seems to be some permission problems. ASP.Net Configuration tool does not run and when I run my pages (login and create user), they do not work. I get some error about not being able to create the User instance.
Thanks for your help in advance
I've got a couple of questions about security. By default, all of the tables are created under the dbo schema. If my plan is to write a highly secure web application, will this pose any sort of security problems for me later on down the road? I would set up a new user and grant it read/write access to the dbo.aspnet* tables.
It is my understanding that the only way to change the schema the providers use is to re-write the providers myself. While it would probably be a great exercise, I need to concentrate on writing my application, rather than rewriting the tools already provided to me.
Also, are the provided Login controls protected against SQL injection attacks?
<add name="LocalSqlServer" connectionString="Data Source=DDMBBJB05; Integrated Security=True;Initial Catalog=DD_InventoryControl;User ID=*******;Passowrd=*********"
Just a tiny one - Password is incorrectly spelt. Was this a cut'n'paste? If so, then maybe that's the root of the login failure, i.e. not authenticated. Just a thought.
Hi Robert,
The schema doesn't require the database account to run with DBO permissions. So you can connect using a normal security account and not have to worry about elevation.
Scott,
Are the starter kits a good way to learn ASP.NET?
And which hosting companies would you recommend for a first site?
hostmysite
This was a very useful article for me. Thanks for the good information.
Can this application services database be setup within an existing database? This is a scenario where the site is hosted by a 3rd party, and we can have only one database for the site.
Hi Andrew,
I'm going to be writing this hosting migration tutorial in the next two weeks.
We are actually going to be releasing a new automated tool soon that will help with this.
thanks,
Hi Scott, I'm having the problem that whichever way I attempt to manage user roles I get the error: "Unable to connect to SQL Server database."
I have taken the following steps:
I have run aspnet_regsql to setup my local SQL Server 2000 database. I can see the tables have been created in EM
I have the following in my web.config:
-----
<connectionStrings>
<add name="ConnectionString" connectionString="Data Source=WORKSTATION-1;Initial Catalog=Booking_Dev;user=*****;password=*****;Trusted_Connection=yes;" providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
<roleManager enabled="true" />
<trace enabled="true" pageOutput="true"/>
<authentication mode="Forms"/>
<membership defaultProvider="AspNetSqlMembershipProvider">
<providers>
<add name="AspNetSqlMembershipProvider"
type="System.Web.Security.SqlMembershipProvider"
connectionStringName="ConnectionString"
applicationName="/"
maxInvalidPasswordAttempts="5"
requiresQuestionAndAnswer="false"
minRequiredPasswordLength="4"
requiresUniqueEmail="true"
passwordFormat="Clear"
enablePasswordRetrieval="true"
enablePasswordReset="false"
minRequiredNonalphanumericCharacters="0"/>
</providers>
</membership>
I have been able to add a user both with asp.net configuration tool and by the following code:
Membership.CreateUser("Paul","paul","email@email.com");
However when attempting to either use the code:
Roles.CreateRole("Cheese");
or
by clicking "Create or Manage roles" in the security tab i always get the connection problem message.
Have i missed a step or can you suggest where the problem might be - i've tried everything i could think of.
Regards
Paul
Great! I look forward to the article. If you need someone to review the article and try the steps, I'd be happy to. I could tell you useful info like "yes, I followed all the steps with no problems", or "this part presumes knowledge that entry level web programmers don't have", etc. In any case, I'm looking forward to it.
Hi Paul,
I believe the problem you are running into above is that you have registered a membership provider, but not declared a provider for the roles provider. So when you call Roles.CreateRole() it fails - since it is trying to connect to the default SQL Express provider.
Can you add a role provider to your web.config file (clearing the previous one like you do for membership) and then try again?
Scott, many thanks for your response.
I have added the following code to the web.config:
<roleManager enabled="true" >
<providers>
<add name="AspNetSqlRoleProvider" connectionStringName="ConnectionString"
applicationName="/"
type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
</providers>
</roleManager>
Seems to be working perfectly now. Thanks you once again for your assistance.
I'm having the same problem that IanC had (September 9). Does anyone know how he solved that problem? I've been racking my brain trying to solve this.
Amit
Hey,
I cannot find and run the aspnet_regsql.exe file. What could be the problem?
Success! I was able to solve my problem. For anyone else that's having this problem, I had to add a Connect Timeout in the prompt that asks for your server name. For instance, my server name was Nightcrawler, so where aspnet_regsql was asking for my server name I had to write "Nightcrawler; Connect Timeout=180". aspnet_regsql then had no problems connecting to my remote machine.
Which custom providers are generally pushed into machine.config? [Membership / Profile / RoleManager] entries for SQL can be found in machine.config.
I think listing a custom provider in machine.config helps VS.NET 2005 discover the providers in admin mode.
Please clarify.
Debasish Bose
Oracle Corp.
I have created a SQL Server 2000 database and used the "LocalSqlServer" connection to point to it, with the built-in membership/roles mechanism managing my users. It worked fine on my development machine, but when I uploaded the database to the hosting server via the SQL Web Admin tool, the membership/roles mechanism fails. The exception I get is "Invalid object name 'dbo.aspnet_SchemaVersions'." I checked and found that in the hosting database the aspnet_SchemaVersions table has a different owner than "dbo", which is why dbo.aspnet_SchemaVersions can't be found. My question is: can I use schemas other than 'dbo' to create the aspnet_* tables and stored procedures used by the membership/roles API? Please help.
Hi,
Could you please tell me where you got the window you mention in the third step, "Step 3: Point your web.config file at the new SQL Database"?
Hi Anjan,
The good news is that we have a tool coming out shortly (either today or tomorrow) that will help with hosting the ASP.NET DB schemas in a shared hosting environment, and prevent you from running into DBO permission issues like you described above. I'll be blogging about this later this week - so check back on my blog for details.
Hi Juvan,
The window I showed in step 3 was from the IIS admin tool (which includes a GUI that supports connection string management).
Alternatively, you can just open up the web.config file directly and configure it there.
Hi Scott, will that tool ever come? I urgently need it, or any other way to make the membership system work on a shared hosting database. I have to do this within 4 days from today, so please help if you can.
I just blogged about this new tool here:
It should help with deploying your membership data to your remote hoster.
I'm trying to do something slightly different on my setup here. SQL Server 2005 Express was installed for me by a third-party app and I want to connect to that.
When I ran the aspnet_regsql utility I was not able to connect to the SQL server, despite the fact that I could connect using SQL Server Management Studio Express. I then found that it worked perfectly if I put the named-pipes name into the server name box.
I'm now having trouble modifying the connection strings for it. (I'm trying to use WebParts, by the way.)
I've placed this in to my web.config:
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="Data Source=HELMVMSVR1/HELM;Initial Catalog=aspnetdb;Integrated Security=True" providerName="System.Data.SqlClient"/>
</connectionStrings>
but it doesn't work, says it cannot connect.
I also tried modifying the value in the IIS snap in to:
data source=.\HELM;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true
(so basically putting in the correct instance name)
The error I get in the second case above is:
An error occurred during the execution of the SQL file 'InstallCommon.sql'. The SQL error number is 1802 and the SqlException message is: CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
CREATE FILE encountered operating system error 5(error not found) while attempting to open or create the physical file 'C:\INETPUB\WWWROOT\WEBARTS-0-0-8-0\APP_DATA\ASPNETDB_TMP.MDF'.
Creating the ASPNETDB_d6399dae01f648d292811428a592b7dc database...
Can anyone help?
Hi Daniel,
Can you send me an email (scottgu@microsoft.com) with more details about this problem? I can then help you get it working.
Hi Scott
I am using ASP.NET 2 and SQL Server 2000, which I've set up as the membership, personalization and role provider, with forms authentication. It all works fine in debug mode, but I cannot log in through an anonymous browser. I tried checking permissions on the db, but still no access. Any clue?
thanx
Hi James,
What does your database connection string look like? Does it use Windows authentication or SQL credentials to connect?
These two articles should help walk through how to connect to SQL using each approach:
I'm getting the error:
"Login failed for user pc01\ASPNET" when accessing the database created on sqlserver express. What kind of permissions should this DB need?
Thanks in advance
My conn string is:
<connectionStrings>
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=mydb;Integrated Security=True" providerName="System.Data.SqlClient"/>
</connectionStrings>
Hi, Scott
How can I get all of a user's information (user ID, email, ...) on other pages after login?
I'm using the Login control and building my own membership provider.
Hi Neo,
Scott Mitchell has a great series here that I recommend:
It covers how to use the Membership support in more detail.
Hi JP,
You'll want to make sure that the ASPNET worker process account has read and write ACL access to the SQL Express database file, and the /app_data directory it is contained within.
You can do this by right-clicking on the app_data folder and then grant the ASPNET worker process account permissions to it.
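From a command prompt, that grant might look something like this (the path is a placeholder, and this assumes the local ASPNET account used on Windows XP/IIS 5.x; on Windows Server 2003 the worker-process account is typically NETWORK SERVICE):

```
cacls "C:\Inetpub\wwwroot\MySite\App_Data" /T /E /G ASPNET:C
```

Here /T applies the change to the whole subtree, /E edits the existing ACL rather than replacing it, and C grants change (read/write) access.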
I get the following message when trying to connect to a remote SQL 2000 server with SQL Server Management Studio: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server, Error: -2)"
Works perfectly well when I connect remotely to the SQL Server instance with Enterprise Manager. Any ideas and assistance on what may be causing the different experience will be greatly appreciated.
Hi Zvi,
My guess is that there might be some port conflict in terms of configuration - since timeout errors usually mean the tool can't connect at all.
What is your configuration?
Great post... but of course I'm here because I've had a problem.
I've used an SQL 2000 db and followed your instructions, however while the ASP.NET Config site works perfectly (I can create accounts, manage roles etc) I cannot log-in using my login page.
Strangely, if I use the CreateUserWizard to create an account on my site, the user is created and logged in... so my DB/application relationship would seem to be correct.
My login page just contains an unaltered login control. Nothing fancy...
When I log in I just keep getting the 'Login Failed' message.
And yes, I've checked CAPS aren't on! ;)
Am I missing something basic?
Thanks Scott
ARGH!!!!!
Problem solved...
I noticed I had a Login1_Authenticate module that I didn't remember writing... I must have double-clicked on the control.
No wonder the login wasn't working. It was going to the empty module.
Anyway, great post Scott.
Shame you have to deal with the rest of us bumbling fools.
I have read this and hundreds, yes hundreds, of posts about connecting to SQL Server Express outside of the Visual Studio IDE. I have tried numerous walkthroughs and web.config strings from many forums without success. Before I toss in the towel and start looking for alternatives as a back end for a small web application that works in VS, am I missing something, or am I just wasting time trying to deploy SQL Express? I do not want to ramp up to full SQL Server. It would seem from all the posts on Google that the Express version is far from easy to connect to when deployed in IIS.
Any unbiased comments appreciated
Thanks, a helpful site.
Kim
Hi Kim,
Can you send me email (scottgu@microsoft.com) with the specifics of what you are trying to do? I'm not entirely sure I understand your scenario. Are you looking for a connection string that allows you to connect to a SQL Express database? Or are you looking for a way to "upsize" a SQL Express database to SQL Server for a hosting environment?
Send me an email with more details and I'd be happy to help.
Thanks for responding Scott,
My apologies for the frustrated post. I think I may have finally resolved my connection woes with SQL Express by switching to SQL authentication, creating a user, and using those credentials to create a new data connection within Visual Studio Server Explorer.
I then tested a SqlDataSource on one of my applications forms using this replacement connection and it worked when run under IIS in IE. I will send more details in a followup email, after more testing.
Thanks again Scott
Scott! you are simply the best!
That sounds like the fix - I bet your SQL server was not allowing your IIS account to access it using Windows Authentication. You can either use SQL Authentication, or go into the SQL Manager and grant the IIS account access to the database.
I was wondering; Is there a way to link multiple membership datastores to create a hierarchy where a global datastore role could be a member of a subsystem's datastore role? As in a AD structure where a Domain Admin group or role can be added to a local machines Admin group (role). Can this work with the 'Out of the box' Membership provider or would I have to create a custom provider to accomplish this?
Hi Harold,
You can do this with the ActiveDirectoryMembership provider if you are calling into an Active Directory in a star/tree configuration (meaning it already aggregates users in a domain).
If you want to aggregate across multiple separate stores, then you'd need to create a custom membership provider.
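A rough sketch of what such a delegating custom provider could look like (the class name and the fallback provider names are invented; a real provider would need to override many more members of MembershipProvider):

```csharp
using System.Web.Security;

// Hypothetical provider that checks credentials against several inner
// providers in turn -- e.g. one registered provider per membership store.
public class AggregatingMembershipProvider : SqlMembershipProvider
{
    // Names of other providers registered in web.config to fall back to.
    private readonly string[] fallbackProviders = { "StoreA", "StoreB" };

    public override bool ValidateUser(string username, string password)
    {
        // Try this provider's own store first...
        if (base.ValidateUser(username, password))
            return true;

        // ...then each of the other configured stores.
        foreach (string name in fallbackProviders)
        {
            MembershipProvider p = Membership.Providers[name];
            if (p != null && p.ValidateUser(username, password))
                return true;
        }
        return false;
    }
}
```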
Thanks Scott! By the way, impressive blog!!
Harold
I posted my problem in the forums and they redirected me here, and yet I am not clear. My problem is:
I need to connect to a test database server's aspnetdb database. When I try to open the website administration tool's security tab I get.
I tried to open the aspnet_SchemaVersions and the table is empty...
would running the command
aspnet_regsql.exe -E -A all -S servername
again solve this problem?...
your help is very much appreciated.
Hi Sankar,
I believe what is happening is that you have an older ASP.NET Beta schema installed on that database. Can you delete the schema and make sure you are running the aspnet_regsql tool that shipped with ASP.NET 2.0 (the final release) to recreate it?
Hi, Scott,
I'm trying to start working with profiles in ASP.NET so I've created aspnetdb database via aspnet_regsql and it was successful. Now I try to select a provider via ASP.NET Web Application Administration tool. And although connection string in machine.config appears to be correct
<add name="LocalSqlServer" connectionString="data source=localhost;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" />
but i keep getting "Could not establish a connection to the database" message when hitting "test" href.
Even if I add section to web.config file:
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer"
connectionString="Data Source=localhost;
Initial Catalog=aspnetdb;
Integrated Security=True"
providerName="System.Data.SqlClient"/>
- the error is still the same... :(
p.s. DB is really created and I use MS SQL Express server that is running and its tables are visible via Server explorer in VS 2005
Hi Alexander,
Are you running this using IIS or the built-in VS web-server? If you are running with IIS, then you need to make sure that the ASP.NET/IIS worker process account has access to the SQL database you are trying to connect to.
I've read all posts fairly thoroughly without specific answer. I act as both DBA and developer and usually do corporate intranet apps, so I want to confirm I'm configuring security properly on the database.
If using Forms authentication, do I simply need to grant the anonymous IIS account EXEC permissions on the stored procedures? Does it need to be a member of the data reader or data writer SQL Server roles?
If using Windows authentication, do I simply create a SQL Server group and give them the above permissions, then add each Windows account to this group?
Thanks as always.
Dan
I've tried the same thing on my home notebook, where I had Windows XP Home SP2, and there were no problems with it; I used all default settings for localhost. As you know, there's no IIS on Home Edition.
OK, here I have Windows XP Professional and IIS really HAS BEEN installed, I guess not correctly, so I removed it. I had a suspicion that IIS was in some way interfering with the VS web server, but now it doesn't work anyway! :( Please advise...
I've found the problem: something is wrong with the password. When I run the SQL Express server in 'local system mode', i.e. without a password, the connection is successful. But that isn't right overall: I want to use Windows authentication (I am part of a domain at the office). My Windows login/password doesn't work when I try to specify it, and I don't have a 'domain' textbox there, although I should have one according to MSDN...
The windows identity you'll use when connecting to the SQL database will be the process identity of the worker process in IIS - and not your own windows identity.
Can you open up the SQL Database Admin tool and make sure that the IIS account has access to the database?
Hello Scott, I'm having the same problem as Alexander. Is the SQL Database Admin tool you refer to SQL Server Configuration Manager (I use SQL Server 2005)? If so, what do I need to tweak?
I'm using SQL Server for Role Management and the WSAT tool seems to be working correctly. But I'm getting the same error message as Brunedito: "Cannot open database "aspnetdb" requested by the login. The login failed." If it's because of a named instance of SQL Express on my machine, how do I remove/remedy it??
Using SQL Config Manager, I stopped the SQLExpress service and now get the error message: "System.Data.SqlClient.SqlException: Cannot open database "aspnetdb" requested by the login. The login failed."
Any help appreciated!
Anthony :-)
Hi Anthony,
Are you using a SQL Express or a SQL Server database? If SQL Server, how did you configure the providers within your web.config file?
Here are two common issues that people run into when registering new providers that you want to watch out for:
Hi Azlan,
Can you provide more details on the error you are running into? I'm not sure which of the issues above you were referring to.
You know, I think I see why this is happening, especially if I choose to only export my schema. If I only export my schema, then the aspnet_SchemaVersions table does not get populated with necessary rows for the CheckSchemaVersions stored procedure to verify.
This is a pain, because I want to deploy my app and database fresh, with no data, just the schema. But the Sql Database Publishing Wizard doesn't give you the option to selectively choose which tables you want to script to just the schema or schema and data.
So, I guess the best solution would be to just disable this schema checking altogether, so I can just export my local database schema only, and have it work. If I modify the CheckSchemaVersion sp to always return a successful value, will that do the trick?
sorry Scoot, noob error i didn't modify my web.config file. i was wondering where did the password we set in the web site administration tool is stored? I can't find it in the aspnet_users table? If so hw can i unhashed it?
Glad you got it working. When you create new users the password is one-way hashed and stored within the database. This means that even if the database is compromised, hackers can't reverse engineer the origional password.
Hi Adam,
Is there a reason why you can't just re-create all of the tables that the aspnet_regsql tool creates on your remote host? This might be the easiest approach, and they don't consume much storage so I don't think there is any performance or storage reason not to re-create them. This would probably simplify life considerably if you could just keep them (but not use them).
Thanx 4 the fast reply. Well I'm thinking to let my user see their password in a page. So Maybe i need to see where their password is stored in the database .... Any suggestion on how do i do this?
i've just trying out the your tutorial and i would like to expand it a bit. I wanted to integrate dropdownlist with role admin. Each role won't get the same values in the drop down list. do u have info regarding of this or some links that i can study?
Hey Scott,
My database is a mixture of the aspnet_regsql tables and my own tables. In my dev and stage environments, I've got a bunch of garbage data in there, that I don't want to publish up to the production server on a clean "from scratch" publish. The Database Publishing wizard doesn't give you an option to publish certain tables with schema only and others with schema and data, it's all or nothing.
So, if I want to publish just the schema, then I end up missing the schema version rows that the aspnet_regsql script creates.
So, what I ended up doing was just altering the stored procedure that checks the schema version, and made it always return the successful return value. This worked like a charm, now I can export schema only and don't get the "requires a database schema compatible with schema version '1'" error anymore. HTH someone else out there with a similar issue.
PingBack from
Am attempting to run aspnet_regsql.exe to install the tables and storedprocs on to an instance of SQL Server 2000.
However, the process fails with an error code of 515 and I have been able to find very limited information to assist in understanding/resolving this.
The text of the error message reads:
Cannot insert the value NULL into column 'Column', table 'tempdb.dbo.#aspnet_Permissions____0000000026', column does not allow nulls. INSERT fails.
Both GUI and command line give the same response so I'm confident it's not a typographical error.
I am using the sa account credentials when running the program so wouldn't have thought that permissions would have been an issue.
I would be grateful if you could shed any light on what might be going on with this.
The article was nice helping. And i 've a query. I need to have the backend for my personalization,membership and role provider to be a another DB, rather that SQL Servers(Say DB2).
Any idea, how to incorporate to have another database for my personlization activities.
Thanks for your views.
Regards,
NavKrish
PingBack from
Would be grateful if you assist me with an issue I am having when attempting to install the ASP.NET application services to an instance of SQL Server 2000.
After selecting the target server and database the subsequent step fails with a SQL error number 515. The process trying to execute is InstallCommon.sql
SqlException message is:
Cannot insert the value NULL into column 'Column', table 'tempdb.dbo.#aspnet_permissions_____0000000001A'; column does not allow nulls.
The same error occurs if running from command line or GUI, or from the local machine or remote.
I'm using the sa account so wouldn't have thought that permissions were an issue.
Thanks in advance for any insight that you can provide.
thanks for article, but am having problem. I ran aspnet_regsql and using exisiting DB to store necessary table and sprocs. have following in my web.config
<add name="somename" connectionString="server=sqlsvrname;Initial Catalog=existingdbname;UID=appuser;Password=apppwd" />
<sessionState mode="SQLServer" allowCustomSqlDatabase="true" sqlConnectionString="data source=sqlsvrname;database=existingdbname;user id=appuser;password=apppwd" cookieless="false" timeout="40"/>
getting following err msg
'Unable to use SQL Server because ASP.NET version 2.0 Session State is not installed on the SQL server. Please install ASP.NET Session State SQL Server version 2.0 or above'
PingBack from
PingBack from
I'm using SQL server 2005. I've followed the steps and the Website Admin Tool is working perfectly on my local machine. However, I would now like to connect to a remote server. I'm still following the steps by changing my web.config file so that it would point to the remote server, but I keep on getting this error whenever I try to test the provider connection: ."
This is my web.config:
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="Data Source=DATAWAREHOUSE;Initial Catalog=aspnetdb;Integrated Security=SPPI; User ID= ******; Password= *******" providerName="System.Data.SqlClient"/>
</connectionStrings>
Can you help me on this? By the way, the remote server uses SQL Server 2005 and I've already setup the aspnetdb along with my SQL login credentials.
Thanks and hope to hear from you. :-)
Hi Sp,
Typically that error means that the schema wasn't correctly installed in that target database location.
Can you double check that it was set correctly?
Is there any way to export the whole database to a remote server like a shared hosting.
I can export the database tables and views using databse export wizard in SQL server 2005. but this does not include other objects like stored procedures and schemas.
if you have answered this question already and I missed the post I apologize in advance.
Hi Bob,
Here is a good two part blog posting I did that describes how to export a SQL database to a .SQL file that you can then FTP up to a hosted server and upload into a database there:
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
One of my favorite features in Orcas is the ability to leverage the ASP.NET Application services from
This Teched in Orlando the new client application framework called " Acropolis " will be announced. From
Pingback from Greek style @ TechEd « Grumpy Wookie
As a test over the last couple of days I tried to tie in a little test application to my Community Server
A friend of mine requested an ASP.NET photo gallery and I thought the Personal Web Site Starter Kit could
Pingback from configuring ASP.NET 2.0 Application Services to use SQL Server 2000 or SQL Server 2005 « Open source for developer
Pingback from setup membership api in your project with database schema « Razwan Kader Personal web
Installing ASP.NET Membership, Roles and Profiles support in SQL Server
Pingback from ASP.NET 2.0 Membership, Roles, Forms Authentication, and Security Resources - ScottGu’s Blog « vincenthome’s Software Development
Pingback from Tony YangYang’s Online Word » Let ASP.NET 2.0 create “ASPNETDB.mdf” in my own database
Pingback from Wintivity ??? » Blog Archive » Forms Based Authentication in SharePoint using SQL MembershipProvider
通过ASP.NET控件实现简单登录系统
Pingback from Encapsulating Templates for Reuse - Global Point Forum
Pingback from Sky Blog: I have a dream to help me cope with anything. » Configuring an ASP.NET 2.0 Application to Work with Microsoft SQL Server 2000 or SQL Server 2005
This blog will mention the steps to setup membership database to show show how to configure each web
Pingback from autenticazione | hilpers
Pingback from VS2008 using SQL2k as membership provider? | keyongtech
ThebiggestfeaturesbroughtbyASP.NET2.0aremostlikelythenewservicesformembership,roles,...
Pingback from Web Administration Tool Crashes when creating user | keyongtech
Sharing ASP.NET security Database between different applications
Thiscommandwillupdatetheconfigurationinformation(Machine.config)tousethisnewprovider.
E... | http://weblogs.asp.net/scottgu/archive/2005/08/25/423703.aspx | crawl-002 | refinedweb | 6,976 | 65.01 |
I'm not familiar with compat header generation, sorry if the comments below are obvious or plain wrong.
On Mon, Jun 29, 2020 at 05:50:59PM +0200, Jan Beulich wrote: > As was pointed out by "mm: fix public declaration of struct > xen_mem_acquire_resource", we're not currently handling structs > correctly that has uint64_aligned_t fields. #pragma pack(4) suppresses > the necessary alignment even if the type did properly survive (which > it also didn't) in the process of generating the headers. Overall, > with the above mentioned change applied, there's only a latent issue > here afaict, i.e. no other of our interface structs is currently > affected. > > As a result it is clear that using #pragma pack(4) is not an option. > Drop all uses from compat header generation. Make sure > {,u}int64_aligned_t actually survives, such that explicitly aligned > fields will remain aligned. Arrange for {,u}int64_t to be transformed > into a type that's 64 bits wide and 4-byte aligned, by utilizing that > in typedef-s the "aligned" attribute can be used to reduce alignment. > > Note that this changes alignment (but not size) of certain compat > structures, when one or more of their fields use a non-translated struct > as their type(s). This isn't correct, and hence applying alignof() to > such fields requires care, but the prior situation was even worse. Just to clarify my understanding, this means that struct fields that are also structs will need special alignment? (because we no longer have the 4byte packaging). I see from the generated headers that uint64_compat_t is already aligned to 4 bytes, and I assume something similar will be needed for all 8byte types? > There's one change to generated code according to my observations: In > arch_compat_vcpu_op() the runstate area "area" variable would previously > have been put in a just 4-byte aligned stack slot (despite being 8 bytes > in size), whereas now it gets put in an 8-byte aligned location. 
Is there someway that we could spot such changes, maybe building a version of the plain structures with -m32 and comparing against their compat versions? I know we have some compat checking infrastructure, so I wonder if we could use it to avoid issues like the one we had with xen_mem_acquire_resource, as it seems like something that could be programmatically detected. > Signed-off-by: Jan Beulich <jbeul...@suse.com> > > --- a/xen/include/Makefile > +++ b/xen/include/Makefile > @@ -34,15 +34,6 @@ headers-$(CONFIG_XSM_FLASK) += compat/xs > cppflags-y := -include public/xen-compat.h > -DXEN_GENERATING_COMPAT_HEADERS > cppflags-$(CONFIG_X86) += -m32 > > -# 8-byte types are 4-byte aligned on x86_32 ... > -ifeq ($(CONFIG_CC_IS_CLANG),y) > -prefix-$(CONFIG_X86) := \#pragma pack(push, 4) > -suffix-$(CONFIG_X86) := \#pragma pack(pop) > -else > -prefix-$(CONFIG_X86) := \#pragma pack(4) > -suffix-$(CONFIG_X86) := \#pragma pack() > -endif > - > endif > > public-$(CONFIG_X86) := $(wildcard public/arch-x86/*.h public/arch-x86/*/*.h) > @@ -57,16 +48,16 @@ compat/%.h: compat/%.i Makefile $(BASEDI > echo "#define $$id" >>$@.new; \ > echo "#include <xen/compat.h>" >>$@.new; \ > $(if $(filter-out compat/arch-%.h,$@),echo "#include <$(patsubst > compat/%,public/%,$@)>" >>$@.new;) \ > - $(if $(prefix-y),echo "$(prefix-y)" >>$@.new;) \ > grep -v '^# [0-9]' $< | \ > $(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \ > - $(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \ > echo "#endif /* $$id */" >>$@.new > mv -f $@.new $@ > > +.PRECIOUS: compat/%.i > compat/%.i: compat/%.c Makefile > $(CPP) $(filter-out -Wa$(comma)% -include > %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $< > > +.PRECIOUS: compat/%.c Not sure if it's worth mentioning that the .i and .c files are now kept. Roger. | https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg76611.html | CC-MAIN-2020-29 | refinedweb | 574 | 57.47 |
Hide Forgot
Description of problem:
On my updated rawhide installation, pcscd constantly consumes around
35% of the cpu cycles.
Version-Release number of selected component (if applicable):
pcsc-lite-1.3.3-1.fc7
How reproducible:
In my case, just boot the machine.
Additional info:
From fedora-test-list@redhat.com:
On 2/19/07, Ray Strode <rstrode@redhat.com> wrote:
> Miles Lane wrote:
> > Hi,
> >
> > On my updated rawhide installation, pcscd constantly consumes around
> > 35% of the cpu cycles. I have taken to shutting of the service. Any
> > idea how I can troubleshoot this? Is this a bug, or just an issue
> > with my configuration?
> pcscd is a smart card daemon. If you don't use smart cards for
> authentication, you can just turn it off with /sbin/chkconfig pcscd off
>
> It sounds like a bug, can you file it? Running strace -s512 -f -p
> $(/sbin/pidof pcscd) might give some indication what it's doing (or
> getting a backtrace from gdb)
Very strange. When I attached strace, the cpu cycles used dropped
down to nothing. Then, when I stopped pcscd and tried to restart it,
the process won't start up again. I wonder whether strace somehow
caused pcscd to not stop cleanly? I'll file a bug and post with the
bug ID. Should I CC you inside the bug report?
strace -s512 -f -p $(/sbin/pidof pcscd)
Process 2081 attached with 2 threads - interrupt to quit
[pid 2063] select(6, [5], NULL, NULL, NULL <unfinished ...>
Process 2063 detached
Process 2081 detached
Process 2063 detached
[root@hogwarts ~]# /etc/init.d/pcscd stop
Stopping PC/SC smart card daemon (pcscd): [ OK ]
[root@hogwarts ~]# /etc/init.d/pcscd start
Starting PC/SC smart card daemon (pcscd): [FAILED]
/var/log/messages contains:
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:93:GetDaemonPid() Can't
open /var/run/pcscd.pid: No such file or directory
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:415:main() file
/var/run/pcscd.pub already exists.
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:417:main() Maybe another
pcscd is running?
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:420:main() I can't read
process pid from /var/run/pcscd.pid
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:423:main() Remove
/var/run/pcscd.pub and /var/run/pcscd.comm
Feb 19 10:19:41 hogwarts pcscd: pcscdaemon.c:425:main() if pcscd is
not running to clear this message.
- kill all processes called pcscd (if any)
- remove /var/run/pcscd.pub
- start pcscd as: pcscd --foreground --debug
- copy&paste the generated logs in this bug report
thanks
#> pcscd --foreground --debug
pcscdaemon.c:319:main() pcscd set to foreground with debug send to stderr
debuglog.c:211:DebugLogSetLevel() debug level=debug
pcscdaemon.c:533:main() pcsc-lite 1.3.3 daemon ready.
hotplug_libusb.c:394:HPEstablishUSBNotifications() Driver ifd-cyberjack.bundle
does not support IFD_GENERATE_HOTPLUG
Not enough debug info.
Do you still have a pcscd consuming 35% of CPU?
Use Ctrl-C to kill pcscd.
Now try: strace -f -F pcscd --foreground --debug
Created attachment 148403 [details]
strace output
Yes, the cpu cycles are still getting chewed.
Miles:
2 Questions:
1) is this new for 1.3.3?
2) does it go away under any of the following conditions:
a. Temporarily rename /usr/lib/pkcs11/libcoolkeypk11.so
b. Temporarily move /usr/lib/pcsc/drivers/ifd-egate.bundle our of the
drivers directory.
You should restart pcscd and gdm after trying each of these. If the 35% CPU goes
away we can start looking at these components instead.
Thanks.
bob
I am not sure how long ago this problem started. Can you point me to earlier
packages for pcscd that I can test?
It has nothing to do with coolkeys, because I didn't have it installed. I just
tried installing it to see whether that would help, but it didn't. I tried
renaming /usr/lib/pcsc/drivers/ifd-egate.bundle and
/usr/lib/pcsc/drivers/ifd-cyberjack.bundle, but that also did not help.
It looks like pcscd is continuously scanning the USB bus.
I would need some debug from pcscd but you need to recompile pcscd after
patching the file src/hotplug_libusb.c like:
Index: src/hotplug_libusb.c
===================================================================
--- src/hotplug_libusb.c (révision 2408)
+++ src/hotplug_libusb.c (copie de travail)
@@ -39,7 +39,7 @@
#include "sys_generic.h"
#include "hotplug.h"
-#undef DEBUG_HOTPLUG
+#define DEBUG_HOTPLUG
#define ADD_SERIAL_NUMBER
#define BUS_DEVICE_STRSIZE 256
Maybe Bob can provide a RPM with the needed change.
Then start: pcscd --foreground --debug
Setting up the entire Gnome build environment would be difficult for me, since I
am low on disk space. So, it would be very helpful if you could provide the
debug package. Thanks!
done:
hotplug_libusb.c:196:HPReadBundleValues() Increase driverTracker to 96 entries
hotplug_libusb.c:182:HPReadBundleValues() Found driver for: Pertosmart Card Reader
hotplug_libusb.c:182:HPReadBundleValues() Found driver for: Pertosmart Card Reader
hotplug_libusb.c:182:HPReadBundleValues() Found driver for: WB Electronics
Inifinty USB Ulimited
hotplug_libusb.c:182:HPReadBundleValues() Found driver for: REINER SCT CyberJack
hotplug_libusb.c:182:HPReadBundleValues() Found driver for: REINER SCT CyberJack
pp_a
hotplug_libusb.c:234:HPReadBundleValues() Found drivers for 93 readers
hotplug_libusb.c:394:HPEstablishUSBNotifications() Driver openct-ifd.bundle does
not support IFD_GENERATE_HOTPLUG
ifd-egate.i386 0.05-16 installed
openct.i386 0.6.11-2.fc7 installed
pcsc-lite-openct.i386 0.6.11-2.fc7 installed
ccid.i386 1.2.1-1.fc7 installed
ifd-egate.i386 0.05-16 installed
pcsc-tools.i386 1.4.8-1.fc7 installed
pcsc-lite.i386 1.3.3-1.rawhide_bob installed
ctapi-cyberjack-pcsc.i386 2.0.13beta5-2.fc7 installed
pcsc-lite-libs.i386 1.3.3-1.fc7 installed
pcsc-perl.i386 1.4.4-3.fc7 installed
- send the output of: "cat /etc/reader.conf"
- rename /usr/lib/pcsc/drivers to /usr/lib/pcsc/drivers.old and start again: send the _complete_ pcscd
logs and check the CPU utilisation.
Created attachment 148551 [details]
reader.conf
Created attachment 148553 [details]
output of "strace -f -F pcscd --foreground --debug"
Moving the /usr/lib/pcsc/drivers to /usr/lib/pcsc/drivers.bak causes pcscd to no
longer chew cpu cycles. This doesn't seem too surprising, given that:
hotplug_libusb.c:109:HPReadBundleValues() Cannot open PC/SC drivers directory:
/usr/lib/pcsc/drivers
hotplug_libusb.c:110:HPReadBundleValues() Disabling USB support for pcscd.
Now create an empty directory /usr/lib/pcsc/drivers and move the drivers from
/usr/lib/pcsc/drivers.bak one after the other in the new directory. After each
move restart pcscd to check if you have the problem or not.
I suggest to start with ccid.i386 as I know it well.
The idea is to know if the problem is in pcscd itself of in a driver loaded by
pcscd.
Having these three directories present causes no trouble: ifd-ccid.bundle,
ifd-egate.bundle, serial
Having either of these two directories present causes the CPU usage to jump:
ifd-cyberjack.bundle and openct-ifd.bundle
Ok. So the bug(s) should be in these two drivers.
Bob, can you try to reproduce the problem?
Miles, where did ifd-cyberjack.bundle and openct-ifd.bundle come from?
bob
ctapi-cyberjack-pcsc-2.0.13beta5-2.fc7
pcsc-lite-openct-0.6.11-2.fc7
I've gone to long with out responding to this bug.
OK, I have 2 theories here both revolve around the fact that
ifd-cyberjack.bundle and openct-ifd.bundle are using the old 'wakeup and poll
method'.
Theory 1 (most likely:). Jack found a bug in ifd-egate-0.05-16 where is you
weren't using the new polling method, you would wind up in a busy loop (bug
232983). He fixed that in ifd-egate-0.05-17.
If ifd-cyberjack.bundle or openct-ifd.bundle is loaded, pcsc-lite will switch
back to the old polling method rather than the wake up with udev method.
We can test theory 1 either by removing ifd-egate or using the latest version of
ifd-egate and seeing if the problem goes away.
Theory 2 (less likely). There is a missing sleep in pcsc-lite when we fall back
to the old poll method. If loading ifd-cyberjack.bundle by itself triggers the
CPU jump, then it's likely that theory 2 is the problem.
*** Bug 247433: | https://partner-bugzilla.redhat.com/show_bug.cgi?id=229263 | CC-MAIN-2020-05 | refinedweb | 1,393 | 54.18 |
College football scoreboard
Brigham Young 14, Ole Miss 13
South Florida 23, Notre Dame 20
Jackson State 42, Concordia 2
LSU 40, Oregon 27
Grambling 21, Alcorn State 14
Boise State 35, Georgia 21

vicksburgpost.com • Every day since 1883
SUNDAY, SEPTEMBER 4, 2011 • $1.50

Art as business: Officials say creativity vital, Business • B9
A dwindling base

[Map: Edwards Annexation Plan, by Paul Barry•The Vicksburg Post. Legend: current boundary; area of proposed annexation. Labels include the Mississippi River, the town of Edwards, Interstate 20, U.S. 80, Mississippi 22 and 467, Old Cemetery Road, Mt. Moriah Rd. and Vaudeville.]
The town of Edwards has sought to annex in all directions around its current boundaries.

WEATHER (C1): Today: showers; high of 83. Tonight: showers; low of 71. Mississippi River: 17.9 feet; rose: 0.00 foot; flood stage: 43 feet.

DEATH (A9): • Alean D. Burse

Magic, music at SCHS
THIS WEEK IN THE CIVIL WAR
Following the defeat at Bull Run, Maj. Gen. George McClellan's forces spend the late summer weeks training, drilling and training some more. Some observers watch and wait impatiently, critical of McClellan's weeks of drills while urging a resumption of battle. Yet Lincoln is willing so far to give McClellan time to pull together a unified fighting force after its panicked, disordered retreat from Bull Run. Inaction ultimately will be McClellan's undoing in months further ahead.
INDEX: Business B9 • Puzzles B8 • Dear Abby B7 • Editorial A4 • People/TV B7
Advertising....601-636-4545 Classifieds....... 601-636-SELL Circulation......601-636-4545 News................601-636-4545
See A2 for e-mail addresses
ONLINE • VOLUME 129, NUMBER 247 • 4 SECTIONS
Tiny Edwards battling annexation, services
By Danny Barrett Jr.
dbarrett@vicksburgpost.com
EDWARDS — Growth and beauty are words often spoken when people talk about the past and present in Edwards. “It was one of the most beautiful little Mississippi towns when I moved here,” said 56-year resident Dorothy Brasfield, recalling a thriving downtown during the mid-20th century that boasted groceries, banks, five-and-dime stores and a dealership to buy automobiles to hit the open road, usually U.S. 80. Annexation that would expand the town’s boundaries fourfold, construction of the first grocery store in Edwards in decades and whether a banking institution will return to town are today’s conversation pieces. The expansion plan has lost steam in the courts in recent months, but it’s an issue Mayor R.L. Perkins still believes can be a positive for the west Hinds County community. “In order for your town to grow properly, you need a solid tax base,” Perkins said, conceding the key benefit of adding 5.3 square miles to the town’s 1.7 square miles is extra property tax revenue, though at the same 47 mills currently collected. About 20 miles east of Vicksburg, the town of Edwards shrank in the past decade — coming in at 1,034 residents in the 2010 count, down from 1,347 in 2000. In 2008, a move to expand south and west of Mississippi 467 ended amid outcries from property owners against an extra tax bill. A year later, the city hired Oxford planning consultant Slaughter & Associates and formed maps proposing added growth north of Interstate 20, east to Buck Reed Road and west about a half-mile past Jones Road. About 800 people would be annexed in the process. A 2010 case against 11 landowners and firms who hired lawyers remains active in Hinds County Chancery Court, though fierce opposition ...
See Edwards, Page A9.

David Jackson•The Vicksburg Post
Edwards Mayor R. L. Perkins talks about the town’s future.

Thousands lose power as storm drenches Southeast
By The Associated Press
JEAN LAFITTE, La. — ... Mississippi’s coastal casinos, however, were open and reporting brisk business. In Jean Lafitte, water was a foot deep under Eva Alexie’s house, which is raised about eight feet off the flat ground. “I should be used to this,” said Alexie, a 76-year-old storm veteran who lost a home to Hurricane Ike in ...
See Lee, Page A9.
Documents reveal ties between Libya, CIA
By The Associated Press
The site of a new Dollar General store on Jackson Street in Edwards.
Edward’s Cleaners next to an empty storefront on S. Main Street in Edwards.
TRIPOLI, Libya — The files detail the CIA’s efforts to turn Libya’s mercurial leader from foe to ally and provide an embarrassing example of the U.S. administration’s collaboration with authoritarian regimes in the war on terror. The documents, among tens of thousands found in ...
See Libya, Page A9.
A2 • Sunday, September 4, 2011 • The Vicksburg Post
A HOUSE DIVIDED
Congress returns unpopular, facing challenges
By The Associated Press
WASHINGTON — Congress returns to work this term ...
The Associated Press
In this 2009 photo, President Barack Obama is greeted on Capitol Hill in Washington after delivering a speech to a joint session of Congress.
‘I’m not the least bit surprised that the rating of Congress is abysmal. If we could do the work that we are supposed to be doing in a reasonable and timely way, it would improve.’
Harry Reid

‘Everyone complain all you want about Congress. You should complain plenty. But don’t think the country is about to fall apart because of what’s going on in Washington.’
Pat Toomey
COMMUNITY CALENDAR
CHURCHES
Providence M.B. — Revival, 7 p.m. Monday-Friday; the Revs. Michael Wesley Sr. and Earl Cosey Jr.; the Rev. Earl Cosey Sr., pastor; 7070 Fisher Ferry Road.
CLUBS
Knights of Columbus — Noon-2 p.m. Monday; barbecue lunch, all you can eat; $10 adults and $5 children; Fisher Ferry Road.
American Legion Post 3 — 2012 membership drive, 1 p.m. Monday; barbecue chicken or rib plates with trimmings, $6 per plate or free for new 2012 members; 1712 Monroe St.
Vicksburg Kiwanis — Noon Tuesday, Jacques’ Cafe; Ricky Rudd, Team Depot.
VAMP — Noon Tuesday; Leslie Horton, director of the Tobacco Free Coalition of Warren and Claiborne counties; Heritage Buffet, Ameristar Casino.
Lions — Noon Wednesday; Jeff Riggs, Afghanistan deployment; Toney’s.
Retired Education Personnel of Vicksburg Warren County — 1 p.m. Thursday; auditorium of the Vicksburg Hinds Community College; selection of member of the year and election of officers; 601-638-4506.
Military Order of the Purple Heart and Ladies Auxiliary — Meeting 9 a.m. Thursday; 11, all combat wounded veterans are invited to lunch;
Charlie Tolliver, 601-636-9487, or Edna Hearn, 601-529-2499; Battlefield Inn.
Vicksburg Toastmasters Club No. 2052 — Thursday meeting is canceled.
TIES-Vicksburg Young Professionals — 5:30-7:30 p.m. Sept. 13; monthly social, sponsored by Physician Practices of River Region; Drs. Andrew Nye, Carlos Latorre, Dedri Ivory and Drishna Goli, special guests; One Medical Plaza West Campus, North Frontage Road.
PUBLIC PROGRAMS
Kiddie City — 9 a.m. until Tuesday; back-to-school open house; 601-638-8109; 1783 M.L. King Blvd.
Vicksburg Al-Anon — Noon Tuesday; second floor, First Presbyterian Church, 1501 Cherry St.; 601-634-0152.
River City Mended Hearts — 5 p.m. Tuesday; River Region Medical Center, Rooms C and D; family and friends welcome.
Divorce Care — 6 p.m. Tuesday; video seminar/support group for those separated or divorced; free program; Mafan Building, 1315 Adams St.; 601-636-2493.
AFL-CIO — Labor Day Celebration, noon Monday; Smith Park, Jackson; DJ Outlaw; free food and entertainment for children; meet and speak with candidates for public office.
Vicksburg Housing Authority Career Center — Job opportunities for residents only; Manney Murphy, 601-638-1661 or 601-738-8140.
Serenity Overeaters Anonymous — 6-7 p.m. Wednesday, Bowmar Baptist Church, Room 102C; 601-638-0011.
Al-Anon — 7:30 p.m. Wednesday; family, friends of alcoholics and addicts; 502 Dabney Ave.; 601-636-1134.
Sock Knitting Workshop — 10 a.m.-noon Thursdays in September (8, 15, 22 and 29); Brenda Harrower, presenter;
$80 members and $90 nonmembers, includes all supplies; space is limited and reservations are required; SCHF, 601-631-2997, or e-mail info@southernculture.org.
BENEFITS
Shopping Extravaganza — Outlet at Vicksburg; tickets $15 each; being sold until Oct. 7 by high school DECA members and marketing students from Hinds Junior College; Donna Cook, 601-629-0608 or dkcook@hindscc.edu.
In Loving Memory of Jer’Lisa Nicole Minor
Little did we know that God would call you home on 9/04/2009. We were heartbroken because we love you so much. We are grateful to Him for blessing us with such a beautiful gift. Forever you will be in our hearts. We love you and miss you so very much. Until we meet again, Your Family

Eugene Davis
6/9/1942 - 7/31/2011
Thank You
Our hearts have been lifted in our time of sorrow by all of your kind acts. The visits, calls, cards and words of encouragement were, are and will always be greatly appreciated. Words cannot express our gratitude. May God keep you all. Thank you. The family of the late Eugene Davis
A3 • Sunday, September 4, 2011 • The Vicksburg Post
President to Congress: Pass transportation bill
WASHINGTON — President Barack Obama is appealing to Congress to pass a transportation bill that would put money in the pipeline for roads and construction jobs, arguing that it’s an economic imperative. Republicans say they support passing the bill, but Obama says time is running out and “political posturing” may stand in the way. “There’s no reason to put more jobs at risk in an industry that has been one of the hardest-hit in this recession,” he said.
[Photo: President Barack Obama]
WASHINGTON
BY THE ASSOCIATED PRESS
Architect: MLK inscription stays
WASHINGTON — The executive architect of the new Martin Luther King Jr. Memorial in Washington, Ed Jackson Jr., said an inscription on the monument won’t be changed, despite criticism from poet Maya Angelou that it makes King sound arrogant.
Congressman defends plan to skip speech
PALATINE, Ill. — Rep. Joe Walsh is defending his plan to skip the address. Walsh said at a meeting of Republicans in his district that such a move should be reserved for “momentous” topics like war, not what he called a political speech. Walsh said he will still read Obama’s speech.
A4 • Sunday, September 4, 2011
Mississippi’s Hispanic population grew 105.9 percent between 2000 and 2010 with Hispanics now comprising 2.7 percent of the state’s total population.
Growth of state’s Hispanic population an economic reality
OUR OPINION
Take pride in Vicksburg coin
Vicksburg is a collectible. So read the headline in Wednesday’s Vicksburg Post after the unveiling of the latest in the America the Beautiful quarter series. The quarter depicts the USS Cairo, a Union gunboat sunk in the Yazoo River in December 1862. The residents of Vicksburg should hold their heads high with pride on their sleeves for all to see because for this city — for any city — this is a big deal. Only 56 sites, one in each of the 50 states, five territories and the District of Columbia, will be featured on the quarters. The series will feature national parks and other sites of national historical significance. The choice of Vicksburg is a feather in the cap of this community.
More than 2,000 students and 400 others participated in the Tuesday ceremony that brought the “collectible” headline. Uncirculated rolls of coins were sold as well, giving residents a chance to forever hold Vicksburg as a collectible. The tails side image depicts the Cairo on the Yazoo River as it would have been seen when it served the Union Navy during the Civil War. Inscriptions are VICKSBURG, MISSISSIPPI, 2011 and E PLURIBUS UNUM — Latin for “out of many, one.” Once circulated, the 64 million quarters will travel the world. Hopefully, many who find themselves with a Vicksburg quarter will delve a bit deeper into the story of the gunboat. That exploration might lead to
further probes into events surrounding the sinking and Vicksburg’s key role in the American Civil War. It also provides our own residents the chance to further enhance their knowledge of this city’s role in the war. The story of the Cairo is fascinating. To try to retell the story here would be impossible. But the story of the Cairo is readily available at the Vicksburg National Military Park — as are the remains of the boat that lay on the floor of the Yazoo River for nearly 100 years before being raised in the 1960s. Mississippi has other historical sites that could have been chosen. But the U.S. Mint selected only one. Vicksburg, grab a roll or two. This is special.
Reversal of police car decision a win-win The people have spoken and the City of Vicksburg has listened. A win-win. Two weeks ago, the Vicksburg Board of Mayor and Aldermen passed a policy change that would require most police officers to drive their personal cars to and from work so their cruisers could remain at police headquarters overnight. The move, the city said, would save the city money in gasoline. We applaud the city’s effort to try to find belts to tighten and purse strings to pinch. Then came the outcry. Aldermen said they were inundated with concerns from the community
about the plan. Residents said they felt safer with marked patrol cars parked in neighborhoods across the city. Criminals, we would hope, also would be less likely to act out with a patrol car in proximity. Police Chief Walter Armstrong, who favored the reversal of the policy, also pointed to emergency response times. He pointed to a storm on Aug. 20 when a shift was called back in to work early to assist with emergency response. The original decision came on the heels of a community meeting in which Armstrong pleaded for neighborhood involvement in fighting crime.
Residents got involved and influenced a change in policy that will benefit everyone. Cuts still will have to be found as the city grapples with a budget shortfall. City and Warren County officials are winding up budgets for the fiscal year that begins Oct. 1. In our form of government, officials are elected by the people to do the people’s work. In this case, the people spoke; the elected officials listened. Exactly how local governments should work. It’s a Democracy.
Economic doors opening in area As Vicksburg bid farewell to a number of workers at one longstanding company on Monday, another in the area promised grand plans with many jobs, and more potentially are in the wings. A LeTourneau Technologies-built Douglas 240-C class shallow-water rig left the facility Monday, leaving in its wake much uncertainty. Joy Global of Milwaukee purchased the company for $1.1 billion. As many as 250 workers, it is hoped, will remain at the plant, but whether the company will build big rigs or ever get back to the 600 employees it once had is uncertain. Some of those workers, though, also had good news on Monday as shipbuilder St. John Enterprise Inc. of Garyville, La., announced a $32 million upgrade to a Madison Parish shipbuilding facility. The 100,000-square-foot, 56-acre site once was home to a
Northrop Grumman plant, before that company moved to the Gulf Coast. St. John’s plans are to hire 104 people in a year. Company CEO Ron Lewis on Monday said he asked for 100 resumes and received 169 from people living in Louisiana. The inference is that St. John not only wants to expand, but to hire locally. That could be a boon for not only former LeTourneau workers, but for others weathering a sour economy. Mississippi’s unemployment rate for July sat at 10.4 percent, and Warren County’s was about a point higher. Louisiana’s unemployment rate for July was 7.6 percent. Statewide, the Mississippi Legislature on Friday approved a plan to issue bonds to lure two companies and about 1,750 jobs to the state. The Legislature approved the floating of bonds and tax incentives of about $175 million to two
companies — a California-based solar energy company, expected to employ 900 people in Lowndes County, and HCL CleanTech, which plans to locate its headquarters in Olive Branch and build satellite facilities in the state. The Legislature also gave the OK to expand an existing tax rebate to help Huntington Ingalls add 3,000 shipbuilding jobs on the Gulf Coast. The state issues bonds as long-term debt in efforts to fund large projects. In this case, with the Legislature’s authorization, the state Bond Commission will meet Sept. 19 to issue the bonds. When the LeTourneau rig began its trip south en route to Sabine Pass, Texas, it took with it many jobs and left much uncertainty as to its future. But on the same day, hope arrived on both sides of the river — hope for more jobs, more growth and more prosperity. And that is a good thing.

by 2050, that percentage is projected to expand to 30 percent. This country’s Hispanic population grew 43 percent over the last decade. Nine states, including Mississippi, saw their Hispanic population double over the last decade. Mississippi’s Hispanic population grew 105.9 percent between 2000 and 2010 with Hispanics now comprising 2.7 percent of the state’s total population. The report establishes that fact to be most prominent in states with large Hispanic populations like California, Nevada, Texas and Colorado. Mississippi Hispanics have a higher education (four-year or two-year degrees) attainment rate of 18.2 percent compared to 33.8 percent for white Mississippians. The national average higher education attainment rate for Hispanics is 18.6 percent while for whites the percentage is 42.2 percent. •
SALTER
Sid Salter is a syndicated columnist. Contact him at 662-3252506 or ssalter@library.msstate.edu.
Sunday, September 4, 2011
The Vicksburg Post
WEEK IN Vicksburg
A roller coaster of 90s served as high temps during the week in Vicksburg. Overnight lows ranged from the upper 60s to low 70s. No rainfall was recorded locally. The Mississippi River dropped on the local gauge from 19.3 feet to 17.9 before steadying at 18. Forecasters expected it to recede slightly, predicting a reading of 17.8 feet for today. More than 7 pounds of synthetic marijuana and a collection of Xanax and guns were seized by law enforcement from the Pecan Ridge Mart at Freetown and Culkin roads. Two store employees were arrested in the raid. City officials amended a previous policy restricting the use of city vehicles after working hours, allowing police to continue taking cars home. The decision was made after public outcry, with many citizens telling officials they like seeing marked patrol cars in their neighborhoods. The old Magnolia Avenue High School, also known as Bowman High School, is being featured on an Internet website as having a “distinguished” role in a Rockefeller Foundation study. The site features history, photos, alumni memories and other aspects of the school. St. John Enterprises Inc. of Garyville, La., announced plans to invest millions in barge manufacturing in Madison Parish — just across the Mississippi River from Vicksburg. The announcement was made days before an unrelated announcement in which LeTourneau Technologies’ parent representatives said they are planning the sale of local drilling products operations to Houston-based Cameron International Corp. Derrick Collins, a former Porters Chapel Academy coach, pleaded guilty in Warren County Circuit Court to bank robbery and eluding law enforcement. He was sentenced to 15 years in prison. Thousands gathered at the USS Cairo museum in the Military Park to watch the launch of the Vicksburg America the Beautiful quarter, part of a U.S. Mint series. The coin depicts the Cairo as it would have appeared steaming on the Yazoo River in 1862.
Budget plans devised for consideration by the Warren County Board of Supervisors include pay raises for each of the county’s 270 employees, including deputies. The board, as well as the City of Vicksburg’s Board of Mayor and Aldermen, plan to adopt budgets Tuesday for the fiscal year beginning Oct. 1. In an address to Rotary Club members, developer A.G. Helton said he hopes to begin construction at the unfinished strip mall at Halls Ferry Station by Oct. 1. He said he expects the facility, behind Walgreens Drug Store, to create 60 to 75 jobs. Clear Creek Golf Course golfers won eight singles matches on Sunday to defeat the Vicksburg Country Club team in the annual battle of each club’s best golfers. It marked the third straight victory for Clear Creek. About 35 residents were told funding for the buyout of homes flooded in the spring could become available by March 1. City, state and federal representatives spoke to the group of homeowners, who may apply for the voluntary buyout program. Local deaths during the week were Richard LaMont Poole, Vernon Plett, Luella “Leola” Cooper, Betty Tritz Starnes, Eddie Gray Cannon, Robert Arthur Friesz and Doris J. Lewis.
Last ‘poet of the masses’ has passed into glory
OXFORD — Blues music is honesty. It’s raw, gut-level honesty. Blues music is high art. It’s poetry for the masses. The words and music come, unfiltered, from the souls of those willing and able to perform it. Last week, David “Honey Boy” Edwards died in Chicago, the city most Mississippi-born blues artists grew to call home in the last century. They could make a living there, often performing for other expatriate African-Americans whose subsistence jobs in the Delta went away. A lot of people have made a living
CHARLIE
MITCHELL
Blues music is high art. It’s poetry for the masses. The words and music come, unfiltered, from the souls of those willing and able to perform it.

songs black sharecroppers sang while drinking international satellite radio. Nearby the Delta Blues Museum is undergoing an expansion. In Cleveland, Tricia Walker is directing a renaissance of regional music at the Delta State University Music Institute. And the B.B. King Museum in Indianola is those skills, just 350 years apart. But their work isn’t as accessible, doesn’t speak to us directly the way David “Honey Boy” Edwards and his fellow bluesmen do. truthfully, not from fans treating them as superstars. Fame wasn’t bad. Truth was what mattered. Poets for the masses. Can’t say it any better than that. •
Charlie Mitchell is a Mississippi journalist. Write to him at Box 1, University, MS 38677, or e-mail cmitchell43@yahoo.com.
LETTERS TO THE EDITOR
Local patriots stand up against Tucker’s nonsense
The Aug. 28 letter to the editor had me cheering with pride and joy. It all started with the heading, “Tucker’s venom not needed in local newspaper.” You don’t have to read many of Ms. Tucker’s columns before realizing it was very typical of all her writings. To her, any item worthy of note in the civilized world is always to be viewed through a racial prism. That’s her “shtick.” Unfortunately, the fact she is syndicated tells me a fairly large portion of the audience agrees with her “liberal” views (they prefer being called “progressives,” though I don’t know why). Absolutely stunning, and worthy of being read by every American, were the wonderful, well-thought-out and perfectly factual responses by Messrs. Allred, Perkins, Peters, Hall, Ruhl and Barber as they answered the many liberal and left-leaning letters that followed Tucker’s column last week. Knowing we still have a body of straight-thinking patriots with the guts to stand and be counted was immensely satisfying. Thank you from the bottom of my heart. For all those who follow Tucker’s philosophy, let me quote an often-quoted statement from a very popular Democratic president of the
1960s, John F. Kennedy: “Ask not what your country can do for you, but rather what you can do for your country.” Well imagine that, he must have been a Tea Party member. We must now either get back to that philosophy, or we face the destruction of our precious Constitution, which has already been seriously bent by those among us who seek only their
own gain. Al and Nan Lundin Vicksburg
Obama can do more In response to James Montgomery and Angela Johnson, I agree with you — partially. Yes, no one really expects President Barack Obama to have completely fixed the problems of this country in the 2 1/2 years he has been acting as president (notice I said acting) but what we do expect is for him to have made strides in the right direction. True, the country was headed downhill when he took office, but he has done nothing to reverse this, rather he has taken giant leaps to do everything to speed it up. Since he has been in office, the national debt has doubled. How can you think that raising taxes and not cutting spending is the answer? It is unfathomable to think that would work to reverse the national debt. The problem is government spending. Why did Standard and Poor’s downgrade the U.S.? Too much spending. They didn’t mention anything about not enough tax increases, or not enough revenue increases, yet you Democrats constantly blame the Tea Party and the Republi-
cans. It is the Democrats who were against spending cuts, which the S&P has said was the reason for the downgrade. Try this: Rather than raising taxes on the wealthy, first let’s try to collect taxes from the 51 percent of the people who don’t pay any taxes. How can you think it is conceivable to raise taxes more on the few folks who are paying taxes, while still allowing more than half of the people to pay nothing? Is it fair for rich people to have to pay more because they have proved themselves successful and for you to pay less because you have proved yourself unsuccessful? According to the Democrats, yes, that is fair simply because they can afford to pay more. That is bull. The problem with the Democrats is that their voter base comprises the vast majority of the 51 percent who don’t pay any taxes and continue to freeload off the backs of those, mainly Republicans, who do work. Democrats know this and will not do anything to fix the problem because they will lose their voter base. Understand that the 49 percent of Americans paying taxes are tired of paying for the other 51 percent, who pay nothing. Jeremy Stokes Kabul, Afghanistan
Family planning as a pro-life cause in East Africa

MICHAEL
GERSON

From a distance, it seems like a culture war showdown. Close up, in places such as Bweremana, family planning is undeniably pro-life.

bearing years is due to maternal causes. The women of Bweremana are attempting to spread and minimize their risk. In a program organized by Heal Africa, about 6,000 contribute the equivalent of 20 cents each Sunday to a common fund. When it is their turn methods of contraception are morally acceptable for adults. Children are gifts from God, but this doesn. Contraceptives do not solve every problem. But women in Bweremana want access to voluntary family planning for the same reasons as women elsewhere: to avoid high-risk pregnancies, to deliver healthy children, and to better care for the children they have. And this is a pro-life cause. •
Michael Gerson’s email address is michaelgerson(at)washpost.com.
Ala. suspect wants report on Mass. killing sealed
BOSTON — The highest court in Massachusetts is set to decide whether a report on an inquest into the 1986 death of Amy Bishop’s brother should remain sealed from public view. Lawyers for Bishop and The Boston Globe are to make arguments before the Supreme Judicial Court on Tuesday. The Globe says the report should be made public because there is interest in finding out what went wrong with the original investigation into Seth Bishop’s death. Police in Braintree, Mass., ruled it accidental. After Bishop was charged with killing three of her colleagues at the University of Alabama last year, Massachusetts authorities reopened the investigation. An inquest led to her being indicted for murder in her brother’s killing.
nation
BY THE ASSOCIATED PRESS

N.M. fire spared pot plant operation
ALBUQUERQUE, N.M. — This summer’s Las Conchas fire in New Mexico scorched tribal lands and threatened one of the nation’s premier nuclear facilities. But it somehow spared more than 9,000 marijuana plants in a remote area of Bandelier National Monument. Officials said no arrests have been made in the growing operation in the park’s backcountry. But authorities said they were looking for at least two suspects. They estimate the plants were 6 to 10 feet tall and had a street value of around $10 million.
Teen charged with posing as doc assistant
KISSIMMEE, Fla. — Authorities say a teen was arrested after posing as a physician’s assistant at a central Florida hospital. Kissimmee police say the 17-year-old performed CPR on a patient in cardiac arrest at the Osceola Regional Medical Center and also performed physical examinations and other forms of care on an undisclosed number of unsuspecting patients. He faces five counts of impersonating a physician’s assistant.
2 planes collide in air over Alaska ANCHORAGE, Alaska — Two single-engine planes collided in the air Friday near a remote western Alaska village, sending one aircraft crashing nose first and leaving its pilot presumed dead, authorities said. Just the two pilots were aboard the planes when they collided about 400 miles west of Anchorage. One plane was able to land safely and the pilot was uninjured.
Jer’Lisa Nicole Minor 1/11/1988 - 9/04/2009
“If tears could build a stairway, And memories a lane, I’d walk right up to heaven And bring you home again.” Keep smiling down on me, until we meet again. Deidre
N.C. man gets life for killing 8 at nursing home
CARTHAGE, N.C. (AP) — “That man killed my mom like she was a roach,” said Linda Feola, whose mother, 98-year-old Louise DeKler, was shot at close range by Stewart at Pinelake Health and Rehabilitation Center in Carthage on March 29, 2009. “That man will not be where my mom is. There is no way.”

The associated press
Robert Kenneth Stewart

Stewart, who plans to appeal the verdict, looked on without visible reaction as relatives of the victims voiced their grief and anger before the sentence was pronounced. “It feels like you’ve had a part of your body removed that you know you can’t ever get back again,” said Bernice Presnell, whose mother, 75-year-old Tessie Garner, was among the victims. The victims,. Stewart, who did not testify during the month-long trial, was acquitted of two charges of attempted first-degree murder involving two victims who were wounded but not killed. He was convicted on multiple assault and firearms charges in addition to the eight murder charges. Stewart arrived at Pinelake
THE VICKSBURG POST
THE SOUTH Karen Gamble, managing editor | E-mail: newsreleases@vicksburgpost.com | Tel: 601.636.4545 ext 137
Remembering Temple

SEAN MURPHY
POST WEB EDITOR
By The Associated Press
Everybody loves a rebel, except in Mississippi
Saturday’s Ole Miss football game featured 50,000 fans, many of whom were cheering on the Rebels. A “Go! Rebels” chant is so much more inviting than “Go! Rebel Black Bears.” What is in a name, anyway? This newspaper has not officially been The Vicksburg Evening Post for almost 20 years, yet folks around here continue to call it by that name, or for brevity’s sake, “E-nen Post.” So it is likely that Ole Miss fans will continue to refer to the school’s team by that awful word, you know the one we cannot say, the one that comes before Black Bear... shhhh. We cannot say that word, at least in reference to the University of Mississippi. Even though that word is a symbol — one great American once said symbols are for the symbol-minded — it causes too much pain and heartache to say, let alone use it in reference to an athletic team. To the national and international media, though, that word has been ballyhooed all summer. The Middle East — always a political powder keg — is undergoing massive social and political change. A group of mostly young people, fed up with the actions of their government, has risen in such places as Yemen, Egypt and Libya. Syria’s Bashar al-Assad is under assault by a group of similar-thinking people. The United States has intervened militarily and diplomatically in many of these uprisings, the latest being a bombing campaign to help depose Libyan leader Moammar Gadhafi. We are assisting the resistance, the rebels, as they are happily referred to. The national media is in love with these rebels. There are rebel strongholds, rebel advances and rebel leaders. There are rebel tanks, rebel rockets and rebel spokesmen. The rebels are a force of good to be aided and cheered on to victory. These rebels deserve all the support we can muster, we are reminded, but Mississippi’s rebels, eh, not so much.
If the rebels from Oxford buckled under politically correct pressure to dump their nickname in favor of a bear, and being as the University is a state-funded institution using Americans’ tax dollars, would it not at least be prudent to ask those in the Middle East to have a bit more understanding? Should we reconsider our financial, diplomatic and military support until the protesters are referred to as the Middle East Rebel Camels? Rebel Sandstorms, perhaps? Fair is fair. Ole Miss’ rebels, forget it; Middle East rebels, Hooray! But then again, what’s in a name? •
Sean P. Murphy is web editor. He can be reached at smurphy@vicksburgpost.com
Rare photo of Robert E. Lee could set charity mark
NASHVILLE, Tenn. — Internet bidding is heating up for a new view of Confederate Gen. Robert E. Lee. A tintype that shows a rare angle of the oft-photographed general was recently donated to Goodwill in Murfreesboro and The Tennessean reports the picture could set a record for the thrift’s online bidding site. It was already up to $8,000 Saturday and if that figure holds up, it will set a new mark. Larry Hicklen, a Civil War memorabilia store owner, said he had a “holy smokes!” moment when he saw the anonymously donated tintype. (To view the photo, or bid on it, visit shopgoodwill.com.) “I knew the picture was old, with all the traits of a tintype. It was the right era, not something cranked out in 1961,” said Hicklen, who owns Middle Tennessee Civil War Relics in Murfreesboro. “When I blew it up, the clarity of the image, it was not in perfect focus, so it’s a picture of a picture, which is something they did a lot back then. But the thing that is getting the collectors excited is the view of Lee.” The tintype was offered for a minimum bid of $4 and 52 people had bid on it by Saturday afternoon. The auction continues through Wednesday. Suzanne Kay-Pittman, Goodwill’s manager for public relations and communication, said the single highest-priced item sold on onlinegoodwill.com was an early 1900s watercolor that sold for $7,500 in 2009 to a museum in New Orleans. Proceeds from the tintype sale will go to Goodwill’s Middle Tennessee operations.
KATIE CARTER•The Vicksburg Post
Aaron Bell, left, and his brother, Gene Bell, both of Vicksburg and Rosa A. Temple High School graduates, share a laugh as they look at Temple memorabilia from the Jacqueline House African American Museum Saturday at the Temple High School all-class reunion.
Members of the Rosa A. Temple High School Reunion Choir, above, pray before the start of the all-class reunion program Saturday evening at Vicksburg Junior High School. More than 400 graduates from classes 1959 to 1971 attended the reunion. The reunion weekend began with a meet-and-greet Friday night and continued Saturday with a program at Vicksburg Junior High School, where attendees got to tour the new Rosa A. Temple Annex. The reunion is scheduled to wrap up today with a 10 a.m. worship service at Springhill M B Church, 815 Mission 66. At right, Temple High School graduate Carmen Reese, class of 1969, of Atlanta looks at a portrait of Rosa A. Temple in the entrance to the new Temple Annex at Vicksburg Junior High School.
‘Picayune’ hump of La. land giving wetland scientists hope By Cain Burdeau The Associated Press WEST BAY, La. — In 2003, the U. Then the people waited. And waited. But nothing happened and no land was gained. Finally, something changed this year. Scientists say historic flooding on the river — coupled with recent work by the Army
The associated press
Dr. Paul Kemp with the National Audubon Society walks along new land created by Mississippi River Diversion in West Bay near Venice, La. See Wetlands, Page A8.
Jones seeks re-election to tax collector’s office Antonia Flaggs Jones has announced her candidacy for Warren County tax collector. Jones, 39, won the office without opposition in a 2009 special election after her appointment to the post by the Warren County Board of Supervisors. She is opposed in the Nov. 8 general election by Republican Patty Mekus, an employee of Vicksburg Catholic School. Jones is a 17-year employee of the Tax Collector’s Office, having held clerk and deputy clerk positions since being hired by former tax collector Pat Simrall in December
1994. Maintaining the highest performance standards when it comes to accounting principles, proper diversion of tax revenue collections and compliance with state law are of utmost importance, Jones said. A pilot program through the state Department of Revenue to speed up purchasing and renewing license plates was implemented this year, Jones said. Elected countywide, tax collectors in Mississippi receive payments of property taxes and fees on real estate and vehicles for the county, and,
by contract, for the City of Vicksburg.
crime & fire
from staff reports

Blaze destroys Highland Avenue house
A Saturday morning fire that began in the kitchen destroyed a house at 2927 Highland Ave., Vicksburg Fire Department Capt. Carl Carson said. Carson said firefighters arrived at the 8 a.m. blaze to find the house fully engulfed, with flames coming through the windows of the house. The cause of the fire was undetermined, he said. Carson said Dorothy Newsome, the owner of the house, and her family managed to flee the fire. No one was injured.

Trailer damaged in Friday fire
A Friday night fire caused minor damage to a mobile home at 47 Mitchell Lane, near Fisher Ferry Road, and burned about three acres of woods behind it, Fisher Ferry Volunteer Fire Department Lt. Mitch Lang said. Lang said firefighters from Fisher Ferry, LeTourneau and Culkin volunteer fire departments were called at 7:09. He arrived at the mobile home to find smoke and fire billowing from the rear of the trailer, which faced the woods. The cause of the fire and the names of the owner and renter of the mobile home were unavailable. No one was at home. Lang said one firefighter, Scotty Smith, sprained his ankle fighting the blaze. Smith was taken to River Region Medical Center, where he was treated and released.
City man, nephew jailed after fight A Vicksburg man and his nephew were charged Saturday with third-offense domestic violence after a fight in their home at 1510 1/2 Marcus St., Vicksburg police
Sgt. Sandra Williams said. Williams said Lee Brown, 59, and his nephew, Dandri Brown, 39, were arrested at their home about 9:39 a.m. Lee Brown was in the Warren County Jail Saturday night on $2,500 bond, while Dandri Brown was in jail on $5,000 bond. Williams said the fight was the result of an argument between the men that began earlier in the morning.
public meetings this week Tuesday Vicksburg Board of Mayor and Aldermen, 10 a.m., board meeting room, City Hall Annex, 1413 Walnut St.
Friday Special meeting, Vicksburg Convention and Visitors Bureau Board of Directors, 9 a.m., old Levee Street Depot
Wetlands Continued from Page A7.
Edwards Continued from Page A1. formed quickly and has won a key victory from the state’s high court. Town officials appealed a ruling by Chancery Judge Denise Owens allowing the defendants to hire their own expert witness. On Aug. 18, a three-judge panel of the Mississippi Supreme Court ruled in the landowners’ favor, and the defense hired Oxford planner Bridge & Watson Inc. A week later, the town moved to drop the effort entirely, provided each side pays its own legal fees. A hearing on that motion is set for Sept. 21. It remains unclear if and when the town again will attempt annexation. For defendants living in the proposed annexed areas, the question has centered on “what’s in it” for them. The answer, they say, is nothing. “There’s just no reason for them to do it,” Brasfield said, adding her family land northeast of town would have been
annexed and taxed by both the town and the county. She also heads up a homeowners association in Edwards. “They admitted in court it was just for the ad valorem taxes. They can’t offer us anything right now.” Perkins says the annexed properties would get a higher fire protection rating, lowering insurance rates. Fire protection districts in Mississippi are rated by the Mississippi State Rating Bureau. A fire insurance rebate program sends money from a nominal tax on all premiums back to localities, where money is typically reinvested in new equipment and training. Localities often request re-ratings after improvements such as new hydrants or modernized trucks are purchased. Individual districts are rated on a scale of 1 to 10, with 1 the best, on factors such as water supply, location, equipment and personnel. Edwards rates a 9 while the West Hinds Volunteer Fire Protection District, which actually handles the town’s fire responses, rates
a 10, said Joe Shoemaker, a manager with MSRB. “I really don’t go into town except to go to the post office,” said Fant Fancher, whose 140 acres sits off Askew Ferry Road, north of Interstate 20 and inside the proposed annexation zone. “And that’s because Postal Service requires it be in a downtown area.” Fancher, one of the 11 named in the suit, also remembers the “beautiful little town” that had already begun to fade when he moved there in 1974. “There was Noble Grocery, Mississippi Valley Gas, Hubbard Motor Company, a lumber yard,” he said. A grocery store and a bank, two community staples on which Perkins has harped since taking office in 2005, are on their way back to Edwards. Dollar General plans to open a 7,000 square-foot store on U.S. 80 by October, said Tawn Earnest, a spokeswoman for the Goodlettsville, Tenn.-based discount retailer.
Its construction means not only a place to pick up a loaf of bread without driving to Vicksburg or Clinton, but also a chance to rebuild a withered job base. “The town needs Dollar General desperately,” said Dave Montgomery, whose brother, Sonny, ran the Hubbard dealership, which sold Plymouths at U.S. 80 and Magnolia Street from the late 1940s to the 1970s. “We need businesses.” A bank may take longer, but officials with Hope Credit Union in Jackson have said they are interested in opening a branch in Edwards and one in Utica. The institution, which caters to economically depressed locales, was allowed by Utica officials to operate inside City Hall to open new accounts. Sights are set on a permanent location in Edwards, now without a bank. The BancorpSouth location near town hall, previously a Merchants Bank and Bank of Edwards, and the Utica branch were two of eight underperforming branches closed in Mississippi and 23 shuttered overall by the Tupelo-based multistate bank. Bank officials met with residents Aug. 22 and, with input from residents and church leaders, will decide the best place to house it, CEO Bill Bynum said. BancorpSouth still owns the closed branch, and a purchase would be necessary for the credit union to move in. “It’s further along in Utica because we started the conversation earlier,” Bynum said. “We’re committed to serving those folks in Edwards. We have no facility right now, but we’re working with the community to identify a location.” Gloria Christiansen, who pens an online blog about her adopted hometown, would be “tickled to death” to have either a store or a bank. Whether it sinks or swims is a long-term question for the community, she said. “It’s usually so much easier to hop on I-20 to Clinton or Vicksburg,” she said. “That’s the thing — will they support it?”
In Alabama, rough seas forced the closure of the Port of Mobile. Pockets of heavy rain pounded the beaches Saturday, and strong winds whipped up the surf and bowed palm trees. But just a couple miles inland, wind and rain dropped significantly.
Libya Continued from Page A1.
...-month civil war and continue to target regime forces as rebels hunt for Gadhafi. Belhaj is the former leader of the Libyan Islamic Fighting Group, a now-dissolved militant group who sought to assassinate Gadhafi. Belhaj says CIA agents tortured him in a secret prison in Thailand before he was returned to Libya and locked in the Abu Salim prison. He insists he was never a terrorist and believes his arrest was in reaction to 9/11. The validity of the documents could not be independently verified, but their content seems consistent with what has been previously reported.
Lee Continued from Page A1.
... Saturday night, spinning intermittent bands of stormy weather, alternating with light rain and occasional sunshine. It was moving north-northwest at about 4 mph in the late afternoon. Its maximum sustained winds dropped to 50 mph, and their intensity was expected to decrease further by today. Tropical storm warnings stretched from the Louisiana-Texas state line to Destin, Fla. The National Weather Service in Slidell ... Entergy reported more than 37,000 customer outages at one point Saturday.
The Associated Press
Boys play football in flooded yards after Tropical Storm Lee passed through southern Louisiana Saturday.
Deaths
The Vicksburg Post prints obituaries in news form for area residents, their family members and for former residents at no charge. Families wishing to publish additional information or to use specific wording have the option of a paid obituary.
Alean D. Burse Alean D. Burse died Saturday, Sept. 3, 2011, at River
Region Medical Center after a lengthy illness. She was 81. Mrs. Burse lived in Vicksburg and was a retired X-ray technician. She was a member of Zion Traveler’s M. B. Church. Funeral arrangements are incomplete with W. H. Jefferson Funeral Home in charge.
PRECISION FORECAST BY CHIEF METEOROLOGIST BARBIE BASSETT
TODAY: 83° | TONIGHT: 71°
Showers and thunderstorms with a high in the lower 80s and a low in the lower 70s
WEATHER This weather package is compiled from historical records and information provided by the U.S. Army Corps of Engineers, the City of Vicksburg and The Associated Press.
LOCAL FORECAST
MONDAY-WEDNESDAY: Chance of showers and thunderstorms; highs in the upper 70s; lows in the lower 70s
STATE FORECAST
TODAY: Chance of showers and thunderstorms; highs in the lower 80s; lows in the lower 70s
MONDAY-WEDNESDAY: Chance of showers and thunderstorms; highs in the upper 70s; lows in the lower 70s
Almanac Highs and Lows High/past 24 hours............. 86º Low/past 24 hours............... 71º Average temperature......... 79º Normal this date................... 80º Record low..............52º in 1974 Record high......... 103º in 2000 Rainfall Recorded at the Vicksburg Water Plant Past 24 hours.........................N/A This month................ 0.0 inches Total/year.............. 23.78 inches Normal/month......0.43 inches Normal/year........ 36.93 inches Solunar table Most active times for fish and wildlife Monday: A.M. Active..........................12:03 A.M. Most active................. 6:17 P.M. Active...........................12:31 P.M. Most active.................. 6:45 Sunrise/sunset Sunset today........................ 7:25 Sunset tomorrow............... 7:23 Sunrise tomorrow.............. 6:40
RIVER DATA
Stages
Mississippi River at Vicksburg: Current: 17.9 | Change: NC | Flood: 43 feet
Yazoo River at Greenwood: Current: 17.2 | Change: NC | Flood: 35 feet
Yazoo River at Yazoo City: Current: 12.8 | Change: | Flood: 29 feet
Yazoo River at Belzoni: Current: 16.2 | Change: NC | Flood: 34 feet
Big Black River at West: Current: 2.2 | Change: NC | Flood: 12 feet
Big Black River at Bovina: Current: 6.4 | Change: NC | Flood: 28 feet
STEELE BAYOU
Land....................................69.3
River....................................64.6
MISSISSIPPI RIVER Forecast Cairo, Ill. Monday.................................. 19.5 Tuesday.................................. 19.2 Wednesday........................... 18.9 Memphis Monday.....................................4.9 Tuesday.....................................4.6 Wednesday..............................4.3 Greenville Monday.................................. 22.8 Tuesday.................................. 22.2 Wednesday........................... 21.9 Vicksburg Monday.................................. 17.6 Tuesday.................................. 17.1 Wednesday........................... 16.6
A10
Sunday, September 4, 2011
NATO kills ex-Gitmo detainee
Vatican rejects Irish criticism over sex abuse VATICAN CITY (AP) — The Vatican on Saturday vigorously rejected claims it sabotaged efforts by Irish bishops to report priests who sexually abused children to police and accused the Irish prime minister of making an “unfounded” attack against the Holy See. Irish officials defended their claims that the Vatican exacerbated the abuse crisis and criticized the Holy See for offering an overly “legalistic” justification of its actions in dealing with priests who rape and molest children. The Vatican issued a 24-page response to the Irish government following Prime Minister Enda Kenny’s unprecedented July 20 denunciation of the Vatican’s handling of abuse — a speech that cheered abuse-weary Irish Catholics but stunned the Vatican and prompted it to recall its
ambassador. Kenny’s speech was inspired by the publication of a government-mandated independent report into the County Cork diocese of Cloyne in southwest Ireland, which found that the Vatican had undermined attempts by Irish bishops to protect children by suggesting that their policy requiring abuse to be reported to police might violate church law. The Cloyne document was the fourth report since 2005 on the colossal scale of priestly sex abuse and cover-up in Ireland, a once staunchly Catholic country that has seen the church’s influence wither in light of the scandal. But it was the first to squarely find the Vatican culpable in promoting the culture of secrecy and cover-up that kept abusers in ministry and able to prey on more children.
The Vicksburg Post
Father Federico Lombardi
The Vatican has long rejected accusations — in lawsuits and public opinion — that it was responsible for the abuse scandal, which erupted in Ireland in the 1990s, the U.S. in 2002 and in mainland Europe and beyond last year. Thousands of people have come forward with accusations that priests molested them as children, bishops covered up the crimes and the Vatican turned a blind eye — or in the case of Cloyne actively interfered when bishops tried to bring the priests to justice.
KABUL, Afghanistan (AP) — NATO and Afghan forces ... Another former detainee who joined the al-Qaida franchise in Yemen was killed in a recent U.S. airstrike there. Troops surrounded Melma’s house in Jalalabad ...
WORKING TO BRING RELIEF FROM BONE AND JOINT PAIN.
DEDRI IVORY, M.D., RHEUMATOLOGIST, MEMBER OF THE MEDICAL STAFF AT RIVER REGION MEDICAL CENTER
River Region Medical Center welcomes Dr. Dedri Ivory, Vicksburg’s first rheumatologist, to our community. Dr. Ivory specializes in treating patients with arthritis, osteoporosis, gout and other diseases that cause muscle, bone and joint pain. Board certified in internal medicine, Dr. Ivory completed her residency at Akron General Medical Center and her rheumatology fellowship at the University of Missouri. She treats patients at Street Clinic, which features a convenient on-site infusion area where patients may receive IV treatments, if needed. For an appointment with Dr. Ivory, call 601-883-3340.
STREET CLINIC, 104 McAuley Drive
RiverRegion.com
COLLEGE SCOREBOARD
Alabama 48 / Kent St. 7
LSU 40 / Oregon 27
BYU 14 / Ole Miss 13
Grambling 21 / Alcorn St. 14
Mississippi College 33 / Millsaps 27
Auburn 42 / Utah St. 38
Houston 38 / UCLA 34
Oklahoma 47 / Tulsa 17
Texas 34 / Rice 9
South Florida 23 / Notre Dame 20
INSIDE: Top 25, SEC, C-USA and Mississippi scores/B2 • JSU rolls over Concordia/B3
THE VICKSBURG POST
SPORTS • Sunday, September 4, 2011 • SECTION B • PUZZLES B8
Steve Wilson, sports editor | E-mail: sports@vicksburgpost.com | Tel: 601.636.4545 ext 142
college football
BYU stuns Ole Miss with late TD
By David Brandt
The Associated Press
Survival Late TDs help Auburn escape upset bid from Utah State/B3
Schedule PREP SOFTBALL
VHS at Madison Central Tuesday, 6 p.m. WC hosts Clinton Tuesday, 6 p.m.
PREP VOLLEYBALL WC hosts MRA Tuesday, 6:15 p.m.
On TV 6:30 p.m. ESPN - NASCAR celebrates Labor Day with a race under the lights in Atlanta. Kasey Kahne has the pole, but Brad Keselowski has won two of the last four Sprint Cup races. Preview/B6
Who’s hot CAMERON COOKSEY Vicksburg High quarterback threw for 332 yards and five touchdowns in a 32-31 win over Richwood, La., on Friday night.
Sidelines Saints shelve Ivory, trim roster to 53
METAIRIE, La. (AP) — The New Orleans Saints put last season’s leading rusher Chris Ivory on the physically unable to perform list Saturday and cut two special teams stalwarts in trimming the roster to an NFL-mandated 53 players. Ivory, who led New Orleans with 716 yards rushing and five touchdowns in 2010, has yet to recover from offseason foot surgery or sports hernia surgery. He’ll be out at least six weeks. Safeties Chris Reis and Pierson Prioleau were released after being key members of the Saints’ special teams unit the past two seasons. The Saints cut 21 players in all on Saturday and made several other moves to get down to the 53-man limit.
LOTTERY La. Pick 3: 2-1-2 La. Pick 4: 8-4-0-7 Easy 5: 3-10-17-27-35 La. Lotto: 2-8-14-17-29-35 Powerball: 15-25-52-53-54 Powerball: 2; Power play: 5
Weekly results: B2
OXFORD — Linebacker Kyle Van Noy saw the bouncing football heading toward the end zone with only one thought in mind: Take advantage of this opportunity. And after nearly four full quarters of missed chances, Van Noy didn’t let this one get away, corralling the football with 5:09 remaining for the go-ahead touchdown in BYU’s stunning 14-13 comeback victory over Ole Miss on Saturday at Vaught-Hemingway Stadium in the season opener for both teams. “I just got lucky,” Van Noy said. “... I was trying not to panic, but my adrenaline was really running because I knew this was a good play.”
Grambling tops Alcorn in opener
From staff reports
Bruce Newman • The Associated Press
Ole Miss quarterback Zack Stoudt looks to pass during Saturday’s game against BYU. Ole Miss lost, 14-13, in the season opener.
... Brunetti ... needed a quality win after a dismal 4-8 season in 2010. BYU had other ideas. “There’s a fine line between winning and losing,” Nutt said. “When you’re playing a really good team like that that’s rated high with experience you can’t beat yourself and you can’t give gifts. And we did that.” The Cougars kept pounding away, with a 69-57 advantage in offensive plays. By the fourth quarter, the Rebels’ defense looked gassed and didn’t have an answer. Ole Miss’ top two running backs — Brandon Bolden and Enrique Davis — were injured during the game. Nutt said Bolden’s injury was to his left ankle and could possibly be a fracture. Bolden, a 5-foot-11, 215-pound senior, rushed for 964 yards and 14 touchdowns last season.
Kenneth Batiste rushed for 83 yards and a touchdown, D.J. Williams threw a pair of touchdown passes, and Grambling beat Alcorn State 21-14 in the season opener for both teams on Saturday. Alcorn has not beaten Grambling since 2006. Grambling ripped off 21 straight points after Alcorn took a 7-0 lead in the first quarter. Williams tossed a 23-yard touchdown pass to Bakari Maxwell and a 7-yarder to Mario Louis with 5 seconds left in the first half to give the Tigers a 14-7 lead. Batiste’s 5-yard TD run early in the third quarter extended the lead, but Alcorn made a comeback attempt in the fourth. Quarterback Brandon Bridge cut it to 21-14 with a 6-yard TD run with 12:39 left in the game. The Braves got the ball back three more times, but punted and turned it over on downs twice. The closest they got to the end zone on the final three drives was Grambling’s 30-yard line, where a run on fourth-and-1 was stuffed. Bridge finished the game 17-of-25 passing for 175 yards and a touchdown, and carried the ball 11 times for 45 yards and a score. Terrance Lewis caught five passes for 107 yards, including a 53-yard TD pass from Bridge with 4:46 left in the first quarter. Williams finished 16-of-24 passing for 161 yards and two TDs for Grambling.
LSU pounds Oregon in showdown
By The Associated Press
ARLINGTON, Texas — Jarrett Lee admirably directed LSU’s offense in place of suspended quarterback Jordan Jefferson, and missing cornerback-punt returner Cliff Harris was too much for Oregon to overcome in a rare season-opening ... Oregon running back De’Anthony Thomas fumbled on consecutive touches, one on a rushing attempt and then on the ensuing kickoff. Oregon, which lost to Auburn in the BCS national championship game last season, has consecutive losses for the first time since losing its last three regular-season ... the suspended Harris would have been playing on defense.
On B3 • Mississippi roundup • Auburn survives scare
The other LSU touchdown before halftime came when fill-in punt returner Kenjon Barner fielded a punt inside the 5 and took a couple of steps back. That’s when Tyrann Mathieu stripped the ball away, scooped it up after it bounced on the turf and took a couple of steps into the end zone. LSU has won 34 consecutive non-conference games, the longest such streak in the nation, including all 23 in the regular season under seventh-year coach Les Miles. The overall streak dates back to the Tigers’ 26-8 loss to Virginia Tech in the 2002 season opener. “Our football team is united. They play together,” Miles said. “You put a ball on the line and they’ll scrap you for it. This is a great group of guys.”
The Associated Press
LSU’s Tyrann Mathieu (7), Tharold Simon (24), Eric Reid (1) and Lavar Edwards (89) celebrate Simon’s interception against Oregon on Saturday night. No. 4 LSU beat third-ranked Oregon, 40-27.
on tv
BY THE ASSOCIATED PRESS AUTO RACING 10 a.m. ESPN2 - NHRA, qualifying for U.S. Nationals, at Indianapolis 11 a.m. Versus - IRL, Indy Lights, at Baltimore 1 p.m. Versus - IRL, IndyCar, Baltimore Grand Prix 1 p.m. Speed - FIM World Superbike, at Nuerburg, Germany (tape) 4 p.m. ESPN2 - NHRA, qualifying for U.S. Nationals, at Indianapolis (tape) 6:30 p.m. ESPN - NASCAR, Sprint Cup, AdvoCare 500, at Hampton, Ga. 9 p.m. Speed - AMA Pro Racing, at Millville, N.J. (tape) COLLEGE FOOTBALL 11 a.m. ESPN - Prairie View A&M vs. Bethune-Cookman 2:30 p.m. ESPN - Marshall at West Virginia 6:30 p.m. FSN - SMU at Texas A&M PREP FOOTBALL 1 p.m. ESPN2 - Archbishop Wood (Pa.) vs. Pittsburgh Central Catholic GOLF 6 a.m. TGC - European PGA Tour, European Masters Noon TGC - PGA Tour, Deutsche Bank Championship 2 p.m. NBC - PGA Tour, Deutsche Bank Championship 6 p.m. TGC - Nationwide Tour, Mylan Classic (tape) MAJOR LEAGUE BASEBALL 12:30 p.m. TBS - Texas at Boston 12:30 p.m. FSN - L.A. Dodgers at Atlanta 1:10 p.m. WGN - Pittsburgh at Chicago Cubs 7 p.m. ESPN2 - Chicago White Sox at Detroit TENNIS 10 a.m. CBS - U.S. Open
sidelines
from staff & AP reports
Tennis
Serena rolls at U.S. Open
... seeded Victoria Azarenka, a Wimbledon semifinalist two months ago. In the fourth round, Williams will face Ana Ivanovic, the 2008 French Open champion, who is seeded 16th. Also into the fourth round with victories Saturday were 2010 French Open champion Francesca Schiavone, who got past Chanelle Scheepers 5-7, 7-6 (5), 6-3; No. 17 Anastasia Pavlyuchenkova, who beat 2008 U.S. Open runner-up Jelena Jankovic 6-4, 6-4; and No. 10 Andrea Petkovic, who defeated No. 18 Roberta Vinci 6-4, 6-0. On the men’s side, Roger Federer moved into the fourth round for the 30th consecutive Grand Slam tournament by overcoming what he called “tricky wind” and a second-set blip to defeat No. 27 Marin Cilic of Croatia 6-3, 4-6, 6-4, 6-2. Also advancing Saturday was No. 8 Mardy Fish, the top-seeded American, who has yet to drop a set after beating Kevin Anderson 6-4, 7-6 (4), 7-6 (3).
flashback
BY THE ASSOCIATED PRESS Sept. 4 1983 — Lynn Dickey of Green Bay completes 27 of 31 passes, including 18 straight, for 333 yards and four touchdowns to lead the Packers in a 41-38 overtime victory over Houston..
scoreboard college football Top 25 scores
Saturday No. 1 Oklahoma 47, Tulsa 17 No. 2 Alabama 48, Kent St. 7 No. 4 LSU 40, No. 3 Oregon 27 No. 5 Boise St. 35, No. 19 Georgia 21 No. 6 Florida St. 34, La.-Monroe 0 No. 7 Stanford 57, San Jose St. 3 No. 9 Oklahoma St. 61, La.-Lafayette 34 No. 10 Nebraska 40, Chattanooga 7 No. 12 South Carolina 56, East Carolina 37 No. 13 Va. Tech 66, Appalachian St. 13 No. 15 Arkansas 51, Missouri St. 7 South Florida 23, No. 16 Notre Dame 20 No. 18 Ohio St. 42, Akron 0 No. 21 Missouri 17, Miami (Ohio) 6 No. 22 Florida 41, Florida Atlantic 3 No. 23 Auburn 42, Utah St. 38 No. 25 Southern Cal 19, Minnesota 17 Today No. 8 Texas A&M vs. SMU, 6:30 p.m. No. 24 West Virginia vs. Marshall, 2:30 p.m. ———
Mississippi scores
Saturday Jackson St. 42, Concordia, Ala. 2 BYU 14, Ole Miss 13 Alabama St. 41, Miss. Valley St. 9 Grambling 21, Alcorn St. 14 Mississippi College 33, Millsaps 27, OT Belhaven at Louisiana College, (n) Louisiana Tech at Southern Miss, (n) ———
Southeastern Conference scores
Saturday Auburn 42, Utah St. 38 Alabama 48, Kent St. 7 BYU 14, Ole Miss 13 Tennessee 42, Montana 16 Arkansas 51, Missouri St. 7 Florida 41, Florida Atlantic 3 Vanderbilt 45, Elon 14 South Carolina 56, East Carolina 37 LSU 40, Oregon 27 Boise St. 35, Georgia 21 ———
Conference USA scores
Saturday Tulane 47, Southeastern Louisiana 13 Houston 38, UCLA 34 Central Florida 62, Charleston Southern 0 South Carolina 56, East Carolina 37 Texas 34, Rice 9 Oklahoma 47, Tulsa 17 Stony Brook at UTEP, (n) Louisiana Tech at Southern Miss, (n) ———
SWAC scores
Saturday Jackson St. 42, Concordia (Ala.) 2 Hampton 21, Alabama A&M 20 Alabama St. 41, Mississippi Valley St. 9 Langston 19, Ark.-Pine Bluff 12 Grambling 21, Alcorn St. 14 Tennessee St. 33, Southern 7 Today Prairie View at Beth.-Cookman, 11 a.m.
BYU 14, OLE MISS 13
BYU...............0 0 0 14 — 14
Ole Miss........0 3 7 3 — 13
Second Quarter
Miss—FG Rose 20, :49.
Third Quarter
Miss—Sawyer 96 interception return (Rose kick), 8:34.
Fourth Quarter
Miss—FG Rose 29, 14:15.
BYU—Apo 19 pass from Heaps (J.Sorensen kick), 9:52.
BYU—Van Noy 3 fumble return (J.Sorensen kick), 5:09.
A—55,124.
———
BYU Miss
First downs................................20........................13
Rushes-yards.......................31-91...................29-64
Passing....................................225......................144
Comp-Att-Int..................... 24-38-1............... 15-28-0
Return Yards.............................29......................140
Punts-Avg............................5-36.2..................4-56.8
Fumbles-Lost............................0-0.......................2-2
Penalties-Yards......................6-40.....................4-40
Time of Possession.............34:37...................25:23
INDIVIDUAL STATISTICS
RUSHING—BYU, Di Luigi 12-56, Kariya 11-35, Quezada 4-14, Nelson 1-6, Team 1-(minus 2), Heaps 2-(minus 18). Ole Miss, Davis 12-28, Bolden 4-21, J.Scott 5-17, Brunetti 3-13, Thomas 1-3, P.Moore 1-1, Brassell 2-(minus 1), Stoudt 1-(minus 18).
PASSING—BYU, Heaps 24-38-1-225. Ole Miss, Stoudt 13-25-0-140, Brunetti 2-3-0-4.
RECEIVING—BYU, Di Luigi 5-32, Apo 4-46, Alisa 3-28, Holt 2-26, Jacobson 2-25, Falslev 2-21, Wilson 2-18, Kariya 2-13, Hoffman 1-9, Quezada 1-7. Ole Miss, Logan 4-52, Moncrief 3-16, J.Scott 2-25, Mosley 2-21, Allen 1-13, Herman 1-12, Thomas 1-5, Greer 1-0.
JACKSON ST. 42, CONCORDIA 2
Concordia-Selma 0 2 0 0 — 2 Jackson St. 14 14 0 14 — 42 First Quarter JcSt—Richardson 60 pass from Therriault (Ja. Smith kick), 14:52. JcSt—Dunn 1 run (Ja.Smith kick), 13:37. Second Quarter JcSt—Therriault 1 run (Ja.Smith kick), 11:37. Conc—Safety, 1:20. JcSt—Therriault 1 run (Ja.Smith kick), :16. Fourth Quarter JcSt—Wright 64 fumble return (Ja.Smith kick), 13:00. JcSt—Sims 2 run (Selita kick), 1:35. ——— Conc JcSt First downs................................16........................17 Rushes-yards.......................42-48...................29-59 Passing....................................187......................218 Comp-Att-Int..................... 12-32-4............... 15-32-0 Return Yards...............................0........................85 Punts-Avg............................4-28.5..................2-30.5 Fumbles-Lost............................4-3.......................3-2 Penalties-Yards..................15-104...................10-69 Time of Possession.............38:56...................21:04 INDIVIDUAL STATISTICS RUSHING—Concordia-Selma, Craig 8-27, Ray 6-25, Manning 12-14, Morris 1-7, Robinson 2-7, McClain 2-2, Coachman 4-(minus 9), Dobbs 7-(minus 25). Jackson St., Sims 5-24, Gooden 3-10, Rush 3-6, Dunn 4-6, McDonald 2-5, Therriault 11-5, Corley 1-3. PASSING—Concordia-Selma, Dobbs 9-24-3-150, Coachman 3-8-1-37. Jackson St., Therriault 15-320-218. RECEIVING—Concordia-Selma, Morris 5-87, Manning 2-31, Tillman 2-30, Harris 1-23, Andrews 1-10, Craig 1-6. Jackson St., Richardson 4-108, Drewery 4-41, Rollins 3-44, Wilder 2-10, Perkins 1-17, Gooden 1-(minus 2).
prep football
Mississippi Prep Polls Fared
How teams ranked in the Mississippi Associated Press state poll did on Friday night:
Class 6A
1. S. Panola (2-1) def. Memphis University, 19-7.
2. Olive Branch (2-0) was idle.
3. Oak Grove (2-0) was idle.
4. Meridian (3-0) beat Canton, 28-0.
5. Madison Central (1-2) beat Petal, 28-24.
5. Petal (1-2) lost to Madison Central, 28-24.
Class 5A
1. Picayune (3-0) beat Forrest AHS, 17-9.
2. West Point (0-2) lost to Columbus, 35-27 in OT.
3. Pearl (2-1) lost to Northwest Rankin, 21-10.
4. West Jones (1-1) lost to Laurel, 37-20.
5. Long Beach (1-0) was idle.
Class 4A
1. Lafayette (3-0) beat Oxford, 40-12.
2. Tylertown (3-0) beat Franklin County, 47-0.
3. Noxubee County (2-1) beat New Hope, 45-29.
4. Mendenhall (1-2) lost to Wayne County, 51-12.
5. Laurel (3-0) beat West Jones, 37-20.
Class 3A
1. Forest (1-2) lost to Jackson Prep, 24-14.
2. Aberdeen (2-1) beat Amory, 14-7.
3. Hazlehurst (2-1) lost to Brookhaven, 7-6.
4. Franklin County (2-1) lost to Tylertown, 47-0.
5. East Side (3-0) beat Broad Street, 68-20.
Class 2A
1. West Bolivar (3-0) beat Ray Brooks, 43-6.
2. Bassfield (3-0) beat Prentiss, 24-0.
3. Calhoun City (3-0) beat Water Valley, 16-14.
4. Lumberton (1-2) lost to West Marion, 20-18.
5. Taylorsville (2-0) beat Collins, 35-7.
Class 1A
1. Durant (3-0) beat Williams Sullivan, 64-0.
2. Cathedral (2-0) beat Loyd Star, 56-7.
3. French Camp (2-1) lost to Ackerman, 25-7.
4. West Oktibbeha (2-0) was idle.
5. Mount Olive (0-3) lost to Salem, 19-6.
MAIS
1. Jackson Aca. (3-0) beat River Oaks, 29-22.
2. Jackson Prep (3-0) beat Forest, 24-14.
3. Pillow Aca. (1-2) lost to Southern Baptist, Tenn., 49-28.
4. Trinity (2-1) beat Bowling Green, 48-0.
5. Brookhaven Aca. (2-1) lost to Simpson Aca., 35-23.
———
MHSAA
Region 2-6A
Team Overall Region Northwest Rankin.....................3-0.......................0-0 Vicksburg................................1-1.......................0-0 Clinton......................................1-2.......................0-0 Greenville-Weston....................1-2.......................0-0 Madison Central.......................1-2.......................0-0 Murrah......................................1-2.......................0-0 Jim Hill......................................1-2.......................0-0 Warren Central.......................0-3.......................0-0 Sept. 2 Vicksburg 32, Richwood (La.) 31 Hattiesburg 29, Warren Central 7 Bastrop (La.) 58, Greenville-Weston 6 Provine 46, Murrah 39 Madison Central 28, Petal 24 Brandon 18, Clinton 10 Jim Hill 29, Lanier 12 Northwest Rankin 21, Pearl 10 Friday’s Games Callaway at Murrah, 7:30 p.m. Clinton at Provine, 7:30 p.m. Jim Hill at Forest Hill, 7:30 p.m. Greenville-Weston at Pine Forest, Fl., 7:30 p.m. Tylertown at Vicksburg, 7:30 p.m. Warren Central at Natchez, 7:30 p.m. Brandon at Northwest Rankin, 7:30 p.m. Madison Central at West Monroe, La., 7 p.m.
Region 4-1A
Team Overall Region Bogue Chitto............................3-0.......................1-0 Salem.......................................3-0.......................1-0 Hinds AHS...............................2-1.......................1-0 Cathedral..................................2-0.......................0-0 Resurrection.............................1-0.......................0-0 Stringer.....................................1-1.......................0-0 Mount Olive..............................0-3.......................0-1 University Christian..................0-2.......................0-1 St. Aloysius.............................0-3.......................0-1 Dexter.......................................0-2.......................0-0 Sept. 2 Bogue Chitto 21, University Christian 18 Salem 19, Mount Olive 6 Hinds AHS 14, St. Aloysius 9 Bay Springs 31, Stringer 28 Cathedral 56, Loyd Star 7 Open date: Resurrection, Stringer, Dexter Friday’s Games Stringer at Hinds AHS, 7:30 p.m. Bogue Chitto at Mount Olive, 7:30 p.m. Salem at St. Aloysius, 7:30 p.m. Dexter at University Christian, 7:30 p.m. Saturday’s Games Cathedral at Resurrection, 7 p.m.
Region 6-4A
Team Overall Region Port Gibson.............................3-0.......................0-0 Florence....................................3-0.......................0-0 Magee.......................................2-1.......................0-0 Raymond..................................2-1.......................0-0 Mendenhall...............................1-2.......................0-0 Germantown.............................1-2.......................0-0 Richland....................................0-3.......................0-0 Sept. 2 Florence 38, McLaurin 21 Magee 31, Lawrence County 17 Raymond 28, Wingfield 13 Port Gibson 24, Jefferson County 6 Crystal Springs 31, Richland 25 North Pike 32, Germantown 35 Wayne County 42, Mendenhall 12 Friday’s Games Port Gibson at Madison, La., 7 p.m. McLaurin at Richland, 7:30 p.m. West Jones at Magee, 7:30 p.m. Collins at Mendenhall, 7:30 p.m. Presbyterian Christian at Germantown, 7:30 p.m. Terry at Florence, 7:30 p.m. Crystal Springs at Raymond, 7:30 p.m. ———
MAIS
District 4-A
Team Overall Region Porters Chapel........................3-0.......................1-0 Heidelberg Academy................2-1.......................1-0 Prentiss Christian.....................1-2.......................1-0 Newton Academy.....................1-1.......................0-0 Park Place................................1-2.......................0-1 Ben’s Ford................................0-3.......................0-2 Sept. 2 Porters Chapel 19, Ben’s Ford 0 Prentiss Christian 27, Park Place 20 Heidelberg Academy 16, Amite 12 Open: Newton Academy Friday’s Games Heidelberg at Sylva-Bay, 7 p.m. Porters Chapel at Newton Aca., 7 p.m. Ben’s Ford at Park Place, 7 p.m. Sumrall at Prentiss Christian, 7 p.m.
District 3-A
Team Overall Region Claiborne Academy..................2-1.......................2-0 Wilkinson Christian...................3-0.......................1-0 CENLA......................................2-1.......................1-0 Riverfield...................................2-1.......................1-1 Glenbrook.................................1-2.......................0-2 Tallulah Academy...................0-3.......................0-1 Union Christian.........................0-2.......................0-1 Amite........................................0-3.......................0-0 Sept. 2 Heidelberg Academy 16, Amite 12 Claiborne Aca. 12, Tallulah Aca. 7 Wilkinson Christian 36, Glenbrook 0 Prairie View 45, Union Christian 0 CENLA 26, Riverfield 14 Friday’s Games Glenbrook at Amite, 7 p.m. Tallulah Academy at Riverfield, 7 p.m. Wilkinson Christian at Centreville Aca., 7 p.m. CENLA at Prairie View, 7 p.m. River Oaks at Union Christian, 7 p.m.
District 3-AA
Team Overall Region Riverdale..................................1-1.......................1-0 River Oaks...............................1-2.......................0-0 Prairie View..............................1-2.......................0-1 Central Hinds..........................0-3.......................0-0 Sept. 2 Copiah Academy 16, Central Hinds 10 Prairie View 45, Union Christian 0 Jackson Academy 29, River Oaks 22 ACCS 35, Riverdale 12
West Division
Friday’s Games Riverdale at Baton Rouge Christian, 7 p.m. Lamar Aca. at Central Hinds, 7 p.m. CENLA at Prairie View, 7 p.m. River Oaks at Union Christian, 7 p.m.
Sprint Cup AdvoCare 500 Lineup
After Saturday qualifying; race today At Atlanta Motor Speedway Hampton, Ga. Lap length: 1.54 miles (Car number in parentheses) 1. (4) Kasey Kahne, Toyota, 186.196. 2. (33) Clint Bowyer, Chevrolet, 185.922. 3. (18) Kyle Busch, Toyota, 185.841. 4. (83) Brian Vickers, Toyota, 185.772. 5. (24) Jeff Gordon, Chevrolet, 185.735. 6. (17) Matt Kenseth, Ford, 185.71. 7. (99) Carl Edwards, Ford, 185.561. 8. (56) Martin Truex Jr., Toyota, 185.542. 9. (39) Ryan Newman, Chevrolet, 185.486. 10. (22) Kurt Busch, Dodge, 185.325. 11. (43) A J Allmendinger, Ford, 185.288. 12. (42) Juan Pablo Montoya, Chevrolet, 185.177. 13. (11) Denny Hamlin, Toyota, 185.127. 14. (2) Brad Keselowski, Dodge, 185.115. 15. (16) Greg Biffle, Ford, 185.059. 16. (00) David Reutimann, Toyota, 184.8. 17. (48) Jimmie Johnson, Chevrolet, 184.462. 18. (9) Marcos Ambrose, Ford, 184.272. 19. (6) David Ragan, Ford, 184.015. 20. (14) Tony Stewart, Chevrolet, 183.899. 21. (29) Kevin Harvick, Chevrolet, 183.801. 22. (27) Paul Menard, Chevrolet, 183.68. 23. (47) Bobby Labonte, Toyota, 183.394. 24. (20) Joey Logano, Toyota, 183.382. 25. (1) Jamie McMurray, Chevrolet, 183.339. 26. (78) Regan Smith, Chevrolet, 183.152. 27. (31) Jeff Burton, Chevrolet, 183.121. 28. (66) Michael McDowell, Toyota, 183.025. 29. (88) Dale Earnhardt Jr., Chevrolet, 182.898. 30. (46) Scott Speed, Ford, 182.856. 31. (38) J.J. Yeley, Ford, 182.5. 32. (5) Mark Martin, Chevrolet, 182.44. 33. (7) Robby Gordon, Dodge, 181.759. 34. (87) Joe Nemechek, Toyota, 181.693. 35. (36) Dave Blaney, Chevrolet, 181.437. 36. (34) David Gilliland, Ford, 180.745. 37. (51) Landon Cassill, Chevrolet, 180.575. 38. (13) Casey Mears, Toyota, 180.
Sprint Cup standings
(Number of victories in parentheses)
1. x-Kyle Busch (4) .... 830
2. x-Jimmie Johnson (1) .... 830
3. x-Matt Kenseth (2) .... 798
4. x-Carl Edwards (1) .... 795
5. x-Kevin Harvick (3) .... 782
6. Jeff Gordon (2) .... 782
7. Ryan Newman (1) .... 762
8. Kurt Busch (1) .... 749
9. Dale Earnhardt Jr. (0) .... 728
10. Tony Stewart (0) .... 710
11. *Brad Keselowski (3) .... 689
12. Clint Bowyer (0) .... 688
13. *Denny Hamlin (1) .... 672
14. A J Allmendinger (0) .... 664
15. Kasey Kahne (0) .... 656
x-Clinched spot in the Chase for the Championship.
*Wild-card leaders; the two drivers outside of the top 10 in points who finish with the most victories will qualify for the Chase.
———
Nationwide Series
Great Clips 300 Results
Saturday
At Atlanta Motor Speedway
Hampton, Ga.
Lap length: 1.54 miles
(Start position in parentheses)
1. (1) Carl Edwards, Ford, 195 laps, 143.6 rating, 0 points.
2. (3) Kyle Busch, Toyota, 195, 130, 0.
3. (5) Ricky Stenhouse Jr., Ford, 195, 117.6, 42.
4. (8) Kevin Harvick, Chevrolet, 195, 119.8, 0.
5. (7) Kasey Kahne, Chevrolet, 195, 111.6, 0.
6. (14) Justin Allgaier, Chevrolet, 195, 98.9, 39.
7. (2) Brad Keselowski, Dodge, 195, 104.8, 0.
8. (11) Aric Almirola, Chevrolet, 195, 94.1, 36.
9. (16) Jason Leffler, Chevrolet, 195, 91.4, 35.
10. (9) Elliott Sadler, Chevrolet, 195, 88.4, 34.
11. (4) Ryan Truex, Toyota, 195, 88, 33.
12. (19) Brian Scott, Toyota, 195, 84.8, 32.
13. (13) Steve Wallace, Toyota, 195, 82.9, 31.
14. (21) Jeremy Clements, 195, 79.7, 31.
15. (15) Mike Bliss, Chevrolet, 194, 78.5, 29.
16. (18) Mike Wallace, Chevrolet, 193, 70, 28.
17. (23) Matt Carter, Ford, 193, 66.4, 27.
18. (17) Joe Nemechek, Toyota, 192, 73.6, 26.
19. (20) Kenny Wallace, Toyota, 192, 71, 25.
20. (24) Michael Annett, Toyota, 191, 68.6, 24.
21. (10) Jamie McMurray, Chevrolet, 190, 92, 0.
22. (36) Josh Wise, Chevrolet, 190, 64.2, 22.
23. (27) Blake Koch, Dodge, 189, 58.3, 21.
24. (42) Derrike Cope, Chevrolet, 189, 48.6, 20.
25. (41) Morgan Shepherd, 188, 50.3, 19.
26. (35) Robert Richardson Jr., 187, 50.2, 18.
27. (29) T.J. Bell, Chevrolet, 187, 51.7, 0.
28. (26) Eric McClure, Chevrolet, 184, 53.6, 16.
29. (34) Jennifer Jo Cobb, Dodge, 183, 40.5, 15.
30. (39) Kevin Lepage, Chevrolet, 183, 48.5, 14.
31. (31) John Jackson, Toyota, 178, 37, 13.
32. (12) Reed Sorenson, accident, 169, 82.1, 12.
33. (6) Trevor Bayne, accident, 169, 93.7, 12.
34. (40) Dennis Setzer, handling, 104, 43.4, 10.
35. (43) Clay Greenfield, suspension, 94, 38.1, 0.
36. (22) Timmy Hill, transmission, 76, 34.7, 8.
37. (28) Jeff Green, vibration, 17, 41.4, 7.
38. (30) Scott Riggs, parked, 15, 41.6, 6.
39. (33) Tim Andrews, suspension, 10, 36.9, 5.
40. (32) Chase Miller, overheating, 6, 36, 4.
41. (38) Carl Long, Ford, transmission, 6, 32.4, 3.
42. (37) Mark Green, handling, 4, 30.3, 2.
43. (25) Johnny Chapman, overheating, 1, 29.9, 1.

Race Statistics
Average Speed of Race Winner: 132.811 mph.
Time of Race: 2 hours, 15 minutes, 40 seconds.
Margin of Victory: 0.697 seconds.
Caution Flags: 6 for 27 laps.
Lead Changes: 21 among 8 drivers.
Leaders Summary (Driver, Times Led, Laps Led): C.Edwards, 5 times for 101 laps; K.Busch, 6 times for 31 laps; K.Harvick, 5 times for 31 laps; K.Kahne, 1 time for 24 laps; R.Stenhouse Jr., 2 times for 3 laps; J.Allgaier, 1 time for 2 laps; T.Bayne, 1 time for 2 laps; J.Clements, 1 time for 1 lap.
Nationwide Series standings
1. Ricky Stenhouse Jr. .... 909
2. Elliott Sadler .... 896
3. Reed Sorenson .... 869
4. Aric Almirola .... 845
5. Justin Allgaier .... 840
6. Jason Leffler .... 811
mlb
American League
East Division
              W    L    Pct   GB
New York      84   53   .613  —
Boston        84   54   .609  1/2
Tampa Bay     75   63   .543  9 1/2
Toronto       69   70   .496  16
Baltimore     55   82   .401  29

Central Division
              W    L    Pct   GB
Detroit       77   62   .554  —
Cleveland     69   67   .507  6 1/2
Chicago       68   68   .500  7 1/2
Minnesota     58   79   .423  18
Kansas City   58   82   .414  19 1/2

West Division
              W    L    Pct   GB
Texas         79   61   .564  —
Los Angeles   74   64   .536  4
Oakland       63   76   .453  15 1/2
Seattle       58   80   .420  20

Saturday’s Games
N.Y. Yankees 6, Toronto 4
Oakland 3, Seattle 0
Detroit 9, Chicago White Sox 8
Boston 12, Texas 7
Tampa Bay 6, Baltimore 3
Kansas City 5, Cleveland 1
Minnesota at L.A. Angels, (n)

Today’s Games
Toronto (Cecil 4-7) at N.Y. Yankees (Sabathia 18-7), 12:05 p.m.
Texas (M.Harrison 10-9) at Boston (Lackey 12-10), 12:35 p.m.
Baltimore (Guthrie 6-16) at Tampa Bay (Hellickson 11-10), 12:40 p.m.
Cleveland (J.Gomez 1-2) at Kansas City (Francis 5-14), 1:10 p.m.
Minnesota (Slowey 0-3) at L.A. Angels (Pineiro 5-6), 2:35 p.m.
Seattle (Beavan 3-4) at Oakland (Cahill 9-13), 3:05 p.m.
Chicago White Sox (Buehrle 11-6) at Detroit (Scherzer 13-8), 7:05 p.m.

Monday’s Games
Baltimore at N.Y. Yankees, 12:05 p.m.
Detroit at Cleveland, 12:05 p.m.
Boston at Toronto, 12:07 p.m.
Texas at Tampa Bay, 12:10 p.m.
Chicago White Sox at Minnesota, 1:10 p.m., 1st game
Kansas City at Oakland, 3:05 p.m.
Chicago White Sox at Minnesota, 7:10 p.m., 2nd game
Seattle at L.A. Angels, 8:05 p.m.
———
National League
East Division
               W    L    Pct   GB
Philadelphia   88   46   .657  —
Atlanta        81   57   .587  9
New York       67   70   .489  22 1/2
Washington     64   73   .467  25 1/2
Florida        60   77   .438  29 1/2

Central Division
             W    L    Pct   GB
Milwaukee    83   57   .593  —
St. Louis    74   65   .532  8 1/2
Cincinnati   68   71   .489  14 1/2
Pittsburgh   64   75   .460  18 1/2
Chicago      59   80   .424  23 1/2
Houston      47   92   .338  35 1/2

West Division
                W    L    Pct   GB
Arizona         78   60   .565  —
San Francisco   73   65   .529  5
Los Angeles     68   70   .493  10
Colorado        65   73   .471  13
San Diego       60   78   .435  18

Saturday’s Games
Pittsburgh 7, Chicago Cubs 5
St. Louis 6, Cincinnati 4
Milwaukee 8, Houston 2
Washington 8, N.Y. Mets 7
L.A. Dodgers 2, Atlanta 1, 10 innings
Philadelphia at Florida, (n)
Colorado at San Diego, (n)
Arizona at San Francisco, (n)

Today’s Games
Philadelphia (Halladay 16-5) at Florida (Ani.Sanchez 7-7), 12:10 p.m.
L.A. Dodgers (Kershaw 17-5) at Atlanta (Delgado 0-1), 12:35 p.m.
N.Y. Mets (Pelfrey 7-11) at Washington (L.Hernandez 8-12), 12:35 p.m.
Milwaukee (Marcum 11-5) at Houston (W.Rodriguez 10-9), 1:05 p.m.
Cincinnati (Arroyo 8-11) at St. Louis (E.Jackson 4-2), 1:15 p.m.
Pittsburgh (Morton 9-8) at Chicago Cubs (R.Wells 6-4), 1:20 p.m.
Arizona (D.Hudson 14-9) at San Francisco (Vogelsong 10-5), 3:05 p.m.
Colorado (A.Cook 3-8) at San Diego (Latos 6-13), 3:05 p.m.

Monday’s Games
L.A. Dodgers at Washington, 12:05 p.m.
Houston at Pittsburgh, 12:35 p.m.
Cincinnati at Chicago Cubs, 1:20 p.m.
Arizona at Colorado, 2:10 p.m.
San Francisco at San Diego, 3:05 p.m.
Milwaukee at St. Louis, 3:15 p.m.
Atlanta at Philadelphia, 6:05 p.m.
N.Y. Mets at Florida, 6:10 p.m.
minor league baseball
Southern League
North Division
                         W    L    Pct.  GB
z-Chattanooga (LAD)      41   27   .603  —
x-Tennessee (Cubs)       39   29   .574  2
Carolina (Reds)          30   37   .448  10 1/2
Jackson (Mariners)       29   39   .426  12
Huntsville (Brewers)     27   40   .403  13 1/2

South Division
                         W    L    Pct.  GB
yz-Mobile (D’backs)      46   22   .676  —
Mississippi (Braves)     34   34   .500  12
Jacksonville (Marlins)   32   36   .471  14
Montgomery (Rays)        31   37   .456  15
x-Birm. (White Sox)      30   38   .441  16

x-clinched first half
y-clinched division (refers to second half)
z-clinched playoff spot

Saturday’s Games
Carolina 10, Mississippi 9
Jackson 6, Jacksonville 0
Chattanooga 3, Montgomery 2
Birmingham 5, Tennessee 0
Mobile 6, Huntsville 5

Today’s Games
Tennessee at Birmingham, 5 p.m.
Mississippi at Carolina, 5:15 p.m.
Montgomery at Chattanooga, 5:15 p.m.
Jackson at Jacksonville, 6:05 p.m.
Mobile at Huntsville, 6:43 p.m.

LOTTERY
Sunday’s drawing
La. Pick 3: 6-4-6
La. Pick 4: 3-5-3-5
Monday’s drawing
La. Pick 3: 0-0-8
La. Pick 4: 2-4-1-2
Tuesday’s drawing
La. Pick 3: 6-1-0
La. Pick 4: 7-9-8-1
Wednesday’s drawing
La. Pick 3: 2-6-3
La. Pick 4: 8-4-0-2
Easy 5: 3-7-9-27-33
La. Lotto: 9-10-11-18-36-39
Powerball: 13-19-35-47-57
Powerball: 29; Power play: 5
Thursday’s drawing
La. Pick 3: 8-9-3
La. Pick 4: 1-8-9-0
Friday’s drawing
La. Pick 3: 2-8-6
La. Pick 4: 0-3-6-4
Saturday’s drawing
La. Pick 3: 2-1-2
La. Pick 4: 8-4-0-7
Easy 5: 3-10-17-27-35
La. Lotto: 2-8-14-17-29-35
Powerball: 15-25-52-53-54
Powerball: 2; Power play: 5
Sunday, September 4, 2011
The Vicksburg Post
college football
Pac-12 commissioner open to more expansion
ARLINGTON, Texas (AP) — Pac-12 commissioner Larry Scott is willing to listen if Oklahoma or anyone else wants to join his conference. Speaking before No. 3 Oregon played No. 4 LSU at Cowboys Stadium on Saturday night,
Friday that multiple conferences have shown interest in the Sooners recently and he expects to decide whether to leave the Big 12 or not within the next three weeks. Then on Saturday, Oklahoma State billionaire booster Boone Pickens said he doesn’t think the Big 12 will last much longer and believes the Cowboys eventually will end up in the Pac-12. Pickens.”
Scott refused to comment “on any particular conversations” or specific schools. Texas A&M announced this week that it is leaving the 10-team Big 12 and applying for membership in another conference, likely the Southeastern Conference.

Auburn survives scare in opener
By The Associated Press
the did the rest, ducking his head and powering through the Utah State defenders. Utah State twice led by double digits against a team clearly feeling the effects of the departures of numerous starters from last year’s national champions.

(The Associated Press) Auburn running back Michael Dyer (5) celebrates with fans after Saturday’s 42-38 win over Utah State.

JSU cruises past Concordia; Valley crushed by Hornets
By The Associated Press
Casey Therriault ran for a pair of touchdowns and threw for another, leading Jackson State in a 42-2 romp over Concordia-Selma in the season opener for both teams on Saturday. Therriault was 15-for-32 passing for 218 yards and ran for 5 yards on 11 carries for the Tigers, who scored early and often to subdue the Hornets.
A 60-yard touchdown pass from Therriault to Rico Richardson opened the scoring eight seconds into the game. Therriault added 1-yard scoring runs to begin and end the second quarter for a 28-2 halftime lead. After a scoreless third period, Andre Wright picked up a fumble and took it 64 yards for a touchdown and a 35-2 Tigers advantage with 13 minutes left. Concordia turned the ball over seven times in the contest.

Alabama St. 41, Miss. Valley St. 9
Greg Jenkins threw three touchdown passes to Nick Andrews and ran for another to lead Alabama State (1-0) to an easy win over Mississippi Valley State (0-1). Jenkins was 19-of-27 passing for 188 yards. He also ran the ball 16 times for 56 yards, including a 12-yard TD scamper in the third quarter. Andrews caught 10 passes for 104 yards, including touchdowns of 36, 5 and 8 yards. Quarterback Garrick Jones rushed for 113 yards for Mississippi Valley State.

MC 33, Millsaps 27
Tommy Reyer scored on an 8-yard run in overtime to give Mississippi College the win over Millsaps. Reyer also threw two touchdown passes, and Steven Knight rushed for 103 yards on only 14 carries for the Choctaws, who wasted an early 10-point lead and then stormed back from a 10-point deficit in the second half. MC’s Chris Campbell tied the game at 27 with a field goal with 4:14 to play in the fourth quarter. In overtime, two penalties backed Millsaps up to the 34-yard line and it was unable to score. That opened the door for the Choctaws, who took advantage of a pass interference penalty to set up Reyer’s winning scramble. Garrett Pinciotti completed 18 of 35 passes for 263 yards and three touchdowns for Millsaps.

Alabama 48, Kent St. 7
AJ McCarron stepped up in Alabama’s quarterback race Saturday, throwing for a touchdown and 226 yards as the No. 2 Crimson Tide crushed Kent State. McCarron had a 24-yard scoring toss to Marquis Maze and finished 14-of-23 passing. Running back Trent Richardson scored three touchdowns.

South Carolina 56, East Carolina 37
Fifth-year senior Stephen Garcia came off the bench to run for two touchdowns and throw for another as he rallied the 12th-ranked Gamecocks past East Carolina. Marcus Lattimore added 112 yards and three TDs for South Carolina.

Boise St. 35, Georgia 21
Kellen Moore threw for three touchdowns — giving him 102 in his career — and No. 5 Boise State romped past 19th-ranked Georgia, boosting its hopes of making another run to a major bowl. Moore, the nation’s top-rated passer last season and expected to be a leading Heisman contender, carved up Georgia’s defense after a sluggish start. He completed 28 of 34 for 261 yards, with his first scoring pass — a 17-yarder to freshman Matt Miller — giving him 100 for his career. He had two more before halftime to lead the Broncos to yet another marquee opening victory. In the last three seasons, Boise State has started the season with victories against Oregon, Virginia Tech and now Georgia.

Florida St. 34, ULM 0
EJ Manuel threw for 252 yards and two touchdowns, and backup quarterback Clint Trickett threw a touchdown pass on his first college play as No. 6 Florida State beat Louisiana-Monroe. Florida State limited ULM to 191 yards and 12 first downs.

USF 23, Notre Dame 20
Kayvon Webster returned a fumble 96 yards for an early touchdown as South Florida came to Notre Dame for the first time and stunned the 16th-ranked Irish in a game disrupted for hours because of storms. In the first half, Notre Dame had two fumbles, a costly holding penalty that nullified a Cierre Wood TD run and then an interception of Dayne Crist by USF’s Devekeyan Lattimore in the end zone. At halftime, fans were asked to evacuate Notre Dame Stadium because of thunderstorms. The game was delayed for 2 hours and 10 minutes, then again for 43 minutes when bad weather returned in the fourth quarter. The game lasted 5 hours, 59 minutes.

Houston 38, UCLA 34
Case Keenum threw for 310 yards and two touchdowns, Tyron Carrier caught 10 passes for 138 yards and Houston beat UCLA.
Killing snakes always has served a purpose I was challenged by a reader of a recent column in which I claimed to have been in on the deaths of thousands of snakes. Well, I’m a country boy, OK? And as the firstborn son, I was handed the parental responsibility of killing snakes around the house and yard, so Big Robert wouldn’t have to worry with that any more. Back then, the yard more or less covered not only what I had to mow around the house, but the milking barn and pasture on the east, the haybarn and store and shop and pasture to the west, as well as the Mammy Grudge ditchbank which borders all of the above on the south, and which we daily skinny-dipped in. Our hunting or frog-gigging escapades took in the next mile or so of the Mammy Grudge, plus the swamp woods to the southeast, and we had confirmation of how snakey those were by Official U.S. Sources. A couple of decades ago we were spending a summer afternoon at the Swimming Hole, when a truckload of young men drove by, stopped, backed up, debated, then discharged the driver, who approached us. “Mr. Neill?” he greeted me, “This is the coolest-looking place we’ve seen in weeks. Would you mind if we got in and cooled off a while?” He talked nicely enough, but this was obviously one of the sweatiest, dirtiest, stinkingest young men that I had seen in a while, and his companions standing in the road looked no less disreputable. “Not like you are right now,” I replied mildly. “But if y’all strip off over by that hydrant under the cypress tree while Miss Betsy goes for a pitcher of mint tea, and wash off with the hose first, sure, you can jump in and cool off.” They raced for the cypress. Later, the leader told me that they were a survey crew for one of the Guv’mint departments charged with measuring the levels of the tributaries of the Mississippi River, out to about 25 miles of the channel itself. “Mr. 
Neill, we’ve been all the way down the west side of the river this summer, and back up the east side from the Gulf to here, and you’ve got more big mean water moccasins down in those swamp woods than anywhere else we’ve ever been!” I asked them back. But being overrun with poisonous snakes isn’t why I’ve excelled at killing lots of them.
Robert Hitt Neill
No, I blame that on the actor Marlon Brando, who starred in a film (I think it was “On the Waterfront”) in which he wore a snakeskin jacket. Troy was my across-the-pasture neighbor and big brother, and we saw that show together, which “flang a cravin’” upon us for snakeskin jackets ourownselves.
We approached Miz Mac right away: “If we get the snakeskins, will you sew us snakeskin jackets for next fall, please, Ma’am?” She was quite negative: “Not in this life!” and went on washing dishes. So we went to Miz Janice.
My mother was churning butter at the time, reading a book, and listening to the kitchen radio. Troy and I went to the fridge for mint tea, and as we headed back outside with cups, I stuck my head back in to ask, “Hey, Momma: if Troy and I get the skins, would you sew us a couple of snakeskin jackets, please, Ma’am?” She never looked up from her book: “Sure.”
Troy and I shot up probably a thousand .22 cartridges apiece that summer. We hunted ruthlessly for the poisonous moccasins and copperheads (we didn’t have many rattlesnakes around Brownspur), but we didn’t cull any serpent large enough to donate its epidermis to the cause. Chicken snakes, rat snakes, blue runners, puff adders, water snakes, even large garter snakes were fair game.
We became experts at snake-skinning. If one cuts the head off a snake and shucks the skin back six inches from the neck, then sticks the still-writhing neck up to a sapling or fence post, the snake will obligingly hold itself to the object whilst one shucks its skin off. We then salted and dried them on the wall of Troy’s garage.
When school started that fall, we proudly marched up to my mother with two bales of dried snakeskins, cleaned and all ready for sewing into jackets. It’s the only time that I recall my mother lying to me. She wouldn’t do it!
• Robert Hitt Neill is an outdoors writer. He lives in Leland, Miss.
sports arena
Submit items by e-mail at sportsatv.
Hinds CC alumni golf tournament
The Warren-Claiborne chapter of the Hinds Community College Alumni Association will host a golf tournament on Sept. 21 at Clear Creek Golf Course in Bovina. The tournament begins at 1 p.m., and the registration fee is $75 per player or $300 for a four-man team. All proceeds go toward student scholarships at Hinds.
For information or to register, call Hinds alumni coordinator Abby Brann at 601-857-3350, e-mail her at abby.brann@hindscc.edu, or call Clear Creek golf pro Kent Smith at 601-638-9395.
golf
Schwartzel surges into lead at Deutsche Bank
NORTON, Mass. (AP) —., shot 63 in the afternoon. Schwartzel, frustrated by a poor approach shot on the 18th hole on Friday, ran off five birdies in a six-hole stretch Saturday and was tied for the lead at the halfway point.
The top 70 on the FedEx Cup list after this week advance to the third playoff event outside Chicago in two weeks, with the top 30 from there going to Atlanta for a shot at the $10 million prize.
The FedEx Cup playoffs ended for Ian Poulter, Anthony Kim and Stewart Cink, among others. They missed the cut and already were outside the top 70 on the list of players who are trying to advance to Chicago.
mlb
Red Sox snap out of funk, slam Rangers
Dodgers take down Atlanta in 10 innings
By The Associated Press
ATLrunner 1/2-game lead in the NL wild-card standings, has lost two straight and dropped to 27-19 since the All-Star break. Uggla has hit safely in 50 of his last 57 games.
Atlanta’s Mike Minor allowed one run and six hits with two walks and seven strikeouts in six innings. The left-hander had won three straight decisions. Nathan Eovaldi gave up one run, three hits, five walks and struck out five. The right-hander
bats over his last 13 games, but Rivera, who entered with a .412 average in 51 career at-bats against Atlanta, struck out in his first three times up before driving in the deciding run.

(The Associated Press) Los Angeles Dodgers center fielder Matt Kemp catches a fly ball in the fourth inning of Saturday’s game against the Atlanta Braves.
Carolina sinks M-Braves with huge comeback
By The Associated Press
In one nightmarish inning, the Mississippi Braves let a seemingly easy victory slip away. Carolina scored nine runs in the bottom of the seventh inning Saturday night, quickly erasing a six-run deficit before going on to beat the M-Braves 10-9.
Bill Rhinehart and Cody Puckett started Carolina’s big comeback with back-to-back homers to cut it to 7-3. The Mudcats then got five consecutive hits, including a two-run single by Brodie Greene to take an 8-7 lead. Rhinehart added an RBI groundout and Jake Kahauleio hit a sacrifice fly to make it
10-7. The M-Braves scored twice in the ninth to get within a run, but left the tying run at third. Ernesto Mejia homered twice for the M-Braves and went 3-for-5 with four RBIs. Cory Harrilchak was 3-for-4 with two doubles and two runs scored. Rhinehart drove in three runs for Carolina. The loss snapped a fourgame winning streak for the M-Braves as they wind down the season. Mississippi and Carolina will play today at 5:15 p.m. in Kinston, N.C., then conclude the season on Monday with a game beginning at 11 a.m.
The Red Sox had managed all of two runs during a brief two-game skid. The offensive drought ended with a flurry, though. 3 1/3 innings, allowing four runs and seven hits. He was pulled shortly after Saltalamacchia’s homer to left tied it.
Elsewhere in the American League on a busy Saturday, it was the New York Yankees 6, Toronto 4; Oakland 3, Seattle 0; Detroit 9, the Chicago White Sox 8; Tampa Bay 6, Baltimore 3; and Kansas City 5, Cleveland 1.

(The Associated Press) Boston’s Carl Crawford, right, and Texas Rangers catcher Yorvit Torrealba watch Crawford’s grand slam during the fourth inning of Saturday’s game at Fenway Park. The Red Sox won, 12-7.
Brewers 8, Astros 2
George Kottaras became the first major league player to hit for the cycle this season and the Milwaukee Brewers beat the Houston Astros. Kottaras went 4-for-5 with two RBIs and scored twice. Prince Fielder drove in two runs with a grounder and a single.
Chris Narveson (10-6) allowed two runs and four hits over five innings in his first start for the Brewers since Aug. 22. He walked four and struck out four, working out of jams in the second and third. Narveson is trying to overcome a pair of injuries to his pitching hand. He went on the 15-day disabled list Aug. 9 after injuring himself with scissors while repairing his glove. He left his last start on Aug. 22 with an injury to the middle finger on his left hand.
Carlos Lee extended his hitting streak to 14 games with a two-run homer for Houston in the fifth. Norris gave up six runs, five earned, and nine hits in 5 1/3 innings. He struck out five.
In other National League games, it was Pittsburgh 7, the Chicago Cubs 5; St. Louis 6, Cincinnati 4; Washington 8, the New York Mets 7; Florida 8, Philadelphia 4; Colorado 5, San Diego 4; and Arizona 7, San Francisco 2.
NASCAR
Keselowski rolling toward Chase
HAMPTON, Ga. (AP) — tonight’s next-to-last heal.
Keselowski clearly isn’t buying that theory. He returned to his Nationwide car for the race Saturday night after putting in 66 laps of practice with his Cup team in the No. 2 Dodge.
“I wish I could pinpoint what it is,” Keselowski said. “I have a hard time believing that having a broken foot makes you a better race car driver. I just think it’s the team coming together and clicking as one. I’m proud to be part of that.”
Keselowski has climbed 10 spots in the standings, putting him just outside the top 10 and a guaranteed shot at the championship. But, with three wins on the year, Keselowski is all but assured of claiming one of two wild cards, which go to the drivers from 11th to 20th with the most victories.
“It always works in cycles,” he said. “You try your best to capitalize when you’re on top of the cycle. You try your best to minimize the amount of time when you’re on bottom of the cycle. When you have success, you can try and learn and try and repeat it and try to minimize the bad part of it. We’re on top of the cycle right now. It can very easily turn around and put us at the bottom of the cycle when it counts in the Chase.”
Keselowski will start 14th in tonight’s race. Kasey Kahne, another driver looking for a victory that would bolster his chances at making the Chase, took the pole on Saturday with a speed of 186.196 mph. Kahne is 15th in the points race and acknowledged he must win tonight or next week in Richmond to have a shot to qualify for the Chase.
Points leader Kyle Busch qualified third, one spot ahead of Brian Vickers. Jeff Gordon, Matt Kenseth, Carl Edwards, Martin Truex Jr., Ryan Newman and Kurt Busch round out the top 10.

On TV
6:30 p.m., ESPN — Sprint Cup, AdvoCare 500

(The Associated Press) Brad Keselowski celebrates after winning the Sprint Cup Series race at Bristol on Aug. 27. Keselowski has won two of the last four races heading into tonight’s AdvoCare 500 at Atlanta Motor Speedway.
‘Eat, drink and party’
TONIGHT ON TV
• MOVIE: “Blade Runner” — A specialized detective (Harrison Ford) in 2019 Los Angeles receives an order to terminate obsolete android slaves (Rutger Hauer and Sean Young). 5:30 on SYFY
• SPORTS: NASCAR — The Sprint Cup Series makes its lone visit to the Peach State for the AdvoCare 500 at Atlanta. 6:30 on ESPN
• PRIMETIME: “Mike & Molly” — Mike’s mom and Molly argue about who can take better care of him when he gets ill. 8 on CBS

THIS WEEK’S LINEUP
• EXPANDED LISTINGS: TV TIMES — Network, cable and satellite programs appear in Sunday’s TV Times magazine and online at. com

MILESTONES
• BIRTHDAYS: Mitzi Gaynor, actress, 80; Jennifer Salt, actress, 67; Martin Chambers, rock musician, 60; Damon Wayans, actor-comedian, 51; Richard Speight Jr., actor, 42; Ione Skye, actress, 41; Wes Bentley, actor, 33; Beyonce, singer-actress, 30; Carter Jenkins, actor, 20; Trevor Gagnon, actor, 16.
• DEATH
people
Jackson doctor files emergency appeal
In an 11th-hour appeal, lawyers for Michael Jackson’s doctor sought to overturn a judge’s refusal to sequester jurors, arguing they would be “poisoned” by publicity unless they were kept in isolation during the involuntary manslaughter trial. Attorneys late Friday also asked to halt the start of jury selection on Thursday until the issue of sequestration is decided by California’s 2nd District Court of Appeal.
Dr. Conrad Murray is accused of giving Jackson an overdose of the anesthetic propofol in his home just before the pop star’s 2009 death. Jackson was said to be suffering from insomnia and was desperate for sleep. Murray has pleaded not guilty to involuntary manslaughter and could face up to four years in prison if convicted.
Saggy pants cost Green Day singer a seat Green Day front man Billie Joe Armstrong says his sagging pants cost him a seat on a Southwest Airlines flight. The singer-guitarist for the San Francisco Bay area band sent a message to his Twitter followers expressing his indignation at being tossed from an Oakland-to-Burbank flight for wearing his trousers too low. “Just got kicked off a southwest flight because my pants sagged too low!... No joke!” he wrote. Southwest spokesman Brad Hawkins released a statement saying Armstrong was allowed onto the next flight to Burbank and had told a customer relations agent who contacted him he had no further complaints.
And one more
Poodle helps save man from fire Authorities say a toy poodle saved a life in Utah by leading firefighters through his owner’s smoke-filled home to a man asleep in the basement..
TOMORROW’S HOROSCOPE
BY BERNICE BEDE OSOL • NEWSPAPER ENTERPRISE ASSOCIATION
Virgo (Aug. 23-Sept. 22) — In many situations, moneymaking tips given by “insiders” are of little or no value. However, when information comes from one who has made it big, take a second look.
Libra (Sept. 23-Oct. 23) — Rather than engage in activities that involve a lot of mental gymnastics as you generally would, you’re likely to find a lot of enjoyment in pursuits that are more on the physical side.
Scorpio (Oct. 24-Nov. 22) — An investment you thought of as being a loss might suddenly turn a profit.
Sagittarius (Nov. 23-Dec. 21) — Something important that has been totally out of your hands is making its way back to you.
Capricorn (Dec. 22-Jan. 19) — You are likely to get a chance to participate in a development that is being run by another and that is doing quite well.
Aquarius (Jan. 20-Feb. 19) — You will be entering into a new cycle in which big opportunities abound, stemming from partnership arrangements.
Pisces (Feb. 20-March 20) — Should you come up with a good idea regarding a new way to advance your ambitions and aspirations, move on it promptly, even if it is a shot in the dark.
Aries (March 21-April 19) — If you want to end up having the upper hand when involved in a competitive situation, whether it involves a sport, game, romance or business, adopt a positive mindset.
Taurus (April 20-May 20) — Important changes can be made that could have some far-reaching effects.
Gemini (May 21-June 20) — When an on-the-spot call needs to be made, do so without hesitation.
Cancer (June 21-July 22) — An abundance of opportunities are all around you, both in your personal life and in work-related situations.
Leo (July 23-Aug. 22) — Ventures or enterprises you personally direct or have a hand in developing should live up to your expectations.
Reality show ‘Russian Dolls’ stirs controversy
NEW YORK (AP) — A mother is lecturing her 23-year-old camera, during a restaurant date. The scene is captured in a new TV reality show called “Russian Dolls,” which premiered on the Lifetime cable network in August and is shown Thursdays at 10:30 p.m.
(The Associated Press) “Russian Dolls” cast member Diana Kosov
On TV
“Russian Dolls” is on Lifetime Sundays at 10:30 p.m.

well-educated, hardworking community.” Kosov, a hairdresser, had to mend relations with her Mexican-born boss over remarks she’d made on the show about her daughter, Diana Kosov, dating the Hispanic man. “I told her, ‘I’m not racist,’” she says. “I love any kind of people.” As for the scene with the knife, “I am not killer!” she said. Real life mightknit.”
Christian family man isn’t right choice for atheist
DEAR ABBY — Abigail Van Buren
Sunday, September 4, 2011
new on the shelves
The Warren County-Vicksburg Public Library reports on new books regularly.
• "Little Princes" by Conor Grennan is the story of one man's promise to bring home the lost children of Nepal. In search of adventure, 29-year-old Conor traded his day job for a yearlong trip around the globe, a journey that began with a three-month stint volunteering at the Little Princes Children's Home, an orphanage in war-torn Nepal. Conor was initially reluctant to volunteer, unsure whether he had the proper skill, or enough passion, to get involved in a developing country in the middle of a civil war. But he was soon overcome by the herd of rambunctious, resilient children who would challenge and reward him in a way that he had never imagined. When Conor learned the unthinkable truth about the situation, he was stunned: The children were not orphans at all. Child traffickers were promising families in remote villages to protect their children from the civil war — for a huge fee — by taking them to safety. They would then abandon the children far from home, in the chaos of Nepal's capital, Kathmandu. For Conor, what began as a footloose adventure becomes a commitment to reunite the children he had grown to love with their families, but this would be no small task.
• "The Strawberry Letter" by Shirley Strawberry offers a message of strength. Co-host of the nationally syndicated Steve Harvey Morning Show, Shirley delivers more of the no-nonsense woman-to-woman straight talk her listeners have come to love. Shirley tells it like it is — from the heart. Whether the topic is cheating boyfriends, crazy mothers-in-law, job troubles, or money problems, Shirley's girlfriend-next-door honesty has made the Strawberry Letters segment of the show a huge hit. Now, in this uplifting motivational guide, she brings her vivacious, inspirational, and down-to-earth message to women everywhere.
• "The Squeaky Wheel" by Guy Winch explains how to complain the right way to get results, improve your relationships, and enhance self-esteem. In the days of the horse and carriage, we complained much less, and when we did, our complaints were more likely to get results. Today we complain about everything — yet most of us grumble, vent, and kvetch neither expecting nor getting meaningful resolutions. Wasting prodigious amounts of time and energy on unproductive complaints can take an emotional and psychological toll on our moods and well-being. We desperately need to relearn the art of complaining effectively. Psychotherapist Winch offers practical and psychologically grounded advice on how to determine what to complain about and what to let slide. He demonstrates how to convey our complaints in ways that encourage cooperation and increase the likelihood of getting resolution to our dissatisfactions. The principles he spells out apply whether we are dealing with a rude store clerk, a bureaucrat, a co-worker, our teenager, or a spouse or partner who's driving us crazy. Complaining constructively can be personally empowering and it can significantly strengthen our personal familial and work relationships. Applying our newfound complaining skills to customer-service representatives, corporate leaders, and elected officials increases the odds that our comment will be taken seriously.
• "Surviving a Shark Attack (On Land)" by Dr. Laura Schlessinger tells how to overcome betrayal and dealing with revenge. Betrayals are frightening, destructive, painful, humiliating, and so very hard to repair. Betrayals undermine people, relationships, and institutions. The entire fabric of humanity depends upon people depending upon each other for their word, honesty and loyalty. Dr. Laura shares for the first time her own personal experience with betrayal, humiliation and pain which have led her to a powerful desire for revenge. Millions of Dr. Laura's listeners struggle with the idea of taking revenge and how to accept when it cannot be achieved. For many who have been betrayed, justice may never be served, she reminds us. Empathetic yet never saccharine, direct yet never harsh, Dr. Laura encourages readers to explore their feelings and learn to get beyond them, supplying tools they can use to achieve fulfilling, contented lives, free of rancor and the need to settle scores. Powerful and thought-provoking, this book gives readers the emotional defense they need to overcome the worst life will throw at them, whether it's a cheating spouse, a lying sibling or a ruthless colleague.
• "The Chicken Chronicles" by Alice Walker is a record of her remarkable experiences. For the past several years, on a farm north of San Francisco, the celebrated writer Alice Walker has diligently cared for a flock of chickens. Over time, her blossoming relationship with "her girls" developed in unexpected ways, becoming a source of inspiration, strength and spiritual discovery. Her flock even helped Walker connect more profoundly with her own past as a girl in rural Georgia. By turns uplifting and heartbreaking, this book lets us see a new and deeply personal side of one of the greatest writers of our time. It is also a powerful touchstone for anyone seeking a deeper connection with the natural world.
• "The New Cool" by Neal Bascomb is the story of a visionary teacher, his FIRST Robotics team, and the ultimate battle of smarts. On a Monday afternoon, in high school gyms across the country, kids were battling for the only glory American culture seems to want to dispense to the young these days: sports glory. But at the engineering academy at Dos Pueblos High School in Goleta, Calif., in a gear-cluttered classroom, a different type of "cool" was brewing. A physics teacher with a dream — the first high school teacher ever to win a MacArthur genius award — had rounded up a band of high-I.Q. students who wanted to put their technical know-how to work. If you asked these brainiacs what the stakes were the first week of their project, they'd have told you it was all about winning a robotics competition: building the ultimate robot and prevailing in a machine-to-machine contest in front of 25,000 screaming fans at Atlanta's Georgia Dome. For their mentor, Amir Abo-Shaeer, much more hung in the balance. Amir had a vision of education that was not based on rote learning but on active creation. He wanted a more robust academy at Dos Pueblos and he knew he was poised to make that dream a reality. To get the necessary funding all he needed was one flashy win.
• "The Price of Everything" by Eduardo Porter solves the mystery of why we pay what we do. This book starts with a simple premise: there is a price behind each choice, whether we're deciding to have a baby, drive a car or buy a book. We often fail to appreciate just how critical prices are as motivating forces. But their power becomes clear when distorted prices steer our decisions the wrong way.

Denise Hogan is reference interlibrary loan librarian at the Warren County-Vicksburg Public Library. Write to her at 700 Veto St., Vicksburg, MS 39180.
The Vicksburg Post
The Associated Press
Farmers’ Almanac editor Sondra Duncan and publisher Peter Geiger with the 2012 edition of the almanac.
In Farmers’ Almanac, folksy meets the future
LEWISTON, Maine (AP) — The Farmers' Almanac has a hole punched in the corner, made for hanging it on a hook in the outhouse "library" in the olden days. These days, though, there are some higher-tech options, including social networks, cell phones and e-readers. Known for forecasts that use an old-fashioned formula, the almanac now has a mobile website for smart phones and nearly 6,000 followers on Twitter. More than 30,000 people "like" the publication's Facebook page. By year's end there'll be software applications for Kindle, Nook and iPad. Karen Shackles, of Dillon, Colo., follows the almanac on Twitter and Facebook, checks its website and receives its e-mail newsletter. She likes the folksy style of the almanac and appreciates its embrace of technology. She and her husband use the information for their snow-plowing business. "We try to reach out to see who is giving some long-range… …term 2 feet of snow and crippled cities for days. On the other hand, it called for a hurricane threat to the Southeast between Aug. 28 and 31 this year. Hurricane Irene made landfall in North Carolina on Aug. 27 — though critics might note that predicting a hurricane in August is like shooting fish in a barrel. Geiger said people shouldn't be surprised that the almanac's website gets 21 million page views each year, has 32,000 fans on Facebook and a large Twitter following. But the print version isn't going away. The almanac has a circulation of 4 million, including retail editions and promotional versions given away by businesses. The forecast, along with recipes, brainteasers, trivia and tips for resourceful living, comprise a formula that's largely unchanged from the first publication in 1818.
Business
Karen Gamble, managing editor | E-mail: newsreleases@vicksburgpost.com | Tel: 601.636.4545 ext 137
GASOLINE PRICES
Average regular unleaded self-service prices as of Friday:
Jackson: $3.42
Vicksburg: $3.49
Tallulah: $3.52
Sources: Jackson AAA; Vicksburg and Tallulah, Automotive.com
Mobile shopping: More buzz than buy.
By Ellen Gibson
AP retail writer
Warren-Yazoo gets statewide award Warren-Yazoo Mental Health Service has been honored by Friends of the Mississippi State Hospital with a Together We Make a Difference Award. The honor was presented during the Community Mental Health Centers’ annual awards ceremony Aug. 24. Also receiving a Together We Make a Difference Award was Timber Hills Mental Health Service, in the CMHC’s Region 4. WYMH is in Region 15. The CMHC is a network of 15 regional centers offering community-based mental health services to Mississippi’s 82 counties. Mississippi State Hospital is at Whitfield and is operated by the Mississippi Department of Mental Health. WYMH’s offices are in Vicksburg and Yazoo City. Timber Hills serves people in Tippah, Alcorn, Tishomingo, Prentiss and Desoto counties.
Retired Entergy exec honored for work Retired Entergy Nuclear South president John McGaha has received the American Nuclear Society’s Utility Leadership Award. McGaha, of Ridgeland, has more than 32 years of experience with Entergy’s nuclear program, and served as president of the five nuclear units in its southern service area — two units at Arkansas Nuclear One; and single units at Grand Gulf Nuclear Station in Port Gibson, south of Vicksburg, River Bend Station in St. Francisville, La., and Waterford 3, in south Louisiana. He also was a member of the American Nuclear Society board, chairman of the ANS-Utility Integration Oversight Committee, and worked on advisory committees for the Nuclear Energy Institute and the Institute of Nuclear Power Operations. He is currently on the NuScale Power advisory board for a new small modular reactor design, and works part time as an independent consultant to the nuclear industry. He is a Tulane University graduate in electrical engineering and served in the U.S. Navy nuclear submarine program for five years. He retired from the U.S. Naval Reserve in 1994 with the rank of captain.
Marilyn Butler, foreground, and Tammie Fortenberry paint during one of Tammy Tillotson's That's Sooo Cool classes in Vicksburg. file•The Vicksburg Post
Creativity vital to state's purse, official says
By Terri Cowart Frazier
tfrazier@vicksburgpost.com
The arts and creative fields are key contributors to Mississippi's economy, said Malcolm White, executive director of the Mississippi Arts Commission. "The conception of most people is that supporting the arts only focuses on painting, music and dance, but in reality the creative fields include designers, architects and professors — to name a few," White told Vicksburg Lions Club members Wednesday. For example, White said, the creation of the Mississippi Blues Trail has been an "economic plus." The trail is comprised of blues history markers, with at least four in Vicksburg, set up in towns that might otherwise go unnoticed. Other areas profiting from creativity, White said, are Ocean Springs, on the Coast; Oxford, hometown of Ole Miss; and the historic Fondren District in Jackson. Also, "The Help," a movie based on the best-seller book by Kathryn Stockett that tells of Jackson in the 1960s, is filling box office coffers — and it was filmed in Mississippi. "Creative people help develop a creative community," White said. Arts businesses are popping up in Vicksburg. Tammy Tillotson is, by day, a teacher at South Park Elementary — and moonlights as an art instructor. She has started That's Sooo Cool, which specializes in youth
and adult classes. In a room beneath the Cricket Box on Halls Ferry Road, Tillotson guides adults through a painting and, in about three hours, they have their own pieces of art to take home. “Even though students are painting the same design,” she said, “everyone has their own technique and style.” Lauren Bolar, one of Tillotson’s adult students, said, “I have no talent, and I thought this would be a good way to start. It’s a great way to relax after work, too.” Bridget Tisdale’s Easely Amused, with locations in
Ridgeland and Flowood, runs off the same concept. “We have a guest artist every month,” said Jessica Wood, an employee of the business. Art has a place in education, too, White said. “Schools generally focus on left-brain (or logical) thinking,” he said, “when, in essence, we need to be teaching from both sides of the brain.” Lisa Grant, a Vicksburg High School teacher with 30 years under her belt and a side business as an art instructor, said, “Art absolutely does miracles. It teaches hands-on learning, problem-solving skills and promotes self-esteem.” Even if a person is not a natural at art, he or she can learn, she said. “When the kids first started coming to class they were a bit hesitant, but now they are running to class See Art, Page B10.
50 years of style Barber Billy Downey cuts Shane Quimby’s hair Thursday, the 50th anniversary of Downey’s Barber & Style Shop on Clay Street. Downey, 71, first worked in road construction, he said, but quickly realized it was not his style. He graduated from Hinds Community College’s barber school in 1959 and began working for Butler’s Barber Shop, downtown. In 1961, Downey opened his own shop at 2837 Clay St., and has been there since. Todd Downey, his son, is taking barber classes and the plan is for him to run the shop once his father retires. “I told him to go over there and get his diploma and come back over here and I’d teach him to cut hair,” Downey said.
KATIE CARTER•The Vicksburg Post

…users accessing online retail stores through their phones. Retailers are partly to blame for shoppers' apathy. Less than a third of retailers polled by the National Retail… See Mobile, Page B10.
Redbox's golden opportunity: Higher Netflix prices
SAN FRANCISCO (AP) — Netflix is giving Redbox a golden opportunity to gain some ground. Beginning Thursday, Netflix, the largest U.S. video subscription service, … …speed Internet connections, but will look for other places to rent DVDs at a low price. Most people won't have to go far before coming across a Redbox kiosk; two-thirds of
The Associated Press
Gary Cohen, senior vice president of marketing and customer experience at Redbox, by a working kiosk at the company's offices in Oakbrook Terrace, Ill.

the U.S. population now lives within a five-minute drive of one of the company's red vending machines, which are largely stationed in Walm…
Mobile
Continued from Page B9.

…says. "You have to do things that are easy that don't require you to give up your money first." A few retailers are far ahead in mobile shopping. Although she hasn't tested a lot of sites on her iPhone because her AT&T cell phone… …pitch TV network, showing shoppers the item currently being sold on-air. Additionally, users' payment info is stored, so they need only
land transfers No commercial land transfers were recorded in the Chancery Clerk’s Office for the week ending Sept. 2, 2011.
sales tax revenue
The City of Vicksburg receives 18.5 percent of all sales taxes collected by businesses in the city limits. Revenues to the city lag actual sales tax collections by two months, that is, receipts for April reflect sales taxes collected on sales in February. Here are the latest monthly receipts:
June 2011: $601,976
Fiscal year 2010-11 to date: $5,372,334
June 2010: $609,165
Fiscal year 2009-10 to date: $5,467,142

June 2011 — City: $529,071 | County: $194,114 | Schools: $752,729
June 2010 — City: $644,494 | County: $248,275 | Schools: $67,380
Fiscal year 2009-10 to date — City: $4,938,646 | County: $2,121,072 | Schools: $575,736
Fiscal year 2010-11 to date — City: $4,643,603 | County: $1,964,451 | Schools: $533,166
enter a four-digit passcode to complete the purchase. "I usually have my phone sitting right there, and they make it very easy," Pelaia says. At The Home Depot, a shopper can launch the store's app and get more information about a lawn mower or other item without having to ask a salesperson.
because they enjoy it so much," she said. "Kids are totally amazed they can do something like that." H.C. Porter, an artist who operates a gallery in downtown Vicksburg, is working on a project called "Blues@Home." She also documented the devastation of Hurricane Katrina on the Mississippi Gulf Coast, and that project was called "Backyards and Beyond: Mississippians and Their Stories." "Blues@Home" will feature photos, paintings and audio interviews with blues legends from across the state. "My hope is that this project will raise awareness nationally and internationally to Mississippi's contribution to American music and bring much-needed tourism to the state," she said. White said it's up to Mississippians to promote Mississippi. "By learning to tell our story, we are building up the community," he said. "We have allowed other people to tell our story, but with a cultural community we can tell our own story."

PORTFOLIO
Goldman will lead Corps center
The Vicksburg District of the U.S. Army Corps of Engineers has named Ron Goldman director of the National Modeling, Mapping and Consequence Production Center. The center supports…
Ron Goldman
TOPIC
Sunday, September 4, 2011 • Section C
Local Events Calendar C2 | Weddings C3
Karen Gamble, managing editor | E-mail: newsreleases@vicksburgpost.com | Tel: 601.636.4545 ext 137
THIS & THAT from staff reports
Beautiful Bride set for Sept. 17 The annual Beautiful Bride Showcase at the Vicksburg Convention Center will be Sept. 17. The event, set up like a wedding and reception and featuring vendors and a bridal fashion show, will run from 5 to 8 p.m. The theme is “Fall...in love.” Admission is free for brides and $15 for others. Call the VCC at 601-630-2929.
Musical, magical show set at SCHC
SCHC seeks knitters for four-day class
The Southern Cultural Heritage Center will offer a four-day knitting workshop. From 10 a.m. to noon Thursdays this month, Brenda Harrower, instructor, will teach students how to create a small sock and an adult sock. Harrower has been certified by the National Yarn Council since 1985. Participants must be comfortable with knit and purl stitching. The cost is $80 for members and $90 for nonmembers, and will include all supplies. Reservations are required. For more information, call 631-2997 or e-mail info@southernculture.org.
Annual hawk watch at VNMP Sept. 17 The Jackson Audubon Society’s annual hawk migration watch will be Sept. 17 at the Vicksburg National Military Park. From 9 a.m. to noon, JAS expert birder Skip Anding will lead the watch at Fort Hill. The group will meet at the VNMP parking lot at 9 a.m., or at Fort Hill. Entrance to the park is $8 per car. For more information, call 601-956-7444 or visit www. jacksonaudubonsociety.org.
Downtown geared up for Hit the Bricks Hit the Bricks, an afterhours shopping event in downtown businesses, will be Thursday. From 5:30 to 8 p.m., businesses along Washington and adjoining streets will be open to diners and shoppers. For more information, call Vicksburg Main Street at 601-634-4527 or 601-831-8043, or e-mail kimh@vicksburg. org.
Youth orchestra seeks musicians for fall
Auditions for the Mississippi Youth Symphony Orchestra's fall season will be Saturday. Instrumentalists ages 6 to 22 may audition at the F.D. Hall Music Center at Jackson State University. The Jackson State campus is at 1400 J.R. Lynch St. For audition times, call 601-983-7380 or e-mail mysoms@yahoo.com.
Gourd Festival in two weekends The second annual Mississippi Gourd Festival is set for Sept. 17 and 18 in Raleigh. The festival will run from 8 a.m. to 5 p.m. Sept. 17 and 10 a.m. to 4 p.m. Sept. 18 and will feature handcrafted items and workshops. Smith County Agricultural Complex is located at 131 Oil Field Road. For more information, call 601-782-9444, visit, or e-mail miketom1950@yahoo. com.
By Terri Cowart Frazier
tfrazier@vicksburgpost.com
The Associated Press
Bill Peterson, curator of Interpretation for the Montana Heritage Commission, stands with the Gypsy.
Collectors champing at bits for rare Gypsy Century-old fortune-teller could be last
By The Associated Press
VIRGINIA CITY, Mont. — The Gypsy sat for decades in a restaurant amid the Old West kitsch that fills this former gold rush town, her unblinking gaze greeting the tourists who shuffled in from the creaking wooden sidewalk outside. Some mistook her for Zoltar, the fortune-telling machine featured in the Tom Hanks movie "Big." Others took one look at those piercing eyes and got the heebie-jeebies so bad they couldn't get away fast enough. But until a few years ago, nobody, not even her owner, knew the nonfunctioning machine gathering dust in Bob's Place was an undiscovered treasure sitting in plain sight in this ghost town-turned-themed tourist attraction. The 100-year-old fortune teller was an extremely rare find. Instead of dispensing a card like Zoltar, the Gypsy would actually speak your fortune from a hidden record player. When you dropped a nickel in the slot, her eyes would flash, her teeth would chatter and her voice would come floating from a tube extending out of the 8-foot-tall box. Word got out when the Montana Heritage Commission began restoring the Gypsy more than five years ago, and collectors realized the machine was one of two or three "verbal" fortune tellers left in the world. One of those collectors, magician David Copperfield, said he thinks she is even rarer than that. "I think it's only one of one," Copperfield said in a recent telephone interview with The Associated Press. Copperfield wanted the Gypsy to be the crown jewel in his collection of turn-of-the-century penny arcade games. It would occupy a place of pride among the magician's mechanized Yacht Race, Temple of Mystery and various machines that tested a person's strength. Copperfield acknowledged approaching the curators about buying the Gypsy a few years ago but declined to say what he offered. Janna Norby, the Montana Heritage Commission curator who received the call from Copperfield's assistant, said it was in the ballpark of $2 million, along with a proposal to replace it with another fortune-telling machine. On top of that, he pledged to promote Virginia City in advertisements. But Heritage commission curators, representing the Gypsy's owner — the state of Montana — rejected the idea, saying cashing in on this piece of history would be akin to selling their souls. "If we start selling our collection for money, what do we have?" said Norby, the commission's former curator of collections.
See Gypsy, Page C6.
A "21st century one-man vaudeville show" is headed to the Southern Cultural Heritage Center, its director says. "This is a show like you have not seen in this town," said Annette Kirklin. Joe M. Turner, a Brandon native, will entertain guests with his sleight-of-hand tricks and psychological illusions. The event, called One Enchanted Evening, will start Thursday at 6:30 p.m. with a social, followed by the show at 7. Turner is a motivational speaker and corporate entertainer in Atlanta. He is a member of the Academy of Magical Arts at The Magic Castle in Hollywood, the Society of American Magicians, the International Brotherhood of Magicians, The Magic Circle in London and the Fellowship of Christian Magicians. He has been featured on ABC's "Good Morning America" and "Nightline" and the Headline News network. Turner attended Mississippi State University, where he studied physics and theater. He is a pianist, vocalist, composer and playwright. He has a business background and was working in the corporate world in Atlanta before his physics, musical theater and business backgrounds merged. "What I do now embodies all the things that I studied," he said. "It's a roller coaster ride — and I'm not getting off." Turner said his performance at the SCHC Thursday will join music and magic. "The music will be a part of the experience of the magic," he said. "There will be elements of fun, comedy, laughter and some jaw-dropping entertainment." Tickets are $25 for members, $30 for nonmembers and $225 for a corporate/private table, and include heavy hors d'oeuvres, punch and a cash bar. They are available at the SCHC and Paper Plus, or may be charged by phone at 601-631-2997 or by visiting oneenchantedevening.eventbrite.com. Turner's website is www.turnermagic.com.

If you go
One Enchanted Evening will begin at 6:30 p.m. Thursday with a social, followed by the show. Tickets are $25 for members, $30 for nonmembers and $225 for a corporate/private table. Visit the SCHC or Paper Plus, call 601-631-2997 or log on to oneenchantedevening.eventbrite.com.
Poetry Out Loud seeking students for contest
The 2011-2012 Poetry Out Loud Program is seeking high school participants. The Mississippi Arts Commission, the National Endowment for the Arts and the Poetry Foundation are sponsoring the poetry memorization and recitation program, which runs through December. Poetry Out Loud is for grades nine through 12 and includes free materials, instructional guides and live professional assistance. Beginning at the classroom level, winners advance to a schoolwide competition, regional and state contests and to the National Finals. To participate, schools must register by Nov. 18. For more information, call 601-327-1294, e-mail poetryoutloud@arts.state.ms.us or visit and click on the POL logo.
Take Note
from staff reports

Monroe museum sets art, photo classes
The Masur Museum in Monroe will offer an adult print-making class and a beginner digital photo class. The print-making class will be from 6 to 8 p.m. Friday and from 9 a.m. to 4 p.m. Saturday. The cost is $130 for members and $170 for nonmembers. The digital photography class will be Tuesdays from 6:15 to 8 p.m. Oct. 11 through Nov. 15. The cost is $120 for members and $160 for nonmembers. To register, call 318-329-2237. The Masur Museum is at 1400 South Grand.
Children's museum sets Saturday events
The Mississippi Children's Museum will offer Saturday art activities this month. Classes will be from 10 a.m. to 2 p.m. The schedule:
• Saturday — Grandparents Day.
• Sept. 17 — Papel Picado Day to celebrate the history and culture of Mexico.
• Sept. 24 — Birthday party for Muppets creator Jim Henson.
Admission to the museum is $8. For more information, call 601-981-5469 or visit www.mississippichildrensmuseum.com. The Mississippi Children's Museum is located at 2145 Highland Drive in Jackson.
Preservation tips on tap in Delta
The Smithsonian's National Museum of African-American History and Culture, the B.B. King Museum and the Delta Interpretive Center will host a weekend program. "Treasures" will feature presentations, hands-on activities and preservation tips from 10 a.m. to 4 p.m. Saturday and from 1 to 4 p.m. Sept. 11 at the B.B. King Museum, 400 Second St. in Indianola. The free event will help Delta-area residents identify and preserve items of historical and cultural significance that may be tucked away in closets, attics and basements of their homes. For more information, call 202-633-5285, 202-633-1000 or visit nmaahc.si.edu.

Call 866-447-3275 or visit. Guests must be at least 21.
Dog show today, Monday in Monroe
The 39th annual Cottonland Cluster dog shows will run through Monday at the Monroe Civic Center Arena. Shows will be from 9 a.m. to 4 p.m. and include all-breed shows and 24 speciality shows. Admission is $5 for adults and $2 for children. Call 318-644-4498 or e-mail bispb@aol.com. The Monroe Civic Center is at 401 Lee Joyner Expressway.

Country duo set for Pearl River show
Montgomery Gentry will perform Oct. 8 at the Pearl River Resort in Choctaw. The show will be at 8 p.m. at the Arena at Golden Moon Casino. Tickets range from $10 to $50 and may be purchased at Ticketmaster.com. Montgomery Gentry's hits include "Gone," "She Didn't Tell Me" and "Break My Heart Again." The resort, operated by the Mississippi Band of Choctaw Indians, is off Mississippi 16 West. It includes two casinos, a golf course and a water park.

Frogs will kick off museum lectures
The endangered Mississippi gopher frog and others will be discussed at the Rotwein Theatre at the Mississippi Museum of Natural Science in Jackson. From noon to 1 p.m. Tuesday, biologist Kathy Shelton will explain her Mississippi Amphibian Monitoring Program and other frog conservation efforts. Visitors also may tour the "Frogs: …
local happenings
In town
Second annual Bricks and Spokes
8-11 a.m. Oct. 1, beginning at China and Washington streets; 10-, 30- and 50-mile bike rides; $30 before Friday, $35 after; 601-634-4527 or mainstreet@vicksburg.com.
17th annual Downtown Fall Festival
10 a.m.-5 p.m. Oct. 1; sidewalk sales, food, live entertainment and children's activities; 601-634-4527 or downtownvicksburg.org.
Fourth annual Classics in the Courtyard Noon-1 p.m. at Southern Cultural Heritage Center; entertainment, free; lunch, $9 with reservations due by 5 p.m. Thursdays; Oct. 14: Celtic folk music by Nick and Julia Blake, lunch by Southern Sisters Cafe; Oct. 21: classic pop and country favorites by Maria Signa and Jim Robinson, lunch by Martin’s at Midtown; Oct. 28: classic pops and originals by Osgood and Blaque, lunch by Goldie’s Express; Nov. 4: classic blues, rock, pop and originals by Patrick Smith, lunch by Palmertree Catering; 601-631-2997 or info@southernculture.org.
Vicksburg Cruisers Car Club Red Carpet Classic Auto and Bike Show Sept. 17 at Blackburn Motor Co. on North Frontage Road; registration, 8-11 a.m.; poker run, 10 a.m.; awards, 3 p.m.; 601-415-0421, 601-831-2597.
Southern Cultural Heritage Center “One Enchanted Evening”: 7 p.m. Thursday; $25 members, $30 nonmembers, $225 corporate tables; cash bar available; tickets at SCHC, Paper Plus, oneenchantedevening.eventbrite.com; Beginner Spanish course: 5:30-7 p.m. Sept. 13, 20, 27 and Oct. 4, 11, 18; Olivia Foshee, VWSD Spanish teacher, instructor; $70 members, $75 nonmembers; Wreath workshop: 5:30-7 p.m. Sept. 20; Beau Lutz of Belvedere & Co., instructor; $55 members, $60 nonmembers; supplies provided; Two-day drawing workshop: 5:30-7 p.m. Sept. 26-27; Mark Bleakley, instructor; $55 members, $60 nonmembers; Contact: 601-631-2997, info@southernculture.org.

Out of Town
Making Strides Against Breast Cancer 5k walk Oct. 8; registration, 7:30 a.m.; opening ceremony, 8:30; walk, 9; south steps of the Capitol on High Street in Jackson; 601-321-5500, makingstridesjackson.org.

National Multiple Sclerosis Society’s Bike MS Oct. 8-9; begins at Baptist Healthplex in Clinton, ends at Battlefield Inn in Vicksburg; 35-mile, 75-mile, 150-mile routes; 601-856-5831.

Constitution Week kickoff 4 p.m. Sept. 17 at Old Court House Museum; bell-ringing ceremony by Ashmead Chapter of the Daughters of the American Revolution; 202-628-1776, 601-629-7655.

Vicksburg National Military Park Fee-free days: Sept. 24 and Nov. 11-13; $8 per vehicle.

23rd annual Over the River Run 8 a.m. Oct. 8; 5-mile run, 5-mile walk, 1-mile fun run; U.S. 80 bridge over the Mississippi River; entry fees: $25 individual, $15 for 10 and younger, $55 for family of five, $75 for corporate or civic teams of three to five members; $5 added after Oct. 1; 601-631-2997.

Haunted Vicksburg ghost tours Fridays-Sundays through October; walking tour, $20 per person; haunted hearse, $25 for group of six; 601-618-6031 or www.hauntedvicksburg.com.

River Region Medical Center Women’s Health Expo 9 a.m.-3 p.m. Oct. 19; Vicksburg Convention Center; $23 for fashion show, lunch; booth fees: $75 for nonprofits, $150 for others; 601-883-6916, 601-883-5217.

Mississippi Fruit and Vegetable Growers Conference and Tradeshow Nov. 14-16 at Vicksburg Convention Center; info@msfruitandveg.com or 601-955-9298.

Vicksburg Theatre Guild Performances: “Breaking Up is Hard to Do,” 7:30 p.m. Friday and Saturday and Sept. 16-17, 2 p.m. Sept. 11 and 18; opening night reception, Friday; Auditions: “It’s A Wonderful Life,” 2-5 p.m. Sept. 17 and 6-8:30 p.m. Sept. 19-20 for Dec. 2-4 and 9-11 shows; “Forever Plaid,” 2-5 p.m. Oct. 1-2 for Jan. 20-22 and 27-29 shows; “The Foreigner,” Feb. 11-12 for May 4-6 and 11-13 shows; Tickets for main-stage plays: $12 for adults, $10 for 55 and older, $7 for students and $5 for younger than 12; tickets for “Gold in the Hills,” other shows vary; Contact: Parkside Playhouse, 101 Iowa Ave.; 601-636-0471.

Free Mississippi Museum of Art admission
For active duty military personnel and their families through Monday; 380 S. Lamar St., Jackson; 601-960-1515 or.
For Foodies Sushi workshop 5:30-7:30 p.m. Friday at Southern Cultural Heritage Center; William Furlong, DiamondJacks food and beverage manager, instructor; $30 members, $35 nonmembers; includes supplies; 601-631-2997 or info@southernculture.org.
Tailgate Cooking Workshop 5:30-7:30 p.m. Sept. 13 at Southern Cultural Heritage Center; $30 for members and $35 for nonmembers; William Furlong, food and beverage manager of DiamondJacks Casino, instructor; 601-631-2997 or info@southernculture.org.
For kids
FitZone Elite Cheer Fall Schedule Runs through Dec. 20; Mondays: 4:15-5:15 p.m. for ages 4-8; 5:15-6:15 for 9 and older; and 6:15-7:15 for advanced students 7 and older; Tuesdays: 4:15-5:15 for 9 and older; 5:15-6:15 for ages 4-8; Thursdays: 5:15-6:15 for 9 and older; Fees: $50 per month, $25 registration fee for new members; Location: next to Tan Tastic in Big Lots shopping area on South Frontage Road; Contact: Liz Curtis, 601-638-3778.
Nightlife Eddie Monsour’s at the Biscuit Company, 1100 Washington St., 601-638-1571 • 8-11 p.m. Tuesdays and Fridays — Karaoke. • 8 p.m. Wednesdays — Biscuit & Jam; open mic. • Thursdays — Ladies night.
Ameristar Casino, 4116 Washington St., 601-638-1000, Free at Bottleneck Blues Bar: • Dr. Zarr’s Funkmonster — Variety/funk; Friday-Saturday.
Beyond Green” exhibit. Lectures will continue each first Tuesday, except December and January. The schedule: • Oct. 4 — “Horseshoe Crabs: Social and Ecological Relevance, Fringe Lifestyles, and the Deepwater Horizon Oil Spill.” • Nov. 1 — “On the Move: Remarkable Migrations of the River Shrimp.” Admission is $6 for adults, $4 for ages 3 to 18, $5 for 60 and older and free for children younger than 3. The museum is at 2148 Riverside Drive. For more information, call 601-354-7303 or visit www.mdwfp.com.

Children’s book writer set for Monroe event
The Ouachita River Art Gallery in Monroe will feature a children’s book writer and illustrator Saturday. From 10 a.m. to 5 p.m., Linda Snider Ward, a member of the Society of Children’s Book Writers and Illustrators and the Canine Art Guild, will discuss her work. Snider’s illustrations are at and. Gallery hours are from 10 a.m. to 5 p.m. Tuesday through Saturday, and admission is free. The gallery is at 308 Trenton St. Call 318-322-2380.
Eddie Montgomery, left, and Troy Gentry of the country duo Montgomery Gentry Submitted to the Vicksburg Post
• Jarekus Singleton — R&B/blues; Sept. 16-17. • The King Beez — R&B/blues; Sept. 23-24. • The Beat Daddy’s — Blues/variety; Sept. 30-Oct. 1. Free at the Cabaret Lounge: • Area Code — Variety; Friday-Saturday. • LaNise Kirk — Variety; Sept. 16-17. • Sinamon Leaf — Variety; Sept. 23-24. • Groove Inc. — Variety; Sept. 30-Oct. 1.
Vicksburg Auditorium, 901 Monroe St., 601-630-2929 • Bryan Adams, An Exclusive Engagement — 8 p.m. Oct. 11 at Vicksburg Auditorium; $37, $52 and $77; ticketmaster.com, Vicksburg Convention Center box office on Mulberry Street or 800-745-3000.
Beechwood Restaurant & Lounge, 4451 Clay St., 601-636-3761 On stage, with a cover charge, at 9:15 p.m.: • Snazz — Friday-Saturday.
Jacques’ Cafe at Battlefield Inn, 4137 N. Frontage Road, 601-661-6264 • 9 p.m. Wednesday-Saturday — Karaoke.
LD’s Kitchen, 1111 Mulberry St., 601-636-9838 • 8:30-8:30 p.m. Wednesdays — Ben Shaw. • 7-10 p.m. Fridays — Dustin.
The Upper End Lounge, 1306 A Washington St., 601-634-8333 With a $3 cover charge: • 7-11 p.m. Tuesdays and Wednesdays — Karaoke. • 7-9 p.m. Thursdays — Ladies night. • 10 p.m. Fridays and Saturdays — D.J.
Sunday, September 4, 2011
The Vicksburg Post
C3
Mitchell to wed Ballinger Nov. 12
Mr. Tilley marries Miss Hall July 9
Melba and Rickey Mitchell of Vicksburg announce the engagement of their daughter, Sarah Michele of Brandon, to J.R. Ballinger, also of Brandon. Mr. Ballinger is the son of Karen and Kent Parnell of Lucedale and the late Tommy Ballinger of Hurley, Miss. Miss Mitchell is the granddaughter of Timmie I. Fedell and the late Michel Fedell and the late Loyce and Joe Mitchell, all of Vicksburg. Mr. Ballinger is the grandson of Belle Ledbetter and the late Ralph Ledbetter and Juanita Ballinger and the late James and Martha Ballinger, all of Hurley. The bride-elect is a 2006 graduate of Warren Central High School, where she was a freshman cheerleader, recipient of the Norseman Award and member of the Lady Vikes basketball team, Valhalla yearbook staff, Key Club, Beta Club, Mu Alpha Theta and National Honor Society. She was a member of Sub Debs and Vicksburg Cotillion Club. She received a Bachelor of Science degree in nursing in 2010 from the University of Southern Mississippi, where she was a member of Phi Mu fraternity, Southern Miss Diamond Darlings, Student Nursing Association and Gamma Beta Phi honor society. Miss Mitchell is a registered nurse with Baptist Health Systems in Jackson.
Sarah Michele Mitchell
Engaged to marry J.R. Ballinger
The prospective groom is a 2006 graduate of East Central High School in Hurley, where he was a three-year starter for the baseball team. He was selected to the all-district and all-state teams and played in the MAC All Star Game. He received the Heart of a Champion Award, Marine Corps Sports’ Player Award and the English Award. He attended the University of Southern Mississippi, playing in the 2009 College World Series, and was selected to the 2007 Conference USA All-Freshman team, the 2009 All-Conference USA Tournament Team and the 2009 Atlanta Regional Tournament Team. Mr. Ballinger was selected in the 11th round of the Major League Baseball draft by the Chicago White Sox. He is currently playing in the Carolina League with the Winston-Salem Dash High A Club. The wedding will be at 5 p.m. Nov. 12, 2011, at St. George Antiochian Orthodox Church. A reception will follow at the B’nai B’rith Literary Club. All relatives and friends are invited to attend.
Jesse Julius Erving Tilley III and Kimberly La-Sheá Hall were married at 3 p.m. July 9, 2011, at Rainbow Arena. The Rev. Michael Dorsey officiated at the ceremony. The bride is the daughter of Mattie Hall of Vicksburg and the late Herman Morris. She is the granddaughter of the late Roosevelt Watkins and the late Annie Bell Braxton of Vicksburg. The groom is the son of Mr. and Mrs. Jesse Julius Erving Tilley II of Harlem. He is the grandson of Jesse Julius Erving Tilley and Jane Tilley of Manhattan, N.Y. The bride was escorted by her brother, Eric Hall. Her chosen colors were black, red and white. Music for the ceremony was presented by Deborah Jackson. Maids of honor were Daphne Shepard Bridge of Meridian and Foluke Houston of Vicksburg. Matron of honor was LaSandra Davis Hudson of Vicksburg. Bridesmaids were Tia Doss and Yvette Brown, both of Vicksburg, and LaQuanda Williams of Jackson. Junior bridesmaid was Mercedes Hall of Vicksburg. Marcus Lovette and Richard Bradford, both of Vicksburg, served as best men. Groomsmen were Christopher Melton, Cordell Watkins, Marquees Lovette and Vincent Woods,
Mr. and Mrs. Jesse Julius Erving Tilley III The bride is the former Kimberly La-Sheá Hall all of Vicksburg. Flower girl was Taniya Adams of Jackson. Ring bearer was Jesse Julius Erving Tilley IV of Vicksburg. Donald Brown escorted the bride’s mother. Poem reader was Darnisha James of Vicksburg. Felicia Wilson of Vicksburg served as the bride’s special assistant. A reception followed the ceremony. Hostesses were Trina Felton, Gwendolyn Appleby and Indiana Brown, all of Vicksburg.
For a wedding trip, the couple traveled to New Orleans. They will make their home in Vicksburg, where the bride is employed at Warren-Yazoo Mental Health Services and the groom is employed with the Warren County Sheriff’s Department. Rehearsal dinner A rehearsal dinner was held at Rainbow Casino on the eve of the wedding. Shower The bride’s family honored her with a shower.
Allen, Barnard to marry in Natchez
Page to marry Davies at Immanuel
The engagement of Bridgette Elizabeth Allen to Joseph “Clayton” Barnard, both of Vicksburg, is announced today. Vows will be exchanged in a private ceremony at 6 p.m. Oct. 8, 2011, at Monmouth Plantation in Natchez. A reception will follow. Miss Allen is the daughter of Kathy Allen and Mark Hilderbrand of Vicksburg and Graham Allen of Savannah, Ga. She is the granddaughter of the late Paul W. and Bonnie J. Kerr of Valley Park and the late Earl A. and Elizabeth “Anne” Allen and Vivian Hilderbrand and the late Earl Hilderbrand, all of Vicksburg. Mr. Barnard is the son of Larry and Donna Barnard of Vicksburg. He is the grandson of Carolyn Laster and the late Don Laster of Dermott, Ark., and the late Manuel and Evelyn Barnard of Collins, Ark. The bride-elect is a 2003 graduate of Warren Central High School. She attended Hinds Community College. Miss Allen is employed at the Animal Medical Clinic. The prospective groom is a 2001 graduate of Porters Chapel Academy. He is attending Hinds Community College. Mr. Barnard is employed at Armstrong World Industries.
Bridgette Elizabeth Allen Engaged to marry Joseph Clayton Barnard
The engagement of Tiffany Michelle Page to Nicholas Mark Davies, both of Vicksburg, is announced today. The wedding will be at 2 p.m. Oct. 8, 2011, at Immanuel Baptist Church. A reception will follow. All relatives and friends are invited to attend. Miss Page is the daughter of Wendell and Michelle Jarvis of Vicksburg and Glen and Christi Page of Bethel, Ohio. She is the granddaughter of Letha Bailey and James and Debra Hartley, all of Vicksburg; Frankie and Lisa Page of Yazoo City; and Dwight and Dorothy Talley of Jackson, La. Mr. Davies is the son of Bobby and Rae Rufus of Vicksburg and Simon and Jenny Davies of DeLand, Fla. He is the grandson of Norma Chappell of Vicksburg and Bob Rufus of Bud, W.Va. The bride-elect is a 2009 graduate of Vicksburg High School. She attended Hinds Community College. Miss Page is employed at Beechwood Elementary. The prospective groom is a 2009 graduate of Vicksburg High School. Mr. Davies is employed with Smith Lawn Care Inc.
Whitehead, Butler to recite vows
Mr. and Mrs. Neal E. Whitehead of Vicksburg announce the engagement of their daughter, Mary Paige, to Brandon Jacob Butler. Mr. Butler is the son of Mr. and Mrs. Delton L. Butler of Meadville. Miss Whitehead is the granddaughter of Mr. and Mrs. Michael Tanner of Delta, La.; the late Gary M. Jordan Sr. and Mr. and Mrs. Gardner Carpenter, all of Vicksburg; and Mr. and Mrs. Merrill Whitehead of Edwards. Mr. Butler is the grandson of Myrtle Butler and the late Jacob Butler of Meadville and Mr. and Mrs. Abraham Ton and the late Joyce Ton of Magnolia. The bride-elect is a 2009 graduate of Warren Central High School. She is a President’s Scholar at Hinds Community College, where she will graduate this month as a barber and stylist. The prospective groom is a 2005 honor graduate of Franklin High School. He graduated from Hinds Community College in 2007 as a physical therapy assistant. He is self-employed with Quality Rehab Services LLC, working as a physical therapy assistant in Texas. The couple will exchange vows at 1 p.m. Oct. 16, 2011, at Castle Hill Pavilion in Florence. Pastor Garland Boyd will officiate. Following a honeymoon in the Western Caribbean, the couple will make their home in Carrollton, Texas.
Mary Paige Whitehead
Engaged to marry Brandon Jacob Butler

Delancey, Thornton to recite vows
Mr. and Mrs. James W. Delancey of Hattiesburg announce the engagement of their daughter, Amanda Brooke, to Matthew Westley Thornton. Mr. Thornton is the son of Karen Thornton of Clinton and Philip Thornton of Vicksburg. Miss Delancey is the granddaughter of the late Mr. and Mrs. Edward E. “Gene” Knight of Laurel and the late Mr. and Mrs. J.C. Delancey of Hattiesburg. Mr. Thornton is the grandson of the late Mr. and Mrs. Lawrence T. West of Clinton and the late Mr. and Mrs. John H. Thornton of Florence. The bride-elect is a 2006 graduate of Oak Grove High School and a 2010 graduate of the University of Mississippi, where she received a Bachelor of Science degree in business. Miss Delancey is a sales representative with Clinique at Belk in Hattiesburg. The prospective groom is a 2004 graduate of Clinton High School and a 2010 graduate of the University of Mississippi, where he received a Bachelor of Science degree in biology. Mr. Thornton is a neurophysiology technologist with Hattiesburg Clinic. Vows will be exchanged Oct. 1, 2011, at Hardy Street Baptist Church in Hattiesburg.
Amanda Brooke Delancey
Engaged to marry Matthew Westley Thornton

Tiffany Michelle Page
Engaged to marry Nicholas Mark Davies
Thomas, Ruff marry at First Presbyterian
Mr. and Mrs. Mark Benjamin Thomas The bride is the former Laura Carolyn Ruff
Mark Benjamin Thomas and Laura Carolyn Ruff were married at 4 p.m. June 25, 2011, at First Presbyterian Church, Vicksburg. The Rev. Timothy B. Brown, pastor, officiated at the ceremony. The bride is the daughter of Greg and Jessica Ruff of Vicksburg. She is the granddaughter of the late Ben and Bernice Ruff of Plantersville and the late Jay and Carolyn Schilling of Vicksburg. The groom is the son of John C. and JoAnn Thomas of Starkville. He is the grandson of Earl and Dorothy Thomas of Starkville and the late Norvel and Lucy Burkett of Columbia. The bride was given in marriage by her father. A program of nuptial music was presented by Barbara Tracy, organist; David Demirbilek, violinist; and Sharon Penley, vocalist. Matron of honor was Blair Trusty McBride of Vicksburg. Bridesmaids were Laura Katherine Johnston, Whitney Saxon Clardy and Catherine Thomas, all of Starkville;
Michelle Mayer of Meridian; and Michelle Lee Ruff of Baton Rouge. The groom’s father served as best man. Groomsmen were Jake White of Madison, Tyler Hardy of Brandon, Heath Serio of Leland, T.J. Reece of Starkville and Tim Ruff of Baton Rouge. Ushers were Davis Hunt of Starkville, Scott Mckinnie of Brandon, Cody Arnold of Pearl and Landon McCaskill of Birmingham, Ala. Special wedding assistant was Sara Leach of Vicksburg. A reception followed at The Duff Green Mansion. For a wedding trip, the couple traveled to Cabo San Lucas, Mexico. They will make their home in Alexandria, La. The bride is a process improvement engineer at Rapides Regional Medical Center, and the groom is project engineer with Pan American Engineers. Showers Prior to the wedding, friends of the couple hosted several events in Vicksburg and Starkville in their honor.
Mary Margaret Reeves Engaged to marry Dewey Key Arthur
Ashley Suzanne Bruce Engaged to marry Joseph Brett Hossley.
Mary Katherine Johnson Engaged to marry Dane Michael Dixon
Miss Reeves to wed Mr. Arthur on Nov. 5
Miss Bruce to marry Mr. Hossley Sept. 10
Johnson and Dixon to exchange vows Oct. 8
Mr. and Mrs. Jimmy M. Bruce of West, Miss., announce the engagement of their daughter, Ashley Suzanne of Texas, to Joseph Brett Hossley, also of Texas. Mr. Hossley is the son of Kim Nosser of Vicksburg and Mr. and Mrs. Mike Hossley of Madison. Miss Bruce is the granddaughter of the late Mr. and Mrs. Brown Lee Bruce of Blackhawk, Miss., and the late Mr. and Mrs. Singleton Goss of West. She is a therapist with U.S. Physical Therapy Inc. in Texas.
Mr. Hossley is the grandson of Ruth Nosser and the late Larry Nosser and Elsie Hossley and the late Earl Hossley, all of Vicksburg. He is an aluminum manager at Trulite Glass and Aluminum Company in Texas. The wedding will be at 5 p.m. Sept. 10, 2011, at Unity Baptist Church in West. A reception will follow at the Redbud Springs Golf Course in Kosciusko. All relatives and friends are invited to attend. No local invitations are being sent.
Mr. and Mrs. Charles E. Reeves of Lafayette, La., announce the engagement of their daughter, Mary Margaret of Jackson, to Dewey Key Arthur of Clinton. Mr. Arthur is the son of Mr. and Mrs. Basil K. Arthur of Vicksburg. Miss Reeves is the granddaughter of the late Mr. and Mrs. Frank A. Reeves and the late Mr. and Mrs. Ralph L. White. Mr. Arthur is the grandson of Mr. and Mrs. Denver L. Rigdon of Union and the late Mr. and Mrs. James B. Arthur. The bride-elect is a 2000 graduate of Lafayette High School and a 2004 graduate of Austin College in Sherman, Texas.
Miss Sanders to marry Mr. LaGrone The engagement of Emily Renae Sanders to Thomas Anthony LaGrone, both of Vicksburg, is announced today. Vows will be exchanged at 4 p.m. Oct. 8, 2011, at First Baptist Church. A reception will follow at Unique Banquet Hall. All relatives and friends are invited to attend. Miss Sanders is the daughter of Karen C. Sanders of Vicksburg and Mackie Pearson of Columbia, Miss. She is the granddaughter of Don and Grace Ford and the late Willard Sanders, all of Vicksburg; the late Levonia Vaughn of Carriere; and Malcolm and Lou Pearson of Purvis. Mr. LaGrone is the son of Jerry R. and Toni Dickerson of Vicksburg and Bruce LaGrone of Tuscaloosa, Ala. He is the grandson of the late Margerie Montgomery of Vicksburg and Tom O. and Joyce Logue of Ridgeland. The bride-elect is a 2005 graduate of Vicksburg High School, where she was a member of the National Honor Society
Emily Renae Sanders Engaged to marry Thomas Anthony LaGrone and Vicksburg “Pride” Band. She attended Hinds Community College. Miss Sanders is assistant store manager of Reebok Outlet. The prospective groom
attended Vicksburg High School and Hinds Community College, where he studied electrical technology. Mr. LaGrone is an electrician for Wesley B. Jones Electrical Inc.
She graduated in 2009 from Mississippi College School of Law. Miss Reeves is a law clerk to Hon. Jess Dickinson of the Supreme Court of Mississippi. The prospective groom is a 1994 graduate of Central Hinds Academy and a 1999 graduate of Mississippi College. He graduated in 2002 from Mississippi College School of Law. Mr. Arthur is Assistant District Attorney in Rankin County. The wedding will be at 6:30 p.m. Nov. 5, 2011, at Grace Presbyterian Church in Lafayette. A reception will follow at the City Club at River Ranch.
Mary Katherine Johnson and Dane Michael Dixon, both of Vicksburg, will be married at 2 p.m. Oct. 8, 2011, at Grand Gulf Military Park in Port Gibson. A reception will follow. All relatives and friends are invited to attend.
Miss Johnson is the daughter of Curt and Tina Johnson of Monticello, Ark. She is the granddaughter of Kenneth and Betty Foreman of Monticello. Mr. Dixon is the son of Darlene Hoben of Vicksburg.
Israel lures Hollywood to film in the Holy Land
By Daniel Estrin
The Associated Press

The Associated Press
French actress Juliette Binoche on the set of the film “Disengagement” by Israeli director Amos Gitai, in Nitzan, Israel.

JERUSALEM — Of more than 600 Israeli movies filmed since the country’s founding, only about 30 have been filmed in Jerusalem, said Yoram Honig, an Israeli film director and 10th-generation Jerusalemite. Even Israeli producers have shied away from the city.
‘The Debt’ a classy, well-made thriller
By Christy Lemire
AP movie critic
Announce the Happy News with Fashionable Wedding Invitations from Speediprint. The associated press
Jessica Chastain and Sam Worthington in “The Debt”
On screen “The Debt,” a Focus Features and Miramax Films release, is rated R for some violence and language. Running time: 113 minutes. Three stars out of four.
film review
especially Hinds, is the most baffling of all. Unless maybe we’re supposed to believe that all those years of secrets and lies have really taken a toll.
TRAVEL
Gulf Coast beaches enjoying rebound 1 year after spill
By Melissa Nelson
The Associated Press
PENSACOLA, Fla. — “Tourists don’t even mention the spill now. They haven’t mentioned it really at all in the last six months,” said Ehrenreich. Tourism leaders say the post-spill economic bounce is fueled in part by an influx of BP money that has gone to promote Gulf Coast beaches. Another positive for the string of white sand beaches from Alabama to Florida’s Big Bend has been making it through the end of August without any disruptions from tropical storms or hurricanes. While hurricane season isn’t. “This is the type of summer we had hoped to have last year,” Dan Rowe, president of the Panama City Convention and Visitors Bureau, said recently. Rowe credited the strong 2011 rebound to numerous things, including the new airport, an infusion of advertising cash from BP and worldwide
their mine shaft,” Holstein said. “It’s good that it’s there and it survived, but now it really needs to be part of the world.” Holstein said he wouldn’t be surprised if the machine ultimately sold for $10 million or more. Copperfield also said he is still interested in purchasing it. That could put pressure on the state, which, like the rest of the nation, is facing hard fiscal times. Montana’s budget is in the black, but keeping the effects of the recession at arm’s length has meant deep budget cuts. Those cuts have hit the Montana Heritage Commission particularly hard. Just weeks
after Norby spoke to the AP, her position and three others were eliminated as part of a larger reorganization to cut $400,000 from the commission’s budget. When they couldn’t find replicas or period materials, they didn’t replace the parts. “We don’t want to make her anything that she wasn’t,” Norby said. In 2008, they installed the Gypsy as the centerpiece of the Gypsy Arcade amid the ancient wooden buildings of Virginia City.
The Associated Press
Tourists on the beach at Pensacola Beach, Fla.
publicity from an August 2010 visit by the Obamas to Panama City Beach that included photographs of the president and daughter Sasha swimming in the oil-free Gulf. “A lot of people heard about us as we were telling our story and responding to the spill. They saw our emerald-green waters and sugar-white beaches. More than 8.5 billion people saw the first family coming to visit,” Rowe said. Unlike Florida vacation spots farther south, Panhandle beaches are largely summer destinations. Rowe said more than 50 percent of his city’s tourism revenue is generated between Memorial Day and Labor Day. Gulf Coast beaches hope to continue the strong summer after Labor Day with a string of targeted discounts, promotional events and fall concerts. Beach towns also are planning Oktoberfests this fall, weekend concert series and art festivals.
Gypsy
Continued from Page C1.
The commission’s acting director, Marilyn Ross, echoed Norby’s sentiments: “That is not something we would ever consider, selling off these antiques.” “They don’t have any idea what they have. It’s like they have the world’s best diamond and they just pulled it out of
the rear, ropes keeping visitors at a distance. All of that care in restoring, preserving and displaying the Gypsy causes state curators to reject Holstein’s argument that the machine should be removed from Virginia City and placed in a private collection. “A lot of these collectors, they come and say the same thing: ‘Why is this out in the public? Why don’t you just take the money and have a collector restore it the way it should be restored and have it in his private collection?’ Well, nobody would ever see it,” said Peterson, whose position also was eliminated in the cutbacks.
What Do You Know About a Living Trust?

Do you know the four critical documents for a complete estate plan?
• Why a Living Trust? Why not just a will? It really depends on what you want for yourself and for your family. How much protection do you want or need?
• What about probate? A Living Trust is a great way to avoid probate, but is probate something you really want to avoid? Do you know what probate actually is and what it does? If not, you need to know so you can make up your own mind.
• Are death taxes avoidable? For the vast majority, the answer is “absolutely.”
• What about protecting your assets from the nursing home? Medicare pays only a tiny part, if any. And you pay the rest. Or not. Mr. Howell co-authored the Mississippi edition of How to Protect Your Family’s Assets from Devastating Nursing Home Costs. These issues will be discussed in depth at the Seminar.

WILLIAM B. HOWELL
Member of the National Academy of Elder Law Attorneys

• Free Living Trust and Asset Protection Seminar: Gain true insight into planning that actually works. Join us at one of these free and very effective seminars; it’s yours absolutely FREE and with no obligation on your part whatsoever.
• Free Books and More: You will also receive a free copy of the book The Living Trust and Estate Planning, written by Mr. Howell, who has more than thirty-seven years of legal experience. You may also receive a free private consultation. Reservations are required.
• How Do You Get a Reservation? That’s also free. Just call, hours a day.
CLASSIFIEDS
THE VICKSBURG POST • SUNDAY, SEPTEMBER 4, 2011
SECTION D
PHOTOS BY OUR READERS Jimmy Mullen
Sam Andrews
Jimmy Mullen of Vicksburg was driving in the Vicksburg National Military Park when he spotted this hearse, or what is believed to have formerly been a hearse, carrying a scooter on the tour road.
Sam Andrews of Vicksburg focused on a butterfly as it fed on the remains of a summer flower.
Joseph Jackson
The Vicksburg Post, News photos, P.O. Box 821668, Vicksburg, MS 39182.
Ronnie Williams
Joseph Jackson of Vicksburg spotted this gypsy moth resting on a window at his home.
Joe Simpson
Joe Simpson was on a visit to Fort Morgan, Ala., when this summer squall rolled ashore.
Ronnie Williams of Vicksburg spotted these two does in his yard on Campbell Swamp Road.
02. Public Service
Find a Honey of a Deal in the Classifieds...Zero in on that most wanted or hard to find item.
07. Help Wanted
of Vicksburg is hiring:
★Outside Sales Representative ★Receptionist Apply online at: netlinkwebservices.com/careers
05. Notices
Effective March 25, 2011, the Horizon chips were discontinued. You may redeem Horizon Casino chips during normal business hours at the Grand Station Casino cage through July 25, 2011.
07. Help Wanted
05. Notices
ENDING HOMELESSNESS. WOMEN with children or without, are you in need of shelter? Mountain of Faith Ministries/ Women's Restoration Shelter. Certain restrictions apply, 601-661-8990. Life coaching available by appointment.
05. Notices KEEP UP WITH all the local news and sales...subscribe to The Vicksburg Post Today! Call 601-636-4545, ask for Circulation.
Discover a new world of opportunity with The Vicksburg Post Classifieds.
Don’t miss a day of The Vicksburg Post! Our ePost now available! Call 601-636-4545 Circulation, for details!
07. Help Wanted
07. Help Wanted
05. Notices
06. Lost & Found
Runaway? Are you 12 to 17? Alone? Scared? Call 601-634-0640 anytime or 1-800-793-8266. We can help! One child, one day at a time.
07. Help Wanted
06. Lost & Found
LOST A DOG? Found a cat? Let The Vicksburg Post help! Run a FREE 3 day ad! 601-636-SELL or e-mail classifieds@vicksburgpost.com
MALE POMERANIAN MISSING from Greenbriar/Halls Ferry Road vicinity, needs medication. 601-415-1312
Classifieds Really Work!
CALL 601-636-SELL AND PLACE YOUR CLASSIFIED AD TODAY.
07. Help Wanted
LOST!
07. Help Wanted
Apply Today For Dealer School!
EXECUTIVE DIRECTOR The Housing Authority of the City of Vicksburg, MS (Vicksburg Housing Authority) is seeking an Executive Director to manage a 430-unit public housing program. Candidates must be able to exercise independent judgment within the framework of established policy and existing laws governing housing authorities; possess excellent verbal and written communication skills; be knowledgeable of HUD rules & regulations; and have experience in public housing & affordable housing programs. Experience in the creation of affordable housing is a plus. Minimum Requirements: Computer skills, fiscal planning, administrative and management skills, a bachelor’s degree in public administration or related field and five (5) years progressive experience in public housing programs. To Apply, submit your resume to: Christopher M. Barnett, Sr., Chairman, Board of Commissioners, Vicksburg Housing Authority, P.O. Box 865, Vicksburg, MS 39181-0865. Open Until Filled.
Help create an exciting gaming and entertainment experience at Ameristar. AMERISTAR OFFERS: •Rewarding jobs •Fun, friendly work environment •Training and education assistance •Opportunity for advancement. Apply online at ameristar.com/careers by Monday, September 12, then stop by the Administration Building to speak to a HR Recruiter.
4116 Washington Street, Vicksburg, Mississippi • 601.638.1000 • 866.MORE FUN (667.3386) • Ameristar.com
The Classified Marketplace...
Where buyers and sellers meet.
Please see Human Resources for complete details. Equal opportunity employer – M/F/D/V. Gambling Problem? Call 1-888-777-9696. © 2011 Ameristar Casino Vicksburg
D2
Sunday, September 4, 2011
418 Melrose Avenue Immaculate home decorated to perfection with 3 BR, 2 BA, living/ dining room, and den. All updated. Fenced back yard with lots of charm. A MUST SEE HOUSE!
515 Kavanaugh Location near Oak Park (in county) with 3BR/2B brick home. Open living/kitchen floor plan. Move in ready! $129,000.
JONES & UPCHURCH, INC. Call Andrea at
601-831-6490 Over 33 years of experience put to work for you! EMAIL: ANDREA@JONESANDUPCHURCH.COM Andrea Upchurch
115 Maison Rue Custom built home in Acadia Hills on #1 hole at Vicksburg Country Club. Spacious rooms, high ceilings, 4 bedrooms, 2.5 baths, family room, dining room, with all the amenities you deserve. Screened porch on back overlooks golf course. Privacy & in-town convenience. 2681 square feet downstairs. Additional 1600+ square feet upstairs can be finished. $329,600.
601-415-6868
111 HILDA DRIVE
3350 Eagle Lake Shore
3 bedrooms, 2 baths, lakefront. Home has all cypress interior with a metal roof, deck, pier, boat house and screened porch. $155,000.
Bette Paul Warner 601-218-1800
Marianne May Jones REALTOR ASSOCIATE®
The Vicksburg Post
This lovely 3 bedrooms, 2 full bath cozy brick home is waiting on you. Nice large back yard that leads out to the lake where you can fish in your own your backyard. This is the perfect home for that first time buyer. Call Valorie for more details at 601-618-6688. REDUCED TO $105,000. Presented By
Valorie Spiller
McMillin Real Estate Bette@Vicksburgrealestate.com
REALTOR ASSOCIATE®
601-634-8928 601-618-6688
marianne.jones@coldwellbanker.com
Classified...Where Buyers And Sellers Meet. 1405 SWEETGUM LANE 10 secluded acres with
2 Mill Wood Circle Convenient County location. 3 bedrooms, 2.5 baths, living room, dining room, large kitchen with granite counter tops. Wood floors in entry, living room, dining and main bedroom. Rear fenced yard.
$235,000
147+/- acres with rock bluffs over Bayou Pierre adjacent to Point Lookout Historic Site. Scenic property with hardwood, awesome waterfall and one of a kind view overlooking Bayou Pierre. Prime hunting and development location with trophy deer, duck, turkey, wild boar, squirrel, boating, fishing, gator with numerous home and camp sites w/access to paved road.
Real Estate McMillin And
available house site, this immaculate 2,537 sqft home features 4BRs, 3BAs, sun rm, vaulted ceiling family rm/fireplace, dining area/ wood stove, & large wired workshop over looking large fishing pond.
93 PLANTATION DRIVE Best buy in Openwood Plantation: Located on large lot close to River Region Hospital. This immaculate 2,131 sq.ft. home features 4BRs, 2BAs, family rm/ fireplace, living rm, formal dining area, eat-in kitchen & sunroom. MUST SEE TO APPRECIATE!
Jimmy Ball
Beverly McMillin 601-415-9179 2735 Washington Street, Vicksburg, MS 39180 • 601-638-6243
07. Help Wanted
07. Help Wanted
601-218-3541
Home for Sale? Show it to the world at
07. Help Wanted
07. Help Wanted
Director of Nursing position available Registered Nurse with supervisory experience sought for full-time Director of Nursing position ✰ Insurance provided ✰ Bonus Program Contact Eva Pickle at Heritage Manor of Rolling Fork 431 W. Race St. Rolling Fork, MS 662.873.6218
Fall Home Improvement Ads
A Reputable Real Estate Company with Proven Results 601-636-5947 Vanessa Leech, Broker Andrea Lewis Nina Rocconi Mindy Hall Tommy Shelton
07. Help Wanted Truck Driver Training With a Difference Job Placement Asst. Day, Night & Refresher Classes Get on the Road NOW! Call 1-888-430-4223 MS Prop. Lic. 77#C124
Classified Advertising really brings big results!
07. Help Wanted Attention Students! Back to School Work $15 Base-Appt Flex hrs around classes Cust. Sales/Srvc Interview in Clinton Work in your area All ages 17+ Call NOW (601) 910-6111
07. Help Wanted
07. Help Wanted
AVON. EARN MONEY now! Representatives needed in your area. Will train. Call 601-259-2157.
AVON. NEED EXTRA CASH? Become an Avon Representative today. Call 601-454-8038.
CHEER & TUMBLE COACHES Part-time. Previous coaching experience required. Applications available Monday- Thursday, 2pm -6:30pm, Friday 12 noon- 2pm, Saturday 10am-12 noon. FitZone - Big Lots area.
VISITOR INFORMATION CENTER MANAGER The Vicksburg Convention & Visitors Bureau is seeking applicants for the position of Visitor Information Center Manager. The work involves responsibility for training and supervising a staff of Travel Counselors involved in providing informational services to visitors. Expanded knowledge of Vicksburg and area history and attractions a must. Ability to communicate clearly and effectively, both verbally and in writing. Associate or Bachelor degree with minimum of 3 years customer service experience including 2 years of management required. Vacation and benefits. Salary commensurate with experience. Send resumes to VCVB, P.O. Box 110, Vicksburg, MS 39181 by Sept. 16, 2011. EEOC
AMIkids NELA is seeking a Master Level Counselor. Master’s Degree in Social Work or Counseling required. Apply online at or contact KarVan Powell (318) 574-9475.
TRUCK DRIVER needed for delivery of storage containers. Must have minimum Class A License. Apply in person @ Sheffield Rentals 1255 Hwy. 61 S. Vicksburg, MS
GARDENER NEEDED. EXPERIENCE preferred. Weeding, trim hedges, etcetera. 601-638-0528.
HEY! NEED CASH NOW? We buy JUNK CARS, VANS, SUV’S,.
Classifieds Section
Don’t miss out on this seasonal opportunity to let your customers know what you offer in terms of fixing up their home for the change of seasons. Other businesses whose ads have appeared here are sure to tell you that this is a wonderful one-stop information source for people to have when it’s time for home repair and/or preventative maintenance. Call us at 601-636-7355 (SELL) for more information.
601-415-4114 601-218-0644 601-415-4503 601-631-4144 601-415-2507
fewball@cablelynx.com
“ACE”
GIS Technician Position Available Southwest Mississippi Electric Power Association To submit resume and view job requirements go to at About Us.
Leech Real Estate of Vicksburg, Inc.
HIGH TRAFFIC SPA seeking massage therapist and hair stylist. Send response to P.O. Box 820081, Vicksburg, MS 39182, 601-630-7170.
LEGAL SECRETARY- PART time, potential full time. Learn the basics of a legal office, with the opportunity to get more involved in typing of documents, file maintenance and administrative tasks. Ideally you will have one year's office experience; have a typing speed of 40-50 wpm and good knowledge of Microsoft Word. You should have excellent communication skills, a comfortable and professional phone manner, a positive attitude and a keenness to learn. Send Resume to: legal.secretary@bellsouth.net
Every day is bright and sunny with a classified ad to make you
MONEY! Call Allaina, Michele or Vickie and place your ad today.
601-636-SELL
NOW TAKING APPLICATIONS for all positions. Apply in person at Saxton's Tire Barn Automotive Repair. 1401B South Frontage Road. Monday- Friday 7:30am- 5pm.
10. Loans And Investments.
Classified
• Something New Everyday •
13. Situations Wanted
17. Wanted To Buy
14. Pets & Livestock
VICKSBURG WARREN HUMANE SOCIETY Hwy 61 S. • 601-636-6631
DON’T SHOP...
Adopt Today!
Call the Shelter for more information.
HAVE A HEART, SPAY OR NEUTER YOUR PETS! Look for us on
If you are feeding a stray or feral cat and need help with spaying or neutering, please call 601-529-1535.
$ I BUY JUNK CARS $ Highest price paid, GUARANTEED! Cash in your hand today! Call 601-618-6441.
WANTED: ANYTHING OLD- Money, coins, war relics, books, photos, documents, etcetera. 601-618-2727.
WE BUY ESTATES. Households and quality goods. Best prices. You call, we haul! 601-415-3121, 601-661-6074.
WE HAUL OFF old appliances, old batteries, lawn mowers, hot water heaters, junk and abandoned cars, trucks, vans, etcetera. 601-940-5075, if no answer, please leave message.
WE PAY CASH for junk. Cars, trucks, vans, SUVs, and old dump trucks. 601-638-5946 or 601-529-8249.
18. Miscellaneous For Sale
15. Auction
LOOKING FOR A great value? Subscribe to The Vicksburg Post, 601-636-4545, ask for Circulation.
CEMETERY PLOTS 5 Perpetual Care plots. City Cemetery #78, Square 2, Div M. $240 per plot or $1000 for all. 334-741-6912.
ESTATE AUCTION, KAREN Anderson-Smith, details
ELECTRIC HOSPITAL BED $200, shower chair $15, walker $50. 601-636-5089.
Don’t send that lamp to the curb! Find a new home for it through the Classifieds. Area buyers & sellers use the Classifieds every day. Besides, someone out there needs to see the light. 601-636-SELL.
Hours: 8 a.m.- 5 p.m., Mon.- Fri., Closed Saturday & Sunday
Call Direct: (601) 636-SELL
Post Plaza, 1601F North Frontage Rd., Vicksburg, MS 39180
Online Ad Placement: 601-636-4545
18. Miscellaneous For Sale
1949 PARTIALLY RESTORED Ford tractor. 601-638-5397.
NEED AN OVERNIGHT SITTER? Call 601-497-5144.
CLOSET PHOBIA? Clear out the skeletons in yours with an ad in the classifieds.
601-636-SELL
FOR LESS THAN 45 cents per day, have The Vicksburg Post delivered to your home. Only $14 per month, 7 day delivery. Call 601-636-4545, Circulation Department.
HEAVY DUTY HOSPITAL bed. Great condition. $1200. Call 601-636-0441 or 601-636-0832.
MOVING BOXES. 8 wardrobes, 30 to 40 smaller boxes and packing paper. Best offer! 601-636-8979, 815-252-6218.
NEW CROP OF Cacti and Succulents $1.95 and up, Beautiful Bromeliads $2.95 and up, Tropicals $6.95. Cactus Plantation. Saturday 9am- 5pm. Sunday 1pm- 5pm. 601-209-9153.
THE PET SHOP “Vicksburg’s Pet Boutique” 3508 South Washington Street Pond fish, Gold fish, Koi, fish food aquarium needs, bird food, designer collars, harnesses & leads, loads of pet supplies! Bring your Baby in for a fitting today!
SNAPPER 9 HORSE power mower. 28 inch cut, pull start, good condition. $450. 601-638-3906.
Twin mattress sets, $189. Full mattress sets, $209. Queen mattress sets, $280. Discount Furniture Barn 601-638-7191.
USED TIRES! LIGHT trucks and SUV's, 16's, 17's, 18's, 19's, 20's. A few matching sets! Call TD's, 601-638-3252.
19. Garage & Yard Sales
116 BROOKWOOD DRIVE off Culkin Road. Saturday, Sunday and Monday 7am-12 Noon. Furniture, clothing, household items, other treasures. Harley outer fairing cover, $100.
19. Garage & Yard Sales
24. Business Services
BIG SALE! PETERSON'S Art & Antiques, 1400 Washington Street, Labor Day, Monday, September 5th, 9:30am-2:30pm.
I-PHONE REPAIR. Buy, sell and repair. Arcue Sanchez- 601-618-9916.
20. Hunting
2005 KING QUAD 700. 86 hours, 441 miles. $4500 firm. 601-415-7434.
QUALITY PAINTING and Pressure Washing for the lowest price. Call Willie Walker at 601-638-2107. River City Lawn Care You grow it - we mow it! Affordable and professional. Lawn and landscape maintenance. Cut, bag, trim, edge. 601-529-6168.
Roofing • Carpentry •Brick masonry •Demolition
SMITH & WESSON 44 MAGNUM. Model 29, 8.38 inch. Dirty Harry Special. In Vicksburg. Call 256-5276636.
21. Boats, Fishing Supplies
14 FOOT ALUMINUM boat. 20 horse power motor, seats, trolling motor, anchors, etcetera. Call 601-218-9654 days, 601-636-0658 nights. Dealer.
What's going on in Vicksburg this weekend? Read The Vicksburg Post! For convenient home delivery, call 601-636-4545, ask for circulation.
24. Business Services
D & D TREE CUTTING. •Trimming •Lawn Care •Dirt Hauled •Insured. For FREE Estimates Call “Big James” 601-218-7782.
D.R. PAINTING AND CONSTRUCTION. Painting, roofing, carpentry service. Licensed, bonded. Free estimates! Call 601-638-5082.
Ask us how to “Post Size” your ad with some great clip art! Call the Classified Ladies at 601-636-Sell (7355).
PLUMBING SERVICES- 24 hour emergency- broken water lines- hot water heaters- toilets- faucets- sinks. Pressure Washing- sidewalk- house- mobile homes- vinyl siding- brick homes. 601-618-8466.
DIRT AND GRAVEL hauled. 8 yard truck. 601638-6740.
Classifieds Really Work!
LOOKING TO MOVE UP IN THE JOB MARKET?
Electrical
•Plumbing •
26. For Rent Or Lease
RICHARD M. CALDWELL, BROKER. SPECIALIZING IN RENTALS (INCLUDING CORPORATE APARTMENTS). CALL 601-618-5180, caldwell@vicksburg.com.
LARGE OFFICE SPACE. Ideal for Daycare center. Includes furniture, equipment. Evenings only 601-218-4543.
WAREHOUSE WITH OFFICE. 4000 square feet. 5537 Fisher Ferry Road. $850 monthly. 601-638-3211 or 601-831-1921.
27. Rooms For Rent BOARDING HOUSE. $100 weekly, includes cable and utilities. $220 Deposit. References required. 601-218-4543.
section of The Vicksburg Post Classifieds.
2 BEDROOMS. CENTRAL air/ heat, Speed Street, appliances. $350. 601-415-8197.
2006 CHERRY STREET. 1 bedroom, 1 bath. $525 monthly. Great location. 601-415-0067.
Apartments/ downtown. 1, 2, 3 bedrooms. $400 to $650. Deposit/ credit check required. 601-638-1746.
CONFEDERATE RIDGE APARTMENTS 780 Hwy 61 North
$200 Blow Out Special! Call for details!
601-638-0102
THE COVE Tired of high utility bills? Country Living at its BEST! Paid cable, water & trash! Washer & Dryer, Microwave included! Ask about our
SPECIAL!
BEAUTIFUL LAKESIDE LIVING
• 1, 2 & 3 Bedroom Apts. • Beautifully Landscaped • Lake Surrounds Community
601-415-8735
• Pool • Fireplace • Spacious Floor Plans 601-629-6300
501 Fairways Drive Vicksburg
28. Furnished Apartments
Bonded
Call Malcolm 601-301-0841
STEELE PAINTING SERVICE LLC Specialize in painting/ sheet rock. All home improvements Free Estimates 601-634-0948. Chris Steele/ Owner
26. For Rent Or Lease ✰✰FOR LEASE✰✰
1911 Mission 66 Suite B-Apprx. 2450 sq. ft. Suite E-Apprx. 1620 sq. ft. Office or Retail! Great Location!
PRE-VIEW VICKSBURG'S FINEST furnished apartments on-line at www.vicksburgcorporatehousing.com Call today! 601-874-1116.
Commodore Apartments
29. Unfurnished Apartments
605 Cain Ridge Rd. Vicksburg, MS 39180
1 BEDROOM, $425. 2 bedroom townhouses, $525- $550. 3 bedroom apartments, $525- $550. Call Management, 601-631-0805.
601-638-2231
2 BEDROOM. ALL electric, includes water, $450. With stove and refrigerator. $200 deposit. 601-634-8290.
BRIAN MOORE REALTY Connie - Owner/ Agent
318-322-4000
29. Unfurnished Apartments
Finding the car you want in the Classifieds is easy, but now it’s practically automatic, since we’ve put our listings online.
1, 2 & 3 Bedrooms
LOOKING FOR YOUR DREAM HOME? Check the real estate listings in the classifieds daily.
Enjoy the convenience of downtown living at
The Vicksburg Apartments UTILITIES PAID! 1 & 2 Bedroom Apartments Studios & Efficiencies 801 Clay Street 601-630-2921
Barnes Glass Quality Service at Competitive Prices #1 Windshield Repair & Replacement
Vans • Cars • Trucks •Insurance Claims Welcome•
Touching Hearts, LLC Private Duty Sitting and Homemaker Service Caregivers available WHEN and WHERE you need them. •LPN’s •CNA’s •NURSE ASSISTANTS
601-429-5426
A.C.’S FOUNDATION
If your floors are sagging or shaking, WE CAN HELP! We replace floor joists, seals & pillars. We also install termite shields. ✰ Reasonable ✰ Insured
601-543-7007
BUFORD CONSTRUCTION CO., INC. 601-636-4813 State Board of Contractors Approved & Bonded
ROSS CONSTRUCTION
New Homes, Framing, Remodeling, Cabinets, Flooring, Roofing & Vinyl Siding. State Licensed & Bonded.
Jon Ross 601-638-7932
M&M HOUSE MOVING & RAISING
Simmons Lawn Service
Professional Services & Competitive Prices • Landscaping • Septic Systems • Irrigation: Install & Repair • Commercial & Residential Grass Cutting Licensed • Bonded • Insured 12 years experience Roy Simmons (Owner) 601-218-8341
SPEEDIPRINT & OFFICE SUPPLY • Business Cards • Letterhead • Envelopes • Invoices • Work Orders • Invitations (601) 638-2900 Fax (601) 636-6711 1601-C North Frontage Road Vicksburg, MS 39180
All Business & Service Directory Ads MUST BE PAID IN ADVANCE !
601-636-SELL (7355)
HILLVIEW ESTATES. VICKSBURG'S Premier Rental Community, on-site manager for 24/ 7 service for YOU. Professionally maintained grounds, new carpet, new paint. Come take a look. 5.1 miles on Highway 61 South, across from airport. 601-941-6788.
Live in a Quality Built Apartment for LESS! All brick, concrete floors and double walls provide excellent soundproofing, security, and safety. 601-638-1102 • 601-415-3333
Haul Clay, Gravel, Dirt, Rock & Sand All Types of Dozer Work Land Clearing • Demolition Site Development & Preparation Excavation Crane Rental • Mud Jacking
601-636-SELL
DOWNTOWN, BRICK, MARIE Apartments. Total electric, central air/ heat, stove, refrigerator. $520, water furnished. 601-636-7107, trip@msubulldogs.org
Bradford Ridge Apartments
NEED AN APARTMENT?
✰ HOUSE LEVELING ✰
HELP WANTED
29. Unfurnished Apartments
Ready to Work
AUTO • HOME • BUSINESS Jason Barnes • 601-661-0900
Step this way to the top of your field! Job opportunities abound in the
29. Unfurnished Apartments
•34 years experience •Fully insured
865-803-8227
Chopper’s
Olde Tyme Barber Shop • Hair Cuts • Cut & Style • Hot Towel Shave • Shoe Shine Dan Davis - Tracie Nevels 4407 Halls Ferry Rd.
601-638-2522 M-F: 8a-7p Sat: 8a-4p Discount for Military/Civil Service
PATRIOTIC • FLAGS • BANNERS • BUMPER STICKERS • YARD SIGNS
Show Your Colors!
To advertise your business here for as little as $2.83 per day, call our Classified Dept. at 601-636-7355.
30. Houses For Rent
33. Commercial Property
1630 CRAWFORD STREET. 4 bedrooms, 1 bath. $500 monthly, deposit/ references required. Call today. 414-324-3202.
PRIME RETAIL/ OFFICE space available January 1st, 2012. 6000 square feet located on North Frontage Road. One of the MOST desirable locations in the city. Interested parties should reply to Dept. 3762, In Care of The Vicksburg Post, P.O. Box 821668, Vicksburg, MS 39182.
909 NATIONAL STREET. 2 bedrooms, 1 bath, $600, deposit required. 601-415-0067.
HOUSE FOR RENT. 3 bedrooms, 1.5 baths. Nice neighborhood. Fenced backyard. $650. 601-218-4543.
LOS COLINAS. SMALL 2 Bedroom, 2 Bath Cottage. Close in, nice. $795 monthly. 601-831-4506.
Classified Advertising really brings big results!
32. Mobile Homes For Sale
KEEP UP WITH ALL THE LOCAL NEWS AND SALES... SUBSCRIBE TO THE VICKSBURG POST TODAY! CALL 601-636-4545, ASK FOR CIRCULATION.
The
ABCs
of writing a classified ad
Avoid Abbreviations. A few accepted and recognizable abbreviations are ok, but an ad full of them just confuses the reader. A good rule of thumb is “Spell it out or leave it out”.
Ads to appear Monday Tuesday Wednesday Thursday Friday Saturday Sunday
34. Houses For Sale
34. Houses For Sale
Licensed in MS and LA
601-636-6490
Open Hours: Mon-Fri 8:30am-5:30pm
601-634-8928 2170 S. I-20 Frontage Rd.
Daryl Hollingsworth..601-415-5549
Sybil Carraway...601-218-2869 Catherine Roy....601-831-5790 Mincer Minor.....601-529-0893 Jim Hobson.........601-415-0211
Eagle Lake - 16853 Hwy 465. 1.5 story, 3/2, open living area, apartment downstairs, furnished, pier, bar, porch. $149,500. McMillin Real Estate. Bette Paul-Warner, 601-218-1800.
VARNER REAL ESTATE, INC
JIM HOBSON
REALTOR®•BUILDER•APPRAISER
601-636-0502
Classifieds Really Work!
29. Unfurnished Apartments
29. Unfurnished Apartments
Consider Your Readers. Put yourself in the reader’s place. If you were considering buying this item, what would you want to know about it? Give the item’s age, condition, size, color, brand name and any other important information needed to describe it completely & accurately.
Don’t Exaggerate. Misleading information may bring potential buyers to your home but it will not help you make the sale. You’ll lose the prospect’s trust and faith as well as the sale.
Enter the Price. Price is one of the biggest concerns of classified shoppers. Ads that list prices will get their attention first. Including price also helps you avoid inquiries from callers not in your price range. Place Your Classified Ad Today!
601-636-SELL
34. Houses For Sale
36. Farms & Acreage
Classified Ad Rates
Classified Line Ads: Starting at 1-4 Lines, 1 Day for $8.32
38. Farm Implements/ Heavy Equipment
HEY! NEED CASH NOW? We buy junk cars, vans, SUVs, heavy equipment and more! Call today, we'll come pick them up with money in hand! 1-800-826-8104.
43 ACRES. CLOSE-IN, first 4 lots approved, will not subdivide. 601-831-1326.
39. Motorcycles, Bicycles
37. Recreational Vehicles
2007 HONDA CRF100F Dirt bike. With helmet, great shape. $1200. 601-638-0964.
TRAVEL TRAILER 1996 Jayco Eagle, 28 ft. no slides, 1/2 ton towable. Good condition. $2750.00 601-529-0102
2007 HONDA SPIRIT 1100. Accessories, silver, garage kept, 2000 miles. MUST SELL. $5500 or best offer. 601-301-0432.
38. Farm Implements/ Heavy Equipment
40. Cars & Trucks
29. Unfurnished Apartments
Bienville Apartments and The Park Residences at Bienville. 1, 2 & 3 bedrooms and townhomes available immediately.
1996 CROWN VICTORIA LT. Good condition, keyless entry, air. $2800. 601-636-5838.
Get a Late Model Car With a Low Down Payment. B.K.? REPO? DIVORCE? LOST JOB? MEDICAL? YOU ARE STILL OK!!! IF WE DON’T HAVE WHAT YOU WANT, WE CAN GET IT! NO CREDIT APP REFUSED!!! 24 Month Warranties Available
601-636-3147 2970 Hwy 61 North • Vicksburg Monday - Saturday 8am-7pm
No ad will be deliberately mis-classified. The Vicksburg Post classified department is the sole judge of the proper classification for each ad.
40. Cars & Trucks
40. Cars & Trucks
29. Unfurnished Apartments
29. Unfurnished Apartments
Cars, Trucks & SUV’s Pick yours today!
Gary’s Cars Hwy 61S
601-883-9995
HEY! NEED CASH NOW? We buy JUNK CARS, VANS, SUV’S, TRUCKS, SCHOOL BUSES, HEAVY EQUIPMENT, HEAVY DUTY TRUCKS & TRAILERS. Whether your junk is running or not, we PAY YOU CASH NOW. Call today, we'll come pick your junk up with CASH in hand!
MAGNOLIA MANOR APARTMENTS Elderly & Disabled 3515 Manor Drive Vicksburg, Ms. 601-636-3625 Equal Housing Opportunity
1-800-826-8104
601-638-7831 • 201 Berryman Rd.
SALES/ RENTALS
Mis-Classification
BUY HERE, PAY HERE. Cars start at $500 down. Located: George Carr old Rental Building. Check us out. 601-218-2893.
1999 to 2005
Great Location, Hard-Working Staff
OK CARS
In the event of errors, please call the very first day your ad appears. The Vicksburg Post will not be responsible for more than one incorrect insertion.
HEY! NEED CASH NOW? We buy junk cars, vans, SUVs, heavy equipment and more! Call today, we'll come pick them up with money in hand! 1-800-826-8104.
COME CHECK US OUT TODAY. YOU’LL WANT TO MAKE YOUR HOME HERE.
YOU ARE APPROVED! START REBUILDING YOUR CREDIT HERE!
Errors
FINANCING GUARANTEED!
29. Unfurnished Apartments
Place your classified line ad at
2001 CADILLAC ELDORADO. One owner, mature adult driven, excellent shape. Call 601-218-9654 days, 601-636-0658 nights. Dealer.
29. Unfurnished Apartments
FOR LEASING INFO, CALL 601-636-1752
Internet
1998 CADILLAC D'ELEGANCE. One owner, mature adult driven, White Diamond, beautiful. $6900. Call 601-218-9654 days, 601-636-0658 nights. Dealer.
Call 601-636-SELL to sell your Car or Truck!
EQUAL HOUSING OPPORTUNITY
CROSS OVER
40. Cars & Trucks
Finding the car you want in the Classifieds is easy, but now it’s practically automatic, since we’ve put our listings online.
VICKSBURG'S NEWEST, AND A WELL-MAINTAINED FAVORITE. EACH WITH SPACIOUS FLOOR PLANS AND SOPHISTICATED AMENITIES.
Be Available. List your telephone number so that the potential buyer will know how to contact you. State the best hours to call so they’ll know when they can reach you.
Deadlines for ads to appear: Monday, 5 p.m. Thursday; Tuesday, 3 p.m. Friday; Wednesday, 3 p.m. Monday; Thursday, 3 p.m. Tuesday; Friday, 3 p.m. Wednesday; Saturday, 11 a.m. Thursday; Sunday, 11 a.m. Thursday.
1411 ELM STREET. 2 bedroom, 1 bath, new roof. $7,500. 601-529-5376.
Broker, GRI
31. Mobile Homes For Rent MEADOWBROOK PROPERTIES. 2 or 3 bedroom mobile homes, south county. Deposit required. 601-619-9789.
Classified Display Deadlines
SHAMROCK APARTMENTS
SUPERIOR QUALITY, CUSTOM CABINETS, EXTRA LARGE MASTER BDRM, & WASHER/ DRYER HOOKUPS. SAFE!! SENIOR CITIZEN DISCOUNT
601-661-0765 • 601-415-3333
Please call one of these Coldwell Banker professionals today: Jimmy Ball 601-218-3541 John H. Caldwell 601-618-5183 Reatha Crear 601-831-1742 Herb Jones 601-831-1840 Marianne Jones 601-415-6868 Remy Massey 601-636-3699 Kim Steen 601-218-7318 Harley Caldwell, broker
Interest Rates As Low As 3% 601-634-8928 2170 I-20 S. Frontage Road
relish CELEBRATING AMERICA’S
LOVE OF FOOD
SEP 2011
VISIT THE ALL NEW
RELISH.COM
Chicken with 20 Cloves of Garlic (Page 16)
Easy Garlic Chicken
CASSEROLES
The Only Pad Shaped Like You. Better Fit with Better Protection.† († vs. leading brand) ©2011 KCWW
Corn Chowder Sign up for our newsletters at relish.com
Relish stylist Teresa Blackburn transforms garbage cans into cool recycle bins. For the complete how-to,
This & That Vegetable Heaven Reflecting the season and our appetites for more hearty foods, this issue is full of veggie-centric chowders, casseroles and braises. Quick, easy, healthy and robust. Enjoy. —Jill Melton
We love this bright OXO limited-edition “Good Cookie” Spatula. Proceeds go to Cookies for Kids’ Cancer, a nonprofit organization dedicated to battling pediatric cancer (SEE STORY ON PAGE 5). $6.99 at oxo.com.
LEGAL NOTICE
If you purchased Innova, EVO, California Natural, HealthWise, Mother Nature, or Karma dog or cat food you could get a payment from a class action settlement. A $2,150,000 settlement has been reached with Natura Pet Products, Inc., Natura Pet Food, Inc., Natura Manufacturing and Peter Atkins (“Defendants” or “Natura”) in a class action lawsuit about the statements made in the advertising of Natura brand dog and cat food. Natura denies all of the claims in the lawsuit, but has agreed to the settlement to avoid the cost and burden of a trial.
WHO IS INCLUDED?
Those included in the class action, together called a “Class” or “Class Members” include anyone in the U.S. who purchased Natura brand dog or cat food products from March 20, 2005 through July 8, 2011.
WHAT DOES THE SETTLEMENT PROVIDE?
The maximum payment you can get is $200. A $2,150,000 settlement fund will be created by Natura. After paying the lawyers representing the Class for attorneys’ fees of up to 35% of the fund and costs and expenses of up to $60,000; costs to administer the settlement of up to $400,000; and up to $20,000 to the Class Representative (Judy Ko), payments will be made to Class Members who submit valid claim forms.
HOW DO YOU ASK FOR A PAYMENT?
Submit a claim form online, or get one by mail by calling the toll free number. The deadline to submit or mail your claim form is January 8, 2012.
SEPTEMBER 2011
A FRUGAL GOURMET
From the Editor
go to relish.com/recycle.
WHAT ARE YOUR OPTIONS?
You have a choice about whether to stay in the Class or not. If you submit a claim form or do nothing, you are choosing to stay in the Class. This
means you will be legally bound by all orders and judgments of the Court, and you will not be able to sue or continue to sue Natura about the legal claims resolved by this settlement. If you stay in the Class you may object to the settlement. You or your own lawyer may also ask to appear and speak at the hearing, at your own cost, but you don’t have to. The deadline to submit objections and requests to appear is December 28, 2011. If you don’t want to stay in the Class, you must submit a request for exclusion by December 28, 2011. If you exclude yourself, you cannot get a payment from this settlement, but you will keep any rights to sue Natura for the same claims in a different lawsuit. The detailed notice explains how to do all of these things.
THE COURT’S FAIRNESS HEARING
The U.S. District Court for the Northern District of California will hold a hearing in this case (Ko v. Natura Pet Products, Inc., Case No 5:09cv2619), on February 17, 2012, at 9:00 a.m. to consider whether to approve: the settlement; attorneys’ fees, costs, and expenses; and the payment to the Class Representative. If approved, the settlement will release the Defendants from all claims listed in the Settlement Agreement.
HOW DO YOU GET MORE INFORMATION?
The detailed notice and Settlement Agreement are available at the website. You can also call 1-888-768-2047, or write to Natura Settlement Administrator, c/o Analytics, Inc., PO Box 2005, Chanhassen, MN 55317-2005, or contact Class Counsel at 800-851-8716.
1-888-768-2047
Robin Mather’s book The Feast Nearby (Ten Speed Press, 2011) grew out of a cloud of heartache and bad luck. It outlines her quest to spend just $40 a week on food in the midst of unemployment, solitude and heartbreak. But what a silver lining. “My decision to eat locally brought new friends into my new life and gave me a sense of purpose and community that I had never had before,” she writes. A great read about living a simpler life in sync with nature and its foods, her book includes simple, fresh recipes like this zucchini bread.
Zucchini Bread with Walnuts
3 cups self-rising flour
1 tablespoon ground cinnamon
1 teaspoon grated nutmeg
½ teaspoon ground cloves
½ teaspoon salt
3 eggs
½ cup vegetable oil
½ cup 2 percent reduced-fat milk
2¼ cups sugar
1 tablespoon vanilla extract
2 cups grated zucchini
1 cup chopped walnuts or pecans
1. Preheat oven to 325F. Grease 2 (9 x 5-inch) loaf pans and dust with flour.
2. Sift flour, cinnamon, nutmeg, cloves and salt into a bowl.
3. Beat eggs, oil, milk, sugar and vanilla in a large bowl until combined. Add flour mixture to egg mixture and stir to combine thoroughly. Stir in zucchini and nuts. Pour into prepared pans.
4. Bake 45 to 60 minutes, until a wooden pick inserted in the center comes out clean. Cool in pans on a wire rack 20 minutes. Remove from pans and cool completely before slicing.
Makes 2 loaves, 10 slices each.
Per serving: 260 calories, 10g fat, 30mg chol., 4g prot., 39g carbs., 24g sugars, 1g fiber, 250mg sodium.
relish®
Visit us relish.com
EDITOR Jill Melton • MANAGING EDITOR Candace Floyd • CREATIVE DIRECTOR Tom Davis • MULTIMEDIA EDITOR Stacey Norwood • PHOTO EDITOR Katie Styblo • ALL PHOTOS BY: Mark Boughton Photography • PROP AND FOOD STYLING BY: Teresa Blackburn. Relish is published by: Publishing Group of America, 341 Cool Springs Boulevard, Suite 400, Franklin, Tennessee 37067. Phone: 800-720-6323. Mail editorial queries and contributions to Editor, Relish, 341 Cool Springs Blvd., Suite 400, Franklin, TN 37067. Publishing Group of America, Inc. will not be responsible for unsolicited materials, and cannot guarantee the return of any materials submitted to it. ©2011 Publishing Group of America, Inc. Relish™ is a trademark of Publishing Group of America, Inc. All rights reserved. Reproduction in whole or part of any article, photograph, or other portion of this magazine without the express written permission of Publishing Group of America, Inc. is prohibited.
chill me
Keep Chillin’. Reprinted from Lobsters Scream When You Boil Them and 100 Other Myths About Food and Cooking (Gallery Books, 2011) by Bruce Weinstein and Mark Scarbrough.
GET THE PARTY STARTED
Gearing up for the entertaining season? Add color (and nutrition) to any party by placing vibrant baby carrots and beets in a glass for dipping in coarse salt or any dip. Look for baby vegetables at your farmers’ market.
Experience the team for a superior clean. Purchase a Bosch Ascenta® dishwasher and receive a FREE 100-DAY SUPPLY of Finish® Quantum® dishwashing tablets.*
Bosch recommends for superior cleaning results.
Get the most out of your new Bosch dishwasher. When you get a superior, German-engineered product, you should use complementary products that are just as advanced. That’s why Bosch and Finish® are a perfect match. With the Bosch Ascenta, you get one of the world’s quietest high-performance dishwashers. With Finish® Quantum®, you get a top-performing dishwashing tablet that’s so powerful there is no need to pre-rinse. That’s why, for a limited time, we’re adding in a FREE 100-DAY SUPPLY* of Finish® Quantum® when you buy a Bosch Ascenta. After all, the best deserves nothing less than the best.
Offer valid through October 10, 2011.
For more information on Finish® Quantum® go to www.finishdishwashing.com/Relish © 2011 RB © 2011 BSH Home Appliances Corporation. BFD831-04-99561-2
Eligible models: SHE3AR7_UC, SHE3ARF_UC, SHX3AR7_UC, SHE3AR5_UC, SHE3ARL_UC, SHX3AR5_UC. *100 day supply based on average US Nielsen annual washloads includes 45 tablets.
relish | food hero
A Mother’s Love
Gretchen Holt-Witt with son Liam (2004-2011)
A working mother of two, Gretchen Holt-Witt changed her goals in an instant when her son, Liam, was diagnosed with cancer in 2007. What she discovered was that not nearly enough treatment options are available for kids battling cancer, and that research on new treatments is sorely lacking in funding. So Gretchen went to work, turning her son’s favorite cookies into a force for raising funds for cancer research. She took orders for cookies from friends, coworkers and neighbors, and with the help of more than 250 volunteers (many of whom Gretchen didn’t even know), baked and sold 96,000 cookies in three weeks. The event raised more than $400,000 for pediatric cancer research, but it was soon clear that something bigger than a bake sale had begun. Even weeks after the event was over, requests for cookies kept coming in. What started as a simple act to raise money and awareness for her child’s own cancer blossomed into something much bigger than any had planned. The event caught the eyes and the hearts of the media and people all over the country.

“Liam was my hero. He was the one who fought so hard. I had the blessing of being able to love him and cherish him for his time here . . . and now I have to spend my time making sure that when I see him again, I can tell him that mommy did everything she could to make it better for others. It’s the only thing he’d want.” —Gretchen Holt-Witt

How is food an act of love to you? Tell us on facebook.com/relishmagazine or email us at cookies@relish.com
COOKIES BY MAIL: With more than eight varieties to choose from, you can send cookies as a gift, with 100 percent of the profits from your purchase directly funding pediatric cancer research. Cookies are packed in pretty gift boxes with a card explaining the program. Go to cookiesforkidscancer.org to order. Groups around the country host bake sales; go to relish.com/gretchenscookies for more information and tips on hosting a bake sale. Sixty-six brownies, cookies and bars are captured in the Best Bake Sale Cookbook (Wiley, 2011). For blondies studded with white chocolate chips, go to relish.com/gretchenscookies.
relish | the pantry
Summer in a Casserole Fresh, bright casseroles feature end-of-summer tomatoes, squash and basil for potlucks and parties.
Fresh Squash Casserole
Fresh summer squash is sautéed, bound together by eggs and a splash of cream, seasoned with fresh thyme, and baked.

3 tablespoons butter
1 tablespoon olive oil
1 medium onion, chopped
1 garlic clove, chopped
2 pounds zucchini squash, sliced
2 pounds yellow squash, sliced
3 eggs
½ cup half-and-half
1 teaspoon salt
¼ teaspoon pepper
1 tablespoon fresh thyme leaves
¼ cup panko breadcrumbs
¼ cup finely …

… 4. Bake 35 minutes. Remove from oven and top with panko and cheese. Place under broiler and broil until brown, about 3 minutes. Serves 8.
Fresh Summer Casserole
Fresh summer vegetables meld with pasta in this cheesy, creamy casserole. Whole-milk ricotta makes it extra creamy, but the part-skim variety will work just as well.

2 tablespoons olive oil
3 cups cherry or grape tomatoes
3 garlic cloves, chopped
12 ounces whole-milk ricotta cheese
6 ounces feta cheese or combination of goat and feta cheeses
½ cup 2 percent reduced-fat milk
12 ounces cooked short pasta (such as gemelli or penne)
2 medium zucchini, cut into thin strips with a vegetable peeler
½ cup fresh basil, chopped
½ cup cracker crumbs
1 tablespoon cold butter, cut into small pieces
Grated Parmigiano Reggiano cheese (optional)
Per serving: 161 calories, 11g fat, 99mg chol., 7g prot., 11g carbs., 6g sugars, 3g fiber, 409mg sodium.
Recipe by Liz Shenk.
1. Preheat oven to 350F. 2. Heat oil in a large skillet. Add tomatoes and sauté over medium heat until browned, about 10 minutes. Add chopped garlic. Cook 1 minute. 3. Combine ricotta, feta and milk in a large mixing bowl; stir well. Add cooked pasta, tomatoes, zucchini, cheese mixture and basil; stir gently. 4. Transfer to a 2-quart casserole dish. Top with cracker crumbs. Sprinkle butter evenly over top. Sprinkle with Parmigiano Reggiano, if using. Cover and bake 30 to 40 minutes or until hot and bubbly. Serves 8. Per serving: 505 calories, 22g fat, 61mg chol., 21g prot., 57g carbs., 8g sugars, 9g fiber, 464mg sodium.
relish |
No Contract. No Risk. Great Value.
Corn Fed
From the #1 rated no contract wireless service provider.*†
in-season
Off the cob and into the pan, sweet corn kernels star in these savory dinner dishes.
EASY PHONE $25
PLANS FROM JUST $10/MO Easy cell plans that’ll save you money. We’ll transfer your existing number for you and we’ll even let you know when your minutes are almost up.
FREE*
About 50 years ago, plant scientists at the University of Illinois discovered a supersweet gene that produced corn kernels packed with sugar. The rest is history, with Silver Queen, Super Sweet, Kandy Korn and Sweetie being found at most farm stands. Although this intense sweetness has fostered corn ice cream and crème brulée, we still prefer it buttered and salted for dinner. However, since one can eat only so much corn-on-the-cob, we’ve taken the sweet creamy kernels off the cob and matched them with sharp, salty ingredients for a dynamite flavor contrast.
NO CONTRACTS Upgrade, change or cancel your plan at any time. You’re in control. FREE PHONES* A great selection of phones from Motorola, Samsung and Doro. From user friendly phones with big buttons and screens to full feature models, we have what you want with prices starting at Free. 100% RISK FREE GUARANTEE With our no obligation return policy you have nothing to lose.
SMART PHONE
CALL CONSUMER CELLULAR 888.427.3512 OR VISIT ®
AARP members ask for your special discount when starting new service!
Now available at
*Requires new service activation on approved credit and $35 activation fee. Pricing at retail stores will include the $35 activation fee. Not all phones displayed are retailed at Sears. Certain models are free beyond activation fee. Cellular service is not available in all areas and is subject to system limitations. Phones are limited to stock on hand. Terms and Conditions subject to change. †If you’re not satisfied within 30 days or 30 minutes of usage, whichever comes first, cancel and pay nothing, no questions asked. AARP member benefits are provided by third parties, not by AARP or its affiliates. Providers pay a royalty fee to AARP for the use of AARP’s intellectual property. These fees are used for the general purposes of AARP. Provider offers are subject to change and may have restrictions. Please contact the provider directly for details.
Smoky Corn and Shrimp Chowder
Salty, crispy bacon and smoky paprika (found in the spice section of most supermarkets) flavor this creamy soup studded with sweet corn and briny shrimp. Arugula gives it color, but parsley or spinach will work, too.

4 ears fresh sweet corn
4 slices bacon
1 medium white onion, chopped
1 teaspoon salt
1 teaspoon smoked paprika
¼ teaspoon cayenne pepper
2 tablespoons all-purpose flour
2 medium baking potatoes, peeled and chopped
5 cups 2 percent reduced-fat milk
1 pound shrimp, peeled (deveined, if large)
4 cups baby arugula or other peppery greens, such as mache or chopped and stemmed mustard or turnip greens
1. Cut corn kernels from cob. Scrape cobs with back of knife to release milk. Set aside. 2. Cook bacon in large Dutch oven. Remove from pan, drain and crumble. 3. Add onion, corn, salt, smoked paprika and cayenne pepper to bacon drippings. Sauté 10 minutes. Add flour, whisking well, and cook 2 minutes. Add potatoes and milk. Cook 10 minutes or until thick and creamy. Add shrimp and cook until pink. Stir in arugula. Serve with crumbled bacon. Serves 6. Recipe by Jill Melton. Per serving: 382 calories, 17g fat, 120mg chol., 25g prot., 33g carbs., 15g sugars, 3g fiber, 794mg sodium.
HEALTH FOOD: As an added bonus, the Smoky Corn and Shrimp Chowder uses a substantial amount of milk, which tempers the sweet and pungent flavors and provides calcium. This recipe provides more than 300 milligrams calcium—the same amount in a cup of milk—which can help prevent osteoporosis.
Chicken Maque Choux
Here’s our rendition of “Maque Choux” (pronounced mock-shoe), the braised corn, tomato and pepper dish of French Acadian and Native American origin, popular in Louisiana.

1 tablespoon garlic powder
1 tablespoon onion powder
1 teaspoon cumin
1 teaspoon cinnamon
1 teaspoon sugar
½ teaspoon salt
½ teaspoon pepper
1 tablespoon oil
4 chicken quarters
2 green onions, chopped
2½ cups fresh corn kernels (4 ears)
1 large tomato, chopped
1 red bell pepper, chopped

For shrimp and corn pudding, go to relish.com/corn.
Corn and Orzo Salad with Arugula Pesto Peppery arugula pesto and tangy feta cheese encase orzo and sweet corn kernels for a great dish that is as good hot as it is cold. Arugula Pesto: 3 tablespoons chopped walnuts (about 1 ounce) 1 ½ cups packed arugula ½ cup packed flat-leaf parsley 1 garlic clove, pressed 3 tablespoons extra-virgin olive oil 2 tablespoons lemon juice 3 tablespoons grated Parmigiano Reggiano cheese ½ teaspoon coarse salt Freshly ground black pepper Salad: 1 cup uncooked orzo (rice-shaped pasta) 2 cups fresh corn kernels 1 cup cucumber, cut into small cubes (about 1 medium cucumber) 2 large plum tomatoes, cut lengthwise into wedges ¾ cup crumbled feta cheese
Chicken Maque Choux directions:
1. Preheat oven to 375F. 2. Combine the first 7 ingredients (garlic powder through pepper); rub over chicken. 3. Heat oil in large ovenproof skillet. Brown chicken 5 minutes on each side. Remove chicken to a plate. 4. Add onions, corn, tomato and bell pepper to pan; sauté 5 minutes. Place chicken on top of corn mixture in skillet. Cover and bake 25 minutes. Uncover and bake 15 minutes. Serves 4.
Per serving: 500 calories, 27g fat, 145mg chol., 40g prot., 26g carbs., 6g sugars, 5g fiber, 380mg sodium.

Corn and Orzo Salad directions:
1. To prepare pesto, place walnuts in processor and finely chop. Add arugula and parsley; pulse to coarsely chop. With motor running, add garlic, oil, lemon juice, cheese, salt and pepper and process until blended. 2. To prepare salad, cook orzo according to package directions. Drain, rinse under cold running water and drain well. Transfer to a large bowl. Add corn, pesto and cucumber and mix gently. Spoon onto serving plates and garnish with tomatoes and feta. Serves 8. Recipe by Jean Kressy.
Per serving: 233 calories, 11g fat, 14mg chol., 8g prot., 27g carbs., 5g sugars, 2g fiber, 246mg sodium.
“My doctor and I chose Prolia®. Ask your doctor if Prolia® is right for you.” Blythe Danner Award winning actress
Prolia® is a prescription medicine used to treat osteoporosis in women after menopause who: • have an increased risk for fractures • cannot use another osteoporosis medicine or other osteoporosis medicines did not work well
Important Safety Information.
Take calcium and vitamin D as your doctor tells you to. Skin problems such as inflammation of your skin (dermatitis), rash, and eczema have been reported.
For women with postmenopausal osteoporosis at increased risk for fractures: there’s Prolia®.
2 shots a year proven to help strengthen bones. Prolia® is different. It’s the first and only prescription therapy for postmenopausal osteoporosis that is a shot given 2 times a year in your doctor’s office. Prolia® helps stop the development of bone-removing cells before they can reach and damage the bone. Prolia® is proven to: • Significantly reduce fractures of the spine, hip, and other bones • Help increase bone density • Help reverse bone loss. Is Prolia® right for you? Ask your doctor today. By Prescription Only.
Ask your doctor about your bone strength and if Prolia® is right for you.
2 shots a year to help reverse bone loss. © 2011 Amgen Inc., Thousand Oaks, CA 91320. All rights reserved. 60207-R1-V4
Amgen Manufacturing Limited, a subsidiary of Amgen Inc. One Amgen Center Drive Thousand Oaks, California 91320-1799 This Medication Guide has been approved by the US Food and Drug Administration. v2 Issued: 07/2011
MEDICATION GUIDE
Prolia® (PRÓ-lee-a) (denosumab) Injection

What is the most important information I should know about Prolia? If you receive Prolia, you should not receive XGEVA®. Prolia contains the same medicine as Xgeva (denosumab).

Prolia can cause serious side effects including:
1. … Prolia. Take calcium and vitamin D as your doctor tells you to.
2. Serious infections. People who have weakened immune system or take medicines that affect the immune system may have an increased risk for developing serious infections. Call your doctor right away if you have any of the following symptoms of infection: • Fever or chills • Skin that looks red or swollen and is hot or tender to touch • Severe abdominal pain • Frequent or urgent need to urinate or burning feeling when you urinate
3. Skin problems. Skin problems such as inflammation of your skin (dermatitis), rash, and eczema may happen if you take Prolia. Call your doctor if you have any of the following symptoms of skin problems that do not go away or get worse: • Redness • Itching • Small bumps or patches (rash) • Your skin is dry or feels like leather • Blisters that ooze or become crusty • Skin peeling

Call your doctor right away if you have any of these side effects.

What is Prolia? Prolia is a prescription medicine used to treat osteoporosis (thinning and weakening of bone) in women after menopause (“change of life”) who • Have an increased risk for fractures (broken bones). • Cannot use another osteoporosis medicine or other osteoporosis medicines did not work well.

Who should not receive Prolia? Do not take Prolia if you have been told by your doctor that your blood calcium level is too low. By Prescription Only.

What should I tell my doctor before receiving Prolia? Before taking Prolia, tell your doctor if you: • … with Amgen’s Pregnancy Surveillance Program or call 1-800-772-6436 (1-800-77-AMGEN). The purpose of this program is to collect information about women who have become pregnant while taking Prolia. • Are breast-feeding or plan to breast-feed. It is not known if Prolia passes into your breast milk. You and your doctor should decide if you will take Prolia or breast-feed.

How will I receive Prolia? • Prolia is an injection that will be given to you by a healthcare professional. Prolia is injected under your skin (subcutaneous). • You will receive Prolia 1 time every 6 months. • You should take calcium and vitamin D as your doctor tells you to while you receive Prolia. • If you miss a dose of Prolia, you should receive your injection as soon as you can. • Take good care of your teeth and gums while you receive Prolia. Brush and floss your teeth regularly. • Tell your dentist that you are receiving Prolia before you have dental work.

What are the possible side effects of Prolia? Prolia may cause serious side effects. • See “What is the most important information I should know about Prolia?” • Long-term effects on bone: It is not known if the use of Prolia over a long period of time may cause slow healing of broken bones or unusual fractures. The most common side effects of Prolia are: • Back pain • Pain in your arms and legs • High cholesterol • Muscle pain • Bladder infection. These are not all the possible side effects of Prolia. For more information, ask your doctor or pharmacist. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088.

How should I handle Prolia if I need to pick it up from a pharmacy? • Keep Prolia in a refrigerator at 36°F to 46°F (2°C to 8°C) in the original carton. • Do not freeze Prolia. • When you remove Prolia from the refrigerator, Prolia must be kept at room temperature [up to 77°F (25°C)] in the original carton and must be used within 14 days. • Do not keep Prolia at temperatures above 77°F (25°C). Warm temperatures will affect how Prolia works. • Do not shake Prolia. • Keep Prolia in the original carton to protect from light. Keep Prolia and all medicines out of reach of children.
Limit one per customer at this special low price!
You can’t purchase this Uncirculated American Eagle silver dollar directly from the U.S. Mint. But you can now purchase the official 2011 U.S. silver dollar from Littleton Coin Company at our cost! The beautiful and sought-after $1 American Eagle is over 99.9% pure silver, and carries the same design as the popular “Walking Liberty” silver coins of 1916-47.

You’ll also receive our fully illustrated catalog, plus other fascinating selections from our Free Examination Coins-on-Approval Service, from which you may purchase any or none of the coins – return balance in 15 days – with option to cancel at any time. Don’t delay – order your 2011 American Eagle silver dollar at our cost today!

ONLY $39.00

IMPORTANT NOTICE: ORDERS MUST BE RECEIVED WITHIN 15 DAYS
45-Day Money Back Guarantee of Satisfaction
©2011 LCC, LLC
Get a 2011 American Eagle Silver Dollar at our cost!*
Latest 2011 Issue! ★ One ounce of 99.93% pure silver! ★ Beautiful mint Uncirculated condition! ★ 2011 marks the 26th anniversary of the American Eagle series
FREE Gift!
★ Limited-time offer for new customers
when you order within 15 days
Due to fluctuations in the coin market, prices are subject to change. * “At our cost” reflects market price as of August 4, 2011.
Complete 4-Coin Uncirculated Set of 2009 cents, featuring special designs honoring the bicentennial of Lincoln’s birth!
Limited-Time Offer! Special Offer for New Customers Only
Get a 2011 American Eagle Silver Dollar at our cost!
✓ YES! Please send me the Uncirculated American Eagle Silver Dollar at Littleton’s cost (limit 1). Plus, send my FREE Uncirculated 2009 4-Coin Lincoln Cent Set (one per customer, please).
Limit One: $39.00
Please send coupon to: Dept. 2FZ400 1309 Mt. Eustis Road Littleton NH 03561-3737
Shipping & Handling: $4.95
Method of payment: ❏ Check payable to Littleton Coin Co. ❏ VISA ❏ MasterCard ❏ American Express ❏ Discover Network
Total Amount: $43.95
Card No.
Exp. Date _______ /_______
Name __________________________________________________________________________________________________ Please print your name and address clearly
Address ________________________________________________________________________ Apt#_________ City_______________________________________________ State ________ Zip ______________________ E-Mail ____________________________________________________________________________________ America’s Favorite Coin Source • TRUSTED SINCE 1945
Quick and Easy Eggs
Cooking incredible eggs can be quick and easy, especially when hard-boiling or using the microwave to prepare them.

Perfect Hard-Boiled Eggs
1. Place eggs in saucepan large enough to hold them in a single layer. Add cold water to cover eggs by 1 inch. … Cool eggs in a bowl of ice water, then refrigerate.
TIPS:
• Very fresh eggs can be difficult to peel, so buy and refrigerate them 7 to 10 days in advance of cooking. This brief “breather” allows the eggs time to take in air, which helps separate the membranes from the shell.
• Hard-boiled eggs are easiest to peel right after cooling, which causes the egg to contract slightly in the shell.
• To peel a hard-boiled egg: Gently tap egg on countertop until shell is finely crackled all over. Roll egg between hands to loosen shell. Start peeling at large end, holding egg under cold running water to help ease the shell off.
Microwave Cooking Tips
Incredible edible eggs, nature’s own convenience food, and the microwave oven add up to quick and easy meals with minimal clean up. When you cook eggs in the microwave, keep these few points in mind:
• Egg yolk, because it contains fat, tends to cook more quickly than egg white. To more evenly cook unbeaten eggs in the microwave, cook more slowly by using 50% or 30% power. For omelets, scrambled eggs and poached eggs cooked in water, you can use full power (high).
• Never microwave an egg in its shell because it will explode. Even out of the shell, eggs may explode in the microwave because rapid heating causes steam to build up under the yolk membrane faster than it can escape. To create a steam vent, before microwaving, use a wooden pick or the tip of a knife to break the yolk membrane of an unbeaten egg.
• To encourage more even cooking, cover microwave cooking containers with a lid, plastic wrap or waxed paper; stir the ingredients, if possible; and, if your oven doesn’t have a turntable, rotate the dish once or twice during cooking.
For more recipes and tips, visit us on Facebook.
LOOK FOR THE NEW NUTRITION FACTS PANEL ON EGG CARTONS
New USDA Study Shows:
Eggs Have Less Cholesterol, More Vitamin D
Consuming an egg a day fits …

Nutrition Facts
Serving Size 1 egg (50g); Servings per Container 12
Amount Per Serving: Calories 70 (Calories from Fat 45)
% Daily Value*
Total Fat 5g (8%)
  Saturated Fat 1.5g (8%)
  Polyunsaturated Fat 1g
  Monounsaturated Fat 2g
  Trans Fat 0g
Cholesterol 185mg (60%)
Sodium 70mg (3%)
Potassium 70mg (2%)
Total Carbohydrate 0g (0%)
Protein 6g (13%)
Vitamin A 6% • Vitamin C 0% • Vitamin D 10% • Calcium 2% • Iron 4% • Thiamin 0% • Riboflavin 10% • Vitamin B-6 4% • Folate 6% • Vitamin B-12 8% • Phosphorus 10% • Zinc 4%
Not a significant source of Dietary Fiber or Sugars.
* Percent Daily Values are based on a 2,000 calorie diet. Your daily values may be higher or lower depending on your calorie needs. For a 2,000 / 2,500 calorie diet: Total Fat, less than 65g / 80g; Sat Fat, less than 20g / 25g; Cholesterol, less than 300mg / 300mg; Sodium, less than 2,400mg / 2,400mg; Potassium, 3,500mg / 3,500mg; Total Carbohydrate, 300g / 375g; Dietary Fiber, 25g / 30g; Protein, 50g / 65g.
Calories per gram: Fat 9 • Carbohydrate 4 • Protein 4

MICROWAVE COFFEE CUP SCRAMBLE
Prep Time: 1 minute Cook Time: 45 to 60 seconds Makes: 1 serving
WHAT YOU NEED 2 EGGS 2 Tbsp. milk 2 Tbsp. shredded Cheddar Cheese Salt and pepper
HERE’S HOW 1. COAT 12-oz. microwave-safe coffee mug with cooking spray. ADD eggs and milk; beat until blended. 2. MICROWAVE on HIGH 45 seconds; stir. MICROWAVE until eggs are almost set, 30 to 45 seconds longer. 3. TOP with cheese; season with salt and pepper. Microwave ovens vary; cooking times may need to be adjusted. Average amount of cholesterol in one egg is 185 mg, down from 215 mg.
relish l
America’s harvest
Made in America In New York’s Hudson Valley, farmers grow more than 70 varieties of garlic, celebrated at the annual garlic festival every September.
Photo by Diane Welland
Bob Yerina’s father came home one evening about 40 years ago with 1 ½ … intensive. “You plant in fall, then weed constantly. …” The festival is scheduled for Sept. 24-25 in Saugerties, N.Y.

Frankie Palermo’s Stuffed Baked Tomatoes
Story by Diane Welland, a food writer in Springfield, Va.
HEALTH FOOD: In this recipe, pungent raw garlic combines with cool, creamy cottage cheese. This versatile dairy product packs a healthy dose of protein (28g per cup) and calcium (150mg per cup). Stuff it in a fresh tomato, use in place of ricotta for creamier lasagnas, and don’t forget the garlic.

2 large ripe summer tomatoes
1 cup small curd cottage cheese
¼ cup finely chopped fresh basil, spinach or arugula
2 crushed garlic cloves
½ cup grated Parmesan cheese
1. Preheat oven to 400F. 2. Cut tomatoes in half horizontally. Combine remaining ingredients; stir gently. Divide evenly among 4 tomato halves. Bake 15 minutes or until hot and browned. Serves 4. Per serving: 110 calories, 4.5g fat, 15mg chol., 11g prot., 6g carbs., 4g sugars, 1g fiber, 350mg sodium.
Chicken with 20 Cloves of Garlic (cover)
When cooked long and slow, garlic becomes soft and mellow, perfect for spreading on bread.

1 tablespoon butter
1 tablespoon olive oil
2 pounds chicken leg quarters
20 garlic cloves, peeled
2 cups cherry tomatoes
½ teaspoon salt
¾ cup reduced-sodium chicken broth
¼ teaspoon freshly ground black pepper
¼ cup shredded (chiffonade) basil leaves
24 (¼-inch-thick) slices baguette
1. Preheat oven to 425F. 2. Heat butter and oil in a heavy ovenproof skillet over medium-high heat. Add chicken and cook until browned (about 5 minutes per side). Remove chicken to a plate. 3. Add garlic to pan and cook until it begins to brown, about 1 minute, stirring frequently. Return chicken to pan on top of garlic. Sprinkle with tomatoes and salt. Add broth. 4. Cook 30 minutes, or until chicken is done. Sprinkle with pepper and fresh basil. Serve with baguette slices. Serves 8. Per serving: 306 calories, 13g fat, 72mg chol., 19g prot., 25g carbs., 1.3g sugars, 1g fiber, 472mg sodium.
For 10 yummy uses for garlic, go to relish.com/garlic.
100% Pure Cottage Cheese daisybrand.com/cottagecheese
Expires: October 31, 2011
SAVE 45¢ on any Daisy Brand Cot tage Cheese ®
0073420-110918 RETAILER: DAISY BRAND will reimburse you for the face value of this coupon plus 8¢ when accepted in accordance with our redemption policy (copy available upon request). Retailers and authorized clearing houses send to Daisy Brand, CMS Dept. 73420, #1 Fawcett Drive, Del Rio, TX 78840. Limit 1 coupon per transaction. Void if transferred or copied. Good only in U.S.A. Void where taxed or prohibited by law. Not valid in Colorado or North Dakota. Cash value .001¢. © 2011 Daisy Brand.
relish l
breakfast
Emeril’s Farmers’ Market Frittata
We love this fresh frittata from Emeril Lagasse that uses summer vegetables and garden herbs.

8 eggs
3 tablespoons heavy cream
½ teaspoon salt
¼ teaspoon freshly ground black pepper
3 tablespoons butter
1 cup thinly sliced onions
1 cup thinly sliced green, red or orange bell peppers, or a mix
1 cup thinly sliced mushrooms (about 4 ounces)
1 cup fresh corn kernels
1 cup diced smoked ham
2 tablespoons chopped fresh herbs, such as chives, basil, thyme, parsley and oregano
1 cup grated Swiss cheese (about 4 ounces)
Photo by Teresa Blackburn
Fresh Frittata
This Italian omelet will become your new go-to brunch or dinner, perfect for fresh vegetables as well as recycled leftovers. Best yet—no flipping or folding required. When … —Jill Melton

For 3 more frittatas, go to relish.com/breakfastfrittatas.

1. Set a rack in the upper third of the oven and preheat the broiler. 2. Whisk eggs, cream, salt and pepper together in a medium bowl until combined. 3. Melt 2 tablespoons butter in a 10-inch ovenproof sauté pan over medium-high heat. Add onions and peppers and cook, stirring as needed, until soft, 7 to 8 minutes. Add mushrooms and corn and cook 2 minutes. Add ham and cook until warmed, about 1 minute. Add remaining 1 tablespoon butter; when melted, add egg mixture. Sprinkle fresh herbs over eggs, and top with cheese. Reduce heat to medium and cook eggs, undisturbed, 3 minutes, or until the surface begins to bubble and the bottom starts to set. 4. Immediately place pan in oven and broil until golden brown on top, 3 to 4 minutes. 5. Remove pan from oven. Using a rubber spatula, loosen frittata from sides of pan. Tilt pan and gently slide frittata onto a platter. Serve hot or warm. Serves 8. Per serving: 260 calories, 19g fat, 230mg chol., 14g prot., 8g carbs., 3g sugars, 1g fiber, 510mg sodium.
Recipe by Emeril Lagasse. Reprinted with permission from his Farm to Fork: Cooking Local, Cooking Fresh (William Morrow Cookbooks, 2010). HOW TO
Use the sharp edge of a knife to cut kernels off the cob; then flip the knife over and use the dull side to scrape again, releasing all the milk.
MAD ABOUT OLIVES! RECIPE CONTEST
Do canned olives send you down memory lane? Us too. Tap into that nostalgia and your inner cook and enter our online olive recipe contest. We’re teaming up with the California Olive Committee to find the best original quick-and-easy main dishes using canned black California olives. Win a two-day boot camp to the Culinary Institute of America in the heart of Napa Valley. Entries accepted Sept. 1 – Oct. 31. Submit recipes online at relish.com/oliverecipes.

WE WANT YOUR WARM AND FUZZIES Have a favorite Christmas cookie memory? For the upcoming holiday issue of Relish, we’re featuring best Christmas cookie moments, recipes or pictures. Send yours, by Oct. 1, to cookies@relish.com.
facebook.com/mms
FREE GIFT valued at $35
“My Medical Alarm saved my life 3 times! I’m sure glad I didn’t wait.”
The Designed For Seniors® Medical Alarm provides emergency notification that is simple, reliable and affordable. It’s simply the best value on the market today. Don’t wait until it’s too late; read a real life-saving story below!
Simple, Reliable, and Affordable Designed For Seniors®
Designed For Seniors® Medical Alarm vs. the competition:
- Equipment Cost: FREE (competition: $30-$300)
- Activation: FREE (competition: $10-$30)
- Contract: NONE (competition: 1-2 years)
- UL Approved Call Center: YES (competition: some)
- Senior Approved™: YES (competition: no)
- Warranty: LIFETIME (competition: varies)
- Free Shipping: YES (competition: ?)
Best of all, it’s affordable. There is no equipment charge, no activation fee, no long-term contract. Call now and within a week you or someone you love will have the peace of mind and independence that comes with this remarkable system. Order now and receive free shipping and a free gift – valued at $35. It’s yours to keep.
Designed For Seniors® Medical Alarm
Please mention promotional code 43070.
1-877-686-1523
Copyright © 2010 by firstSTREET for Boomers and Beyond, Inc. All rights reserved.
“Good morning. This is Nancy with Medical Alarm. Do you need assistance, Mrs. Smith?”
“I’m 79 years old and live alone in a small town. I own and wear the firstSTREET Medical Alarm button. The Medical Alarm has saved my life not once but three times! The first incident was on May 15th, when I had a stroke. The second incident was on Oct 15th: I found myself on the floor, with a knot on my head and a hole in the wall. The third incident was on Oct 23rd: I felt strange sitting in the chair. I could not move my right arm or leg. I learned that the hole in my heart (from birth) was forcing the high blood pressure through the hole and right up to my brain; this was the reason for all three strokes. I can walk and talk with the exception of a weak right arm. If it was not for the Medical Alarm, who knows what the outcome could’ve been.” W. Blackledge
Why wait? It’s simple to install and use. Unlike other products that require professional installation, this product is “plug and play.” The unit is designed for easy use in an emergency, with large, easy-to-identify buttons. Wear it as a pendant, on your belt, or on your wrist. Plus it’s reliable. From the waterproof pendant to the sophisticated base unit to the state-of-the-art 24/7 call center, the entire system is designed to give you peace of mind, knowing you are never alone in an emergency. You get two-way communication with a live person in our Emergency Response Center, and there’s a battery backup in case of a power failure.
Hough Transform on Live Video
I am trying to run a Hough transform on live video input. It runs fine on what seems to be the first frame, then it throws this:
The thread 'Win32 Thread' (0x12b0) has exited with code 1950154752 (0x743d0000).
HEAP[openCV.exe]: Invalid address specified to RtlFreeHeap( 01DC0000, 041EC660 )
Windows has triggered a breakpoint in openCV.exe.
This may be due to a corruption of the heap, which indicates a bug in openCV.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while openCV.exe has focus.
The output window may have more diagnostic information.
The program '[900] openCV.exe: Native' has exited with code 0 (0x0).
And the code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

using namespace std;
using namespace cv;

int main(int, char**)
{
    VideoCapture stream1(0);
    if (!stream1.isOpened()) {
        cout << "cannot open camera" << endl;
    }
    while (true) {
        Mat cannyImage;
        Mat cameraFrame;
        Mat houghImage;
        stream1.read(cameraFrame);
        imshow("Source Input", cameraFrame);
        cvtColor(cameraFrame, cannyImage, CV_BGR2GRAY);
        Canny(cannyImage, cannyImage, 100, 200, 3);
        imshow("Canny", cannyImage);
        vector<Vec4i> lines;
        HoughLinesP(cannyImage, lines, 1, CV_PI/180, 50, 10, 10);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
I have been googling and searching around to no avail. Anyone have any pointers?
You haven't released the camera resource (cvCaptureFromCAM and cvQueryFrame are C API functions, so you need to manage the resources yourself). Please use the C++ API (VideoCapture) instead of the C API. Try your best to stick with the C++ API rather than the C API; this could save you from a lot of headaches.
Alright. Thanks! I changed my code, however I am still getting the same windows breakpoint error. Code posted above. | https://answers.opencv.org/question/21903/hough-transform-on-live-video/ | CC-MAIN-2021-17 | refinedweb | 296 | 67.76 |
Eventually.
Relevant blog posts
Original motivating post
Estimating forecast scalars
Instruments weights, correlations, and diversification multipliers
Optimising in the presence of costs
The breakout trading rule
Capital correction
Docker and automated trading systems
A note on support
However I can't guarantee that I will reply immediately, or at all. If you need that level of support then you are better off with another project. The most efficient way of getting support is by opening an issue on GitHub.
Bought the book from Amazon.co.uk
Is there a forum for users/adopters of the accompanying tools?
God blesses!!!
Best regards,
Sanyaade
I guess this is the forum :-)
You can also post on any of the threads I started on ET, like
or
Hello Robert,
I've been working through your book (slowly, I might add, at weekends and so on) and actually came along to your recent talk to the MTA held at CMC Markets in Aldgate.
Thank you for a great text!
BTW I bought the book on amazon.co.uk ... do I need to register or anything to use the website?
Thanks for the kind words. No registration is needed.
Hi Robert,
Great book! Forgive my ignorance, but what does the y axis of your equity curve represent? Does it mean that if I had 100k in 1983, I would have 600k in profits in 2016? Thanks.
Thanks glad you liked it.
Yes, if you didn't compound your profits. See
If I had put 100k in an sp500 index fund from 1982 till now, and just left it alone, I would have made about 10% per year. If I had instead put that money in your system, as plotted by your equity curve in this post, what would my average annualized return have been? Thanks.
There's a simple answer to this and a complicated answer.
The simple answer is: about 16.7% a year
The complicated answer is:
- the equity curve shown is non compounding (equivalent to a log scale). My average return can be compared directly with the s&p but if you plotted them on top of each other mine would look worse.
- the vol on the s&p 500 was lower; around 15% a year compared to the 25% a year targeted here (and the 30% realised). You ought to chop my return in half to reflect its higher risk: so call it 8.3%
- trading futures you can put most of your money in the bank because you only need <10% for margin. So really you should add on LIBOR to all my performance figures, or deduct them from the S&P 500. That's probably about 4% a year on average, so my comparable return is really around 12.3%
- no investor would hold a trend following system as their only investment (or the equivalent as a share of a CTA like hedge fund). They'd own equities, and bonds, as well. The main benefit of a trend following system is that it adds diversification to traditional portfolios.
Thanks for the thorough answer. Might I be better off investing in a strategy such as the one on page 52 of this paper?
It shows an average return of 19% over a very long backtest, with 15% volatility. Plus it only requires a monthly (rather than daily) rebalance. What do you think?
Firstly I think I should state clearly that my 'raison d'etre' isn't to push one particular trading strategy over another. It's to provide people with the tools to intelligently evaluate and construct their own strategies.
Having said that there are several points I'd make (in the spirit of teaching you how to evaluate)
[I can't work out exactly what the other backtest is doing, so these comments are based on my impressions rather than a detailed analysis]
- like the S&P 500 the other backtest also includes return on invested cash.
- I think it is a 'long only' strategy, so it will have more beta exposure. To evaluate the two strategies properly you'd need to account for this.
- you could rebalance my system monthly and it wouldn't affect the performance much
- it looks like they're using a single moving average. Using a blend of three like in mine is more robust on an out of sample basis
- it looks like the system is binary, going from fully long to flat with nothing in between. My system is continuous. They don't include trading costs, which will be higher in a binary system
- VERY IMPORTANT POINT: backtested performance is subject to a large amount of statistical uncertainty. You should NEVER pick a system purely because it has a higher backtest than another because of the dangers of overfitting. I could have very easily presented a system which made 50% a year on 15% volatility (but it's not one I'd advise you trading!)
These are just a few brief comments that you might want to consider when deciding what is the right strategy for you.
Having said all that as I said above I'm not here to push one strategy over another.
If the way the other system works suits you, and you think it's robust, then by all means trade it. Almost any trading system is better than none. This system doesn't seem to commit many of the crimes I mention in my book: in particular as it doesn't use leverage it's probably not too toxic.
And if anything I'd rather you read my book (naturally!) and perhaps used it to build something that combined the elements of systems you like together.
Great points, thanks. Indeed, the system I referenced has performed quite unimpressively the past few years, calling into question whether it is overfitted to past data. However, it does simply use a 12 month look-back for all instruments. Also, don't you think there's something to be said for such a long backtest. Sure, a five year backtest may be meaningless, but a 30 year backtest (like your own) is very convincing to me.
A 30 year backtest is better than a 5 year but still no panacea.
I'm surprised when you say you think that if you only rebalanced your system monthly, it wouldn't make much difference. I thought that one of the main benefits of rebalancing your system daily is that you'll quickly catch any dangerous dips before they become too large. Given that you don't use stop losses, I would think a daily rebalance would be quite important.
Not really; because the underlying trading rules aren't super quick they aren't massively affected by less frequent rebalancing.
Good to know, thanks. I'm considering running the 8 instrument system with thresholding, using excel and placing orders manually, and only checking in on it once a month. In this scenario, do you think there's a benefit in incorporating stop losses?
What trading rules are you using? If mainly trend following then you probably don't need separate stop losses.
I was going to use the rules you use for your system in chapter 15: carry along with the three slower exponential moving averages.
Then personally I don't think you need separate stop losses.
Thanks for all your guidance, Robert. I'm finding that my broker doesn't offer the instruments in your list that are traded in euros. Given that fact, would it make sense for me to just replace them with the next three instruments on your list: 2yr notes, Lean hogs, and GBP?
Are you trading futures?
Yes, they don't seem to offer european volatility, eurostoxx, or Korean 3 year bonds. It's TDAmeritrade. Have you ever heard of such a limitation?
It appears they only offer about 50 futures contracts. I would switch brokers, but there are few that allow futures trading in a US retirement account (IRA). Interactive brokers does, but they grossly inflate the maintenance margin required for futures in retirement accounts. Given this limitation, and my desire to diversify your system across about 8 instruments, should I replace the 3 unavailable instruments with the next 3 available ones on your recommended list? Thanks.
So from here for eight instruments I have KR3, V2X, Eurodollar, MXP, Corn, Eurostoxx, US Gas, Platinum
You can't trade KR3, V2X and Eurostoxx. So you should replace them with a bond (US 2 year), a vol (VIX) and an equity (Nasdaq).
I see, makes good sense, thanks. I hope I am not cluttering your comments section with my very specific questions, but with that risk in mind, here's another question: I noticed this quote from your chapter 15: "With this relatively high volatility target [of 20% annually] you’ll be checking your account value, and adjusting your risk, every day." I am intending to run at 20% volatility, but am planning to adjust risk (or rebalance) only once a month. Granted, I will be diversified across 8 instruments rather than 6, but am I taking on too much risk in this case?
With a vol target of 20% a year a one month one standard deviation move will be 5.7%. So if you have a 3 std dev move over one month that's 17% of your position you'd need to cut. Quite a big cut in your position to delay. Personally I'd run at a lower vol or rebalance weekly or daily.
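The arithmetic in that reply can be checked with a couple of lines of Python (a sketch added for illustration; it assumes, as is standard, that volatility scales with the square root of time):

```python
import math

def monthly_sigma(annual_vol):
    # Volatility scales with the square root of time:
    # one month is 1/12 of a year.
    return annual_vol / math.sqrt(12)

one_sd = monthly_sigma(0.20)   # about 0.0577, i.e. the ~5.7% quoted above
three_sd = 3 * one_sd          # about 0.17, the 17% position cut discussed
```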
Hi Rob,
Quick question: in get_positions_from_forecasts in accounting.py, the "multiplier" formula is divided by 10; is this because your average_absolute_forecast is 10? That would mean we need to change that value if we're using a different average absolute value I believe. Thank you!
Yes - but you don't need to change that value - in that part of the code it's just an arbitrary number. The p&l for forecasts doesn't know about the real trading system, and just produces a figure for an arbitrary risk target.
If you want to change 10 where it matters, it's in the config:
average_absolute_forecast: 10
Got it, thanks Rob!
Hi Rob, when I run the below code, I get a sharpe of -0.27. Might this indicate that the Carry rule simply doesn't work well for CORN? Should I exclude CORN from my list of instruments, and use something else like WHEAT? Thanks.
from systems.provided.futures_chapter15.basesystem import futures_system
system=futures_system()
print(system.accounts.pandl_for_instrument_forecast("CORN", "carry").sharpe()) ## Sharpe for a specific trading rule variation
Congratulations. You're well on your way to overfitting your trading system. By all means trade wheat as well if you have sufficient cash, but use proper robust fitting techniques in the code to decide what allocation to give to different forecasting rules and instruments.
Hi Rob. thank for the series of blogs. In the main page of the githug for the project, the backtesting and "trade with IB" is identified as eventual futures. Are they available now? What is the timeframe to have support for live IB trading?
Thank you
Not available. No idea when. I do this in my spare time, when I'm not doing other things.
Hello Rob,
Supposing I wanted to use your full system for forecasting & portfolio design in the nightly backtesting routine of my own Python trading system (for private, individual use), would you say it'd be a painful and foolish endeavor to attempt to crop out all components linked to the 'simplesystem' script from Chapter 15 of your book, versus just keeping pysystemtrade intact as its own component of my algo? Apologies in advance for the brazenness of that question :)
It would be trivial to delete all the trading rule codes, which is only a small fraction of the codebase anyway, although they appear in multiple places. But why would you want to do this?
Sorry, I misphrased that a bit...should've used "transfer all components" from 'simplesystem' instead of stating 'crop out'. Basically, I'm just looking to save time and mistakes by using your system, but I'm targeting a different data connector than IB, and so on...still, the differences are a moot point, I guess, if everything is partitioned off appropriately on my side.
Yes, hopefully it's sufficiently loosely coupled that you can replace the data class supplied with your own (read the docs but basically the data class is specific to a particular asset class and source, but inherits from more generic classes).
Hi Rob, I was running your latest pysystemtrade code with volatility set to 30. When I backtested it, it showed an annual standard deviation of 16.45. Shouldn't that be closer to 30?
Please tell me exactly the code you ran.
Hi, Rob. As I investigate further, it looks like I forgot to change the volatility in the YAML config file from its default of 20. Sorry for the false alarm. On another note, I am now getting the warning "Carry is deprecated, used Carry2". Is this something I switch in the YAML file? It looks like Carry2 requires different arguments. Thanks.
No, don't worry about that.
Hi Rob,
When I look at the annual return of my pysystemtrade backtest it shows 25K so far for 2018. But, when I look at the backtested daily returns for 2018, they sum up to zero. Do you know why? Thanks.
NO but if you give me some code so I can reproduce your error I might be able to tell you why
Sorry, Rob. I realized I made a dumb formula error in my summation in Excel. So, the pysystemtrade backtest results are vindicated! Thanks, anyway, for offering to help.
Dear Rob,
I came across your repository and subsequently your site after hearing a suggestion from Michael Halls-Moore. I must say I am immensely impressed and overwhelmed by the amount of work you have put into the Python module.
I'm currently studying for an MSc in computer science and hoping to one day become a quant developer for a firm in New York or London. I would really appreciate it if you could give me some advice on achieving such a goal.
As a first step I am reading Michael Halls-Moore's ebook on algorithmic trading and would like to read your book soon. What are other things I should do? Thank you.
Ernie Chans books are also very good.
Hi Rob,
I highly appreciate your work lighting this hard path for non-professional computer scientists. I've learned a lot from your blog. Now I feel more confident to start my own project, and I have a doubt, if you don't mind:
Although I'm not attempting any HFT system, I'm still aiming to build one fast enough for decent intraday work. Since I work with IB (Interactive Brokers) I'm studying its API. As you know, it comes in several languages. I understand that using C/C++ is the best choice for maximum speed, but I see you use C++ with Python, and I wonder if this combination is optimal for speed or whether there is a toll? I imagine that development speed is better with Python, but I worry that it might not be the fastest in live work.
A note: I'm an economist, but an absolute enthusiast in this area. I can code in C# but have never tried C++ or Python.
Should I keep working in C#?
Should I work in C++ plus Python?
Thanks in advance!
Sorry for the delay in responding.
You've identified the issues: python is relatively slow compared to C++ (I don't know C# at all); but it's easier to develop in. So it comes down to whether python is too slow for what you intend to do. It might be worth asking here
Hi Rob,
I have been running a lot of backtests with your system, and it seems like in all cases, when I leave out the Carry rule, I get a better sharpe. For example, if you run your base system, but change the allocation from 50% to 0%, you get a sharpe of .47, rather than .41 with Carry. This has happened to me when I backtested many more complicated systems: Carry is always a negative impact on the sharpe. Why do you think this is? Thanks.
How many instruments are you using? If it's just a few, then you might by chance have picked instruments on which carry hasn't worked so well. In any case the reduction isn't statistically significant so I'd be cautious about taking carry out given the overwhelming evidence elsewhere that it has worked in the past.
In my example, I was running the exact system for which you show the plot on this page, so 5 instruments. But I get a similar percentage reduction in sharpe, when I run it with 16 instruments, and a couple of other flavors. May I ask if your current system seems to improve if you remove carry, as an experiment?
Apologies for the slow response to your comment. 99% of the comments I get are spam, so inevitably when deleting them a few real comments get overlooked.
No, removing carry in my current system (37 futures markets) is harmful to returns.
Hi Rob, wonderful book which came at the perfect time for me as I recently decide to leave a 20y finance career as a senior quant in electronic trading and was planning to spend some time trading for my own account. This framework was exactly what I was looking for!
Also looking at your python framework, which looks very well organized, and planning to use it at least as a starting point.
Quick question on that actually. The data API seems limited to futures and FX, and there is no object hierarchy for equities. Any reason for that? Should I use the futures data model for equities, or is it something you just haven't implemented yet?
Thank you!
Daniel
I'm not planning to; equity trading is hard because of things like dividends and splits. It's more likely I'll add options, and that isn't very likely in the near future. | https://qoppac.blogspot.com/p/pysystemtrade.html | CC-MAIN-2020-40 | refinedweb | 3,048 | 71.95 |
Using APIs in your react project is a common use case. In this tutorial, we will be looking at two use cases
We will be using functional components and the useEffect hook. Some familiarity is expected.
In this use case, the data is only loaded once - whenever the user views the app or refreshes the page. Initially, a 'Loading…' text is shown. This text is later updated with the actual API data. Below is the code snippet of the component which causes the above behavior
Let's discuss the code in 3 parts, the states, the useEffect hooks, and the rendering logic
const [isLoading, setIsLoading] = React.useState(true);
We have two states. The isLoading state is a boolean variable initialized to True. This state is used to keep a track of whether the data is still loading or it has already been loaded. The setIsLoading function is used to toggle this state variable. After the API returns the data, we will use this function to toggle the value for isLoading
const [data, setData] = React.useState([]);
Next, we have the data state. This state is initialized to an empty array. It will be used to store the data returned by the API. You can initialize the state to an empty object as well. However, the API I am using in the example returns a list and therefore an empty list seems like the right choice. The setData function is used to update the state variable data after the API returns the data.
React.useEffect(() => {
  const url = "";
  fetch(url)
    .then((response) => response.json())
    .then((json) => setData(json['results']))
    .catch((error) => console.log(error));
}, []);
The above useEffect Hook is used to make the request to the API. The '[]' parameter tells React to run this hook only once. The hook runs after the page has loaded. A simple fetch request is made and after the promise(s) are resolved, we use the setData function to update the state variable data
React.useEffect(() => {
  if (data.length !== 0) {
    setIsLoading(false);
  }
  console.log(data);
}, [data]);
The next useEffect hook runs whenever the state variable data is updated. It does a simple check, if the state variable data is not empty, i.e the API has returned the data, it sets the state variable isLoading to False.
Note: The above case assumes a happy case, i.e the API will always return the data. However, this is not true. Therefore error handling should also be added.
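As a sketch of what that error handling could look like, here is a hypothetical helper (not part of the tutorial's code). The fetch implementation is injectable, so the logic can be exercised without a network:

```javascript
// Hypothetical helper sketching the error handling mentioned above.
// `fetchImpl` defaults to the global fetch, but can be injected for testing.
async function loadUsers(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    // HTTP-level failures (404, 500, ...) do not reject the fetch promise,
    // so they have to be checked explicitly.
    throw new Error(`Request failed with status ${response.status}`);
  }
  const json = await response.json();
  return json.results; // same shape as the API used in this tutorial
}
```

In the component you would wrap the call in try/catch and set an error state, instead of (or as well as) logging to the console.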
  return (
    <div>
      {isLoading ? (
        <h1>Loading...</h1>
      ) : (
        data.map((user) => (
          <h1>
            {user.name.first} {user.name.last}
          </h1>
        ))
      )}
    </div>
  );
}
The rendering logic is pretty straightforward, if the state variable 'isLoading' is True, we will display the 'Loading…' indication. If it is false, we simply map over the state variable 'data' and display all the items in the array.
Below is the entire code snippet
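The embedded full snippet did not survive the export. Assembled from the pieces discussed above, the complete component would look roughly like this (the component name `App` is an assumption, and the API URL is left blank as in the original snippets):

```jsx
function App() {
  const [isLoading, setIsLoading] = React.useState(true);
  const [data, setData] = React.useState([]);

  // Runs once, after the first render: fetch the data.
  React.useEffect(() => {
    const url = "";
    fetch(url)
      .then((response) => response.json())
      .then((json) => setData(json["results"]))
      .catch((error) => console.log(error));
  }, []);

  // Runs whenever `data` changes: clear the loading flag.
  React.useEffect(() => {
    if (data.length !== 0) {
      setIsLoading(false);
    }
    console.log(data);
  }, [data]);

  return (
    <div>
      {isLoading ? (
        <h1>Loading...</h1>
      ) : (
        data.map((user) => (
          <h1>
            {user.name.first} {user.name.last}
          </h1>
        ))
      )}
    </div>
  );
}
```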
We will discuss the code in 3 parts.
const [showData, setShowData] = React.useState(false);
The first two state variables are the same as the ones in the previous section. We will discuss the third state variable showData.
When the user views the page for the first time, we do not want them to see the API data or the 'Loading……' text. Therefore we add a simple check to see if the user has clicked the button. After the user clicks the button once, there are only two views
Every time the user clicks the button again, we just toggle between the two views mentioned above.
const handleClick = () => {
  setisLoadingData(true);
  setShowData(true);
  const url = "";
  fetch(url)
    .then((response) => response.json())
    .then((json) => {
      setisLoadingData(false);
      setData(json["results"]);
      console.log(data);
    })
    .catch((error) => console.log(error));
};
This is similar to the first useEffect Hook in the first use case. The only difference is that we set our state variable showData to True.
return (
  <div>
    <button onClick={handleClick}> Load Data </button>
    {showData ? (
      isLoadingData ? (
        <h1>LOADING DATA........</h1>
      ) : (
        data.map((user) => (
          <h1>
            {user.name.first} {user.name.last}
          </h1>
        ))
      )
    ) : (
      <div></div>
    )}
  </div>
);
First, we have a check for showData, this is to ensure that initially, the user doesn't see the 'Loading….' text nor the API data. After the user clicks the button, showData is set to True. After this, the rendering logic is similar to the first use case.
I hope you found this article helpful. Add me on LinkedIn, Twitter | https://realpythonproject.hashnode.dev/how-to-use-apis-with-react-functional-components | CC-MAIN-2021-49 | refinedweb | 719 | 67.76 |
In this example I will demonstrate how to expose a product catalog as a syndication feed using either Atom or RSS. I will start off by creating a new syndication library as shown below.
The code generated by the syndication library has a working example that you can immediately run to see the exact behavior. The output generated when you run the example is shown below.
The output contains a single feed that has a title and content. You can confirm it's an RSS feed by viewing the source.
The project consists of a WCF service. The service has a contract: an interface attributed with ServiceContract. It has one method called CreateFeed, attributed as OperationContract. There is another class, Feed1, that implements the contract. The output is shown below.
In addition to the ServiceContract attribute, there are two other attributes that declare that this service can return either Atom or RSS. The CreateFeed method returns a SyndicationFeedFormatter, which could be either RSS or Atom depending on what is requested via the querystring. SyndicationFeedFormatter is a complex type defined inside the System.ServiceModel.Syndication namespace. This is the namespace where you will find all the classes necessary to create syndication feeds. The method is also attributed with the WebGet attribute, with UriTemplate set to *. This indicates that if a request is made to the root URL of the service, it maps to the CreateFeed method. The implementation for the contract is as shown below.
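The original post showed this code as screenshots, which have not survived. A sketch of what the contract and the initial implementation might have looked like (the interface and class names follow the text; the exact item contents and the `format` querystring parameter name are assumptions):

```csharp
[ServiceContract]
[ServiceKnownType(typeof(Atom10FeedFormatter))]
[ServiceKnownType(typeof(Rss20FeedFormatter))]
public interface IFeed1
{
    [OperationContract]
    [WebGet(UriTemplate = "*")]
    SyndicationFeedFormatter CreateFeed();
}

public class Feed1 : IFeed1
{
    public SyndicationFeedFormatter CreateFeed()
    {
        // Create the feed from a title and a description.
        SyndicationFeed feed = new SyndicationFeed(
            "Product Catalog", "Products exposed as a feed", null);

        // Hard-coded items for now; replaced by a LINQ query later.
        List<SyndicationItem> items = new List<SyndicationItem>
        {
            new SyndicationItem("Item 1", "First item", null),
            new SyndicationItem("Item 2", "Second item", null)
        };
        feed.Items = items;

        // Return atom or rss depending on the querystring.
        string query = WebOperationContext.Current.IncomingRequest
            .UriTemplateMatch.QueryParameters["format"];

        if (query == "atom")
            return new Atom10FeedFormatter(feed);
        return new Rss20FeedFormatter(feed);
    }
}
```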
In the above example we start by creating a syndication feed based on a title and description. Next we create a list of syndication items and assign it to the Items property of SyndicationFeed. We hard-code two items, but we will soon change that to a LINQ query that returns products from the Northwind database. The next step is checking, based on the querystring, whether the user is looking for an RSS or Atom feed. Depending on the querystring passed, we return either an Atom10 or RSS20 feed formatter. Let's go ahead and modify the code above to return a feed for the Products table.
From the code below, you can see that I am generating SyndicationItems based on a LINQ query for products.
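Again, the screenshot of this code is missing. The LINQ version of the item generation might look roughly like this (the Northwind data context and column names are assumptions):

```csharp
NorthwindDataContext db = new NorthwindDataContext();

feed.Items = db.Products
    .AsEnumerable() // project in memory; SyndicationItem is not translatable to SQL
    .Select(p => new SyndicationItem(
        p.ProductName,
        "Unit price: " + p.UnitPrice,
        null))
    .ToList();
```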
Another interesting place to look is the App.config and the endpoint that gets generated.
In the configuration above, you will notice that we are using webHttpBinding, the new binding added in .NET 3.5 that does all the magic of exposing REST-based services.
The Java Specialists' Newsletter
Issue 122
2006-03-08
Category:
Tips and Tricks
Java version: JDK 1.5
Welcome to the 122nd edition of The Java(tm) Specialists' Newsletter. On Monday we had the hottest day in Cape Town since they began keeping records in 1957. It was a sweltering 41 degrees Celsius at the airport, and probably much hotter in the city centre. I took most of the day off and spent it with a newsletter subscriber visiting me from Amsterdam. We went to the second largest granite outcrop in the world, which is quite close to where I live. If you ever come to Cape Town, make it a priority to go see the Paarl Rocks.
We are busy streamlining our website to make it more navigable. In addition, we are moving over to a dedicated server, which should sort out all the downtime issues we have had recently. Until that is complete, you might have the misfortune of not getting through to javaspecialists.eu. We will send out another newsletter when we have moved.
Part of the job of installing our own dedicated server involves downloading software from the internet onto our machine. I did not want to punch a hole in my router to allow me to open up an X session onto the server. Considering my slow internet connection, I also did not want to first download the files onto my machine, then upload onto the server.
A technique that I have used many times for downloading files from the internet is to open up a URL, grap the bytes, and add them to a local file. Here is a small program that does this for you. You can specify any URL, and it will fetch the file from the internet for you and show you the progress.
You can either specify the URL and the destination filename or let the Sucker work that out for himself.
Some URLs can tell you how many bytes the content is, others do not reveal that information. I use the Strategy Pattern to differentiate between the two. We have a top level Strategy class called Stats and two implementations, BasicStats and ProgressStats.
The stats are displayed in a background thread. This means that the Stats class has to ensure that changes to the fields are visible to the background thread.
In my System.out.println(), I output a new Date() to show the progress of the download. This is usually a bad practice. It would be better to use the DateFormat to reduce the amount of processing that needs to be done to display the date.
The last comment about this class is the size of the buffer. At the moment it is set to 1MB. This is larger than necessary, so actual length will often be much smaller.
import java.io.*;
import java.net.*;
import java.util.*;

public class Sucker {
  private final String outputFile;
  private final Stats stats;
  private final URL url;

  public Sucker(String path, String outputFile) throws IOException {
    this.outputFile = outputFile;
    System.out.println(new Date() + " Constructing Sucker");
    url = new URL(path);
    System.out.println(new Date() + " Connected to URL");
    stats = Stats.make(url);
  }

  public Sucker(String path) throws IOException {
    this(path, path.replaceAll(".*\\/", ""));
  }

  private void downloadFile() throws IOException {
    Timer timer = new Timer();
    timer.schedule(new TimerTask() {
      public void run() {
        stats.print();
      }
    }, 1000, 1000);
    try {
      System.out.println(new Date() + " Opening Streams");
      InputStream in = url.openStream();
      OutputStream out = new FileOutputStream(outputFile);
      System.out.println(new Date() + " Streams opened");
      byte[] buf = new byte[1024 * 1024];
      int length;
      while ((length = in.read(buf)) != -1) {
        out.write(buf, 0, length);
        stats.bytes(length);
      }
      in.close();
      out.close();
    } finally {
      timer.cancel();
      stats.print();
    }
  }

  private static void usage() {
    System.out.println("Usage: java Sucker URL [targetfile]");
    System.out.println("\tThis will download the file at the URL " +
        "to the targetfile location");
    System.exit(1);
  }

  public static void main(String[] args) throws IOException {
    Sucker sucker;
    switch (args.length) {
      case 1:
        sucker = new Sucker(args[0]);
        break;
      case 2:
        sucker = new Sucker(args[0], args[1]);
        break;
      default:
        usage();
        return;
    }
    sucker.downloadFile();
  }
}
The Stats class needs a little bit of explaining. The field totalBytes is written to by one thread and read from by another. Since we are writing with only one thread, we can get away with just making the field volatile. We have to make it at least volatile to ensure that the timer thread can see our changes.
The printf() statement "%10d KB%5s%% (%d KB/s)%n" looks beautiful, does it not? The %10d means a decimal number, right-justified in a field of 10 characters. The "KB" stands for kilobytes. The %5s means a String, right-justified in a field of 5 characters. Then we have %%, which represents the % sign. The newline is done with %n. Cryptic, I know, but for experienced C programmers this should read like poetry :-)
The Stats class contains a factory method that returns a different strategy, depending on whether the content length is known. Having the factory method inside Stats allows us to introduce new types of Stats without modifying the context class, in this case Sucker.
import java.net.*;
import java.io.IOException;
import java.util.Date;

public abstract class Stats {
  private volatile int totalBytes;
  private long start = System.currentTimeMillis();

  public int seconds() {
    int result = (int) ((System.currentTimeMillis() - start) / 1000);
    return result == 0 ? 1 : result; // avoid div by zero
  }

  public void bytes(int length) {
    totalBytes += length;
  }

  public void print() {
    int kbpersecond = (int) (totalBytes / seconds() / 1024);
    System.out.printf("%10d KB%5s%% (%d KB/s)%n",
        totalBytes / 1024,
        calculatePercentageComplete(totalBytes),
        kbpersecond);
  }

  public abstract String calculatePercentageComplete(int bytes);

  public static Stats make(URL url) throws IOException {
    System.out.println(new Date() + " Opening connection to URL");
    URLConnection con = url.openConnection();
    System.out.println(new Date() + " Getting content length");
    int size = con.getContentLength();
    return size == -1 ? new BasicStats() : new ProgressStats(size);
  }
}
The ProgressStats class is used when we know the content length of the URL; otherwise BasicStats is used.
public class ProgressStats extends Stats {
  private final long contentLength;

  public ProgressStats(long contentLength) {
    this.contentLength = contentLength;
  }

  public String calculatePercentageComplete(int totalBytes) {
    return Long.toString(totalBytes * 100L / contentLength);
  }
}

public class BasicStats extends Stats {
  public String calculatePercentageComplete(int totalBytes) {
    return "???";
  }
}
Let's run the Sucker class. To download a picture of me at the Tsinghua University in China, you would do the following:
java Sucker
which produces the following output on my slow connection to the internet:
Wed Mar 08 12:24:27 GMT+02:00 2006 Constructing Sucker
Wed Mar 08 12:24:27 GMT+02:00 2006 Connected to URL
Wed Mar 08 12:24:27 GMT+02:00 2006 Opening connection to URL
Wed Mar 08 12:24:27 GMT+02:00 2006 Getting content length
Wed Mar 08 12:24:27 GMT+02:00 2006 Opening Streams
Wed Mar 08 12:24:28 GMT+02:00 2006 Streams opened
         6 KB    2% (6 KB/s)
        56 KB   17% (28 KB/s)
       104 KB   32% (34 KB/s)
       158 KB   49% (39 KB/s)
       203 KB   63% (40 KB/s)
       257 KB   79% (42 KB/s)
       295 KB   91% (42 KB/s)
       322 KB  100% (46 KB/s)
When I tried downloading the latest Tomcat version from my server, the speed was far more acceptable:
Wed Mar 08 11:25:52 CET 2006 Constructing Sucker
Wed Mar 08 11:25:52 CET 2006 Connected to URL
Wed Mar 08 11:25:52 CET 2006 Opening connection to URL
Wed Mar 08 11:25:52 CET 2006 Getting content length
Wed Mar 08 11:25:57 CET 2006 Opening Streams
Wed Mar 08 11:25:58 CET 2006 Streams opened
      1056 KB   18% (1056 KB/s)
      2272 KB   38% (1136 KB/s)
      3200 KB   54% (1066 KB/s)
      4121 KB   70% (1030 KB/s)
      5200 KB   89% (1040 KB/s)
      5829 KB  100% (1165 KB/s)
There are ways of running this through a proxy as well, which you apparently do like this (according to my friends Pat Cousins and Leon Swanepoel):
System.getProperties().put("proxySet", "true");
System.getProperties().put("proxyHost", "193.41.31.2");
System.getProperties().put("proxyPort", "8080");
If you need to supply a password, you can do that by changing the authenticator:
Authenticator.setDefault(new Authenticator() {
  protected PasswordAuthentication getPasswordAuthentication() {
    return new PasswordAuthentication(
        "username", "password".toCharArray());
  }
});
I have not tried this out myself, so use at own risk :)
That is all for this week. Thank you for your continued support by reading this newsletter, and forwarding it to your friends :)
Kind regards
Heinz
Is there a cygwin analogue to the msvc _set_fmode()? That is, a function that sets the default mode of fopen, even if you don't explicitly specify it "rb" or whatever. Obviously, there's "use binary (or text) mounts". Less obviously, you can link against /usr/lib/binary.o (or -lbinmode), or text.o (or automode.o or textreadmode.o and the similar .a's). But I'm looking for an actual function call to replace the following code in libarchive:

+#if defined(_WIN32) && !defined(__CYGWIN__)
/* Make sure open() function will be used with a binary mode. */
/* on cygwin, we need something similar, but instead link against */
/* a special startup object, binmode.o */
_set_fmode(_O_BINARY);
#endif

I'm using binmode.o at present, but I'd prefer to just make a func call at the same place the WIN32-specific code does. (FWIW, you can't call the w32api _set_fmode() function and expect it to work; the msvc runtime and cygwin maintain different default _fmode variables.)

-- Chuck
Is there any way to have more than one file containing translation strings for each language?
At the moment I have one index.js file for each translation, but I would like to break these into smaller files to separate, for example, strings that never change (like country names) from those that are more likely to change due to changes in the interface, resulting in having both an index.js and a static.js file for each language.
Is that possible?
There are several workarounds that allow this, depending on what you want to do. I have several files that I combine inside the /src/i18n/index.js file to produce the file that the app eventually uses:
import en from './en-gb/index.json'
import es from './es/index.json'
import ar from './ar/index.json'
import ru from './ru/index.json'
import fr from './fr/index.json'
import fa from './fa/index.json'
import it from './it/index.json'
import tr from './tr/index.json'

import enstrings from './en-gb/strings.json'
import esstrings from './es/strings.json'
import arstrings from './ar/strings.json'
import rustrings from './ru/strings.json'
import frstrings from './fr/strings.json'
import fastrings from './fa/strings.json'
import itstrings from './it/strings.json'
import trstrings from './tr/strings.json'

const enCombined = Object.assign({}, en, enstrings)
const esCombined = Object.assign({}, es, esstrings)
const arCombined = Object.assign({}, ar, arstrings)
const ruCombined = Object.assign({}, ru, rustrings)
const frCombined = Object.assign({}, fr, frstrings)
const faCombined = Object.assign({}, fa, fastrings)
const itCombined = Object.assign({}, it, itstrings)
const trCombined = Object.assign({}, tr, trstrings)

export default {
  'en-gb': enCombined,
  es: esCombined,
  ar: arCombined,
  ru: ruCombined,
  fr: frCombined,
  fa: faCombined,
  it: itCombined,
  tr: trCombined
}
The above is just for the system-wide strings. I also have a set of components that are translated individually (using Weblate and then checked into my repo via webhooks); these I have a script combine (merging all languages for the particular component), and I then import the resulting file with the i18n-loader into each component. It was a little fiddly to set up initially, but now it all works great.
Build Your own Simplified AngularJS in 200 Lines of JavaScript
Mar 9, 2015 · 20 minutes read
My practice proved that there are two good/easy ways to learn a new technology:
- Re-implement it on your own
- See how the concepts you already know fit in it
In some cases the first approach is too big an overhead. For instance, if you want to understand how the kernel works, it is far too complex and slow to re-implement it. It might work to implement a light version of it (a model), which abstracts away the components that are not interesting for your learning purposes.
The second approach works pretty well, especially if you have previous experience with similar technologies. A proof of this is the paper I wrote, "AngularJS in Patterns". It seems to be a great introduction to the framework for experienced developers.
However, building something from scratch and understanding the core underlying principles is always better. The whole AngularJS framework is above 20k lines of code and parts of it are quite tricky. Very smart developers have worked on it for months, so building everything from an empty file is a very ambitious task.
Scientific modelling is a scientific activity, the aim of which is to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate by referencing it to existing and usually commonly accepted knowledge. It requires selecting and identifying relevant aspects…
We can achieve this simplification by:
- Simplifying the API
- Removing components that are not essential for our understanding of the core concepts
This is what I did in my "Lightweight AngularJS" implementation, which is hosted on GitHub. The code is for educational purposes only and should not be used in production; otherwise a kitty somewhere will suffer. I used this method of explaining AngularJS in classes I taught at HackBulgaria and Sofia University. You can also find the slides from my talk "Lightweight AngularJS" at the bottom of the blog post.
Before reading the rest of the article, I strongly recommend that you first get familiar with the basics of AngularJS. A good start could be this short overview of AngularJS.
Here are some links with code snippets/demos for the following article:
So let's begin with our implementation!
Main Components
Since we are not following the AngularJS implementation completely, we will define a set of components and make references to their sources in the original implementation. Although we will not have a 100% compatible implementation, we will build most of our framework in the same fashion as AngularJS, but with a simplified interface and a few missing features.
The AngularJS components we are going to be able to use are:
- Controllers
- Directives
- Services
In order to achieve this functionality we will need to implement the $compile service, which we will call DOMCompiler, and the $provide and $injector services, grouped into our component called Provider. In order to have two-way data-binding we will also implement the scope hierarchy.
This is what the relation between Provider, Scope and DOMCompiler will look like:
Provider
As mentioned above, our provider will combine two components from the original framework:
- $provide
- $injector
It will be a singleton with the following responsibilities:
- Register components (directives, services and controllers)
- Resolve components’ dependencies
- Initialize components
DOMCompiler
The DOMCompiler is a singleton that traverses the DOM tree and finds directives. We will support only directives used as attributes. Once the DOMCompiler finds a given directive, it provides scope management functionality (since a directive may require a new scope) and invokes the logic associated with it (in our case the link function). So the main responsibilities of this component will be:
- Compile the DOM
- Traverse the DOM tree
- Find registered directives, used as attributes
- Invoke the logic associated with them
- Manage the scope
Scope
And the last major component in our Lightweight AngularJS will be the scope. In order to implement the data-binding logic we need a $scope object to attach properties to. We can compose these properties into expressions and watch them. When we discover that the value of a given expression has changed, we can simply invoke a callback (observer) associated with the expression.
Responsibilities of the scope:
- Watches expressions
- Evaluates all watched expressions on each $digest loop, until stable
- Invokes all the observers associated with the watched expressions
Theory
In order to have a better understanding of the implementation, we need to dig a bit into theory. I'm doing this mostly for completeness, since we will need only basic graph algorithms. If you're familiar with the basic graph traversal algorithms (Depth-First Search and Breadth-First Search), feel free to skip this section.
First of all, what actually are graphs? We can think of a given graph as a pair of two sets: G = { V, E }, E ⊆ V x V. This seems quite abstract, I believe, so let's make it a bit more understandable. We can think of the set V as different Tinder users and the set E as their matches. For example, suppose we have the users V = (A, B, C, D) and the matches E = ((A, B), (A, C), (A, D), (B, D)). This means not only that A swipes right on everyone, but also that these matches are the edges of our graph. Our "social graph" will look like this:
This is an example of an undirected graph, since both users like each other. If we have a partial match (only one of the users likes the other one), we have a directed graph. In the case of a directed graph, the connections between the nodes are drawn as arrows to show the direction (i.e. which user is interested in the other one).
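In code, a graph like this is commonly represented as an adjacency list: an object mapping each node to the list of its neighbors. The sketch below (the names are illustrative, not from the article) encodes the Tinder example in plain JavaScript. Note that every undirected edge appears in both users' lists:

```javascript
// The "Tinder graph" above as an adjacency list.
// Because the graph is undirected, every match (X, Y)
// is stored in both X's and Y's neighbor lists.
var matches = {
  A: ['B', 'C', 'D'],
  B: ['A', 'D'],
  C: ['A'],
  D: ['A', 'B']
};

// Symmetry check: X lists Y exactly when Y lists X.
function isUndirected(graph) {
  return Object.keys(graph).every(function (node) {
    return graph[node].every(function (neighbor) {
      return graph[neighbor].indexOf(node) !== -1;
    });
  });
}

console.log(isUndirected(matches)); // true
```

A directed graph would use the same shape, but without the symmetry requirement.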
Graph theory in AngularJS
But how can we apply graph theory to our AngularJS implementation? In AngularJS, instead of users we have components (services, controllers, directives, filters). Each component may depend on (use) other components. So the nodes in our AngularJS graph are the different components, and the edges are the relations between them. For example, the graph of the dependencies of the $resource service will look something like:
There are two more places where we are going to use graphs: the DOM tree and the scope hierarchy. For example, if we turn the following HTML:
<html>
  <head>
  </head>
  <body>
    <p></p>
    <div></div>
  </body>
</html>
into a tree, we will get:
To discover all the directives in the DOM tree, we need to visit each element and check whether there is a registered directive associated with any of its attributes. How can we visit all the nodes? Well, we can use the depth-first search algorithm, which is what AngularJS uses:
procedure DFS(G, v):
    label v as discovered
    for all edges from v to w in G.adjacentEdges(v) do
        if vertex w is not labeled as discovered then
            recursively call DFS(G, w)
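Translated directly into JavaScript, with the graph stored as an adjacency list (an object mapping each node to its neighbors), the algorithm could look like this sketch:

```javascript
// Depth-first search over an adjacency list.
// `visited` records the order in which nodes are discovered.
function dfs(graph, node, visited) {
  visited = visited || [];
  visited.push(node); // label node as discovered
  graph[node].forEach(function (neighbor) {
    if (visited.indexOf(neighbor) === -1) {
      dfs(graph, neighbor, visited); // recursively visit undiscovered nodes
    }
  });
  return visited;
}

var graph = {
  A: ['B', 'C'],
  B: ['A', 'D'],
  C: ['A'],
  D: ['B']
};

console.log(dfs(graph, 'A')); // ['A', 'B', 'D', 'C']
```

Starting from A, the search dives down through B to D before backtracking to visit C, which is exactly the traversal order the DOM compiler below relies on.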
Implementation
Since we are done with theory, we can begin our implementation!
Provider
As we said, the Provider will:
- Register components (directives, services and controllers)
- Resolve components’ dependencies
- Initialize components
So it will have the following interface:
- get(name, locals) - returns a service by its name, resolving it with the given local dependencies
- invoke(fn, locals) - initializes a service given its factory and local dependencies
- directive(name, fn) - registers a directive by name and factory
- controller(name, fn) - registers a controller by name and factory. Note that controllers are not part of the AngularJS core; they are implemented through the $controller service.
- service(name, fn) - registers a service by name and factory
- annotate(fn) - returns an array of the names of the dependencies of a given service
Registration of components
var Provider = {
  _providers: {},
  directive: function (name, fn) {
    this._register(name + Provider.DIRECTIVES_SUFFIX, fn);
  },
  controller: function (name, fn) {
    this._register(name + Provider.CONTROLLERS_SUFFIX, function () {
      return fn;
    });
  },
  service: function (name, fn) {
    this._register(name, fn);
  },
  _register: function (name, factory) {
    this._providers[name] = factory;
  }
  //...
};
Provider.DIRECTIVES_SUFFIX = 'Directive';
Provider.CONTROLLERS_SUFFIX = 'Controller';
The code above provides a simple implementation for the registration of components. We define a "private" object called _providers, which contains the factory methods of all registered directives, controllers and services. We also define the methods directive, service and controller, which delegate their calls to _register. In controller we wrap the passed controller inside a function because we want to be able to invoke the controller multiple times, without caching the value it returns after being invoked. The purpose of controller will become clearer after we review the get method and the ngl-controller directive. The only methods left are:
- invoke
- get
- annotate
var Provider = {
  // ...
  get: function (name, locals) {
    if (this._cache[name]) {
      return this._cache[name];
    }
    var provider = this._providers[name];
    if (!provider || typeof provider !== 'function') {
      return null;
    }
    return (this._cache[name] = this.invoke(provider, locals));
  },
  annotate: function (fn) {
    var res = fn.toString()
        .replace(/((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg, '')
        .match(/\((.*?)\)/);
    if (res && res[1]) {
      return res[1].split(',').map(function (d) {
        return d.trim();
      });
    }
    return [];
  },
  invoke: function (fn, locals) {
    locals = locals || {};
    var deps = this.annotate(fn).map(function (s) {
      return locals[s] || this.get(s, locals);
    }, this);
    return fn.apply(null, deps);
  },
  _cache: { $rootScope: new Scope() }
};
We have a little bit more logic here, so let's start with get. In get we initially check whether we already have the component cached in the _cache object. If it is cached, we simply return it (see singleton). $rootScope is cached by default, since we want only one instance of it and we need it once the application is bootstrapped. If we don't find the component in the cache, we get its provider (factory) and instantiate it using the invoke method, passing the provider and the local dependencies.

In invoke the first thing we do is assign an empty object to locals if there are no local dependencies. What are the local dependencies?
Local Dependencies
In AngularJS we can think of two types of dependencies:
- Local dependencies
- Global dependencies
The global dependencies are all the components we register using factory, service, filter, etc. They are accessible by every other component in the application. But how about $scope? For each controller we want a different scope, so the $scope object is not a global dependency registered the same way as, let's say, $http or $resource. The same goes for $delegate when we create a decorator. $scope and $delegate are local dependencies, specific to a given component.
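To make the local/global distinction concrete, here is a hypothetical miniature version of invoke (all names are illustrative, not from the article) in which locals take precedence over globally registered components, just like the `locals[s] || this.get(s, locals)` line above:

```javascript
// Minimal illustration of local vs. global dependencies.
// Globals live in a registry; locals are supplied per invocation
// and win when both define the same name.
var globals = {
  $http: function () { return 'real $http'; }
};

function invoke(fn, deps, locals) {
  locals = locals || {};
  var args = deps.map(function (name) {
    return locals[name] || globals[name];
  });
  return fn.apply(null, args);
}

function Controller($scope, $http) {
  return typeof $scope + ' / ' + $http();
}

// $scope comes from the locals, $http from the global registry.
var result = invoke(Controller, ['$scope', '$http'], { $scope: {} });
console.log(result); // 'object / real $http'
```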
Let's go back to the invoke implementation. After taking care of a null or undefined locals value, we get the names of all the dependencies of the current component. Note that our implementation supports resolving only dependencies declared as parameter names:
function Controller($scope, $http) {
  // ...
}
angular.controller('Controller', Controller);
Once we cast Controller to a string, we get the string corresponding to the controller's definition. After that we can simply extract all the dependencies' names using the regular expression in annotate. But what if we have comments in the Controller's definition:
function Controller($scope /* only local scope, for the component */, $http) {
  // ...
}
angular.controller('Controller', Controller);
A naive regular expression will not work here, because invoking Controller.toString() returns the comments as well; that's why we initially strip them using .replace(/((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg, '').
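To see the whole trick in action, here is annotate extracted as a standalone function (the body is the same as in the Provider above):

```javascript
// Standalone copy of Provider.annotate: strips comments from the
// function's source, then captures the parameter list.
function annotate(fn) {
  var res = fn.toString()
      .replace(/((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg, '')
      .match(/\((.*?)\)/);
  if (res && res[1]) {
    return res[1].split(',').map(function (d) {
      return d.trim();
    });
  }
  return [];
}

var deps = annotate(function ($scope /* local scope */, $http) {});
console.log(deps); // ['$scope', '$http']
```

The comment inside the parameter list is removed before matching, so it does not pollute the extracted names.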
Once we get the names of all the dependencies we need to instantiate them; that's why we have the map, which loops over all the strings in the array and calls this.get. Do you notice a problem here? What if we have a component A, which depends on B and C, and let's say C depends on A? In this case we are going to get an infinite loop, i.e. a so-called circular dependency. In this implementation we don't handle such problems, but you can take care of them by using a topological sort or by keeping track of the visited "nodes" (dependencies).
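One possible guard (not implemented in the article's Provider, so treat this as a sketch) is to keep a set of names that are currently being resolved; re-entering get for one of them means there is a cycle:

```javascript
// Hypothetical cycle detection for a tiny injector.
// `providers` maps names to factories; `annotate` is assumed to
// extract dependencies from parameter names, as shown above.
function makeInjector(providers, annotate) {
  var resolving = {};
  function get(name) {
    if (resolving[name]) {
      throw new Error('Circular dependency detected: ' + name);
    }
    resolving[name] = true;
    var deps = annotate(providers[name]).map(get);
    delete resolving[name];
    return providers[name].apply(null, deps);
  }
  return get;
}
```

With providers such as { X: function (Y) {...}, Y: function (X) {...} }, calling get('X') now throws instead of recursing forever.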
And that’s our provider’s implementation! Now we can register components like this:
Provider.service('RESTfulService', function () {
  return function (url) {
    // make restful call & return promise
  };
});

Provider.controller('MainCtrl', function (RESTfulService) {
  RESTfulService(url)
    .then(function (data) {
      alert(data);
    });
});
And later we can invoke MainCtrl like this:
var ctrl = Provider.get('MainCtrl' + Provider.CONTROLLERS_SUFFIX);
Provider.invoke(ctrl);
Pretty cool, ah? And that’s how we have 1⁄4 of our Lightweight AngularJS implementation!
DOMCompiler
The main responsibilities of the DOMCompiler are to:
- Compile the DOM
- Traverse the DOM tree
- Find registered directives, used as attributes
- Invoke the logic associated with them
- Manage the scope
The following API is enough:
- bootstrap() - bootstraps the application (similar to angular.bootstrap, but always uses the root HTML element as the root of the application).
- compile(el, scope) - invokes the logic of all directives associated with the given element (el) and calls itself recursively for each child element of el. We need a scope associated with the current element because that's how the data-binding is achieved. Since each directive may create a different scope, we need to pass the current scope in the recursive call.
And here is the implementation:
var DOMCompiler = {
  bootstrap: function () {
    this.compile(document.children[0], Provider.get('$rootScope'));
  },
  compile: function (el, scope) {
    var dirs = this._getElDirectives(el);
    var dir;
    var scopeCreated;
    dirs.forEach(function (d) {
      dir = Provider.get(d.name + Provider.DIRECTIVES_SUFFIX);
      if (dir.scope && !scopeCreated) {
        scope = scope.$new();
        scopeCreated = true;
      }
      dir.link(el, scope, d.value);
    });
    Array.prototype.slice.call(el.children).forEach(function (c) {
      this.compile(c, scope);
    }, this);
  },
  // ...
};
The implementation of bootstrap is trivial: it delegates its call to compile with the root HTML element. What happens in compile is far more interesting.
Initially we use a helper method, which gets all the directives associated with the given element (we will take a look at _getElDirectives later). Once we have the list of all directives, we loop over them and get the provider for each directive. After that we check whether the given directive requires the creation of a new scope; if it does, and we haven't already instantiated another scope for the given element, we invoke scope.$new(), which creates a new scope that prototypically inherits from the current one. After that we invoke the link function of the directive with the appropriate parameters. What follows is the recursive call. Since el.children is a NodeList, we cast it to an array using Array.prototype.slice.call and then call compile recursively with each child element and the current scope. What does this algorithm remind you of? Doesn't it look just like DFS? Yes, that's exactly what it is. So here the graphs come in handy as well!
Now let's take a quick look at _getElDirectives:
// ...
_getElDirectives: function (el) {
  var attrs = el.attributes;
  var result = [];
  for (var i = 0; i < attrs.length; i += 1) {
    if (Provider.get(attrs[i].name + Provider.DIRECTIVES_SUFFIX)) {
      result.push({
        name: attrs[i].name,
        value: attrs[i].value
      });
    }
  }
  return result;
}
// ...
This method iterates over all the attributes of el. Once it finds an attribute that is registered as a directive, it pushes the attribute's name and value into the result list.
Alright! We’re done with the
DOMCompiler. Lets go to our last major component:
Scope
This might be the trickiest part of the implementation because of the dirty-checking functionality. In AngularJS we have the so-called $digest loop. Basically, the whole data-binding mechanism works because of watched expressions, which are evaluated in the $digest loop. Once this loop is triggered, it runs over all the watched expressions and checks whether the last known value of each expression differs from the current result of the expression's evaluation. If AngularJS finds that they are not equal, it invokes the callback associated with the given expression. An example of a watcher is an object { expr, fn, last }, where expr is the watched expression, fn is the function that should be called once the expression has changed, and last is the last known value of the expression. For instance, we can watch the expression foo with a callback which, on change, is invoked with the expression's new value and sets the innerHTML of a given element (a simplified version of what ng-bind does).
The scope in our implementation has the following methods:
- $watch(expr, fn) - watches the expression expr; once we detect a change in the value of expr we invoke fn (the callback) with the new value
- $destroy() - destroys the current scope
- $eval(expr) - evaluates the expression expr in the context of the current scope
- $new() - creates a new scope, which prototypically inherits from the target of the call
- $digest() - runs the dirty-checking loop
So lets dig deeper the scope’s implementation:
function Scope(parent, id) {
  this.$$watchers = [];
  this.$$children = [];
  this.$parent = parent;
  this.$id = id || 0;
}
Scope.counter = 0;
We simplify the AngularJS’ scope significantly. We will only have a list of watchers, a list of child scopes, a parent scope and an id for the current scope. We add the “static” property counter only in order to keep track of the last created scope and provide a unique identifier of the next scope we create.
Let's add the $watch method:
Scope.prototype.$watch = function (exp, fn) {
  this.$$watchers.push({
    exp: exp,
    fn: fn,
    last: Utils.clone(this.$eval(exp))
  });
};
In the $watch method all we do is append a new element to the $$watchers list. The new element contains the watched expression, a callback (observer) and the last result of the expression's evaluation. Since the value returned by this.$eval could be a reference, we need to clone it.
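The article uses Utils.clone and Utils.equals without showing them; a deliberately naive, JSON-based sketch (my assumption, not the original helper) that is good enough for plain data could be:

```javascript
// Naive Utils sketch: JSON round-tripping handles plain objects,
// arrays, numbers and strings, but not functions, Dates or cycles.
var Utils = {
  clone: function (obj) {
    if (obj === undefined) { return undefined; }
    return JSON.parse(JSON.stringify(obj));
  },
  equals: function (a, b) {
    return JSON.stringify(a) === JSON.stringify(b);
  }
};

var original = { user: { name: 'foo' } };
var copy = Utils.clone(original);
copy.user.name = 'bar';
console.log(original.user.name);           // 'foo': the clone is deep
console.log(Utils.equals(original, copy)); // false
```

This is enough for the watcher bookkeeping in $watch and $digest, since watched values there are usually plain data.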
Now let's see how we create and destroy scopes!
Scope.prototype.$new = function () {
  Scope.counter += 1;
  var obj = new Scope(this, Scope.counter);
  Object.setPrototypeOf(obj, this);
  this.$$children.push(obj);
  return obj;
};

Scope.prototype.$destroy = function () {
  var pc = this.$parent.$$children;
  pc.splice(pc.indexOf(this), 1);
};
What we do in $new is create a new scope with a unique identifier and set its prototype to be the current scope. After that we append the newly created scope to the list of child scopes of the current scope. In $destroy, we remove the current scope from the list of its parent's children.
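The effect of Object.setPrototypeOf here is plain prototypal inheritance, which is why a child scope "sees" its parent's properties. The same behavior can be demonstrated with plain objects:

```javascript
// Why child scopes see parent properties: property reads walk
// the prototype chain, while writes create shadowing own properties.
var parentScope = { user: 'foo' };
var childScope = Object.create(parentScope); // same effect as $new's setPrototypeOf

console.log(childScope.user);  // 'foo', read through the prototype chain

childScope.user = 'bar';       // shadows the parent's property
console.log(childScope.user);  // 'bar'
console.log(parentScope.user); // 'foo', the parent is untouched
```

This shadowing behavior is also the source of the classic AngularJS "always use a dot in ng-model" advice.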
Now let's take a look at the legendary $digest:
Scope.prototype.$digest = function () {
  var dirty, watcher, current, i;
  do {
    dirty = false;
    for (i = 0; i < this.$$watchers.length; i += 1) {
      watcher = this.$$watchers[i];
      current = this.$eval(watcher.exp);
      if (!Utils.equals(watcher.last, current)) {
        watcher.last = Utils.clone(current);
        dirty = true;
        watcher.fn(current);
      }
    }
  } while (dirty);
  for (i = 0; i < this.$$children.length; i += 1) {
    this.$$children[i].$digest();
  }
};
Basically, we keep running the loop while it is dirty, and by default each iteration starts clean. The loop "gets dirty" only if we detect that the result of the evaluation of a given expression differs from its previously saved value. Once we detect such a "dirty" expression, we run over all the watched expressions all over again. Why do we do that? We may have inter-expression dependencies, i.e. one expression may change the value of another one. That's why we need to run the $digest loop until everything gets stable. If we detect that the result of the evaluation of a given expression differs from its previous value, we simply invoke the callback associated with the expression, update the last value and mark the loop as dirty.
Once we're done, we invoke $digest recursively for all children of the current scope. So one more time we apply what we learned (or already knew) about graph theory! One thing to note here is that we may still have a circular dependency (a cycle in the graph), so we should be aware of that! Imagine we have:
function Controller($scope) {
  $scope.i = $scope.j = 0;
  $scope.$watch('i', function (val) {
    $scope.j += 1;
  });
  $scope.$watch('j', function (val) {
    $scope.i += 1;
  });
  $scope.i += 1;
  $scope.$digest();
}
In this case the $digest loop never stabilizes: each watcher keeps changing the value the other one depends on, so at any given moment the loop is still dirty and just keeps running.
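AngularJS protects itself against exactly this situation with a TTL: if the loop is still dirty after 10 iterations, it throws instead of spinning forever. Here is a sketch of the same guard for our toy $digest (standalone; watchers and digestWithTTL are illustration names, not part of the article's implementation):

```javascript
// A stripped-down dirty-checking loop with a TTL guard, like AngularJS uses.
// `watchers` stands in for scope.$$watchers; each watcher exposes get/last/fn.
function digestWithTTL(watchers, ttl) {
  var dirty, iterations = 0;
  do {
    dirty = false;
    iterations += 1;
    if (iterations > ttl) {
      throw new Error(ttl + ' $digest iterations reached; aborting (possible watcher cycle)');
    }
    watchers.forEach(function (w) {
      var current = w.get();
      if (w.last !== current) {
        w.last = current;
        dirty = true;
        w.fn(current);
      }
    });
  } while (dirty);
}

// The i/j cycle from above now fails fast instead of hanging the page:
var model = { i: 0, j: 0 };
var watchers = [
  { get: function () { return model.i; }, last: 0, fn: function () { model.j += 1; } },
  { get: function () { return model.j; }, last: 0, fn: function () { model.i += 1; } }
];
model.i += 1;
try {
  digestWithTTL(watchers, 10);
} catch (e) {
  console.log(e.message); // 10 $digest iterations reached; aborting (possible watcher cycle)
}
```

A stable set of watchers exits the loop long before the TTL, so the guard only fires for genuine cycles.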
And the last (and super hacky) method is $eval. Please do not do this in production; it is a hack that saves us from writing a custom expression interpreter:
// In the complete implementation there are
// a lexer, a parser and an interpreter.
// Note that this implementation is pretty evil!
// It uses two dangerous features:
// - eval
// - with
// The 'use strict' statement is omitted
// because of `with`.
Scope.prototype.$eval = function (exp) {
  var val;
  if (typeof exp === 'function') {
    val = exp.call(this);
  } else {
    try {
      with (this) {
        val = eval(exp);
      }
    } catch (e) {
      val = undefined;
    }
  }
  return val;
};
We check whether the watched expression is a function; if it is, we call it in the context of the current scope. Otherwise we change the context of execution using with and then run eval to get the result of the expression. This allows us to evaluate expressions like foo + bar * baz(), or even more complex JavaScript expressions. Of course, we won't support filters, since they are an extension added by AngularJS.
Directives
So far we can't do anything useful with the primitives we have. In order to make things rock we need to add a few directives and services. Let's implement ngl-bind (called ng-bind in AngularJS), ngl-model (ng-model), ngl-controller (ng-controller) and ngl-click (ng-click).
ngl-bind
Provider.directive('ngl-bind', function () {
  return {
    scope: false,
    link: function (el, scope, exp) {
      el.innerHTML = scope.$eval(exp);
      scope.$watch(exp, function (val) {
        el.innerHTML = val;
      });
    }
  };
});
ngl-bind doesn't require a new scope. It only adds a single watcher for the expression used as the value of the ngl-bind attribute. In the callback, invoked when $digest detects a change, we set the innerHTML of the element.
ngl-model
Our alternative of ng-model will work only with text inputs. Here is what it looks like:
Provider.directive('ngl-model', function () {
  return {
    link: function (el, scope, exp) {
      el.onkeyup = function () {
        scope[exp] = el.value;
        scope.$digest();
      };
      scope.$watch(exp, function (val) {
        el.value = val;
      });
    }
  };
});
We add an onkeyup listener to the input. Once the value of the input changes, we call the $digest method of the current scope, in order to make sure the change in the property is reflected in all other watched expressions that have the given property as a dependency. On change of the watched value, we set the element's value.
ngl-controller
Provider.directive('ngl-controller', function () {
  return {
    scope: true,
    link: function (el, scope, exp) {
      var ctrl = Provider.get(exp + Provider.CONTROLLERS_SUFFIX);
      Provider.invoke(ctrl, { $scope: scope });
    }
  };
});
We need a new scope for each controller; that's why the value of scope in ngl-controller is true. This is one of the places where the magic of AngularJS happens. We get the required controller by using Provider.get, then we invoke it, passing the current scope. Inside the controller, we can add properties to the scope. We can bind to these properties by using ngl-bind/ngl-model. Once we change the properties' values, we need to make sure we invoke $digest so that the watchers associated with ngl-bind and ngl-model are invoked.
ngl-click
This is the last directive we are going to take a look at, before we’re able to implement a “useful” todo application.
Provider.directive('ngl-click', function () {
  return {
    scope: false,
    link: function (el, scope, exp) {
      el.onclick = function () {
        scope.$eval(exp);
        scope.$digest();
      };
    }
  };
});
We don’t need a new scope here. All we need is to evaluate an expression and invoke the
$digest loop once the user clicks a button.
Wiring Everything Together
In order to make sure we understand how the data-binding works, let's take a look at the following example:
<!DOCTYPE html>
<html lang="en">
<head></head>
<body ngl-controller="MainCtrl">
  <span ngl-bind="bar"></span>
  <button ngl-click="foo()">Increment</button>
</body>
</html>
Provider.controller('MainCtrl', function ($scope) {
  $scope.bar = 0;
  $scope.foo = function () {
    $scope.bar += 1;
  };
});
Let's follow what is going on using the following diagram:
Initially the ngl-controller directive is found by the DOMCompiler. The link function of this directive creates a new scope and passes it to the controller's function. We add a bar property, which equals 0, and a method called foo, which increments bar. The DOMCompiler finds ngl-bind and adds a watcher for the bar property. It also finds ngl-click and adds a click event handler to the button.
Once the user clicks the button, the foo method is evaluated by calling $scope.$eval. The $scope used is the same one passed as a value to MainCtrl. Right after that, ngl-click invokes $scope.$digest.
$digest loops over all watchers and detects a change in the value of the expression bar. Since we have an associated callback for it (the one added by ngl-bind), we invoke it and update the value of the span element.
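The same click-to-DOM-update flow can be verified headlessly with just the watcher machinery. The following is a compact, self-contained re-statement of the article's Scope (same logic; the DOM span is replaced by a plain variable, and property lookup stands in for $eval for brevity):

```javascript
function MiniScope() { this.$$watchers = []; }
MiniScope.prototype.$watch = function (exp, fn) {
  this.$$watchers.push({ exp: exp, fn: fn, last: undefined });
};
MiniScope.prototype.$digest = function () {
  var dirty;
  do {
    dirty = false;
    this.$$watchers.forEach(function (w) {
      var current = this[w.exp]; // property lookup instead of eval, for brevity
      if (w.last !== current) {
        w.last = current;
        dirty = true;
        w.fn(current);
      }
    }, this);
  } while (dirty);
};

// "MainCtrl": state and behavior live on the scope
var $scope = new MiniScope();
$scope.bar = 0;
$scope.foo = function () { $scope.bar += 1; };

// "ngl-bind": the watcher that would update the span's innerHTML
var spanText = '';
$scope.$watch('bar', function (val) { spanText = String(val); });
$scope.$digest();      // initial render
console.log(spanText); // '0'

// "ngl-click": evaluate the expression, then digest
$scope.foo();
$scope.$digest();
console.log(spanText); // '1'
```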
Conclusion
The framework we just built is far from production-ready, but some of its features:
- Data-binding
- Dependency Injection
- Separation of Concerns
work in a similar way to how they do in AngularJS. This makes understanding AngularJS in depth much easier.
But you still shouldn't use this code in production; it's much better to just bower install angular and enjoy!
And here are the slides from my talk “Lightweight AngularJS” as promised: | https://blog.mgechev.com/2015/03/09/build-learn-your-own-light-lightweight-angularjs/ | CC-MAIN-2018-47 | refinedweb | 4,217 | 56.25 |
This article will help you to create your own plugins for XrmToolBox.
XrmToolBox has plenty of useful plugins, and awesome people like you from the community keep adding new plugins to solve Dynamics 365 developers' day-to-day hurdles and make them more productive. Recently I was working on an XrmToolBox plugin (have a look at Dynamics 365 Bulk Solution Exporter on GitHub). Let me share my learning experience with you.
Start by creating a Class Library project and installing XrmToolBoxPackage using NuGet. In this approach you need to create one custom Windows Forms control. I created Dynamics 365 Bulk Solution Exporter this way because the template was not available back then.
Tanguy Touzard (Creator of XrmToolBox) has simplified the process by creating XrmToolBox Plugin Project Template for Visual Studio, which configures most of the things automatically. This is the preferred way of creating plugins now.
You won't get this project template by default in Visual Studio. To install it, go to the New Project dialog, navigate to the Online section in the left pane and search for XrmToolBox in the right pane; the template will appear in the results. Install it. Visual Studio needs to be restarted in order to install the template.
The newly created project has 2 main files you need to work on.
This file contains metadata like the name of the plugin, icons, and color etc., which you can change according to the purpose of your plugin. I am changing the name of the plugin to “WhoAmI Plugin”.
This file helps you to save/update any configuration value permanently, so it is available the next time you open the tool.
This is a Windows Forms control which is composed of 3 files.
This file has a few other examples too, like ShowInfoNotification(), LogWarning() & UpdateConnection() etc.; for a complete list of available methods you can check the PluginControlBase class, from which this control inherits.
In this sample we will be making a WhoAmIRequest and showing the response to the user. Before that, you should have a look at how GetAccounts() is written. To get started, we need to understand 2 main methods: one is WorkAsync(WorkAsyncInfo info) and the other is ExecuteMethod(Action action).
In XrmToolBox plugins all requests to the server should be made asynchronously, but here is a twist: we won't be using async & await. Instead, WorkAsync(WorkAsyncInfo info) is provided in the XrmToolBox.Extensibility namespace. Let's look into the WorkAsyncInfo class of the framework, which is the main class used to execute any plugin's code.
You can look into the constructors and properties yourself; let me talk more about the available callbacks, which are Work, PostWorkCallBack & ProgressChanged.
Work: Here we do our main processing; it has 2 arguments, BackgroundWorker & DoWorkEventArgs.
PostWorkCallBack: Once Work is completed, this is triggered to show output to the user. It gets the result from the RunWorkerCompletedEventArgs parameter, which is returned from the Work callback.
ProgressChanged: If our process is long-running then, unlike Message = "Getting accounts"; in GetAccounts(), we must show progress to the user. We can pass the progress as an integer in the ProgressChangedEventArgs parameter.
ExecuteMethod(Action action) helps a plugin developer get rid of connection hurdles; all methods which require a CRM connection should be called from ExecuteMethod, which accepts an Action as a parameter. If CRM is not connected, it will show a popup to connect before executing the method.
Open MyPluginControl.cs [Design] and place a Button control in the panel. Change the Name property to btn_WhoAmI. Optionally, you can change other properties and decorate it.
Also add a ListBox named lst_UserData below the button to show the current user's data.
Double-click the button to create an event handler, open the code-behind file (MyPluginControl.cs) and write the code below.
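The original post shows the handler code as an image, so here is a hypothetical reconstruction of what it likely looks like. The control names (btn_WhoAmI, lst_UserData) match the ones created above; the method name WhoAmI and the exact message text are assumptions, and the structure follows the GetAccounts() sample pattern shipped with the template:

```csharp
using System;
using System.Windows.Forms;
using Microsoft.Crm.Sdk.Messages;      // WhoAmIRequest / WhoAmIResponse
using XrmToolBox.Extensibility;        // WorkAsyncInfo, PluginControlBase

private void btn_WhoAmI_Click(object sender, EventArgs e)
{
    // ExecuteMethod prompts for a connection first if CRM is not connected
    ExecuteMethod(WhoAmI);
}

private void WhoAmI()
{
    WorkAsync(new WorkAsyncInfo
    {
        Message = "Retrieving your user information...",
        Work = (worker, args) =>
        {
            // Service is the IOrganizationService exposed by PluginControlBase
            args.Result = (WhoAmIResponse)Service.Execute(new WhoAmIRequest());
        },
        PostWorkCallBack = (args) =>
        {
            if (args.Error != null)
            {
                MessageBox.Show(args.Error.Message);
                return;
            }
            var response = (WhoAmIResponse)args.Result;
            lst_UserData.Items.Clear();
            lst_UserData.Items.Add("UserId: " + response.UserId);
            lst_UserData.Items.Add("BusinessUnitId: " + response.BusinessUnitId);
            lst_UserData.Items.Add("OrganizationId: " + response.OrganizationId);
        }
    });
}
```

Treat this as a sketch rather than the post's exact code; any naming differences from the original image are mine.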
Congratulations! You are done with your first XrmToolBox plugin now. Let’s test it now.
Build your code, grab the DLL from the bin/debug folder, and place it in the %AppData%\MscrmTools\XrmToolBox\Plugins folder. (You may refer to my previous article Installing XrmToolBox Plugins in No Internet OnPremises Environments.)
Open XrmToolBox and search for your plugin, then click to open it. When it asks to connect, click No, so you can verify the ExecuteMethod functionality.
Here is your brand new plugin, all created by yourself. Click the Who Am I button; it will ask you to connect to an organization first, because we have used ExecuteMethod() here.
Connect to an organization. After connecting to CRM, it will show the retrieving message set in the Message property in WhoAmI(). Finally, it will show all the information about the current user in the ListBox.
This DLL can be shared with anyone and they can use it. But to make it available to everyone you need to publish it, which I will discuss in the next article.
The Python 2 runtime includes the Appstats library, which you can use to profile the RPC (remote procedure call) performance of your application. An App Engine RPC is a round-trip network call between your application and an App Engine service API. For example, all of these API calls are RPC calls:
- Datastore calls such as
ndb.get_multi(),
ndb.put_multi(), or
ndb.gql().
- Memcache calls such as
memcache.get(), or
memcache.get_multi().
- URL Fetch calls such as
urlfetch.fetch().
- Mail calls such as
mail.send().
Optimizing or debugging a scalable application can be a challenge because numerous issues can cause poor performance or unexpected costs. These issues are very difficult to debug with the usual sources of information, like logs or request time stats. Most application requests spend the majority of their time waiting for network calls to complete as part of satisfying the request.
To keep your application fast, you need to know:
- Is your application making unnecessary RPC calls?
- Should it cache data instead of making repeated RPC calls to get the same data?
- Will your application perform better if multiple requests are executed in parallel rather than serially?
The Appstats library helps you answer these questions and verify that your application is using RPC calls in the most efficient way by letting you profile your RPC calls. Appstats traces all RPC calls for a given request and reports on the time and cost of each call.
Optimizing your application's RPC usage may also reduce your bill. See the Managing Your App's Resource Usage article.
Watch a video demonstration.
Setup
There is nothing to download or install to begin using Appstats. You just need to configure your application, redeploy, and access the Appstats console as described in the steps below; the Appstats library takes care of the rest. (The App Engine runtime calls the webapp_add_wsgi_middleware() function in appengine_config.py, if found.)
See Optional Configuration below for more information on appengine_config.py.
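For reference, the canonical recorder-installation snippet in appengine_config.py, as documented for the Python 2 runtime, looks like this:

```python
# appengine_config.py -- App Engine calls webapp_add_wsgi_middleware(),
# if present, for every WSGI application it starts, letting Appstats wrap
# the app and record its RPC calls.
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    return app
```

Place this file at your application root; no further code changes are needed for recording to start.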
Django framework
To install the Appstats middleware in a Django application, edit your
settings.py file, and add the following line as the first item in MIDDLEWARE_CLASSES:
MIDDLEWARE_CLASSES = (
    'google.appengine.ext.appstats.recording.AppStatsDjangoMiddleware',
    # ...
)

To enable the Appstats console, add the appstats builtin to app.yaml:
runtime: python27
api_version: 1
threadsafe: yes

builtins:
- appstats: on

handlers:
- url: .*
  script: main.app
Custom URL
If you need to map Appstats to a directory other than the default, you can use the url directive in app.yaml:

- url: /stats.*
  script: google.appengine.ext.appstats.ui.app

With this mapping in place, you can then access the console at the /stats path of your application.
A tour of the Appstats console
The Appstats Console provides high-level information on RPC calls made, URL paths requested, a history of recent requests, and details of individual requests:
The RPC Stats table shows statistics for each type of RPC made by your application. Clicking a plus button expands the entry to show a breakdown by path request for the RPC:
The Path Stats table shows statistics for each path request sent to your application. Clicking a plus button expands the entry to show a breakdown by RPC for the path request:
If you have enabled the API cost tracking feature, this will also display costs.
The Requests History table shows data pertaining to individual requests. Clicking a plus button expands the entry to show a breakdown by RPC. Clicking on a request link shows a timeline for the request including individual RPC timing:
The RPC Timeline graph shows when specific RPC calls were made and how long the requests took to process. The RPC Total bar shows the total time spent waiting on RPC calls, and the Grand Total bar shows total time spent processing the request. As you can see from the timeline below, the majority of time was spent on RPC calls. This is often the case. The other tabs show additional information about the request. Understanding the impact of RPC calls on your application response time is invaluable when analyzing its performance.

One of the optional appengine_config.py settings enables the interactive shell in the Appstats console:

appstats_SHELL_OK = True
How it works
Appstats uses API hooks to add itself to the remote procedure call framework that underlies the App Engine service APIs. It records statistics for all API calls made during the request handler, then stores the data in memcache, using a namespace of
__appstats__. Appstats retains statistics for the most recent 1,000 requests. It also writes a summary line to the application logs:

INFO 2009-08-25 12:04:07,277 recording.py:290] Saved; key: __appstats__:046800, part: 160 bytes, full: 25278 bytes, overhead: 0.019 + 0.018; link: https://[PROJECT_ID].[REGION_ID].r.appspot.com/stats/detail?time=1234567890123
This line reports the memcache key that was updated, the size of the summary (
part) and detail (
full) records, and the time (in seconds) spent recording this information. The log line includes the link to the Appstats administrative interface that displays the data for this event. | https://cloud.google.com/appengine/docs/standard/python/tools/appstats?hl=ar | CC-MAIN-2020-10 | refinedweb | 748 | 56.05 |
My contract with Sun has now expired. I have completed my graduation and I am no longer a student. I would like to thank everyone at Sun and the campus ambassador program for the wonderful 2 years that you have given me. Sun has given me a lot. It has transformed my life in many ways. My association with Sun has been very fruitful. It has been the most rewarding journey. I was recognized for my work at every step of the way. Sun has a beautiful open community culture, one that I'd want to take with me wherever I go. I have met great, intelligent, sharp professionals who strive hard at work and know how to have fun too at the right time. I met great people, gained a lot of knowledge and technical as well as practical real-life experience that will help me wherever I go. It was a pleasure serving as a campus ambassador and then as a Campus Ambassador Tech Lead.
It was an honor to work with Sun. Hope our paths cross again.
I'd like to take this opportunity to thank all the readers of my Sun blog and welcome you to come over to my personal blog, where I'll be blogging full time from now:
My JavaOne was OSUM!
My JavaOne experience was like an explosion of technical knowledge, networking opportunity, and all the geek-fun that I could have ever had. In short, it was incredible, unbelievable, fabulous, exceptional, marvelous, astounding, breath-taking, knowledgeable and a lot of fun. So many companies. So many developers. So many students. So many countries. So many innovative ideas implemented in amazing projects. So many brain-feeding technical sessions. So many rockstars, evangelists, Java champions, JUG leaders, original book authors, CEOs, Java hackers, hardcore professionals and extreme programmers roaming about the conference hall, always ready to have a chat with you if you want. JavaOne is a true platform of convergence and collaboration for the entire Java community at one single place. The volume of things you can do at this place is overwhelming. There was just too much to catch up with – the innovative technology spotlights at the Pavilion, the fun robotics and Sun SPOT demos at the Change Your World playground, the LincVolt car, the Java Real time system, web-based sensor networks, next generation server processors, Sun's compelling new cloud computing software, the activities at the Java Utopia, the raffles, the goodies and giveaways, the spinwheel, getting photos taken with the duke, bagging T-shirts, T-shirts and more T-shirts! I talked with James Gosling in person, face to face for more than 30 minutes. I had fun meeting a lot of students, enthusiastic attendees, Java rockstars and community leaders on the conference floor, taking a minute with each of them to ask about their JavaOne experience so far and capture it in a JavaOne Minute. I enjoyed some very thrilling keynote sessions, seeing people like Scott McNealy and Jonathan Schwartz speak on stage. James Gosling's Toy Show was very inspirational and enigmatic – it really showed how versatile the Java platform is and how people are making so many innovative projects with the platform.
It was good to be a part of the OSUM booth and get to talk to university students about the OSUM community and it’s benefits. I met all the people I had just contact with on email till now and wanted to meet including people from the Sun SPOT team, Wonderland Team, Netbeans Evangelists and Dream Team members, OpenSolaris engineers, the Swing team, the JavaFX team, the Alice project team, Java Champions, Duke’s Choice award winners and many others. I attended some amazing technical sessions which were relevant to my interests and needs like "How to run PHP faster by using Java technology", "Alice 3: Introducing Java Technology Based Programming with 3D Graphics", "Continuous integration in the cloud with Hudson", "Fusing 3D Java technologies to create a mirror world", "Augmented Reality with Java Platform Micro Edition", "Maximizing Java technology based application performance on multicore platforms", "Storing Data in the Cloud" and "Ajax versus JavaFX technology". In the daytime, JavaOne was this platform for gaining vast amounts of technical knowledge and awareness and in the night time, things were a little different.. or I should say totally happening! – there were multiple parties every night hosted by different groups within Sun – first the OpenSolaris and Cloud party on the night of CommunityOne, then the Connected Student party, the JavaFX party, the JCP party, the JavaOne After Dark Bash. It was crazy. Sun folks really know how to kick-butt and have fun all at the same time. Mind-blowing keynotes, Quality technical sessions, BOFs, fun taking JavaOne Minute videos, checking out the cool pods at the Pavilion, networking and meeting people i've always wanted to meet in person, and other exciting JavaOne stuff in the day time and partying hard and eating at the best places at night was the usual order of the day. 
On the last day of the conference I took some time off and took a southbound Caltrain to visit Palo Alto, the heart of the Silicon Valley. There I spent some time at Stanford University with a JIIT alumnus who's studying there.
Everyone was speculating about the future of Java after the acquisition of Sun by Oracle, but Larry sent some very positive vibes, reassuring us that Java will continue to rule and that the combined company will make expanded investments in Java. Scott McNealy left the stage with some emotional last words and everyone gave him a standing ovation. There were many grand announcements and product launches from Sun at JavaOne. The biggest was undoubtedly the Java Store – something which the Java community should have been given much earlier, but it's never too late. This will open up consistent revenue streams for all the innovative Java developers out there. Sony Ericsson launched a similar app store too. All this is very exciting news for Java and especially JavaFX developers – if you haven't learnt JavaFX yet, jump on board before it's too late. The opportunities are limitless now with such definitive distribution channels for your apps! Those who use iPhones know what I'm talking about – it's the same thing now for Java/JavaFX. It's going to be huge! Apart from that we got to see the first sights of JavaFX TV in action, the amazing JavaFX designer tool, JavaFX Mobile 1.2 along with the first JavaFX mobile handsets, Project Jigsaw, new offerings from various Sun partners like Paypal's developer program (X.com), eBay's JavaFX app, and lots of cool Duke's Choice award winner projects at the toy show!
Also, this year Sun really made a lot of effort to connect with students at JavaOne and CommunityOne, starting with the OSUM lounge, which was THE place for students to hang out, and all the fun activities like the scavenger hunt, the duke photo opportunity, a hang space with play stations, a lot of places where you can just sit down and relax on bean bags or watch a movie or play games! On the technical side, there were a lot of sessions of interest to students, like a lot of stuff on JavaFX, Cloud computing, Project Kenai, etc. Not to mention the on-the-spot Java certification exams, Deep Dive sessions on OpenSolaris and a complete Java University track just for students and educators. Did I mention students got in free this year? :). In addition to all that, James Gosling led 80 students on a guided tour of the Pavilion at noon on June 2nd (first day of JavaOne). David Douglas also conducted a tour of the cloud zone for students. The most exciting part of JavaOne was when I myself got a chance to sit down face to face with James Gosling in the students Q&A session at the Pavilion. There were giveaways at almost every exhibitor booth and a lot of lucky draws to be won just for filling up a survey form. There were various other cool things for students at the Java Utopia and fun robotics and Sun SPOTs demos at the Change Your World playground. There was a lot of opportunity to network and connect at the Pavilion.
When we were done with our day full of serious conference business, Gary and David would make sure we had a fun time in the evening, taking us out to dine at the best restaurants. They even took us to a tour of San Francisco. Honestly, you guys have done more than we could have expected. Thanks for everything :) I had great fun at JavaOne. I whole-heartedly thank Sun and the CA Program for giving me this golden opportunity. Personally, I want to thank Gary, Tzel, David and Lin Lee for making sure we had a nice time both during the conference and outside in San Francisco. Thank you Liana, Colin, Kirby for helping out at the OSUM booth. Thanks Tom, Felipe, Kevin, Hyejin, Avinash and Ashwin for your company. Thanks Ganesh for giving us the entire opportunity to be present there. It would never have been possible without your support.
Other folks and I participated in this episode of the JavaOne Radio Show with Chris Melissinos, Chief Evangelist and Chief Gaming Officer at Sun, talking about what we were looking forward to at JavaOne.
Here are all my JavaOne minutes:
Here are all my photos from the conference. I've uploaded around 6000 photos and videos taken from all our cameras and organized them into 50+ sets in my shiny new Flickr Pro account (upgraded just for this purpose). I've put all those sets into a single collection:
Those of you who could not attend JavaOne this year, don't be disheartened – you can still watch all the keynote replays online, read articles on them, and download their presentations as PDFs! It turns out that there's even a blogging contest. Share what you've learned from the online PDFs and get a chance to win $300.
I have published a series of 9 blog posts on JavaOne 2009 describing my experience in a bit more detail under these headings:
JavaOne 2009: The Prelude and the Journey
JavaOne 2009: CommunityOne
JavaOne 2009: The Pavilion
JavaOne 2009: The Conference Floor
JavaOne 2009: Keynotes
JavaOne 2009: James Gosling Q&A
JavaOne 2009: The Parties and Dinners
JavaOne 2009: San Francisco Sightseeing Tour
We did our San Francisco tour on June 4th. First we went to Pier 39 by taxi, where we had lunch at the sea-facing restaurant Neptune.
Pier 39 is a festive marketplace built atop a pier near Fisherman's Wharf, a very popular tourist place. Enjoy the fresh and lively atmosphere with soothing blues music playing in the background, go for a ride on the merry-go-round and enjoy the waterfront sight of the Alcatraz Island. The Neptune served some amazing Fish & Chips and an assortment of seafood items. It gave us a beautiful view of the Alcatraz Island through the wall-sized windows. We had lunch quickly and then took off for a bus tour of the city from Pier 39. It was an antique-looking motorized bus with lovely wooden interiors and a very funny tour guide cum driver who kept us amused all the way, telling different stories connected with the places we visited as well as important history. The guy was an expert. The bus took us to all the popular landmarks and attractions within 3-4 hours. We took a tour of downtown San Francisco, which covered the Palace of Fine Arts, Japantown, Chinatown, North Beach, Old Mason Street, Fort Point, the Golden Gate Bridge (!!), and back to Fisherman's Wharf.
San Francisco is a beautiful city.. a blend of old suburban homes in a variety of styles with modern infrastructure and amazing sights. I remember Gary asking the tour guide to tell us a joke and he said the biggest joke is that he's driving :). Fort Point is a small fort located just off the shore. It had some amazing views, compelling us to take lots of photos there. We just got 15 minutes wherever we stopped (which I guess was enough or our camera batteries would have drained anyway). We walked halfway across the Golden Gate Bridge and came back. Back at Pier 39, we had some crepes. We then walked out to see the sea lions! Fresh sea breeze, soft barks of the lazy Californian sea lions, and an evening sun in the backdrop.. that was just an amazing experience. I then walked back to the hotel with David and his English friend, discussing iPhones and guitars :)
This is part of a series of blog posts on my JavaOne 2009 experience..
I have to admit: Sun folks really know how to have fun! My JavaOne experience was full of parties, dinners and evenings out, all with Sun employees and peers. Sun's super-geeks know how to have fun! Here is the roundup of all the official and unofficial JavaOne evenings I've had!
Sunday (May 31st)
Monday (June 1st)
Tuesday (June 2nd)
Wednesday (June 3rd)
Thursday (June 4th)
Friday (June 5th)
This well deserved a separate blog post. If someone ever asks me what my most memorable moment at JavaOne was, it would be this one. A student getting to sit down and chat face to face with a geek god is not something that happens every day. James Gosling is a really down-to-earth person. First he gave 80 students a tour of the best technology being demoed at the Pavilion, and the same evening he invited students to have a Q&A session with him, asking him whatever they wanted about his life and work. And so we did.
I have captured the entire conversation in this video over here. And if you just want to see a short clip, here’s a JavaOne Minute made around it:
Here’s a transcript of our entire conversation:
Student: I have a good question for you, I think. So when you were developing Java, what influence did other languages which were popular at the time have on what you made?
James Gosling: Huge.. huge.. you can find bits and pieces of dozens of different languages in there. The syntax is like C++, mostly because I wanted to trick a bunch of C++ programmers into using.. smalltalk. I mean it was basically an act of subversion to do that. You know, and, the smalltalk crowd never really got performance. So the way that the object model works is somewhat different. It's all about performance. Some of the object model comes from Simula. I used to maintain a Simula compiler.
Student: Are there things that you see now, that you think you should have done back then?
James Gosling: If I could go back in time and give myself a period of 3-4 years to do stuff, you know, if I could get a little time bubble off of the side of the timeline, there's a huge pile of stuff I would do. Generics should have been in the runtime model. But time is a funny thing, you don't get to do that, right. You know, once I had a long fight with Bill Joy about this- Ok Bill we could sit around for another 2-3 years and nail a bunch of these issues.. and by that time who will care.. or.. we've got this opening.. let's just do it now and get the hell out there, and at some point you just gotta decide whether you're going to get out there or just sit there polishing. Yea, some of the things I would have loved to have, but.. I wouldn't have given up those features in exchange for.. time. But unfortunately, the evil twin of having released a product is that your life gets consumed by it.. one of the 2 things happens.. either it fails miserably or you get the life sucked out of you.. because it's a success.. and then you've got a gazillion people using it. It's really hard to choose. You can't really decide if you want to change such software. I mean like, someone inside Coca Cola says let's change the formula for Coca Cola.. but they'll be like.. Not really. So as long as we don't make changes that break the NASDAQ or the Chicago Board Options Exchange, it's fine. We've got some spectacular Java apps out there. Almost every stock exchange on the planet runs a Java app. We don't get to break them. Every financial transaction system on the planet is a Java app. We don't get to break 'em either. On one hand, you know, that's like a millstone around our neck, and on the other hand, it's kinda cool. Student: Would there be any change in Java in the future which could possibly harm its past legacy, like as we see with the modularization happening with Project Jigsaw? James Gosling: Right, so there's breakage at different levels.
Like at the language level, I don't think we'll ever break everything. Fortunately it's a 2-level language. There's a Java virtual machine and the language itself. If we ever felt that there was something so compelling, we wouldn't break it, we would just call it a new language. There's some pretty interesting languages that run on the JVM, like Scala, and that's all goodness, instead of breakage. The libraries get to be kind of an interesting deal. There are things that we can do because we can partition the namespace. Classloaders can load different versions of effectively the same class. It actually works pretty well. You know, there's 2 things that I would love to break.. but not so much that it'd be worth the carnage. One of the sad things is that when you've got 6 million professional developers out there, who code every day. Even at the Mythical man-month rate of 3 lines a day.. 6 million developers.. that's a lot of code. And it's been that rate for well over a decade. And it's amazing how much of that stuff people are still running. The numbers are pretty big. The problem is still pretty big.
Student: Is there a core team? Do people contribute and oversee that? How does the model actually work?
James Gosling: Actually, it's none of the above. It's all broken down into different subgroups that have responsibilities of their own. Within Java SE, there's a group that just does the VM, just does HotSpot. There's a group of people that just does the compiler and one which just does the graphics stuff. That's wrapped up in the Java SE group. And there's kind of a coherence mechanism. It's sort of like business teams who look at the feature lists and make sure that all the stuff is put together. Some companies like Apple have Chinese walls between their teams, and they're not allowed to talk to each other, they're really not allowed to talk to each other. You know, at the cafeteria, people are talking about stuff like basketball because they can't talk about work with their coworkers. We do exactly the opposite, right. We have all kinds of different teams and we do all sorts of things to get them to cross-pollinate. In some companies, there are architects and then there are developers. Nah, we don't do that. Pretty much all developers are fully responsible for some piece or the other and that sort of bundles up. We've got less experienced folks and more experienced folks. It's a fairly different model than what most companies do.
Student: Where'd the name come from?
James Gosling: Where'd the name come from? There's no interesting story about the name. There was just this other project and it became clear that doing a programming-languagish thing was required. And I really wanted to do just the lower levels and didn't wanna do the whole compiler or anything like that. But since I was the only one who had made compilers before, that was my part of the project. And you know, I was sitting at the office, staring out the window. And while I was staring, there was an oak tree outside, so Oak it was. Incredibly non-creative. I just needed a name, any name. If you think hard about it, it just sucks up too much time. By the time we got to 95 when we were about to release it, we couldn't use that name, cuz the lawyers said there were so many conflicts, and if we tried to launch a compiler named Oak, we'd get sued by half a dozen different companies. Somebody tried to come up with clever names and intelligent names. And every clever or intelligent name was already taken, so you had to come up with something kind of goofy.. umm, and this whole thing came to a point when the number one thing stopping us from shipping was that we didn't have a name. We had this great meeting where they said things like "How does it make you feel? What makes you excited? Java? What else makes you excited? What is it about? It's about the web. Well, Silk then." So what happened was that we ended up with a list of a dozen names, ordered them from top to bottom, and handed them all to the lawyers. Whatever was at the top, that's the one we'd go with, no more discussion on that. Java was number four on the list. Number one, which I hated but which most people liked since it was being done like a democracy (I don't know why they liked it), was Silk. The lawyers said naaah. Forget what number 2 was; number 3 was my favorite, which is Lyric. Would have loved calling it Lyric, I think Lyric would have worked really really well.
The problem was that it was already taken. There was some programming language used for control systems for submarines or something called Lyric.
Student: What features would you like to add if you had the time? (asked a second time, in a different way, by a different student)
James Gosling: Well, one thing is closures. I am a big fan of type inferencing, and the current syntax is really based around the way C does it. So I would do a lot more type inferencing. Make things a lot simpler. There are bits of Scala that I would pick up. I'm a big fan of functional languages, more so because, you know, in multi-threaded systems there are all kinds of techniques for mapping functional programs to thousands of threads. Sun makes machines that will easily support tens of thousands of threads and a huge address space. We actually have one machine with 230 or so CPUs, it's a strange number of CPUs. People use that for massive protein coding, for various sorts of simulations, and it's very popular in the stock trading crowd. Then there are things about the object model, or different approaches to object-oriented programming with delegation. It's more like a research project. There are different ways of handling concurrency and multi-threading. I mean, in every app that anyone writes, you really have to think well about multi-threading. Fortunately, for enterprise apps it works out pretty well, cuz the framework does all the multi-threading for you, but that isn't gonna scale.. well.. it will scale well in that particular problem domain. Any other questions?
Student: Yea I've got a question. Why are the first few characters of a compiled Java file CAFEBABE?
James Gosling: Well, there were 2 of them: CAFEBABE and CAFEDEAD. CAFEDEAD was picked because around the corner from my office was a small cafe that we ate lunch at every day. One of its claims to fame was that the Grateful Dead, before they were famous, used to play there all the time. You know, when Jerry died, a little shrine appeared on the wall. At some point when we ate lunch, we referred to the place as CAFEDEAD. Somebody observed, you know, that's a hex number! That's a funny joke. So we used that for the object file format. And I needed one for the class file format, cuz originally the magic number had been the ASCII string /bin/0. And that made a class file executable on Unix. Problem was that it didn't make it executable on anything other than Unix. So when I first went.. nooo, I need to worry about Windows and Mac too. And grep was your friend.. so.. find something next to CAFEDEAD.. and so.. CAFEB.. umm, CAFEBABE was just something that was weird enough to appeal to my twisted self.
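For the curious: that magic is still the first thing in every compiled class file today. Here's a small sketch of my own (the class name is mine, for illustration) that reads its own bytecode off the classpath and prints the magic in hex:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MagicCheck {
    // Read the first four bytes of this class's own compiled bytecode.
    static int readOwnMagic() throws IOException {
        try (InputStream raw = MagicCheck.class.getResourceAsStream("MagicCheck.class");
             DataInputStream in = new DataInputStream(raw)) {
            return in.readInt(); // big-endian, matching the class file format
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(Integer.toHexString(readOwnMagic()));
    }
}
```

Compile it with `javac MagicCheck.java`, run it with `java MagicCheck`, and the output is `cafebabe`.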
Student: How do you think about the JavaFX technology?
James Gosling: JavaFX is, I think, incredibly cool. It's a totally different technology. 10-15 years ago, trying effects like this with Java was just stupid, we couldn't do it. None of the graphics accelerators were fast enough to do all this stuff. Taking an image and then rotating it and then putting that little drop shadow behind it: you couldn't do it. But now it's what everybody wants to do.
Student: James, do you have something to tell our student audience watching this session?
James Gosling: You know, the important thing for me is to have fun. I can't do a good job unless I'm having fun. One of the nice things about software development is that you can do anything with it. You can write applications that do art, you can do applications that do banking. You can sort of mix and match and have a good time. And just, you know, screw around at it for hours. Things like writing games are a good thing to do.
Student: What's the next big thing you're going to do?
James Gosling: Umm.. Dinner, Sleep.. getting the Java store really launched..
Thank you so much, James, for your time!
This is part of a series of blog posts on my JavaOne 2009 experience..
The following are my writeups on all the JavaOne keynotes I could attend. In case you're looking for my blog on the CommunityOne keynote, it's over here. I missed the IBM General Session and the Microsoft General Session on June 4th as I was out in San Francisco, but I guess I had no interest in attending those anyway. There's one keynote I wish I hadn't missed – the Sun Technical General Session called "Intelligent Design: The Pervasive Java Platform" on Tuesday afternoon, but I enjoyed watching the replay webcast online. The major news coming out of the Sony Ericsson General Session was about their new "Application Shop", and the bigger part of that news is that unlike "other app stores" out there, Sony's developer program will be completely free! The model is that once an application is purchased by a user, the revenue is split 70-30 between the developer and Sony (yep, that's the catch). Note that I use the terms "general session" and "keynote" interchangeably.
Here’s a list of links to all the general session replay webcasts:
The JavaOne Keynote (June 2nd)
There was a lot of anticipation, excitement and suspense before the JavaOne keynote. As Scott McNealy rightly said, there was an elephant in the room.
I got into the keynote 10 minutes late and the moment I entered I saw Jonathan Schwartz presenting on stage, and was I happy to see him. It was everyone's speculation that he might not come to JavaOne at all this year due to the recent changes in the company, but I'm glad they were all proved wrong. I felt very satisfied and happy to see him being the first to present at the JavaOne keynote. He has led Sun through rough seas and has completely redefined the company, and he deserves to lead it going forward too. He talked about how Java has evolved from a simple virtual machine meant to isolate the hardware from a program's runtime to highly scalable systems powered by Java EE and the resource-limited Java ME running on billions of mobile devices. He took us down memory lane and talked about Java's success and rapid growth throughout the years.
Sony’s new Java-enabled Blu-ray players will now feature a high level of interactivity and peer-to-peer connectivity across set-top boxes. Now that's what I call impressive. Next up was the Chicago Board Options Exchange, who happen to be running completely on Java. Sun & Intel's hardware and software optimization helps CBOE reach 300,000 transactions/sec. We finally saw JavaFX on TV. Loved the demo. JavaFX was finally truly on all screens of your life (fully baked!). Then we had Nandini Ramani, Director of JavaFX, up on the stage demoing the much-awaited JavaFX Designer! I think that's the coolest thing they've done for JavaFX's adoption… now any artist who does no coding at all can easily develop rich, compelling, cross-platform and cross-device user interfaces. The best thing is that it can be loaded from within a browser window wherever you are! Talk about versatility! I was in complete awe to see James Gosling for the first time in person, on stage when he came up. He talked about the launch of the next big thing for Java developers wanting to make some money from their hard work – the Java Store. I like the fact that many companies are adopting the store model started and executed successfully by Apple. It's ok to copy a concept, as long as you adapt it well to the problem domain you target. The Java Store will be a big hit. The only thing which matters then is the money model for the store, which I hope Sun works out nicely. Runescape is pretty slick. 150 million users isn't a joke, especially for a game developed in Java! They deserved the Duke's Choice Award. They seem to be a little overconfident about getting it to run so easily on JavaFX TV, but let's see. Soon I found myself looking at 2 demigods and 1 god on stage: Jonathan Schwartz, Scott McNealy and James Gosling. It couldn't get any better.
I was glad to see Scott McNealy taking the stage to reassure the Java community about Java's positive future and healthy growth. As funny as Scott always is, he talked a bit about the possibilities that had opened up for Sun after being acquired by Oracle Corp, including but not limited to "free advertising" – Java logos on sailboats. Then he showed a slide with Larry shaking hands with Steve Jobs inside an iPhone – now that really excited me.. really, if that could happen, it would be the best thing ever for so many iPhone users (especially those like me who would get the best of both worlds on the same device once and for all!). And oh yes – he speculated that now JavaOne could be conducted in Japan as well. We were all shocked to see him calling up Larry Ellison on stage. It was very reassuring to hear Larry speak positive things about Java's future, including 2 words that I cannot ever forget: "expanded investment" :). He plans to use JavaFX for the user interface to OpenOffice. He said JavaOne will definitely continue to happen in the years to come (yay!).
Scott made us all emotional in the end, first taking the opportunity to thank all the Java developers and the Java community at large for all their contributions, and then telling us that this is the last year that he'll be the president of JavaOne. Everyone stood up in ovation for him when he got off the stage; the crowd was cheering and applauding him as he settled down. Touching moment there.
The WiFi at the Moscone Center was severely down during the keynote session and then lightly broken throughout the day, as there were probably thousands of tweeters buzzing about the announcements made and then the goings-on of the first day of the conference. The funniest thing was that I did tweet during the keynote via my GPRS connection, but my tweets still did not appear. Some problem with the Twitterrific app on the iPhone. I then had to retweet everything in the afternoon.
We were all anticipating Larry Ellison to be present at the JavaOne keynote, and present he was. His presence, along with Scott’s confidence in Oracle reassured our faith in Java and put our worried minds to rest for a while.
Sun Mobility Keynote (June 3rd)
I was really looking forward to this session. This is one session which would demonstrate the ultimate versatility and pervasiveness of the Java platform and its empowerment across all devices defining your digital lifestyle. I got myself a seat right upfront in the 3rd row, all ready to be blown away by the cool demos. Eric Klein took the stage. He tells us that he's a virgin at giving keynotes, as it is his first time. It's ok Eric, we all have our first times. He called up James and Chris to help him with some T-shirt mobility ;). 3 lucky guys caught those special edition T-shirts. And they were special not because of the shirts themselves, but because the catchers won themselves the first JavaFX mobile phone by HTC. One of them was a campus ambassador, sitting just 3 rows behind me; the shirt shot right past me. Here's a video of how that happened.
I thought I heard Eric giving a hint that JavaFX would soon run on the iPhone? Maybe that was just me. We soon had some tweet love on the stage with a live demo of a Twitter app running on the JavaFX mobile phone. PayPal showed off its mobile app, allowing one to pay a friend on the go using their JavaFX mobile app. Way cool! Now I just wish the people I owe money to never get their hands on these! Anyway, I'm glad PayPal got 2.6 billion new users thanks to Sun, the JavaFX Mobile team, and of course that 1 developer who made the app in a week's time! They then further made us envious with their one-letter domain name – X.com, their new developer platform website.
Eric unveiled the much-awaited major release of JavaFX Mobile as well as its developer environment, along with the world's first JavaFX-enabled phone and a book I'm soon going to review, called "Essential JavaFX".
The JavaFX content authoring tool was already amazing and it just got better with the ability to develop for JavaFX Mobile as well! Open it up in a browser, connect your phone via USB, write your app and deploy! Select target device screen and everything adjusts automatically! Lots of other cool features like drag and drop binding. You just have to see this in action in the keynote replay!
Next up was Qualcomm with their new concept of Smartbooks – something between a laptop and a smartphone. It's a netbook with 3G connectivity, a built-in GPS receiver, always-on internet connectivity and a battery life long enough to keep it running for as long as 24 hours. Now that is something I'd like to keep in my bag of gadgets too! Eric then showed off a Sun cloud-connected media center app called the ClouDVR. And of course, he ran it on the desktop, mobile and.. yes, JavaFX TV!
Klein also called on stage folks from Orange to celebrate their 5th birthday and talk about JATAF and another initiative of Orange. He ended the inspiring and exciting keynote talking about the Java Store, coining the phrase "Submit once, sell anywhere".
James Gosling’s Toy Show (June 5th)
This particular session is the one that everyone waits for, for the duration of the entire conference. Many people come to JavaOne only to see this session. It is the prime attraction of JavaOne for genuine Java enthusiasts (not for the business-minded, they'd prefer the keynotes instead). This is the session for geeks, by the biggest geek. Every year James picks up the best work done in Java and brings all of those folks together to give cool demos at his awe-inspiring, mind-blowing and enigmatic toy show. I wouldn't miss this for anything. James started the show talking about how late he has been running this year, having to meet people even at 3 am :). He then started calling up the Duke's Choice Award winners on stage. He called up Terracotta first and lauded their brilliant work, mentioning that they probably know more Java internals than anyone else on the planet! We then had Atlassian.
You gotta love them for all the good open source projects they've been contributing to. They talked about Clover, their continuous integration cum testing system. Then we had something new. James called upon stage the folks from the BlueJ project, just to recognize their work and wish them a happy 10th birthday. Runescape is much more than just the game. It's a complete server infrastructure powered by Java serving 175 million accounts, managed by a team of just 6-7 people, with a server which hasn't been rebooted in years. It's a complete free and open source production workflow. It's an ecosystem of tools and technologies. All that on Java. All that completely free to play! Simply amazing!
This was just the beginning and there were of course a whole bunch of other breathtaking demos. Sun people were on stage for a cool magical demo showcasing the 4th screen JavaFX can be projected on, a Wii-powered, vision-based human-computer interface which uses JavaFX for all the awesomeness. It's similar to what the NUI Group is doing (although not multi-touch). Tor Norbye, the "demo stud", was then called on stage to show off the cool JavaFX visual binding across all screens of your life! Love the master-slave strategy! The FX authoring tool is smart.
The coolest Duke's Choice Award winner IMHO was the Java Jukebox guy. This guy was not a developer, he was a musician. And he had gone ahead and made a complete jukebox for wannabe bands to be able to upload their music and be heard at bars. What's so great in that? Well, the stunning part is that its user interface is completely written in JavaFX and.. it runs on Solaris! With a touch screen! Unbelievable, but true! :)
Now: your SIM card gets as smart as your phone! A WiFi-enabled, Java Card 3.x based SIM card is coming soon! SIM cards now have a life of their own thanks to Java! PlaySIM uses a Sun SPOT to power a SIM card with Java programmability and interfaces to things like GPS! Way cool! The Lunacy folks were then called on stage. Gee, there are kids younger than me featured on the Gosling toy show. Robots on stage get James from the back! He needs to be a little more careful during these sessions :). Next we saw the Squawk VM ported to a previously C-only microcontroller, allowing over-the-air Java debugging of a robot! Smart work! Then we had Sven Reimers on stage showing how they control and monitor satellites using the NetBeans platform!
Volkswagen, Stanford and Sun have worked together to create the world's fastest automated car.. a race car that drives by itself, able to go up to speeds of 160 kph. It's an Audi TTS! It uses the Java Real-Time System! There's more.. the system is completely running on Solaris! Best story I've heard yet. It's competing for the DARPA challenge, obviously. Stanford is working on the control algorithms for the car to follow the GPS points. Sun's providing the Java RTS and Solaris goodness. The car isn't ready yet, so they had to show a video. Neil Young's car wonder was demonstrated just off the stage and he was given a special Dukey for making it to the toy show for the 4th time in a row and for using Java in so many innovative ways. They've revved up a 1955-model, 6000-pound, fully-finned, circa-1960 convertible automobile to turn it into an energy-efficient vehicle.
This is part of a series of blog posts on my JavaOne 2009 experience..
There's a lot to explore around the Moscone Center at JavaOne this year. Get your photo clicked at the Duke's Photo Op booth, play games on a PS3, watch a movie, buy some cool T-shirts from the Java retail store, buy yourself some cheap books at the bookstore, or just hang loose on the bean bags at Gosling's Memory Lane.
James Gosling’s Memory Lane
As soon as we entered the Moscone center and walked down the escalator, we witnessed a wall with a huge banner stuck on to it like a wallpaper with various cartoons of James Gosling in different Duke T-shirts. The board on the left reads “For 14 years, James Gosling has created limited-edition Duke t-shirts for JavaOne. In this banner, he's brought together many of those designs”. This place was called the James Gosling Memory Lane. On the first and second day this place was full of floating balloons with small cards attached to them where you could put up any message you wanted and after that it had tonnes of bean bags for conference attendees to sit and relax between sessions.
Duke’s Photo Op
You couldn’t miss this one! Here you could stop by and get a photo taken with our beloved Duke! There were 2 versions – either get a photo with Duke in person or with a virtual Duke!
The AMD Hang Space
This, in my opinion, was the thing I liked best about JavaOne. When you've participated in a lot of sessions and just need a break, this was the place to be. Bean bags with PlayStations, a lounge space, a movie theatre and a whole lotta fun! The great escape from all the seriousness!
Hacker Stations
Sun Ray powered thin clients providing access to the internet so that you can stop by to check email, surf the web, check your session schedule in the schedule builder, or tweet your JavaOne updates!
This is part of a series of blog posts on my JavaOne 2009 experience..
The JavaOne Pavilion is THE place to hang out for JavaOne attendees when they are not attending technical sessions. This is the place where all the top rockstars – top JUG leaders, Java experts, CEOs, Duke's Choice Award winners, JavaOne speakers and core developers from Sun – are found giving real-time demonstrations of their projects in person and interacting face to face with other Java enthusiasts and students. It is THE place to network and interact with industry professionals – with people from over 85 countries. As Chris said during the keynote, you'll never be able to visit 85 countries in your lifetime, so JavaOne is the platform to meet people from so many countries at once, share experiences, learn and collaborate in one single place. This place was filled with more than 50 exhibitor pods of companies working with Java and related technologies. There were fun activities all around the place and lots and lots of goodies and giveaways and T-shirts to bag! It is the place to get all your doubts answered by the people who made the technology themselves! Interesting spots on the Pavilion floor included the OSUM booth, the Change Your World Playground, the Java Utopia, the JavaOne spinning wheel, the Cloud Zone, the Sun SPOT booths and the OpenSolaris Install Lounge. There were special 5-minute "Lightning Talks" at the Pavilion (a new trend in technical conferences), which are fast-paced and informative BOF sessions for the non-speakers to become speakers for a short while :). There even was a place where you could get yourself a T-shirt with a custom slogan on it, and other fun stuff like that! I met a lot of people I always wanted to meet right there on the Pavilion floor, including Geertjan Wielenga (NetBeans rockstar!), Eric Reid from ISVe Engineering (he and Scott Mattoon lead the Drupal efforts within Sun) and Roger Meike (Director of Operations, Sun Labs!).
I was also happy to meet those few I had met last year in India: Arun Gupta (GlassFish evangelist), Vipul Gupta (Sun SPOT team), David Lindt (Sun Learning Services) and a couple of others. The booths that interested me were those of Intel, the Sun Cloud offerings, Project Kenai, Zembly, the OpenSolaris SourceJuicer pod, NetBeans, Amazon, Sun SPOT, LiveScribe, Caucho Quercus, Java RTS, Sun Studio mixed-language development, Spring, Atlassian, The Alice Project and Project Speedway (and I'm obviously missing a lot of names as I'm blogging this almost 2 weeks after the conference!).
The OSUM booth and Community Corner
The Java.net Community Corner was a meeting point for members of java.net communities, JUGs, Java Champions and the NetBeans Dream Team, as well as our very own OSUM. It also had a podcast booth, book signings and casual Q&A sessions. The Open Source University Meetup community had a booth set up over at the Community Corner on the JavaOne Pavilion floor for the first time this year. It was voluntarily manned by all the campus ambassadors present at JavaOne, along with Gary, Tzel, Liana, Colin and Kirby. Our work at the booth was to inform students coming over to the booth about the ever-growing OSUM community, the Campus Ambassador program, interesting opportunities for students at JavaOne, and to answer any other queries they may have. Sun had also organized a scavenger hunt just for students at JavaOne, and we collected the stamps from students and also conducted the daily raffles at the OSUM booth itself. We had plenty of giveaways for students coming to the booth – caps, pens, T-shirts, and most importantly – OSUM badges and bands. We got these cool red T-shirts which we were supposed to wear while at the OSUM booth - at the back it read "Javaholic. Are you a student? Got questions? Ask me!". Gary was wearing one all the time. We had students coming over from all parts of the world, not just San Francisco. To our surprise, we even had an ex-campus ambassador, Kira, with us. It's cool to see how campus ambassadors still hang out together and get to meet like this even after their term is over. Ashwin and I interviewed Kira for a quick JavaOne Minute.
The Student Scavenger Hunt (Dude, Where’s my treasure?)
Sun designed a scavenger hunt just for students, called the "Dude, Where's My Treasure?" student scavenger hunt. On the first three days of JavaOne, June 1 through June 3, students followed a checklist of tasks that led them through the different areas of the conference. To earn stamps on the scavenger hunt card, they had to do specific Java-related activities, and when they filled their cards with stamps, they were entered in raffles to win either an iPod Touch or a Sun SPOT developer kit. I had captured the raffle and prize distribution for the scavenger hunt on Monday in this JavaOne Minute video.
Winners of the “Dude, Where’s My Treasure” student scavenger hunt
The Spinning Duke Game
This was one place I visited every day for sure! It's called the "Spinning Duke Game". All you had to do was visit the various booths like the Cloud Zone, the Kenai booths, the Intel booths or the Zembly booth, interact with the experts / receive their demonstrations – and then get your stamp. After that you come spin the wheel here and win a prize! The more stamps you collect, the better the prize! There were five levels of prizes, including one grand prize per day! I got up to level 2 :)
The OpenSolaris Install Lounge
Abhishek, the guy who has many roles in Sun (former campus ambassador, current campus ambassador coordinator and intern with the OpenSolaris marketing team), could almost always be found at the OpenSolaris Install Lounge in the middle of the Pavilion, helping students install OpenSolaris on their laptops and engaging them in fun activities. He had also given a talk on the same. There was also an area in the install lounge called "Rockband 2", a cool hangout spot for all the rockers at JavaOne – where you could play Rockband 2 on the guitar, live! The third area was the Apps of Steel challenge – where one could check out the winners of the OpenSolaris Apps of Steel challenge.
The Change Your World Playground
The most interesting place on the Pavilion floor was this one! Meet the Duke's Choice Award winners in person, check out cool high school robotics (JavaOne Minute!), witness all the Sun SPOT goodness or have a seat in Neil Young's Java-enabled, biodiesel and electricity powered LincVolt car (JavaOne Minute)! Here I met the director of Sun Labs, Roger Meike, and Vipul Gupta, who's a distinguished engineer working in the Sun SPOT team. He gave us a cool demo of the web-based sensor network monitoring system, which we captured in this JavaOne Minute.
CommunityOne and The First Day at the Moscone Center
Gary, Ashwin, Avinash, Tom, Hyejin, Felipe, Kevin and I rushed down to the Moscone Center on the first day of the conference; it was most certainly an enjoyable hustle. We entered Moscone, registered for JavaOne and CommunityOne 2009, got our conference attendee badges, conference agenda and other material.. and then we went off to get more material downstairs.
Arrival, Registration and material collection at the Moscone center
The CommunityOne General Session
CommunityOne started with a bang. We all took up front seats a few minutes before the session started. Gary introduced us to Lin Lee, our super-super-boss, VP of Global Communities at Sun. Check out Gary's JavaOne Minute taken just when the halls were filling up. The session's highlights were Sun's new cloud computing offerings. David Douglas, Senior Vice President at Sun for Cloud Computing, took the stage to talk about citizen engineering, eco computing and various other stuff, and then came the exciting part: he called on stage the campus ambassadors who were sponsored to attend JavaOne this year: Avinash Joshi from India, Felipe Cerda and Tom Petreca from Brazil, Hyejin Park from South Korea and Kevin Li from China. David asked them to talk about their experiences as campus ambassadors, in fostering open source clubs on their campuses and leading their OSUM communities.
We all listened to them proudly. Then John Fowler came to set the stage on fire with the launch of OpenSolaris 2009.06 and a discussion of all its great new features! While the presenters kept throwing all the cool stuff at us, I tried to keep up with them and tweet updates along the way. Unfortunately the WiFi really sucked at the Moscone Center and I had to do that on my international roaming airtime, but I guess it was worth it. Gary did a JavaOne Minute with Avinash to get to know how he felt being up there on the stage at the CommunityOne keynote! While there were many exciting things announced and demoed during the keynote (you can watch the replay here!), my personal favorites were Crossbow's new drag-and-drop GUI, the multicore-optimized networking stack and JavaFX finally working on OpenSolaris (and so well too!). As expected there was a huge crowd at the keynote and a lot of people were standing just outside the hall to discuss how things went in the keynote. That's when we ran into Valeriya Alaverdova, Program Coordinator in the Sun Marketing Team, who had travelled to JavaOne all the way from Russia. We did a JavaOne Minute with her about how things were going for her and what she was there for. Then we caught hold of the OpenSolaris community rockstar, Jim Grisanzio, in the flesh! I have been following Jim's blog for a long time and keep reading his mails on opensolaris-announce, advocacy-discuss, etc. He is undoubtedly the best community manager the OpenSolaris community could ever have. We did another quick JavaOne Minute with him on his thoughts about CommunityOne and the amazing keynote we had just attended. We also ran into Sriram Narayanan! He's a very active member of the Bangalore OpenSolaris User Group. I was quite happy to meet him there.
You can see all the keynote replays as well as read their summaries here.
At the end of an exciting as well as tiring day, we had the Pavilion reception. The catering was good. There was a long line for the beer so I skipped that. At around 6.25 PM we were surprised by a group of dancers and musicians taking a backdoor entry and catching everyone's attention by parading through the Pavilion!
This is part of a series of blog posts on my JavaOne 2009 experience..
My trip to JavaOne in San Francisco was an amazing experience, one that I can probably not describe completely in words (with my limited vocabulary). It was the most thrilling and exciting journey of my life yet. It was great to meet, in person, many of the people I've been interacting with for the past 2 years over IM, email and virtual worlds. It was the right mix of conference business as well as a whole lot of fun. We attended engaging technical sessions/labs, met tech rockstars and industry professionals in the Pavilion, checked out all the amazing products showcased at the exhibitor pods, interviewed people high on the JavaOne spirit about their JavaOne experience, and got a slice of all the amazing activities scattered all over Moscone Center. Through this series of blog posts I would like to take you through my enticing journey to the beautiful city of San Francisco and the best developer conference in the world, JavaOne 2009.
The Prelude (Before June 30th)
Journey to San Francisco (June 30th-June 31st)
It was my first international flight but it all went just fine. We were flying Emirates! It was all very exciting. Ashwin and Avinash were boarding from Bangalore and I from Delhi; we were meeting mid-way at Dubai and then taking the same flight from there to SFO. The flight to Dubai was just a 3-hour journey.
Dubai International Airport
The Dubai international airport was amazingly huge, ultra-modern and breathtaking. We had 2 hours to rest and take snaps there and then move on to the longest flight of our lives. The in-flight experience with Emirates was good. They have this entertainment system called ICE (information, communications, entertainment) which provides you bundles of movies, music, news and interactive services like SMS, email and phone. It even lets you enjoy watching pictures you took with your digital camera via USB on the wide touch-screen. We had plenty of time to figure out that it's a thin-client system powered by Linux and Flash (yep, we caught the Linux boot screen). After a 16-hour journey, with lots of food and lots of drinking, we finally set foot in San Francisco International Airport. The first impressions of being in the US weren't really that impressive. The airports in Delhi and Bangalore were just as impressive. There were certainly more security cameras and much more security, but the level of technology was pretty much the same (except the fingerprint scanners perhaps!). Unbelievably, someone stole the lock on my baggage!
As we started moving out, things started getting better. We went over to the BART station inside the airport (woot!), completely automated ticketing system. The train was pretty empty though, quite unlike the metro in Delhi, but what am I comparing anyway :). We got a bird's eye view of the city while in the train for the 35 minute journey from SFO airport to the Powell Street station.
First impressions..
We got a first look at the real San Francisco just after we exited the Powell Street BART station. It was awesome - lovely cool weather, beautiful streets, trams and limousines! We were finally witnessing what we had been able to see in Hollywood movies and TV - the US of A in all its glory - in the flesh. Then began the real excitement. I quickly took out my iPhone to start making use of all the navigation apps I had downloaded from the appstore back in India to get around SFO - an app called iBART to find BART routes, nearby stations, and schedules, an app for MUNI schedules, and a couple of tourist apps specific to San Francisco. And then also began my photo-taking spree! I captured around 6000+ photos and videos on my trip, and at the time of writing this blog entry I was still struggling to upload them all to my flickr pro account :).
We walked down to Hotel Nikko on Mason Street, bumping into Mayuresh on the way (talk about being in a small world!). It was a pretty decent hotel with a room big enough for 2 people. We quickly freshened up and came down to meet Tzel in the lobby. She took us to a Starbucks right inside the hotel premises and then we discussed a few things related to a few initiatives in next year's CA program. Tzel wanted to get our feedback on it. We soon caught up with Gary, who was a lil tired from running around the whole day. He took us to the Japanese restaurant in our hotel - Anzu.
We had some amazing food and a great time there. I had this amazing cook-it-yourself japanese speciality called the "Rock". They serve raw slices of beef with a real sizzling black rock heated up to 400 degrees F. You're supposed to cook the slices by laying 'em on the rock. That was so cool! Just like barbeque! The vegetarians from India had some trouble choosing what to order though, as in most restaurants in the US veg items on the menu are too scanty. I just couldn't wait for the next day so after dinner, I went on a night walk with Avinash to explore the neighbourhood. We went off to Powell Street again. I saw a real Apple store!
The city looked even prettier in the night. The air was full of energy. People followed the traffic rules. Although many shops were closed, they left the lights on. There were just so many restaurants. For the first time it actually felt nice walking down the street! There was this peculiar trait of SF's roads - you get to hear a lot of echo of the sounds coming from the vehicles on the road - it has to be the road's material. It sounds nice and creates a good ambience. After coming back to the hotel, I did 2 quick JavaOne minutes, which were a prerequisite to joining the JavaOne street team, and went off to sleep. The next morning we had to report to the lobby at 7 am sharp to quickly go and register for JavaOne and CommunityOne.
This is part of a series of blog posts on my JavaOne 2009 experience..

Beijing Olympics 2008 commenced almost a week back and it's high time I talked about an interesting thing Sun has done this time to praise the olympian gods! Apart from being chosen to provide a high-performance eco-efficient technology platform to NBC Universal's web site NBCOlympics.com, Sun has gone out of the way to serve the online spectators of the Olympics with its very own Facebook game called "myPicks Beijing 2008".
NetBeans 6.5 Beta is now available!
NetBeans™ IDE 6.5 Beta is the latest release of Sun's award-winning open-source IDE.
For more information:
Download Url:
I am really very excited to blog this (more than anything I have ever blogged about, really).
There is a new NetBeans plugin which adds Drupal support to the NetBeans IDE!
The
@now/bash Builder takes an entrypoint of a bash function, imports its dependencies, and bundles them into a Lambda.
A simple "hello world" example:
handler() {
  echo "Hello, from Bash!"
}
When to Use It
This Builder is the recommended way to build lambdas from shell functions.
How to Use It
This example will detail creating an
uppercase endpoint which will be accessed as
my-deployment.url/api/uppercase. This endpoint will convert the provided querystring to uppercase using only Bash functions and standard Unix CLI tools.
Start by creating the project structure:
Inside the
my-bash-project > api > uppercase directory, create an
index.sh file with the following contents:
import "string@0.0.1"
import "querystring@1.3.0"

handler() {
  local path
  local query
  path="$(jq -r '.path' < "$REQUEST")"
  querystring "$path" | querystring_unescape | string_upper
}
A shell function that takes querystrings and prints them as uppercase.
The final step is to define a build that will take this entrypoint (
index.sh), build it, and turn it into a lambda using a
now.json configuration in the root directory (
my-bash-project):
{
  "version": 2,
  "builds": [{ "src": "api/**/index.sh", "use": "@now/bash" }]
}
A
now.json file using a build which takes a shell file and uses the
@now/bash Builder to output a lambda.
The resulting deployment can be seen here:
Furthermore, the source code of the deployment can be checked by appending
/_src e.g..
Note, however, that it will return an empty response without a querystring.
By passing in a querystring, the lambda will return the uppercased version. For example:
Technical Details
Entrypoint
The entrypoint of this Builder is a shell file that defines a
handler() function.
The
handler() function is invoked for every HTTP request that the Lambda receives.
Build Logic
If your Lambda requires additional resources to be added into the final bundle,
an optional
build() function may be defined.
Any files added to the current working directory at build-time will be included in the output Lambda.
build() {
  date > build-time.txt
}

handler() {
  echo "Build time: $(cat build-time.txt)"
  echo "Current time: $(date)"
}
Demo:
Response Headers
The default
Content-Type is
text/plain; charset=utf8 but you can change it by setting a response header.
handler() {
  http_response_header "Content-Type" "text/html; charset=utf8"
  echo "<h1>Current time</h1><p>$(date)</p>"
}
Demo:
JSON Response
It is common for serverless functions to communicate via JSON so you can use the
http_response_json function to set the content type to
application/json; charset=utf8.
handler() {
  http_response_json
  echo "{ \"title\": \"Current time\", \"body\": \"$(date)\" }"
}
Demo:
Status Code
The default status code is
200 but you can change it with the
http_response_code method.
handler() {
  http_response_code "500"
  echo "Internal Server Error"
}
Demo:
Redirect
You can use the
http_response_redirect function to set the location and status code. The default status code is
302 temporary redirect but you could use a permanent redirect by setting the second argument to
301.
handler() {
  http_response_redirect "" "301"
  echo "Redirecting..."
}
Demo:
Importing Dependencies
Bash, by itself, is not very useful for writing Lambda handler logic because it
does not have a standard library. For this reason,
import
is installed and configured by default, which allows your script to easily include
additional functionality and helper logic.
For example, the
querystring import may be
used to parse input parameters from the request URL:
import "querystring@1.3.0"

handler() {
  local path
  local query
  path="$(jq -r '.path' < "$REQUEST")"
  query="$(querystring "$path")"
  echo "Querystring is: $query"
}
Demo:
Bash Version
With the
@now/bash Builder, the handler script is executed using GNU Bash 4.
handler() {
  bash --version
}
Demo:
Maximum Lambda Bundle Size
To help keep cold boot times minimal, the default maximum output bundle size for a Bash lambda function is
10mb.
This limit is extendable up to
50mb using the
maxLambdaSize configuration:
{
  "builds": [
    {
      "src": "*.sh",
      "use": "@now/bash",
      "config": { "maxLambdaSize": "20mb" }
    }
  ]
}
.\" $OpenBSD: patch.1,v 1.25 2009/02/08 17:33:01 jmc Exp $ .\" Copyright 1986, Larry Wall .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following condition .\" is met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this condition and the following: February 8 2009 $ .Dt PATCH 1 .Os .Sh NAME .Nm patch .Nd apply a diff file to an original .Sh SYNOPSIS .Nm patch .Bk -words .Op Fl bCcEeflNnRstuv .Op Fl B Ar backup-prefix .Op Fl D Ar symbol .Op Fl d Ar directory .Op Fl F Ar max-fuzz .Op Fl i Ar patchfile .Op Fl o Ar out-file .Op Fl p Ar strip-count .Op Fl r Ar rej-name .Op Fl V Cm t | nil | never .Op Fl x Ar number .Op Fl z Ar backup-ext .Op Fl Fl posix .Op Ar origfile Op Ar patchfile .Ek .Nm patch .Pf \*(Lt Ar patchfile .Sh DESCRIPTION .Nm will take a patch file containing any of the four forms of difference listing produced by the .Xr diff 1 program and apply those differences to an original file, producing a patched version. If .Ar patchfile is omitted, or is a hyphen, the patch will be read from the standard input. .Pp .Nm will attempt to determine the type of the diff listing, unless overruled by a .Fl c , .Fl e , .Fl n , or .Fl u option. Context diffs (old-style, new-style, and unified) and normal diffs are applied directly by the .Nm program itself, whereas ed diffs are simply fed to the .Xr ed 1 editor via a pipe. .Pp If the .Ar patchfile contains more than one patch, .Nm .Sx Filename Determination below). .Pp The options are as follows: .Bl -tag -width Ds .It Xo .Fl B Ar backup-prefix , .Fl Fl prefix Ar backup-prefix .Xc Causes the next argument to be interpreted as a prefix to the backup file name. If this argument is specified, any argument to .Fl z will be ignored. .It Fl b , Fl Fl backup Save a backup copy of the file before it is modified. 
By default the original file is saved with a backup extension of .Qq .orig unless the file already has a numbered backup, in which case a numbered backup is made. This is equivalent to specifying .Qo Fl V Cm existing Qc . This option is currently the default, unless .Fl -posix is specified. .It Fl C , Fl Fl check Checks that the patch would apply cleanly, but does not modify anything. .It Fl c , Fl Fl context Forces .Nm to interpret the patch file as a context diff. .It Xo .Fl D Ar symbol , .Fl Fl ifdef Ar symbol .Xc Causes .Nm to use the .Qq #ifdef...#endif construct to mark changes. The argument following will be used as the differentiating symbol. Note that, unlike the C compiler, there must be a space between the .Fl D and the argument. .It Xo .Fl d Ar directory , .Fl Fl directory Ar directory .Xc Causes .Nm to interpret the next argument as a directory, and change working directory to it before doing anything else. .It Fl E , Fl Fl remove-empty-files Causes .Nm to remove output files that are empty after the patches have been applied. This option is useful when applying patches that create or remove files. .It Fl e , Fl Fl ed Forces .Nm to interpret the patch file as an .Xr ed 1 script. .It Xo .Fl F Ar max-fuzz , .Fl Fl fuzz Ar max-fuzz .Xc Sets the maximum fuzz factor. This option only applies to context diffs, and causes .Nm. .It Fl f , Fl Fl force Forces .Nm to assume that the user knows exactly what he or she is doing, and to not ask any questions. It assumes the following: skip patches for which a file to patch can't be found; patch files even though they have the wrong version for the .Qq Prereq: line in the patch; and assume that patches are not reversed even if they look like they are. This option does not suppress commentary; use .Fl s for that. .It Xo .Fl i Ar patchfile , .Fl Fl input Ar patchfile .Xc Causes the next argument to be interpreted as the input file name (i.e. a patchfile). This option may be specified multiple times. .It Fl l , Fl Fl. 
.It Fl N , Fl Fl forward Causes .Nm to ignore patches that it thinks are reversed or already applied. See also .Fl R . .It Fl n , Fl Fl normal Forces .Nm to interpret the patch file as a normal diff. .It Xo .Fl o Ar out-file , .Fl Fl output Ar out-file .Xc Causes the next argument to be interpreted as the output file name. .It Xo .Fl p Ar strip-count , .Fl Fl strip Ar strip-count .Xc .Pa /u/howard/src/blurfl/blurfl.c : .Pp Setting .Fl p Ns Ar 0 gives the entire pathname unmodified. .Pp .Fl p Ns Ar 1 gives .Pp .D1 Pa u/howard/src/blurfl/blurfl.c .Pp without the leading slash. .Pp .Fl p Ns Ar 4 gives .Pp .D1 Pa blurfl/blurfl.c .Pp Not specifying .Fl p at all just gives you .Pa blurfl.c , unless all of the directories in the leading path .Pq Pa u/howard/src/blurfl exist and that path is relative, in which case you get the entire pathname unmodified. Whatever you end up with is looked for either in the current directory, or the directory specified by the .Fl d option. .It Fl R , Fl Fl reverse Tells .Nm that this patch was created with the old and new files swapped. (Yes, I'm afraid that does happen occasionally, human nature being what it is.) .Nm will attempt to swap each hunk around before applying it. Rejects will come out in the swapped format. The .Fl R option will not work with ed diff scripts because there is too little information to reconstruct the reverse operation. .Pp If the first hunk of a patch fails, .Nm will reverse the hunk to see if it can be applied that way. If it can, you will be asked if you want to have the .Fl.) .It Xo .Fl r Ar rej-name , .Fl Fl reject-file Ar rej-name .Xc Causes the next argument to be interpreted as the reject file name. .It Xo .Fl s , Fl Fl quiet , .Fl Fl silent .Xc Makes .Nm do its work silently, unless an error occurs. 
.It Fl t , Fl Fl batch Similar to .Fl f , in that it suppresses questions, but makes some different assumptions: skip patches for which a file to patch can't be found (the same as .Fl f ) ; skip patches for which the file has the wrong version for the .Qq Prereq: line in the patch; and assume that patches are reversed if they look like they are. .It Fl u , Fl Fl unified Forces .Nm to interpret the patch file as a unified context diff (a unidiff). .It Xo .Fl V Cm t | nil | never , .Fl Fl version-control Cm t | nil | never .Xc Causes the next argument to be interpreted as a method for creating backup file names. The type of backups made can also be given in the .Ev PATCH_VERSION_CONTROL or .Ev VERSION_CONTROL environment variables, which are overridden by this option. The .Fl B option overrides this option, causing the prefix to always be used for making backup file names. The values of the .Ev PATCH_VERSION_CONTROL and .Ev VERSION_CONTROL environment variables and the argument to the .Fl V option are like the GNU Emacs .Dq version-control variable; they also recognize synonyms that are more descriptive. The valid values are (unique abbreviations are accepted): .Bl -tag -width Ds -offset indent .It Cm t , numbered Always make numbered backups. .It Cm nil , existing Make numbered backups of files that already have them, simple backups of the others. .It Cm never , simple Always make simple backups. .El .It Fl v , Fl Fl version Causes .Nm to print out its revision header and patch level. .It Xo .Fl x Ar number , .Fl Fl debug Ar number .Xc Sets internal debugging flags, and is of interest only to .Nm patchers. .It Xo .Fl z Ar backup-ext , .Fl Fl suffix Ar backup-ext .Xc Causes the next argument to be interpreted as the backup extension, to be used in place of .Qq .orig . .It Fl Fl posix Enables strict .St -p1003.1-2008 conformance, specifically: .Bl -enum .It Backup files are not created unless the .Fl b option is specified. 
.It If unspecified, the file name used is the first of the old, new and index files that exists. .El .El .Ss Patch Application .Nm will try to skip any leading garbage, apply the diff, and then skip any trailing garbage. Thus you could feed an article or message containing a diff listing to .Nm patch , and it should work. If the entire diff is indented by a consistent amount, this will be taken into account. .Pp With context diffs, and to a lesser extent with normal diffs, .Nm, .Nm will scan both forwards and backwards for a set of lines matching the context given in the hunk. First .Nm. .Pq The default maximum fuzz factor is 2. .Pp If .Nm cannot find a place to install that hunk of the patch, it will put the hunk out to a reject file, which normally is the name of the output file plus .Qq . .Pp As each hunk is completed, you will be told whether the hunk succeeded or failed, and which line (in the new file) .Nm. .Ss Filename Determination If no original file is specified on the command line, .Nm will try to figure out from the leading garbage what the name of the file to edit is. When checking a prospective file name, pathname components are stripped as specified by the .Fl p option and the file's existence and writability are checked relative to the current working directory (or the directory specified by the .Fl d option). .Pp If the diff is a context or unified diff, .Nm is able to determine the old and new file names from the diff header. For context diffs, the .Dq old file is specified in the line beginning with .Qq *** and the .Dq new file is specified in the line beginning with .Qq --- . For a unified diff, the .Dq old file is specified in the line beginning with .Qq --- and the .Dq new file is specified in the line beginning with .Qq +++ . If there is an .Qq Index: line in the leading garbage (regardless of the diff type), .Nm will use the file name from that line as the .Dq index file. 
.Pp .Nm will choose the file name by performing the following steps, with the first match used: .Bl -enum .It If .Nm is operating in strict .St -p1003.1-2008 mode, the first of the .Dq old , .Dq new and .Dq index file names that exist is used. Otherwise, .Nm will examine either the .Dq old and .Dq new file names or, for a non-context diff, the .Dq index file name, and choose the file name with the fewest path components, the shortest basename, and the shortest total file name length (in that order). .It If no file exists, .Nm checks for the existence of the files in an SCCS or RCS directory (using the appropriate prefix or suffix) using the criteria specified above. If found, .Nm will attempt to get or check out the file. .It If no suitable file was found to patch, the patch file is a context or unified diff, and the old file was zero length, the new file name is created and used. .It If the file name still cannot be determined, .Nm will prompt the user for the file name to use. .El .Pp Additionally, if the leading garbage contains a .Qq Prereq:\ \& line, .Nm will take the first word from the prerequisites line (normally a version number) and check the input file to see if that word can be found. If not, .Nm will ask for confirmation before proceeding. .Pp The upshot of all this is that you should be able to say, while in a news interface, the following: .Pp .Dl | patch -d /usr/src/local/blurfl .Pp .Fl B , .Fl V , or .Fl z options. The extension used for making backup files may also be specified in the .Ev SIMPLE_BACKUP_SUFFIX environment variable, which is overridden by the options above. .Pp If the backup file is a symbolic or hard link to the original file, .Nm. .Pp You may also specify where you want the output to go with the .Fl o option; if that file already exists, it is backed up first. 
.Ss Notes For Patch Senders There are several things you should bear in mind if you are going to be sending out patches: .Pp First, you can save people a lot of grief by keeping a .Pa patchlevel.h file which is patched to increment the patch level as the first diff in the patch file you send out. If you put a .Qq Prereq: line in with the patch, it won't let them apply patches out of order without some warning. .Pp Second, make sure you've specified the file names right, either in a context diff header, or with an .Qq Index: line. If you are patching something in a subdirectory, be sure to tell the patch user to specify a .Fl p option as needed. .Pp Third, you can create a file by sending out a diff that compares a null file to the file you want to create. This will only work if the file you want to create doesn't exist already in the target directory. .Pp Fourth, take care not to send out reversed patches, since it makes people wonder whether they already applied the patch. .Pp Fifth, while you may be able to get away with putting 582 diff listings into one file, it is probably wiser to group related patches into separate files in case something goes haywire. .Sh ENVIRONMENT .Bl -tag -width "PATCH_VERSION_CONTROL" -compact .It Ev POSIXLY_CORRECT When set, .Nm behaves as if the .Fl Fl posix option has been specified. .It Ev SIMPLE_BACKUP_SUFFIX Extension to use for backup file names instead of .Qq .orig . .It Ev TMPDIR Directory to put temporary files in; default is .Pa /tmp . .It Ev PATCH_VERSION_CONTROL Selects when numbered backup files are made. .It Ev VERSION_CONTROL Same as .Ev PATCH_VERSION_CONTROL . .El .Sh FILES .Bl -tag -width "$TMPDIR/patch*" -compact .It Pa $TMPDIR/patch* .Nm temporary files .It Pa /dev/tty used to read input when .Nm prompts the user .El .Sh DIAGNOSTICS Too many to list here, but generally indicative that .Nm couldn't parse your patch file. .Pp The message .Qq Hmm... 
indicates that there is unprocessed text in the patch file and that .Nm is attempting to intuit whether there is a patch in that text and, if so, what kind of patch it is. .Pp The .Nm utility exits with one of the following values: .Pp .Bl -tag -width Ds -compact -offset indent .It \&0 Successful completion. .It \&1 One or more lines were written to a reject file. .It \*[Gt]\&1 An error occurred. .El .Pp When applying a set of patches in a loop it behooves you to check this exit status so you don't apply a later patch to a partially patched file. .Sh SEE ALSO .Xr diff 1 .Sh STANDARDS The .Nm utility is compliant with the .St -p1003.1-2008 specification (except as detailed above for the .Fl -posix option), though the presence of .Nm itself is optional. .Pp The flags .Op Fl BCEFfstVvxz and .Op Fl -posix are extensions to that specification. .Sh AUTHORS .An Larry Wall with many other contributors. .Sh CAVEATS .Nm cannot tell if the line numbers are off in an ed script, and can only detect bad line numbers in a normal diff when it finds a .Qq change or a .Qq. .Pp .Nm usually produces the correct results, even when it has to do a lot of guessing. However, the results are guaranteed to be correct only when the patch is applied to exactly the same version of the file that the patch was generated from. .Sh BUGS Could be smarter about partial matches, excessively deviant offsets and swapped code, but that would take an extra pass. .Pp Check patch mode .Pq Fl C will fail if you try to check several patches in succession that build on each other. The entire .Nm code would have to be restructured to keep temporary files around so that it can handle this situation. .Pp If code has been duplicated (for instance with #ifdef OLDCODE ... #else ... #endif), .Nm is incapable of patching both versions, and, if it works at all, will likely patch the wrong one, and tell you that it succeeded to boot. 
.Pp If you apply a patch you've already applied, .Nm will think it is a reversed patch, and offer to un-apply the patch. This could be construed as a feature. | http://opensource.apple.com//source/patch_cmds/patch_cmds-16/patch/patch.1 | CC-MAIN-2016-36 | refinedweb | 2,884 | 77.94 |
Many Java developers today are interested in using scripting languages on the Java platform, but using a dynamic language that has been compiled into Java bytecode isn't always possible. In some cases, it's quicker and more efficient to simply script parts of a Java application or to call the particular Java objects you need from within a script.
That's where
javax.script comes in. The Java Scripting API,
introduced in Java 6, bridges the gap between handy little scripting
languages and the robust Java ecosystem. With the Java Scripting API, you
can quickly integrate virtually any scripting language with your Java
code, which opens up your options considerably when solving small problems
on someone else's dime.
1. Executing JavaScript with jrunscript
Each new Java platform release brings with it a new set of command-line
tools buried away inside of the JDK's bin directory. Java 6 was no
exception, and
jrunscript is no small addition to the Java
platform utilities.
Consider the basic problem of writing a command-line script for performance
monitoring. The tool will borrow
jmap (introduced in the previous article in the series) and run it every 5 seconds
against a Java process, in order to get a feel for how the process is
running. Normally, command-line shell scripts would do the trick, but in
this case the server application is deployed on a variety of different
platforms, including Windows® and Linux®. Sysadmins will testify
that trying to write shell scripts that run on both platforms is a pain.
The usual solution is to write a Windows batch file and a UNIX® shell
script, and just keep the two in sync over time.
But, as any reader of The Pragmatic Programmer knows, this is a horrendous violation of the DRY (don't repeat yourself) principle and is a breeding ground for bugs and defects. What we'd really like to do is write some kind of OS-neutral script that can run across all the platforms.
The Java language is platform neutral, of course, but this really isn't a case for a "system" language. What we need is a scripting language — like JavaScript, for instance.
Listing 1 starts with a basic shell of what we want:
Listing 1. periodic.js
while (true) {
    echo("Hello, world!");
}
Many, if not most, Java developers already know JavaScript (or ECMAScript; JavaScript is an ECMAScript dialect owned by Netscape) thanks to our forced interactions with web browsers. The question is, how would a system administrator run this script?
The solution, of course, is the
jrunscript utility that ships
with the JDK, as shown in Listing 2:
Listing 2. jrunscript
C:\developerWorks\5things-scripting\code\jssrc>jrunscript periodic.js Hello, world! Hello, world! Hello, world! Hello, world! Hello, world! Hello, world! Hello, world! ...
Note that you could also use a
for loop to execute the script
a given number of times before quitting. Basically,
jrunscript lets you do almost everything you normally would
do with JavaScript. The only exception is that the environment is not a
browser, so there's no DOM. The top-level functions and objects available
are therefore slightly different.
Because Java 6 ships with the Rhino ECMAScript engine as a part of the JDK,
jrunscript can execute any ECMAScript code that is fed to it,
either from a file (as shown here) or in a more interactive shell
environment called a REPL ("Read-Evaluate-Print-Loop"). Just run
jrunscript by itself to access the REPL shell.
2. Accessing Java objects from a script
Being able to write JavaScript/ECMAScript code is nice, but we don't want
to have to rebuild everything we use in the Java language from scratch
— that would defeat the purpose. Fortunately, anything using the
Java Scripting API engines has full access to the entire Java ecosystem
because, at heart, everything is still Java bytecode. So, going back to
the earlier problem, we could launch processes from the Java platform
using the traditional
Runtime.exec() call, as shown in
Listing 3:
Listing 3. Runtime.exec() launches jmap
var p = java.lang.Runtime.getRuntime().exec(["jmap", "-histo", arguments[0]])
p.waitFor()
The
arguments array is the ECMAScript standard built-in
reference to the arguments passed to this function. In the case of the
top-level script environment, this is the array of arguments passed to the
script itself (the command-line parameters). So, in Listing 3, the script
is expecting an argument containing the VMID of the Java process to
map.
Alternately, we could take advantage of the fact that
jmap is
a Java class and just call its
main() method, like in Listing
4. This approach eliminates the need to "pipe" the
Process
object's
in/out/err streams.
Listing 4. JMap.main()
var args = [ "-histo", arguments[0] ]
Packages.sun.tools.jmap.JMap.main(args)
The
Packages syntax is a Rhino ECMAScript notation used to
refer to a Java package outside of the core
java.* packages
already set up within Rhino.
3. Calling into scripts from Java code
Calling Java objects from a script is only half of the story: The Java
scripting environment also provides the ability to invoke scripts from
within Java code. Doing so just requires instantiating a
ScriptEngine, then loading the script in and evaluating it,
as shown in Listing 5:
Listing 5. Scripting on the Java platform
import java.io.*;
import javax.script.*;

public class App
{
    public static void main(String[] args)
    {
        try
        {
            ScriptEngineManager manager = new ScriptEngineManager();
            ScriptEngine engine = manager.getEngineByName("JavaScript");
            engine.eval(new FileReader(args[0]));
        }
        catch(IOException ioEx)
        {
            ioEx.printStackTrace();
        }
        catch(ScriptException scrEx)
        {
            scrEx.printStackTrace();
        }
    }
}
The
eval() method can also operate against a straight
String, so the script needn't come from a file on the
filesystem — it can come from a database, the user, or even be
manufactured within the application based on circumstance and user
action.
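As a quick sketch of in-memory evaluation (note that engine availability varies by JDK release: Rhino ships with Java 6, Nashorn with Java 8 through 14, and recent JDKs bundle no JavaScript engine by default, so the class names here are illustrative):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class EvalFromString {
    public static void main(String[] args) throws Exception {
        ScriptEngineManager mgr = new ScriptEngineManager();

        // Engines are discovered at runtime; list whatever this JVM provides
        for (ScriptEngineFactory factory : mgr.getEngineFactories()) {
            System.out.println(factory.getEngineName() + " " + factory.getNames());
        }

        // eval() accepts a plain String, so the script need not live on disk
        ScriptEngine js = mgr.getEngineByName("JavaScript");
        if (js != null) {
            System.out.println(js.eval("6 * 7")); // numeric result; exact type varies by engine
        } else {
            System.out.println("No JavaScript engine bundled with this JVM.");
        }
    }
}
```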
4. Binding Java objects into script space
Just invoking a script isn't enough: Scripts often want to interact with
objects created from within the Java environment. In these cases, the Java
host environment must create objects and bind them so that they're easy
for the script to find and use. This is a task for the
ScriptContext object shown in Listing 6:
Listing 6. Binding objects for scripts); engine.eval(fr, bindings); } } catch(IOException ioEx) { ioEx.printStackTrace(); } catch(ScriptException scrEx) { scrEx.printStackTrace(); } } }
Accessing the bound object is straightforward — the name of the
bound object is introduced at the script level as a member of the global
namespace, so using the
Person object from Rhino is as easy
as Listing 7:
Listing 7. Who wrote this article?
println("Hello from inside scripting!")
println("author.firstName = " + author.firstName)
As you can see, JavaBeans-style properties are reduced down to straight name accessors, as if they were fields.
5. Compiling oft-used scripts
The drawback of scripting languages has always been performance. The reason is that in most cases, the scripting language is interpreted "on the fly" and loses time and CPU cycles having to parse and validate text as it's executed. Many scripting languages running on the JVM ultimately transform the incoming code into Java bytecode, at least the first time the script is parsed and validated; this on-the-fly compilation gets tossed when the Java program shuts down. Keeping frequently used scripts in bytecode form would give us a sizable performance boost.
We can do this in a natural and meaningful way with the Java Scripting API.
If the
ScriptEngine returned implements the
Compilable interface, then the compile method on that
interface can be used to compile the script (passed in as either a
String or a
Reader) into a
CompiledScript instance, which can then be used to
eval() the compiled code over and over again, with different
bindings. This is shown in Listing 8:
Listing 8. Compiling interpreted code); if (engine instanceof Compilable) { System.out.println("Compiling...."); Compilable compEngine = (Compilable)engine; CompiledScript cs = compEngine.compile(fr); cs.eval(bindings); } else engine.eval(fr, bindings); } } catch(IOException ioEx) { ioEx.printStackTrace(); } catch(ScriptException scrEx) { scrEx.printStackTrace(); } } }
In most cases, the
CompiledScript instance should be held
somewhere in long-term storage (
servlet-context, for example)
in order to avoid recompiling the same script over and over again. Should
the script contents change, however, you must create a new
CompiledScript to reflect the change; once compiled, the
CompiledScript no longer executes the original script file's
contents.
In conclusion
The Java Scripting API is a huge step forward in extending the reach and
functionality of Java programs, and it brings the productivity gains
associated with scripting languages into the Java environment. Coupled
with
jrunscript — which is obviously not all that complex of a program to write —
javax.script gives Java
developers the benefits of scripting languages like Ruby (JRuby) and
ECMAScript (Rhino) without having to surrender the ecosystem and
scalability of the Java environment.
Coming up next in the 5 things series: JDBC.
Resources
Learn
- 5 things you didn't know about ... : Find out how much you don't know about the Java platform, in this series dedicated to turning Java technology trivia into useful programming tips.
- "Invoke dynamic languages dynamically, Part 1: Introducing the Java scripting API" (Tom McQueeney, developerWorks, September 2007): Part 1 of this two-part article introduces the Java scripting API's features; Part 2 dives deeper into its many powerful applications.
- "JavaScript EE, Part 3: Use Java scripting API with JSP" (Andrei Cioroianu, developerWorks, June 2009): Learn more about combining JavaScript with the Java platform and how to build Ajax user interfaces that remain functional when JavaScript is disabled in the web browser.
- JDK Tools and Utilities: Learn about the experimental monitoring and troubleshooting tools discussed in the 5 things focus on performance monitoring, including
jmap.
- The developerWorks Java technology zone:. | http://www.ibm.com/developerworks/java/library/j-5things9/index.html | CC-MAIN-2015-35 | refinedweb | 1,614 | 52.9 |
Hi Frank,
Is there a reason the lists are “initialized” twice?
names = []
jobs = []
Hi Kathryn,
Thanks for noticing! It should have been “initialized” only once.
Hi Frank. I have a problem with the first program. After running the code, I have a file persons.csv but it looks like this:
Name,Profession
Derek,Software Developer
Steve,Software Developer
Paul,Manager
How can I cancel those empty lines between them? Thanks.
Try to add a comma after your print command. An alternative is print(row.replace("\n", ""))
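For readers on Python 3: the blank rows usually come from how the file was opened for writing, and passing newline="" when opening avoids them. A small runnable sketch, with sample data standing in for the persons.csv above:

```python
import csv

# Sample rows standing in for the persons.csv example above
rows = [
    ["Name", "Profession"],
    ["Derek", "Software Developer"],
    ["Steve", "Software Developer"],
    ["Paul", "Manager"],
]

# newline="" stops csv.writer from emitting a blank line after each
# row on some platforms (notably Windows)
with open("persons.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

with open("persons.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)
```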
Great resource. However, when I try to read a csv file I keep getting
Could not find a part of the path ‘C:\Program Files\Rhinoceros 5 (64-bit)\System\artist_song_list\artists-songs-albums-tags.csv’.
Traceback:
line 4, in script
even though I just put the file in Rhino’s System. Thoughts?
Thanks Jenna! Try to put the file between quotes or use double backslashes: 'C:\\Program Files\\Rhinoceros 5 (64-bit)\\System\\artist_song_list\\artists-songs-albums-tags.csv'. A single backslash in Python starts an escape sequence, such as \n for newline.
Frank
Thanks for this. New to Python and haven't programmed for many years. I want to read a multi-row .csv file (floats), and I've copied your code into PyCharm.
import csv
# open file
with open('gisp2short.csv', 'rb') as f:
    reader = csv.reader(f)
    # read file row by row
    for row in reader:
        print row
I’m getting an error message “SyntaxError: Missing parentheses in call to ‘print'”.
What am I doing wrong?
TIA
Mat
Hi Mat! Change the last line to print(row). Python3 requires brackets to be around the print command. | https://pythonspot.com/files-spreadsheets-csv/ | CC-MAIN-2019-04 | refinedweb | 276 | 79.16 |
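Two further Python 3 changes to Mat's snippet, for anyone else hitting this: print is a function, and the csv module wants the file opened in text mode with newline="" rather than "rb". A runnable sketch (a tiny stand-in data file is created so the example works anywhere):

```python
import csv

# Create a tiny stand-in for gisp2short.csv so the example runs anywhere
with open("gisp2short.csv", "w", newline="") as f:
    f.write("0,1.5\n10,2.5\n")

# Python 3: text mode with newline="" instead of "rb", and print() as a function
with open("gisp2short.csv", newline="") as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)  # prints ['0', '1.5'] then ['10', '2.5']
```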
Control the Physical World with Ruby and Artoo
It’s IoT Week at SitePoint! All week we’re publishing articles focused on the intersection of the internet and the physical world, so keep checking the IoT tag for the latest updates.
Ruby is a pretty slick language. But, in the past, whenever anyone mentioned “embedded” no one thought of the word “slick”. Instead, visions of terribly written C with lots of bit mashing or flashbacks to inordinate amounts of time spent with inadequate lighting trying to fight with a breadboard often came to mind. Fortunately, this is no longer the case with Artoo. Artoo is a cool library that allows Ruby to control and interact with lots of different hardware platforms (e.g. the Sphero, the Arduino, etc.). In this article, we’ll focus on getting Artoo up and running and check out a few different examples of Artoo in action using the Arduino and a standard keyboard.
Setting Up the Software
Assuming that you have a functional version of Ruby (preferably installed with RVM or
rbenv), installing Artoo is super easy:
gem install artoo
In addition, we also need separate gems for each bit of hardware we want to use. For this article, we will be using the Arduino as our primary platform. To get that working:
gem install artoo-arduino
That’s all we need for the software to start ticking! The next step is getting the Arduino talking to us through Artoo.
Setting Up the Hardware
As you might have heard, the Arduino is based on a microcontroller platform called Atmel AVR. There used to be a time in AVR development where you’d generally get started with either a hacked together parallel port programmer or buy an STK500 and start writing some bit-banging code in C. Then, if you were on Windows, you could use AVRStudio with no issues. If you were on Linux, you’d have to use the
gcc toolchain (which actually worked pretty well) to build your C code and then push it to the microcontroller using avrdude.
Generally, this process worked okay but when stuff broke, it was insanely hard to work out what was happening. I distinctly remember spending an insane amount of time trying to get a parallel port programmer working before giving in and getting myself an Arduino. The Arduino came along and made this whole process a million times easier, but it required using this graphical interface that was sort of annoying.
Fortunately, Artoo makes us go through none of that pain. Instead, we can use Gort which has a nice install page that works well for a variety of platforms. Gort makes it much easier to upload firmware to devices that are connected over serial ports (e.g. USB). It doesn’t reimplement the functionality that something like avrdude provides, but just wraps that functionality in a much easier-to-use interface. The first step is to plug in your Arduino into a USB port on your machine. Then, we can search for it with the
gort scan command:
gort scan serial
This will give you a list of serial ports and you can figure out which one has your Arduino connected to it. Gort can connect to all sorts of devices. Since we want it to talk to the Arduino in particular, we have to install the Arduino-related packages for gort:
gort arduino install
And, that’s it! In a bit, we’ll use the
gort arduino upload command in order to push our code to the Arduino.
The Artoo Architecture
Artoo isn’t just a library that will allow us to write code in Ruby that runs on the Arduino. Rather, it is a way to build devices in “groups” in a much more effective and clean way. Devices in the Artoo documentation are referred to as “robots.” These don’t necessarily have the characteristics we generally associate with robots (e.g. movement); they are just supposed to be devices that interact with the physical world in some way (e.g. an Arduino, a Sphero, etc). It receives events from these devices using “sockets”, although these may be over a serial connection for something like an Arduino. These events from the devices are handled by a piece of code called the Artoo Master Class. Basically, this piece of code exists to allow these devices to operate together rather than as totally independent units. You can think of it like a “master” node within a cluster.
So, the point of Artoo is to allow you to orchestrate multiple devices (“robots”) to do stuff together rather than all by themselves.
This would be cool enough to look into if Artoo stopped there. But, it doesn’t. Artoo allows you to expose an API over both HTTP and WebSockets so you can build a Web/iOS/Android/etc. application on top of these devices. For example, if you want to be able to control the lighting in a room with an Arduino, you don’t have to build all the stuff to communicate with the Arduino yourself anymore. Instead, using Artoo, you already have an exposed API that lets you do that. All you have to do is write the code that controls the lights (using Artoo’s helpers, etc. in Ruby) and write a slick user-facing app. Artoo will handle the rest. Considering how painful it has been to hook up a webserver with a hardware device in the past, this is a huge step forward. Especially if you’re using something like the Particle platform, you can build some pretty awesome, cloud-connected stuff.
LEDs
Now that we know how Artoo works at a higher level, let’s get into the nitty gritty. We’ll break down an example from Artoo that allows us to build a LED that responds to a button. Straight from the Artoo documentation, we have:
require 'artoo'

connection :arduino, adaptor: :firmata, port: '/dev/ttyACM0'
device :led, driver: :led, pin: 13
device :button, driver: :button, pin: 2, interval: 0.01

work do
  puts "Press the button connected on pin #{ button.pin }..."

  on button, :push => proc { led.on }
  on button, :release => proc { led.off }
end
You should replace
/dev/ttyACM0 with the serial port you found with
gort scan serial. Although this code looks pretty simple, it is doing quite a bit behind the scenes and the documentation currently doesn’t do a great job explaining what the heck is actually going on. First of all, we set up the connection to the device with the following call:
connection :arduino, adaptor: :firmata, port: '/dev/ttyACM0'
To talk to the Arduino, Artoo uses a protocol called Firmata. The Firmata protocol has been implemented for all sorts of devices and allows Artoo to build an API, etc. on top of the Arduino.
device :led, driver: :led, pin: 13
Here we’re introducing an entirely new concept: device drivers. Artoo defines stuff that the Arduino can interact with using the concept of device drivers. These take arguments according to the function of the device. There are several device drivers that are included as part of the Artoo Arduino package and more can be added pretty easily. The
led driver takes only one argument: the pin to which the LED is connected. At this point, you should connect an LED to pin 13 of the Arduino.
device :button, driver: :button, pin: 2, interval: 0.01
The
button driver is a bit more complicated. It takes a
pin, which is the pin on the Arduino where the push button is connected along with an
interval. If you’ve read Arduino code before, this will look familiar: interval with which the event is fired.
work do ... end
This is where the event loop starts. Within this block of code, we’re supposed to be handling events that devices can fire (e.g. a button being depressed). We do that with the two lines within the block:
on button, :push => proc { led.on }
on button, :release => proc { led.off }
The device driver exposes two events that we code to:
release and
push. We then use the device called
led (which we named earlier as part of the first argument to
device) and turn it on and off for the respective action. In order to “deploy” this code, we need the Arduino to learn how to speak with the computer:
gort arduino upload /dev/ttyACM0
Of course, you should replace
/dev/ttyACM0 with the device on which your Arduino is connected. This will allow the Arduino to talk in Firmata so that we can tell it what to do over USB. Let’s say we saved the code in
arduino_pushbutton.rb. Then, we can run it with just standard Ruby:
ruby arduino_pushbutton.rb
That should connect to the Arduino and turn the LED on and off according to the state of the pushbutton.
The Keyboard
Turns out, we’re not limited to microcontroller boards as physical input devices. Instead, we can even use the keyboard because there’s a device driver for it. To get hold of the device driver, we just do:
gem install artoo-keyboard
Now, let’s check out an example using the keyboard as an input device:
require 'artoo'

connection :keyboard, adaptor: :keyboard
device :keyboard, driver: :keyboard, connection: :keyboard

work do
  on keyboard, :key => :got_key
end

def got_key(sender, key)
  puts sender
  puts key
end
Let’s break this code down.
connection :keyboard, adaptor: :keyboard
device :keyboard, driver: :keyboard, connection: :keyboard
Just like we told Artoo how to connect to the Arduino, we’re telling it that we want to connect to the keyboard here. Notice that all the keyboard needs is the name of the adaptor and it’ll figure out how to get a hold of keyboard input (i.e. no serial input device needed). Then, we register the device driver we want to use with the device so we can get the events that we want.
work do
  on keyboard, :key => :got_key
end
The
:keyboard device driver exposes one event called
key. We are hooking that up with a particular callback.
def got_key(sender, key)
  puts sender
  puts key
end
With the callback, we just print out the device that sent us that event (i.e. the keyboard) and the key value.
Putting It Together
Thus far, we’ve worked with one device at at a time. That’s a little bit boring and somewhat beats the point of using something like Artoo. We want these devices to be able to interact in order to produce something more meaningful than just an LED turned on by a pushbutton. So, we’ll build a “key logger” and try to steal people’s Gmail passwords.
Well, kind of. What we are going to do is use the keyboard driver to figure out when someone types in “gmail.com” and once they do, turn on an LED on the Arduino so we know to look for the password immediately after. Not terribly useful but should allow us to use the keyboard and Arduino together:
require 'artoo'

connection :keyboard, adaptor: :keyboard
device :keyboard, driver: :keyboard, connection: :keyboard

connection :arduino, adaptor: :firmata, port: '/dev/ttyACM0'
device :led, driver: :led, pin: 13

read_so_far = ""

work do
  on keyboard, :key => :got_key
end

def got_key(sender, key)
  read_so_far = read_so_far + key
  if read_so_far.end_with? "gmail.com"
    led.on
  end
end
And, that should get us some interaction! Let’s break down the code.
connection :keyboard, adaptor: :keyboard
device :keyboard, driver: :keyboard, connection: :keyboard

connection :arduino, adaptor: :firmata, port: '/dev/ttyACM0'
device :led, driver: :led, pin: 13
Again, we setup the connections and device drivers first. Then, onto the event loop:
work do
  on keyboard, :key => :got_key
end
The only thing that is really different here is the callback itself:
def got_key(sender, key)
  read_so_far = read_so_far + key
  if read_so_far.end_with? "gmail.com"
    led.on
  end
end
We turn the LED on when we notice that the user has typed in “gmail.com”! Of course, this serves only as an example. The keyboard device driver won’t work when your keyboard focus isn’t in the console window.
Wrapping It Up
We’ve only really brushed the surface of what is possible with Artoo. However, the documentation that comes with Artoo is sort of lacking and we’ve filled in the gaps to make it possible to do more exciting stuff with it. For some inspiration, check out this talk by one of the founders of the project. Excited to see what people will build with Artoo! | https://www.sitepoint.com/control-the-physical-world-with-ruby-and-artoo/ | CC-MAIN-2022-21 | refinedweb | 2,094 | 70.53 |
Creates well-compressed PNG animations. More...
#include <qpngio.h>
Inherits QPNGImageWriter.
List of all member functions.
By using transparency, QPNGImagePacker allows you to build a PNG image from a sequence of QImages.
Create an image packer that writes PNG data to iod, using a storage_depth bit encoding (use 8 or 32, depending on the desired quality and compression requirements).
Add the image img to the PNG animation, analyzing the differences between this and the previous image to improve compression.
Align pixel differences to x pixels. For example, using 8 can improve playback on certain hardware. Normally the default of 1-pixel alignment (i.e. no alignment) gives the best compression and performance.
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | https://doc.qt.io/archives/2.3/qpngimagepacker.html | CC-MAIN-2021-21 | refinedweb | 135 | 51.04 |
Hi, so I need help with this program. I have to write a method called censor that gets an array of strings from the user as an argument and then returns an array of strings that contains all of the original strings in the same order, except those with a length of 4. For example, if cat, dog, moose, lazy, with were entered, then it would return the exact same thing except for the words with and lazy. Any thoughts? Currently, my code just prints [Ljava.lang.String;@38cfdf and I'm stuck. Thank you very much in advance.
import java.util.Scanner;

public class censorProgram {
    public static void main (String args[]) {
        Scanner input = new Scanner (System.in);
        System.out.println ("How many words would you like to enter?");
        int lineOfText = input.nextInt();
        System.out.println ("Enter your words.");
        String [] words = new String [lineOfText];
        for (int i = 0; i < words.length; i ++) {
            words[i] = input.next();
        }
        System.out.println (words); // prints [Ljava.lang.String;@38cfdf
    }
}
// have to make new string array in the method and return words without four letters back in the main method
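A sketch of one way to finish this (the class and variable names here are illustrative, not from the assignment): count the words to keep, size a new array, and copy them over in order. Printing with Arrays.toString() also shows the array's contents instead of the [Ljava.lang.String;@... hash:

```java
import java.util.Arrays;

public class CensorExample {

    // Returns the original words in order, skipping any word of length 4
    static String[] censor(String[] words) {
        int kept = 0;
        for (String w : words) {
            if (w.length() != 4) {
                kept++;
            }
        }
        String[] result = new String[kept];
        int i = 0;
        for (String w : words) {
            if (w.length() != 4) {
                result[i++] = w;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String[] words = { "cat", "dog", "moose", "lazy", "with" };
        // Arrays.toString() prints the contents, not the object hash
        System.out.println(Arrays.toString(censor(words))); // prints [cat, dog, moose]
    }
}
```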
SORT-PYTHON
Python module bindings for SORT algorithm (Simple, Online, and Realtime Tracking) implemented in C++, with the goal of being fast and easy to use. Tracker is based on this repo.
Installation
Before you can install the package, you need to install the following dependencies:
$ sudo apt install libopencv-dev
$ sudo apt install libeigen3-dev
Make sure pip is upgraded to the latest version:
$ pip install pip --upgrade
Then you can install the package using:
$ pip install sort-tracker
or
$ git clone
$ cd sort-python
$ pip install .
Usage
import sort

# Create a tracker with max_coast_cycles = 5 and min_hits = 3
# Default values are max_coast_cycles = 3 and min_hits = 1
tracker = sort.SORT(5, 3)
Methods:
Two main methods are available named
run and
get_tracks and you can specify format of input and output bounding boxes as follows:
# format (int):
#   0: [xmin, ymin, w, h]
#   1: [xcenter, ycenter, w, h]
#   2: [xmin, ymin, xmax, ymax]
The default format is 0.
run
The run method takes an array of bounding boxes and a format, and then performs tracking.
# Input:
#   bounding_boxes: a numpy array of bounding boxes [n, 4]
#   format: format of bounding boxes (int)
import numpy as np

bounding_boxes = np.array([[10, 10, 20, 20], [30, 30, 40, 40]])
tracker.run(bounding_boxes, 0)
get_tracks
The get_tracks method returns an array of tracks.
# Input:
#   format: format of bounding boxes (int)
# Output:
#   tracks: a numpy array of tracks [n, 5] where n is the number of tracks
#   and 5 is (id, ..., ..., ..., ...) where id is the track id and ... is
#   the bounding box in the specified format
tracks = tracker.get_tracks(0)
Demo
Author: MrGolden1 | https://cpp.codetea.com/python-sortsimple-online-and-real-time-tracking-extension/ | CC-MAIN-2022-40 | refinedweb | 263 | 60.45 |
gd_flags man page
gd_flags — alter GetData operational flags
Synopsis
#include <getdata.h>
unsigned long gd_flags(DIRFILE *dirfile, unsigned long set, unsigned long reset);
Description
The gd_flags() function modifies the operational flags of the dirfile(5) database specified by dirfile, and returns the new value of the flags register.
The flags which may be queried or modified with this interface are a subset of the open flags (see gd_cbopen(3)). These are:
- has been opened read-only, this flag is ignored.
- GD_VERBOSE
Specifies that whenever an error is triggered by the library when working on this dirfile, the corresponding error string, which can be retrieved by calling gd_error, should be written to the caller's standard error.
Flags which appear only in set will be turned on (enabled); flags which appear only in reset will be turned off (disabled); flags which appear in both set and reset will be toggled. Flags which appear in neither of these are left unchanged. Accordingly, to simply query the current flags, both set and reset should be zero, and to explicitly specify all the flags, ignoring their old values, the new flags register should be given in set, and its bitwise complement in reset.
Return Value
The gd_flags() function returns a bitwise or'd collection of those of the above flags which are enabled after performing the modifications specified (if any). This function does not fail.
History
The gd_flags() function appeared in GetData-0.8.0.
See Also
gd_open(3), gd_verbose_prefix(3), dirfile(5)
Referenced By
gd_error(3), gd_open(3), gd_verbose_prefix(3). | https://www.mankier.com/3/gd_flags | CC-MAIN-2017-47 | refinedweb | 248 | 60.35 |
This has been driving me nuts. So I want to create a script that makes a gameobject rotate. I was able to achieve this but I still think this could be done better. I initially tried creating a vector2, assigning values to it and making the axes I don't want to move 0. But with Mathf.PingPong it gave me an error saying the transform.position attempt was not valid. Do you guys have any suggestions for improvements of my script?
using UnityEngine;
using System.Collections;
public class Ping_Pong_Sway : MonoBehaviour {
public float sway;
public string swayAlongVector;
void Update() {
if(swayAlongVector == "x"){
transform.localPosition = new Vector3(Mathf.PingPong(Time.time * 5, sway), transform.localPosition.y, transform.localPosition.z);
}
else if(swayAlongVector == "y"){
transform.localPosition = new Vector3(transform.localPosition.x, Mathf.PingPong(Time.time * 5, sway), transform.localPosition.z);
}
else if(swayAlongVector == "x+y"){
transform.localPosition = new Vector3(transform.localPosition.x, Mathf.PingPong(Time.time * 5, sway), transform.localPosition.z);
}
}
}
I have no idea what you're asking. Could you include some screenshots of what you're expecting vs. what you're seeing?
Also, if it gives you an error message, post it here. That helps us diagnose the problem a lot easier.
I'm looking for a better way of creating my script. It calls transform.localPosition 3 times and uses what I think is an extra variable in swayAlongVector. I haven't found an alternative to this however.
Answer by DiegoSLTS
·
Jun 13, 2015 at 03:54 AM
You just want to refactor your code?
First, I think you have a bug, you do the same for "y" and for "x+y".
Second, I guess you could avoid most of that repetition writing it like this:
using UnityEngine;
using System.Collections;
public class Ping_Pong_Sway : MonoBehaviour {
public float sway;
public string swayAlongVector;
void Update() {
//this check is not neccesary if swayAlongVector will alywas have one of this values
if (swayAlongVector == "x" || swayAlongVector == "y" || swayAlongVector == "x+y") {
float aux = Mathf.PingPong(Time.time * 5, sway); //write this only once
Vector3 pos = Vector3.zero;
pos.z = transform.localPosition.z;
if(swayAlongVector == "x" || swayAlongVector == "x+y"){
pos.x = aux;
}
if(swayAlongVector == "y" || swayAlongVector == "x+y"){
pos.y = aux;
}
transform.localPosition = pos;
}
}
}
That code should work like your code.
Another improvement would be to use an enum instead of strings for the swayAlongVector variable. And would be even better if the enum is a "flags" enum:
EDIT: Anyway, this code is not more "efficient", that's something you have to measure, but with something so simple it's probably an insignificant difference. Your original code doesn't create useless variables nor call the same function more than once, on each update only one of the conditions will be true, so only one line of code will get called. You should aim for cleannes, and maintainability instead of efficience in this case.
For example, it's more maintainable if you write that PingPong line only once in case you have to change something (change in one place instead of 3). It's easier to understand what the code does if you change the localPosition once at the end and use the swayAlongVector to only change the x and y values you'll use. But it's all subjective, someone might think other options are better for other reasons.
Just like to add that your refactored code does not do the same as the original. If "swayAlongVector" is either "x" or "y" you set the y or x position to 0, as you initialized your pos with Vector3.zero. You should have initialized pos with localPosition.
It's true that calculating the pingpong value only in one place is more maintainable since each case will need the value anyways.
However, swayAlongVector is just a selector (which would be better as an enum, as you said) which is considered constant for an instance. In such a case I wouldn't mix the different cases as it makes the code more entangled.
Also, even though you merged the pingpong calculation and the assignment into one place, you duplicated the comparison of the string values. If you want to add, for example, 3 "rotate" cases as well, and in that run you rename the parameters from "x" to "mx" for "move x" and add "rx" for "rotate x", your code is less maintainable.
Finally your script executes 4(best case) up to 7(worst case) string compare operations while the original code executes 1(best case) up to 3(worst case).
To me your solution is less readable and actually less efficient ^^
I would suggest using an enum like this:
public enum SwayOperation { X, Y, XY };
public float sway;
public float speed = 5f;
public SwayOperation swayType;
void Update()
{
float val = Mathf.PingPong(Time.time * speed, sway);
Vector3 pos = transform.localPosition;
switch (swayType)
{
case SwayOperation.X:
pos.x = val;
break;
case SwayOperation.Y:
pos.y = val;
break;
case SwayOperation.XY:
pos.x = pos.y = val;
break;
}
transform.localPosition = pos;
}
In the hypothetical case of adding more "operations" I would even move the assignment of localPosition into each case. It doesn't matter how long the code is as long as you can easily follow what it does. Also, maintainability can't be generalized. You can consider something like "what is more likely to get changed?", but you never know what might be changed in the future. If you know it already then you usually implement it right away ^^
Answer by Tick_Talos
·
Jun 13, 2015 at 03:17 PM
If you're just trying to make it rotate in place (not around anything) you can just do something like this:
tf.Rotate(-Vector2.up * rotateSpeed * Time.deltaTime);
Answer by PlanetVaster
·
Jun 13, 2015 at 03:18 PM
You could use a switch statement
Answer by Yuanfeng
·
Jun 13, 2015 at 03:20 PM
Make sure that you didn't set the sway value to 0. In Mathf.PingPong(float t, float length), the length parameter needs to be greater than 0.
Opened 10 years ago
Closed 8 years ago
#1954 closed Bug (Fixed)
ListViewItem returns 0 (failure) even though it populates the ListView
Description
When more sub-items are specified in the “text” parameter of function GUICtrlCreateListViewItem, the list view is populated with the sub-items that fit the ListView, but the control ID returned is 0.
The following script demonstrates this anomaly. In particular, $item2 is assigned 0 because it has one too many sub-items. The ID’s of each ListViewItem are shown in the input box.
#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>

Local $listview, $item1, $item2, $item3, $input1, $msg

GUICreate("listview items", 220, 250, 100, 200, -1, $WS_EX_ACCEPTFILES)
$listview = GUICtrlCreateListView("col1 |col2|col3 ", 10, 10, 200, 150) ;,$LVS_SORTDESCENDING)
$item1 = GUICtrlCreateListViewItem("1a|1b|1c", $listview)
$item2 = GUICtrlCreateListViewItem("2a|2b|2c|2d", $listview)
$item3 = GUICtrlCreateListViewItem("3a|3b|3c", $listview)
$input1 = GUICtrlCreateInput("$item1 = " & $item1 & ", $item2 = " & $item2 & ", $item3 = " & $item3, 10, 200, 200, 15)
GUISetState()

Do
Until GUIGetMsg() = $GUI_EVENT_CLOSE
Attachments (0)
Change History (5)
comment:1 Changed 10 years ago by Jpm
comment:2 Changed 10 years ago by Zedna
Here is another modification
;~ $item2 = GUICtrlCreateListViewItem("2a|2b|2c|2d", $listview) $item2 = GUICtrlCreateListViewItem("2a|2b", $listview)
In this case: less (only two) columns given it sucessfully create listview item and also corect nonzero control ID is returned.
So there is question if in case of more columns given shouldn't be returned correct nonzero control ID if Autoit can display it correctly (cut thrid unnecessary column internally)?
comment:3 Changed 8 years ago by Jon
- Resolution set to Rejected
- Status changed from new to closed
comment:4 Changed 8 years ago by Jpm
- Resolution Rejected deleted
- Status changed from closed to reopened
comment $item2 is really invalid as it try to defined more field than the list number of columns can have.
The bug is it should not be displayed. Bad internal error handling. | https://www.autoitscript.com/trac/autoit/ticket/1954 | CC-MAIN-2021-25 | refinedweb | 311 | 51.62 |
I’ve really enjoyed working in the Grails ecosystem. There are a variety of reasons of which I won’t write in this post. I am going to focus on one and that is the plugin system. Plugins provide a great way to re-use capability easily in a way I don’t recall seeing elsewhere in the Java ecosystem. I recently authored my first Grails plugin and I want to share my experiences here.
The code for the plugin is at. The plugin is registered at.
Grails plugins provide more than Java or Groovy code like a Maven or Ivy. They can provide user interfaces, JavaScript, stylesheets, configurations and more without the developer needing to copy files into the project or update the web.xml, Spring config, etc. Note that some plugins do require additional configuration in Config.groovy, Spring beans in resources.groovy, etc., but these changes generally do not interfere with the project itself.
The plugin improves the performance of the web application by compressing the controller response via HTTP GZIP compression. One of the best practices for web site performance is to reduce network bandwidth. Compression is an easy win because HTML, JSON, XML, CSS and JavaScript compress well, usually reducing the size by 80-90%. Other resources such as images are usually already compressed and should not add additional compression over the HTTP transport.
Several plugins already provide for compression of non-controller output, such as CSS and JavaScript. Most commonly the asset-pipeline and resources plugins. Controller output is
not compressed with these plugins. Typically a web application container is fronted by an HTTP proxy, such as Apache. In this case, the proxy should be configured to compress output because it can be done more efficiently. This isn’t always possible in the deployment architecture. There may be no HTTP proxy in front of the web application container or in a PaaS environment the configuration may not be available to the owner of the web app. Thus it falls on the web application to do the compression.
There is an existing Java EE filter implementation that provides output compression that is mature and robust. It is called ziplet and is available on GitHub. The ziplet plugin integrates this filter into Grails.
Grails provides a command for creating a new plugin called create-plugin, and you give it the name of your new plugin. A new directory with the plugin name will be created, and then all the files under that. So don’t create a directory named “ziplet”, cd to that, and then
enter the command. Let Grails create the “ziplet” folder for you.
$ grails create-plugin ziplet
I won’t go through all the details of creating the plugin, the Grails documentation covers a lot. Pay particular attention to the plugin script, in this case it is
ZipletGrailsPlugin.groovy. Make sure all applicable fields are filled out, comments and unused fields are removed. Make sure to fill out the documentation, source and issues links. These are shown on the plugins site and it’s annoying to click the buttons and go no where.
After my plugin was submitted, Burt Beckwith created a pull request to cleanup a bunch of stuff. One of my favorite things about collaboration is learning from others. I recommend reviewing the pull request to see what files can be left out of the plugin and how things are to look. I’ve seen quite a few projects with generated files hanging around and wasn’t sure what was safe to delete.
The ziplet plugin brings in the ziplet filter dependency and configures the web.xml file. Grails makes this straight-forward and allows for other plugins and the project to add to the web.xml without error prone parsing.
Including a dependency is the same as doing it in the project. Add the dependency to BuildConfig.groovy:
dependencies {
compile('com.github.ziplet:ziplet:2.0.0') {
excludes 'slf4j-nop'
}
}
Java EE filters require filter and filter-mapping elements. There is also configuration needed in the filter. This was the most complicated part of the plugin. It looks something like:
def contextParam = xml.'context-param'
contextParam[contextParam.size() - 1] + {
filter {
'filter-name'('CompressingFilter')
'filter-class'(CompressingFilter.name)
if (config.debug) {
'init-param' {
'param-name'('debug')
'param-value'('true')
}
}
if (config.compressionThreshold) {
'init-param' {
'param-name'('compressionThreshold')
'param-value'(config.compressionThreshold)
}
}
...
}
Grails has excellent support for testing: unit, integration and functional. However, I found the plugin script difficult to test. It is not in the classpath for unit testing nor integration testing. Functional testing is too heavy for most code in the plugin script, and especially so
for this case. I solved the problem by moving as much code as possible to a helper class in
src/groovy.
ZipletGrailsPlugin.groovy:
…
def doWithWebDescriptor = { xml ->
new WebXmlHelper().updateWebXml(application, xml)
}
…
The plugin implementation is about configuring the web.xml, so all of the tests deal with checking that the web.xml was modified correctly. The plugin script uses XmlSlurper to modify the XML, so we need to do the same thing in the test. Also, assertions were difficult because modifying the XmlSlurper didn’t allow for checking the result, so I needed to re-parse the XML.
// xml is an XmlSlurper object
String str = new StreamingMarkupBuilder().bindNode(xml)
xml = new XmlSlurper().parseText(str)
The complete code is at.
Publishing a plugin is simple, the documentation is good. It took all of three days to get my request approved, fairly quick. I recommend reading the comments on pending plugins to learn anything you might need to do.
After your plugin is approved you’re not done. You need to login to grails.org, find your plugin, and edit it. Add some tags so your plugin will be categorized correctly. Refer to similar plugins for tags. In the ziplet case, I added “utility” and “performance”. I edited the install instructions because the version was “null”. Unfortunately this doesn’t track your releases, so you need to update for each release. I’ve learned when including a plugin to look at the top for the latest release instead of the install instructions, so it’s probably not a big deal if you don’t update this with each release.
Some plugins copy the full description, such as the README.md, into this page. The Documentation button will take users there, so I figured that’s better than having to modify the plugin page for each release.
Grails plugins are fantastic. They are simple to use and relatively painless to create and publish. I recommend developers consider contributing re-usable code using the plugin system. You’ll give back to the open source community and likely get others to improve your plugin, and thus improving your project. | https://objectpartners.com/2014/10/29/experiences-with-publishing-a-grails-plugin/ | CC-MAIN-2019-04 | refinedweb | 1,130 | 59.3 |
So in my C++ programming class, we have a problem where we need to write a program that asks for an integer from the user and then create a box out of X's that's side lengths are equal to the number inputted by the user.
For example, if the user inputted 5, the output would be:
XXXXX
XXXXX
XXXXX
XXXXX
XXXXX
I'm not sure how to go about this, I feel like I need to use a for loop but don't know how to structure it. Any help would be appreciated!
Simply use 2 for loops. First you have to get the input of the user. This is achived by using the standard input and the flux operator
std::cin >> store_input.
Then you loop:
n times for the columns and, inside,
n times for the lines.
#include <iostream> int main() { int number; // Output. std::cout << "Enter a number: "; // Gets the input. std::cin >> number; // For each column, process one line + return carriage for (int j = 0; j < number; ++j) { // For one line. for (int i = 0; i < number; ++i) { std::cout << 'X'; } std::cout << '\n'; } return 0; } | https://codedump.io/share/hqVqNWNkwusj/1/creating-a-box-out-of-x39s-depending-on-the-number-inputted-by-the-user | CC-MAIN-2017-34 | refinedweb | 191 | 80.11 |
Please create a new report on Developer Community if you have new information to add and do not yet see a matching new report.
If the latest results still closely match this report, you can use the
original description:
Entered the following sample code in a Workbook (Windows 10):
using Android.App;
using Android.Widget;
using Android.OS;
var rootActivity = StartedActivities.First ();
Switch mySwitch = new Switch (rootActivity);
mySwitch.ShowText = true;
When I try to run it, I get the error: "(7,10): error CS1061: 'Switch' does not contain a definition for 'ShowText' and no extension method 'ShowText' accepting a first argument of type 'Switch' could be found."
However, Switch.ShowText is supported by Xamarin.Android.
Also, I can instantiate a Switch and access its ShowText property without error in Visual Studio.
Although Xamarin docs claim that Switch.ShowText is available as early as API 14 (ICS, 4.0.3), Android's docs reveal the truth: it is not available until API 21 (Lollipop, 5.0).
Currently, the Xamarin Workbooks Android app is built using API 19 (KitKat, 4.4), and does not have access to newer APIs.
This decision was based on what device images were available for Xamarin Android Player, as well as what devices it seemed likely developers would already have installed.
Leaving this open as a feature request. Perhaps we could ship a version built against latest stable Android as well, and let the user get that if they have set up a device for it. Every app we add to Workbooks adds a big hit to install size though. | https://xamarin.github.io/bugzilla-archives/45/45508/bug.html | CC-MAIN-2019-39 | refinedweb | 259 | 66.13 |
One thing is for certain... (Score:5, Funny)
In the 47 years I have spent on this rock, I have yet to see a futurist reliably predict the future.
Where the fuck is my flying car?
--
BMO
Re: (Score:2)
Re: (Score:3)
Which is why people like Ray Kurzweil can still get away with the nonsense they write.
The singularity is bunk.
--
BMO - I'm turning into my maternal grandfather.
Re:One thing is for certain... (Score:5, Funny)
Which is why people like Ray Kurzweil can still get away with the nonsense they write.
I was going to say St, John and the revelation, but okay.
My predictions for 50 years from now:
- No human has returned to the moon.
- NovartoGlaxoSmithKline announcing the first pharmceutical cure for religion causes widespread riots in Pakistan and Alabama.
- In Europe and Canada, the banning of driver controlled cars on public roads go into effect.
- Texas becomes the last industrialized country to abolish paper money.
- The Sino-American war winds down. With neither side wiling to risk their mainland, it was fought in Korea, which is now in ruins, and Japan, which has become a Chinese protectorate.
- Coca-Cola reintroduces Cola with Coca extract.
- I am dead.
Re: (Score:2)
My predictions for 50 years from now:
- No human has returned to the moon.
Wrong. I'm fairly certain that in 50 years there would be a small colony of people living on the moon. Humans will also have visited Mars.
- NovartoGlaxoSmithKline announcing the first pharmceutical cure for religion causes widespread riots in Pakistan and Alabama.
Fun! - But I doubt that religion can be cured pharmaceutically. It isn't a medical condition and the general stupidity usually behind it cannot be cured, although less inbreeding will help.
- In Europe and Canada, the banning of driver controlled cars on public roads go into effect.
I think this will happen sooner.
- Texas becomes the last industrialized country to abolish paper money.
...and this will happen after having used their own Lone Star Dollars for two decades.
- The Sino-American war winds down. With neither side wiling to risk their mainland, it was fought in Korea, which is now in ruins, and Japan, which has become a Chinese protectorate.
Not that unlikely. I think North Korea would have helped creating
Re:One thing is for certain... (Score:5, Funny)
Wrong. I'm fairly certain that in 50 years there would be a small colony of people living on the moon. Humans will also have visited Mars.
I'm fairly certain that in 50 years Presidents will still be promising a small colony of people living on the moon and a Mars mission in 50 years.
Re: (Score:3)
You missed the news. A while back - 6 months to 2 years - there was a news article finding a link between religious belief and the (chemical or genetic, I forget which) makeup of the body. That implies religious belief is subject to encouragement or discouragement by biochemical means.
My
Re: (Score:3)
There is specific brain chemistry that happens to facilitate religious beliefs, so I would say it is possible.
ALL your decisions, beliefs, thoughts are all just chemical reactions.
"- I am dead.
So am I.
--"
Not me.
Re: (Score:3)
Wrong. I'm fairly certain that in 50 years there would be a small colony of people living on the moon.
Do you realize that nobody has even set foot on the moon in the past 40 years? Or even left low earth orbit.
Re: (Score:3)
They certainly aren't thinking.
Applied rationality and logic would mean one should not believe in something there is no evidence to support.
However, the human brain loves harmless routine, because people can go by rote and we don't spend the energy thinking.
Re: (Score:3)
Yes! A moon base by 1990. We could move all of the radioactive waste from all the reactors on Earth to the moon if we had such a base.
What could possibly go wrong?
Re: (Score:3).
Re: (Score:3)
>> Japan, which has become a Chinese protectorate.
Is that before or after Japan nukes the living shit out of China?
Re: (Score:3)
RE: Your Sig.
I'm looking for examples of beautiful open source code in every language. If you know of any, please let me know.
A programmer version of Diogenes, looking for "one honest man" finding none? [harkavagrant.com]
--
BMO
Re: (Score:2)
Re: (Score:3)
Maybe if you asked for examples in "any language" rather than "every language"?
Re: (Score:2)
Re: (Score:2, Funny)
Someone needs to petition Elon Musk to create a Tesla Air series car.
Re: (Score:2, Informative)
It will fly through a series of tubes.
Predicitng the future is hard (Score:5, Funny)
June 26, 2005
Ten things I learned about the future at the Wired NextFest
This past Saturday, I attended the Wired NextFest at Chicago's Navy Pier. The event promised visitors that they could "experience the future," and I just couldn't pass up that opportunity. I wish I had, though, because after spending a few hours at the NextFest I'm sad to report that the future isn't what it used to be. Maybe I was expecting to relive my first visit to Epcot Center as a child, or maybe I'm just jaded in my old age. Whatever the cause, my trip to the future was not very inspirational.
Here are the things I learned about the future, in no particular order.
1. The people of the future are a scantily clad people. They delight in showing off their naked, tattooed flesh.
2. In the future, an airport security checkpoint will work exactly the same as it does now, except that the scanning technology a event.
3. The elderly Japanese people of the future will be so desperately lonely for companionship that they'll purchase creepy android replicas of the sci-fi author Philip K. Dick. Why the Japanese, and why Philip K. Dick? It's a long story, and I'm not sure I fully understood it all when the android's makers explained it to me.
5. In the future, most robots will look pretty much like robots have looked since the 1970's. About the only difference is that robot antennae in the 70's, so
Re: (Score:2) [terrafugia.com]
It's not quite something you can buy today, but you can put down a deposit, and they have already passed all the regulatory hurdles and are soon to start production.
And yes it IS a car that flies. (although if you want to fly in it, you need to find a runway)
Re:One thing is for certain... (Score:5, Informative)
Search for AT&T's "you will" commercials from 20 years ago. They predicted the future to an astonishing degree. Except, of course, that the companies that brought you all those things weren't AT&T.
Re: (Score:3)
Well, not in name, but the money behind the scenes is pretty much the same.
Re: (Score:3)
They've also made many errors, but then again, they usually aren't trying to predict what will be, just write stories about what could be.
As to getting the dates right, nobody seems to get that past the 3 year mark, unless it's already got marketing pushing for a release date, but that's not really a prediction either.
Re:One thing is for certain... (Score:5, Interesting)
He did mention flying cars, but he got a lot of stuff right, too! Here's a tally.
Yes:
Photosensitive windows that block out extreme light levels (well, usually sunglasses)
Automatically prepared meals (sort of, in microwave dinners; there's no standard for automated scanning of cooking times yet)
Machine language translation (this is still a big thing; Microsoft had a pretty big demo just a year or two ago—but, of course, the game's all about Chinese now)
Large solar arrays
Heavy dependence on nuclear (although not as much as he hoped)
Automated driving (definitely show-off material, if not on the market much)
Video calls (still not as popular as futurists want them to be)
Satellite networking
Mostly-automated road construction
Still no manned missions to Mars
Optical networking (although he thought it'd be through pipes and not glass fibres)
Bus rapid transit (special lanes on highways)
Earth's population over 6.5 billion
US population around 350 million (actually 319)
Less developed areas will have slipped further behind the well-developed ones (although he didn't realise that some of them would actually fall backward)
Life expectancies around 85 in some countries (82.59 in Japan)
Slowing population growth (it peaked in the 60s)
Creative industries amongst the most valued ("The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.")
No:
Windows will be archaic, replaced with ubiquitous light panels (apparently scenery had no appeal in 1964)
Cities will move underground so that the surface can be parks and farms.
Automatically prepared meals (he gave the example of ordering bacon, eggs, and coffee prepared in the usual manner)
Clumsy robot housekeepers (long live the Roomba—although the general spirit of the robot obsession is going strong in Japan, a land apparently unravaged by the Terminator franchise)
3D movies (on holographic cube TVs)
Radioisotope batteries in consumer electronics
FLYING CARS (well, actually hovering ones—but seriously, why?)
Outdoor moving sidewalks in cities
Heavy use of compressed air tubes for postal mail (these remain only used in special settings like moving samples around hospitals, although my supermarket has one for money, weirdly)
No parking on the street (well, except on big mainstreets, but that was common even in his day)
MOON COLONIES
Line-of-sight laser communications would be preferable to cable conduits (?!)
Boston and Washington DC will have merged into a giant city with 40 million inhabitants
Higher population in deserts and the Arctic due to population explosion (high-rises were apparently unanticipated)
Underwater housing
Attempts to sell yeast and algae as food sources (we have real tubesteak now, thank you very much)
Widespread birth control efforts (only in China, I think?)
All high school students will be able to program
Automation of all automatable jobs (going so far as to eliminate classroom teaching, apparently)
Psychiatry the most important medical specialty (due to boredom caused by automation—apparently we'll all be unemployed in about four months)
And I'm not really sure how to classify this one:
Indeed, the most somber speculation I can make about A.D. 2014 is that in a society of enforced leisure, the most glorious single word in the vocabulary will have become work!
So on the whole, about 50-50, mostly small things. There are some items in there that I thought were rather unexpectedly good (no manned Mars visits), but for the most part it seems we'll have to file this batch of Asimov's predictions as over-optimistic, along with most other futurist forecasts.
As someone who didn't grow up being promised flying cars, I have to wonder why (other than "because they're cool" and/or
Re: (Score:2)
Re:One thing is for certain... (Score:5, Insightful)
Then there's a list of big things that he did not see coming at all:
- the internet!
- computers a thousand times more powerful than anything in 1964 - the size of your palm - in everybody's pocket
- advances in medical science - stem cells, 3d printing of tissue etc. and in medical technology - scanners creating a 3d model of your body (including the inside)
- detection and photography of extrasolar planets
- a man made probe exiting the solar system
- despite the overpopulation, abundance of food for everybody
etc.
multivac (Score:3)
Re: (Score:3)
Re: (Score:2)
"Where the fuck is my flying car?"
It's here. We have just gone way beyond flying cars and you missed it. [blogspot.com]
I will say, however, that Asimov pretty much described the Google car.
Re: (Score:2)
Where the fuck is my flying car?
Wisconsin? [dailymail.co.uk]
Re: (Score:2, Insightful)
You're a Space Nutter, aren't you? The dog-whistle "this rock" gave you away.
Technically, if that is a dog whistle, it means you are also a Space Nutter (whatever that is), since you were able to hear it as well. One space nutter to another.
Re: (Score:2)
Technically, if that is a dog whistle, it means you are also a Space Nutter (whatever that is), since you were able to hear it as well.
Maybe he just has a lot of space between his ears.
Re: (Score:3, Insightful)
If it isn't VTOL, it isn't a flying car.
Robocop was optimistic. (Score:3)
I re-watched that recently. I remember when I first watched it I took it for a dystopian vision of the future of Detroit. As it turns out, it was hopelessly optimistic.
Also, all robots should make the noise that Robocop makes.
Re:One thing is for certain... (Score:5, Insightful)
I can reliably predict the future!
In 50 years:
* We will have returned to the moon. Or we won't. Definitely one or the other.
* We will have self-driving cars, but they may not be in common use.
* New drugs will have been invented.
* Computers will be even faster and more capable than those of today!
* There will be robots. Of some sort.
* Rich people will continue to have power over the poorer members of society
* Numerous wars will have been fought, altering the political spheres and influence of various nations
* Music of the future will suck compared to what we grew up with, although the kids of that generation will love it.
So he was off by a year? Next one is in 2015. (Score:5, Informative)
Re:So he was off by a year? Next one is in 2015. (Score:5, Funny)
Re:So he was off by a year? Next one is in 2015. (Score:5, Funny)
Bite my shiny metal ass.
Re: (Score:2)
You didn't touch the Crushinator, did you?
and yet (Score:3)
classify it into 'throw away' and 'set aside.'
This is the hardest part. We have robots that are quite agile, but classifying objects into 'throw away' and 'set aside' is still extremely difficult.
Why even classify? (Score:5, Interesting)
This is the hardest part. We have robots that are quite agile, but classifying objects into 'throw away' and 'set aside' is still extremely difficult.
I think there are plenty of other harder parts because I don't care if a robot can do that. Just a simple robot that could dust every item on a shelf would be fantastic. Heck, even if it could just lift any arbitrary item and clean only the shelf it would be fantastic.
The ability to lift and replace arbitrary items on a crowded shelf would seem to be pretty hard all by itself, without any need of classifying them...
Re: (Score:2)
no, not hard at all. one little RFID in everything and problem is solved.
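Taken seriously for a second, the "tag everything" idea really does reduce classification to a table lookup. A toy sketch (the registry, tag IDs, and dispositions below are all invented for illustration):

```python
# Hypothetical sketch: if every object carried an RFID tag, "classification"
# collapses into a lookup against a registry of known tag IDs.
TAG_REGISTRY = {
    "04:A3:1F": "set aside",   # grandma's vase
    "04:B7:22": "throw away",  # empty pizza box
}

def disposition(tag_id, default="set aside"):
    # Unknown tags default to "set aside": a cleaning robot should
    # never destroy something it cannot identify.
    return TAG_REGISTRY.get(tag_id, default)

print(disposition("04:B7:22"))  # throw away
print(disposition("FF:FF:FF"))  # set aside (unknown tag)
```

Of course, the hard part then becomes getting a tag into everything, including yesterday's newspaper.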
Re:and yet (Score:5, Informative)
We have machines that can sort trash on a conveyor belt with air jets at amazing speeds.
Re: (Score:3)
They can sort different types of trash.
Differentiating trash and non-trash is subjective. Not even a human can reliably do it. Assuming you instruct a machine what is trash and what isn't, with a clear and non-ambiguous definition, then there should be no problem making an algorithm that identifies which is which.
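If such a crisp, non-ambiguous definition really were given, the algorithm itself is the easy part. A toy rule-based sketch (the rules and item fields are invented, not a real definition of trash):

```python
# Toy sketch: once "trash" has a machine-checkable definition, the
# classifier is just a handful of rules evaluated in order.
# Each rule returns a verdict string, or a falsy value to pass.
RULES = [
    lambda item: item.get("owner_marked_keep", False) and "set aside",
    lambda item: item.get("is_food", False) and item.get("days_old", 0) > 3 and "throw away",
    lambda item: item.get("torn", False) and item.get("value", 0) == 0 and "throw away",
]

def classify(item):
    for rule in RULES:
        verdict = rule(item)
        if verdict:
            return verdict
    return "set aside"  # when in doubt, keep it

print(classify({"is_food": True, "days_old": 5}))  # throw away
print(classify({"owner_marked_keep": True}))       # set aside
```

The subjectivity the parent mentions lives entirely in the rule list, not in the loop.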
Simple object separating algorithm... (Score:2)
IF object.contains?(Carbon); THEN
object.throw_away();
ELSE
object.set_aside();
FI
Re:and yet (Score:4, Interesting)
Which leads to an interesting piece of economics that the writers of the time, most all versed in economics, seemed to miss. That we will pay for things, like cable, but not for things like an autonomous vacuum cleaner or lawn mower. That as long as people are willing to work cheaper than a machine, we will pay the people.
Re: (Score:2)
Re:and yet (Score:4, Interesting)
I'll be happy with an arm on a ceiling-mounted gantry that retracts into a niche above the bath-shower alcove and keeps the toilet nice and sparkly clean, including the unspeakably nasty region BEHIND the toilet.
Somebody tell me again why it's impossible to buy a house in America with a master bathroom that's built like a big waterproof shower with a floor drain, so you can just hose it down with soapy water to clean it? Oh, right... because our building codes force you to use P-traps and dry-venting for every single drain in the house, instead of allowing drum traps for floor drains (building a floor drain with a drum trap is cheap and easy; P-traps intrude into the ceiling envelope of the floor below, and dry-venting a floor drain that's in the middle of the room in a code-compliant manner is hard due to the horizontal distance it has to run before going vertical).
Re: (Score:3)
I'll be happy when people stop being too lazy and/or proud to keep their own houses clean.
An unspeakably nasty region behind the toilet doesn't appear overnight.
Re: (Score:3)
Agreed. I have always wanted a bathroom with a central drain and the ability to quickly wash down the bathroom. Behind the toilet always gets grimy, as does the area just under and behind the seat hinge. It could also allow for a smaller bathroom with a shower head hanging from the ceiling. Just stand and shower right in the middle, no need for a stall. Hell, wouldn't it be even better if you could sit on the bowl and shower at the same time? That would save some time in the morning. Bonus if the bathroom c
Re: (Score:3)
Agreed. I have always wanted a bathroom with central drain and the ability to quickly wash down the bathroom.
Ever live as part of a family? All you'd have to do is move all the towels and electric razors and hair driers and all the other stuff real bathrooms have then hose everything down, wait for it to dry sufficiently, and then replace everything.
Voila! Labor savings!
Re: (Score:2)
Re: (Score:3)
Not true. It can be quite easy. From hoarderbot's source code:
def bin_or_save(item): return SET_ASIDE
sigh, python coders!
Did he ever revisit these predictions? (Score:5, Interesting)
Considering Asimov did not die until the early nineties, did he ever update or evaluate the progress towards his earlier predictions? I feel he would have revised his belief that, for instance, mankind would be increasingly interested in living in hermetically sealed, controlled bubbles.
Re:Did he ever revisit these predictions? (Score:5, Informative)
oh? the windows on my building at work don't open. my windows at home are open maybe 2 months total out of the year.
Re: (Score:2)
he lost interest in his writing, and more in staying alive.
I met him in late '79.
he was doing his last university circuit. windows wasn't on the horizon
Re:Did he ever revisit these predictions? (Score:5, Informative)
Isaac Asimov did not have cancer. He died of AIDS complications. He was a very early casualty and was infected by a tainted blood transfusion. He and his family kept the truth a secret for many years due to the early stigma of AIDS.
Re: (Score:2)
Re: (Score:3)
Asimov was on a TV show where the host asked him about his earlier prediction that the world would only have five computers. Asimov asked that the question be cut, where the confused newscaster pointed out it was a live show. So Asimov walked out of the interview.
Wonder why the dislike of sunlight (Score:4, Interesting)
The oddest part of the whole thing to me, was the thought that so many people would want to get rid of sunlight to the greatest extent possible.
The opposite has been true, luxury houses all have huge windows. People love natural light indoors, and a lot of money is spent trying to replicate it with artificial lighting...
I wonder if that was a prevailing opinion at the time, or if it was just something Asimov preferred.
Re: (Score:2)
most people don't live in luxury houses. and we sure as hell don't work in them. we bask most of the working day and night under artificial light.
Re:Wonder why the dislike of sunlight (Score:5, Informative)
In 1964, most windows were still glazed with a single pane. They let lots of heat in during the summer and out during the winter. In addition, the sun coming in through the windows tended to fade carpets and furniture. Today, with double and triple glazing, and low e coatings, we get the light without the problems.
Re: (Score:3)
That's probably Texas's reflexive dislike of government regulations.
Asimov (Score:5, Interesting)
This is one of the reasons I like to read Asimov's work. It's not (completely) wild imagination - there's actually some thought into whether things are reasonable.
My favorite Asimov invention that actually came to be is "Psychohistory": the kinds of big data analysis that we can do today are pretty much exactly what he's talking about. I worked on a project recently about predicting the behavior of Indian terrorist groups like the Lashkar-e-Taiba and the Mujahideen based on the last 20 years of the actions they've done and the things that have happened in their environment. There were some things we were able to predict about future behavior with accuracy as good as 90%.
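The actual project was obviously far more sophisticated, but the flavor of history-based prediction can be sketched in a few lines: count what a group did in each kind of situation before, and predict the most frequent action. All data below is invented for illustration:

```python
from collections import Counter

# Toy illustration of history-based prediction (not the real project):
# given past (context, action) observations, predict the action most
# often seen in a given context.
history = [
    ("ceasefire", "recruitment"), ("ceasefire", "recruitment"),
    ("ceasefire", "attack"),
    ("crackdown", "dormancy"), ("crackdown", "dormancy"),
]

def predict(context, observations):
    counts = Counter(action for ctx, action in observations if ctx == context)
    if not counts:
        return None  # no history for this context
    return counts.most_common(1)[0][0]

print(predict("ceasefire", history))  # recruitment
```

Real systems replace the frequency count with richer models and far more features, but the shape — past behavior in, probable future behavior out — is the same.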
Re: (Score:3)
Asimov was a genius. Not a brilliant writer, but a genius all the same. He was best at short stories, where he could get an idea across quickly. But I find his novels by-the-numbers and tedious. Too many wooden discussions going on that repeat themselves in order to hammer a point home. And he never managed to write a female character that wasn't a two dimensional cypher.
But it doesn't surprise me that this essay is remarkably accurate.
Pocket Computers (Score:4, Insightful)
Asimov predicted 'pocket computers' (I think it was in one of the early Foundation books), and when pocket calculators came out in the 70's they were using red LEDs, and the 'good doctor' said "look, I even got the colors right."
(but 40 years later pocket computers are using multicolored displays, so much for his predictions.)
Like many SF authors he was obsessed with humanoid-style robots, but that hasn't happened, even though other robots are around in quantity.
The first law of Robotics doesn't seem to be around either (just the opposite when you think of drones)
Re: (Score:2)
Re:Pocket Computers (Score:5, Interesting)
I recently wrote a series of blogs analyzing Asimov's use of technology (esp. hyperspace and calculating jumps) in the original Foundation trilogy. The best it gets in the 3rd book is to have a room-sized computer that can project a picture of the galaxy and locate your position in space in only a half-hour ("the Lens"). Probably the two most jarring elements when re-reading these books is how all communication is still done on paper (stacks of paperwork, paper capsules for secure messaging, paper star charts for navigation), and that most everyone is smoking everywhere all the time. Follow-up would be the absence of women in any leadership or technical roles. This being set 50,000 years in the future. [blogspot.com]
Re: (Score:2)
The American obsession with humanoid robots is a harkening back to the good old days when you could own slaves - i.e. it's entirely due to the fact that they are slave substitutes who won't murder you in your sleep for mistreating them.
in fact, due to Asimov's 3 laws, they *can't* murder you in your sleep.
Re: (Score:2)
Isn't humanoid robots an international thing? If anything, maybe Japanese.
Humanoids (Score:2)
I believe the Japanese lean towards octopi.
Re: (Score:2)
Tonight, on the SyFi channel - Roboctopus!
Re: (Score:3)
in fact, due to Asimov's 3 laws, they *can't* murder you in your sleep.
Actually they can. Turns out Asimov made a classic mistake when he created the three laws, a mistake he actually used later when it came to the robots on Solaria. The laws are immutable except through generalization which apparently is only possible for telepathic robots, but the definitions behind them aren't. You can modify the definitions of 'human' and emphasize the check of this, which makes such a robot clearly state when someone appears to be human but fails on a single important parameter: "You're n
Re: (Score:2)
The first law of Robotics doesn't seem to be around either (just the opposite when you think of drones)
There isn't yet artificial intelligence anywhere close to the level for which the laws of robotics would make sense. However, even if there ever is such AI, it is naive in the extreme to think there could be universal agreement on how such AIs should be constrained. I doubt even Asimov thought that was realistic. I think his interest in the laws was for thought experiments and plot devices more than anything else. Notice that he doesn't mention them in this essay.
Asimov didn't live long enough (Score:2)
Fair good, olympics bad (Score:2)
It is fun watching folks compete to be the best; I have seen it at the Olympic level, but it is useless. And considering the expense, outrageous.
Fairs were a good custom in many ways, particularly World's Fairs. I miss them, and could do entirely without the Oily Pimpics.
True Sage (Score:2, Informative)
Several misses for each hit. A few from TFA:
Jets of compressed air will also lift land vehicles off the highways, which, among other things, will minimize paving problems. Smooth earth or level lawns
Re: (Score:2)
Not a total miss. If you haven't seen these, they you haven't been in the right office buildings. Okay, so they're mostly backed by either fluorescent lights or LEDs, and the panels themselves don't
Re: (Score:2)
Have you looked at birth rates in developed nations? He pretty much nailed this one, notwithstanding the lack of a formal propaganda drive....
Wait, I thought that's what reality TV was.
Re: (Score:2)
Re: (Score:2)
Relatedly:
Isaac Asimov, Manhattanite that he was, clearly never paid a Mexican to cut his lawn for $7 an hour.
Propaganda (Score:2)
Re: (Score:2)
Then Again (Score:2)
I would say our cruise missiles and drones are wonderful examples of just how well robots can work right now. And those Google cars driving about are neat as well. Then we have a very sleek drone that sort of looks like a flying saucer that lands itself on an aircraft carrier better than human pilots can. The Navy is also working on large ships which are robotic that can carry and deploy drones in large numbers. Sometimes robots are around us and we just don't think of what they really are.
Re: (Score:2)
Misconstrue what you want. You know damn well the wording was that they were "wonderful examples" of what technology is capable of. You think they are not?
Moon colonies... (Score:5, Insightful)
The saddest part is that he doesn't feel the need to mention the moon colonies except to discuss improved communication with them. Humanity's future in space was so obvious that it didn't even need to be stated.
Re: (Score:2)
It was interesting that in the same prediction he was still hedging his bets on whether fiber optic communication would be common. Laser tunnels are strung everywhere and are ubiquitous. So common we don't even use all of them.
Re: (Score:2)
The saddest part is that he doesn't feel the need to mention the moon colonies except to discuss improved communication with them. Humanity's future in space was so obvious that it didn't even need to be stated.
It's not all that sad. People living in space will be about as happy and healthy as dolphins living on mountaintops. There might be a 3-day window of enjoyment of the novelty; after the spacesickness subsides and before the ennui sets in; after that, just an ever-growing sense of "I wish I was back on Earth so I could go outside and see a tree now and then".
Humans are adaptable (Score:2)
To the contrary, once permanent colonies are established, people will adapt. Just look at the extreme environments that people inhabit on earth, from Sahara to the Arctic.
It's past time for humans to be out there, exploring and exploiting the resources of the solar system.
Re: (Score:3)
Without some level of technology (here I'm including neolithic), the desert and the arctic are deadly - just a bit more slowly than space.
Of course we don't need to go to space any more than we needed to leave the trees. Hopefully the species who do explore space will find us adorable and give their children little stuffed human toys when they visit the zoo.
Personally (Score:3, Funny)
I'm still waiting for the orgasmatron to come into production.
Lumbering cleaner robot? (Score:4, Interesting)
The Jetsons ran from 1963 to 1964. Asimov's predictions were from 1964. So did Asimov get a kick out of Rosie? Heh. There's not enough information available to know this, but that's the first thing I thought of.
Sadly... (Score:5, Insightful)
Instead, we're still just as dependent on coal, oil and gas as ever.
Re: (Score:2)
once the isotope batteries are used up they will be disposed of only through authorized agents of the manufacturer.
And you know that if the radioisotope batteries had come to pass, that sentence might be true for first-world countries with a stable political infrastructure and a wide network of agents, but in some countries, people would be taking them apart on the street and harvesting still-usable components... and quite possibly doing so not only to their own used batteries but also to some sent to them by "authorized agents of the manufacturer".
Just look at what's happening with electronics, with people "cooking" ci
Inaccurate? (Score:4, Informative)
And, of course, the whole notion that we'd have a world's fair is among the inaccurate predictions.
It's only off by one year. Expo 2015 will be in Milan, Italy. There was one last year (2012) in Yeosu, S. Korea. The World's Fairs started using the term Expo with the 1967 Montreal World's Fair, Expo '67.
It's generally a good idea to know what you're talking about before you accuse someone else of inaccuracy.
In the year 2889 (Score:3)
There's a Jules Verne short story called "In the Year 2889" which is a very interesting read as well. I'd say in many ways he was describing 2013, not 2889...
Definitely worth the 5-10 minutes to read.
More fun when they're way off (Score:2)
Like this one from Popular Mechanics in 1954: [osafoundation.org]
Re: (Score:3)
But why does it have a steering wheel?
Re:How True (Score:4, Insightful)
Alas, he was presuming a world which made sense.